24 templates found

Build an MCP server with Airtable

Who is this for?
This template is designed for anyone who wants to integrate MCP with their AI Agents using Airtable. Whether you're a developer, a data analyst, or an automation enthusiast, if you're looking to leverage the power of MCP and Airtable in your n8n workflows, this template is for you.

What problem is this workflow solving?
This template caters to MCP beginners seeking a hands-on example and to developers looking to integrate the Airtable MCP service. When integrating MCP with Airtable, manually updating AI Agents after changes to Airtable data on the MCP Server is time-consuming and error-prone. This template automates the process, enabling the AI Agent to instantly recognize changes made to Airtable on the MCP Server. In data management, for example, it ensures that record updates or additions in Airtable are automatically detected by the AI Agent. With detailed steps, it simplifies the integration process for all users.

What this workflow does
This workflow focuses on integrating MCP with Airtable within n8n. Specifically, it lets you build an MCP Server and Client using Airtable nodes in n8n. Any changes made to the Airtable Base/Table on the MCP Server are automatically recognized by the MCP Client in the workflow. This means you can make changes to your Airtable (such as adding, deleting, or modifying records) on the MCP Server, and the MCP Client in the n8n workflow will immediately detect these changes without any manual intervention.

Setup requirements
- An active n8n account.
- Access to the Airtable API.
- A sample base and rows in Airtable that you can use for testing.
- An API key from your preferred LLM to power the AI agent.

Step-by-step guide
1. Create a new workflow in n8n: Log in to your n8n account and create a new workflow.
2. Add Airtable nodes: Search for and add the Airtable nodes you want the MCP Client to have access to.
3. Set up the MCP Server and Client: Use the appropriate nodes in n8n to set up the MCP Server and Client, and connect the Airtable nodes to the MCP nodes as required.
4. Activate and test the workflow: Once all credentials have been updated and the table data has synced, talk to the chat trigger and try adding rows, deleting records, or finding and updating cells.

How to customize this workflow to your needs
- Modify the triggers: Change the conditions under which the MCP Client detects changes. For example, detect changes only in specific fields or based on certain record values in Airtable.
- Integrate with other services: Add more nodes to the workflow to integrate with other services, such as sending notifications to Slack or triggering further actions based on the detected Airtable changes.

Need help? Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.

By Aitor | 1Node
8119

Receive email updates via IMAP

Companion workflow for IMAP Email node docs

By amudhan
3488

Comprehensive SEO Audit with GPT-4 Specialists using Analytics, Search Console & PageSpeed

🤖 Automated SEO Audit with a Team of AI Specialists

This workflow performs a comprehensive, automated monthly SEO and performance audit for any website. It uses a "team" of specialized AI agents to analyze data from multiple sources, aggregates their findings, and generates a final strategic report. Every month, it automatically fetches data from Google Analytics, Google Search Console, and Google PageSpeed Insights, and also performs a live crawl of the target website's homepage.

Key Features
- Fully Automated: Runs on a schedule to deliver monthly reports without manual intervention.
- Multi-Source Analysis: Gathers data from four key marketing sources for a 360° view.
- AI Agent Team: Uses a sophisticated multi-agent system where each AI specializes in one area (Analytics, Performance, Technical SEO).
- Master Analyst: A final AI agent synthesizes all specialist reports into a single, actionable strategic plan.
- Automated Storage: All individual and final reports are automatically saved to a designated Google Sheet.

⚙️ Setup Instructions
To use this template, you must configure your credentials and set your target website.
1. Set Your Target Domain (crucial): Find the Set Target Website node at the beginning of the workflow. In the "Value" field, replace https://www.your-website.com with the URL of the website you want to audit. This updates the URL across the entire workflow automatically.
2. Configure the Schedule Trigger: Click on the Schedule Trigger node to set when you want the monthly report to run.
3. Connect Your Google Credentials: Google Analytics: select your credential in the Get a report node. Google Search Console: select your credential in the Search Console (HTTP Request) node. Google Sheets: select your credential in all Google Sheets nodes.
4. Google PageSpeed API Key: Go to the "Credentials" tab in n8n and create a new "Generic Credential" with the type "API Key - Query Param". Name it Google API Key; the "Parameter Name" must be key. Paste your PageSpeed API key into the "API Key" field. Go back to the PageSpeed Insight node, select "API Key - Query Param" for Authentication, and choose your new credential.
5. Connect OpenAI Credentials: This template uses multiple OpenAI Chat Model nodes. Configure each one with your OpenAI API key.
6. Set Your Google Sheet: In each of the Google Sheets nodes, replace the hardcoded "Document ID" with the ID of your own Google Sheet where you want to store the reports.

🔬 Workflow Explained
- Phase 1: Data Collection: The Schedule Trigger starts the workflow. Four parallel branches collect data from Google Analytics, PageSpeed Insights, Search Console, and a direct website crawl.
- Phase 2: Data Processing & Specialist Analysis: Each data source is processed by a dedicated Code node that formats the data, which is then sent to a specialized AI agent (ANALYTICS SPECIALIST, PERFORMANCE SPECIALIST, etc.) for in-depth analysis.
- Phase 3: Report Aggregation: A Merge node waits for all four specialist reports to be completed, and a DATA AGGREGATOR node then combines them into a single, comprehensive package.
- Phase 4: Master Synthesis & Storage: The final MASTER ANALYST agent receives the aggregated data and produces a high-level strategic summary with actionable recommendations. This final report is then saved to Google Sheets.
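For orientation, the dedicated Code nodes in Phase 2 simply reshape each raw API response into a compact object before it is handed to the matching specialist agent. Below is a minimal sketch of what the PageSpeed formatting step could look like in an n8n Code node; the field paths follow the public PageSpeed Insights v5 response shape and the output names are illustrative, so the actual template's node may differ.

```javascript
// n8n Code node (JavaScript) – sketch of formatting a PageSpeed Insights response
// Field paths assume the PSI v5 response shape; verify them against your node's actual output.
const psi = $input.first().json;
const audits = psi.lighthouseResult?.audits ?? {};

return [{
  json: {
    url: psi.id,
    performanceScore: Math.round((psi.lighthouseResult?.categories?.performance?.score ?? 0) * 100),
    largestContentfulPaint: audits['largest-contentful-paint']?.displayValue,
    cumulativeLayoutShift: audits['cumulative-layout-shift']?.displayValue,
    totalBlockingTime: audits['total-blocking-time']?.displayValue,
  },
}];
```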

By Jimmy Gay
2917

Get invoices from Xero

Companion workflow for Xero node docs

By amudhan
2834

Load data into Snowflake

This workflow automatically downloads a CSV from the web and parses it into a format that n8n can access. It then ensures that the data from the CSV is matched to the names of the columns in the database and inserts the data as new rows in Snowflake.

Prerequisites
- A CSV with data
- A Snowflake account and credentials
- A Snowflake database to upload your CSV to

Nodes
- An HTTP Request node to download the CSV file
- A Spreadsheet File node to access the data from the CSV
- A Set node to ensure the data from the CSV is mapped to the column names of the Snowflake database
- A Snowflake node to insert these new rows into the database
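The template does the column mapping with a Set node, but the same remapping can be pictured as a small Code node. This is only a sketch under assumed names: the CSV headers ("First Name", "Last Name", "Email Address") and the target columns (first_name, last_name, email) are hypothetical and should be replaced with your own schema.

```javascript
// n8n Code node (JavaScript) – sketch of remapping CSV fields to Snowflake column names
// Header names and target columns below are illustrative; adjust them to your own table.
return $input.all().map((item) => ({
  json: {
    first_name: item.json['First Name'],
    last_name: item.json['Last Name'],
    email: item.json['Email Address'],
  },
}));
```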

By n8n Team
2675

Release a new version via Telegram bot command

This workflow allows you to release a new version via a Telegram bot command and can be used in your Continuous Delivery pipeline.

Telegram Trigger node: This node triggers the workflow when a message is sent to the bot. If you want to trigger the workflow via a different messaging platform or service, replace the Telegram Trigger node with the Trigger node of that service.
IF node: The IF node checks the incoming command. If the command is not deploy, the IF node returns false; otherwise it returns true.
Set node: This node extracts the version from the Telegram message and sets the value, which is used later in the workflow.
GitHub node: This node creates a new version release. It uses the version from the Set node to create the tag.
NoOp node: Adding this node is optional.
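The command check and version extraction can also be expressed in a single Code node. The sketch below assumes the bot message looks like "deploy 1.2.3" and that the tag should be prefixed with "v"; both are assumptions for illustration, since the template itself uses an IF node plus a Set node for this.

```javascript
// n8n Code node (JavaScript) – sketch of parsing the deploy command from a Telegram message
// Assumes the message text is "deploy <version>", e.g. "deploy 1.2.3".
const text = $input.first().json.message?.text ?? '';
const [command, version] = text.trim().split(/\s+/);

if (command !== 'deploy' || !version) {
  // Anything other than a deploy command is rejected, mirroring the IF node's false branch.
  throw new Error(`Unexpected command: "${text}"`);
}

return [{ json: { version, tag: `v${version}` } }];
```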

By Harshil Agrawal
1958

Financial document extraction from Gmail to Google Sheets

Overview
Manual financial reconciliation is tedious and prone to error. This workflow functions as an AI Financial Controller, automatically monitoring your inbox for invoices, receipts, and bills, extracting the data using OCR, and syncing it to Google Sheets for approval. Unlike simple scrapers, this workflow uses a "Guardrail" AI agent to filter out non-financial emails (like newsletters) before they are processed, ensuring only actual transactions are recorded.

Who is it for?
- Finance Teams: To automate the collection of vendor invoices.
- Freelancers: To track expenses and receipts for tax season.
- Operations Managers: To monitor budget spend and categorize costs automatically.

How it works
1. Ingest: The workflow watches a specific Gmail label (e.g., "INBOX") for new emails.
2. Guardrail: A Gemini-powered AI agent analyzes the email text to determine if it is a valid financial transaction. If not, the workflow stops.
3. Extraction (OCR): If an attachment exists, an AI Agent (GPT-4o) extracts data from the PDF. If there is no attachment, an AI Agent extracts data directly from the email body.
4. Validation: Code nodes check for missing fields or invalid amounts.
5. Business Logic: The system automatically assigns General Ledger (GL) categories (e.g., "Uber" -> "Travel") and sets approval statuses based on amount thresholds (see the sketch after this description).
6. Sync: Validated data is logged to Google Sheets, and a confirmation email is sent. Errors are logged to a separate error sheet.

How to set up
1. Google Sheets: Copy this Google Sheet template to your drive. It contains the necessary tabs (Invoices, Error Logs, Success Metrics).
2. Configure Workflow: Open the node named "Configuration: User Settings". Paste your Google Sheet ID (found in the URL of your new sheet) and enter the Admin Email address where you want to receive error notifications.
3. Credentials: Connect your Gmail account, your Google Sheets account, and your OpenAI (for OCR) and Google Gemini/PaLM (for Guardrails) accounts.

Requirements
- n8n version 1.0 or higher
- Gmail account
- OpenAI API Key
- Google Gemini (PaLM) API Key
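The business-logic step can be pictured as a small rules table plus a threshold check. The sketch below shows one way an n8n Code node could do this; the vendor keywords, category names, and the $500 approval threshold are illustrative assumptions, not the template's actual configuration.

```javascript
// n8n Code node (JavaScript) – sketch of the GL-category / approval-threshold step
// Keywords, categories, and the threshold are illustrative; adapt them to your own chart of accounts.
const CATEGORY_RULES = [
  { keyword: 'uber', category: 'Travel' },
  { keyword: 'aws', category: 'Cloud Infrastructure' },
  { keyword: 'staples', category: 'Office Supplies' },
];

return $input.all().map((item) => {
  const vendor = (item.json.vendor ?? '').toLowerCase();
  const amount = Number(item.json.amount) || 0;

  const rule = CATEGORY_RULES.find((r) => vendor.includes(r.keyword));
  const glCategory = rule ? rule.category : 'Uncategorized';

  // Small amounts are auto-approved; larger ones are flagged for manual review.
  const approvalStatus = amount <= 500 ? 'Auto-Approved' : 'Pending Review';

  return { json: { ...item.json, glCategory, approvalStatus } };
});
```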

By Gtaras
1396

Namesilo bulk domain availability checker

Introduction
The Namesilo Bulk Domain Availability workflow is a powerful automation solution designed to check the registration status of multiple domains simultaneously using the Namesilo API. It processes large lists of domains efficiently by splitting them into manageable batches, adhering to API rate limits, and compiling the results into a convenient Excel spreadsheet. This eliminates the tedious process of manually checking domains one by one, saving significant time for domain investors, web developers, and digital marketers. The workflow is particularly valuable during brainstorming sessions for new projects, when conducting domain portfolio audits, or when preparing domain acquisition strategies. By automating the domain availability check, users can quickly identify available domains for registration without navigating through multiple web interfaces.

Who is this for?
This workflow is ideal for:
- Domain investors and flippers who need to check multiple domains quickly
- Web developers and agencies evaluating domain options for client projects
- Digital marketers researching domain availability for campaigns
- Business owners exploring domain options for new ventures
- IT professionals managing domain portfolios
Users should have basic familiarity with n8n workflow concepts and a Namesilo account to obtain an API key. No coding knowledge is required, though an understanding of domain name systems is beneficial.

What problem is this workflow solving?
Checking domain availability one by one is time-consuming and tedious, especially when dealing with dozens or hundreds of potential domains. This workflow solves several key challenges:
- Manual inefficiency: Eliminates the need to search for each domain individually through registrar websites.
- Rate limiting: Handles API rate limits automatically with built-in waiting periods.
- Data organization: Compiles availability results into a structured Excel file rather than scattered notes or multiple browser tabs.
- Bulk processing: Processes up to 200 domains per batch, with the ability to handle unlimited domains across multiple batches.
- Time management: Frees up valuable time that would otherwise be spent on repetitive manual checks.

What this workflow does
Overview: The workflow takes a list of domains, processes them in batches of up to 200 domains per request (to comply with API limitations), checks their availability using the Namesilo API, and compiles the results into an Excel spreadsheet showing which domains are available for registration and which are already taken.

Process
1. Input Setup: The workflow begins with a manual trigger and uses the "Set Data" node to collect the list of domains to check and your Namesilo API key.
2. Domain Processing: The "Convert & Split Domains" node transforms the input list into batches of up to 200 domains to comply with API limitations (see the sketch after this description).
3. Batch Processing: The workflow loops through each batch of domains.
4. API Integration: For each batch, the "Namesilo Requests" node sends a request to the Namesilo API to check domain availability.
5. Data Parsing: The "Parse Data" node processes the API response, extracting information about which domains are available and which are taken.
6. Rate Limit Management: A 5-minute wait period is enforced between batches to respect Namesilo's API rate limits.
7. Data Compilation: The "Merge Results" node combines all the availability data.
8. Output Generation: Finally, the "Convert to Excel" node creates an Excel file with two columns: Domain and Availability (showing "Available" or "Unavailable" for each domain).

Setup
1. Import the workflow: Download the workflow JSON file and import it into your n8n instance.
2. Get a Namesilo API key: Create a free account at Namesilo and obtain your API key from https://www.namesilo.com/account/api-manager
3. Configure the workflow: Open the "Set Data" node, enter your Namesilo API key in the "Namesilo API Key" field, and enter your list of domains (one per line) in the "Domains" field.
4. Save and activate: Save the workflow and run it using the manual trigger.

How to customize this workflow to your needs
- Modify domain input format: Adjust the code in the "Convert & Split Domains" node if your domain list comes in a different format.
- Change batch size: Modify the batch size (currently set to 200) in the "Convert & Split Domains" node to accommodate different API limitations.
- Adjust wait time: If you have a premium API account with different rate limits, modify the wait time in the "Wait" node.
- Enhance output format: Customize the "Convert to Excel" node to add additional columns or formatting to the output file.
- Add domain filtering: Add a node before the API request to filter domains based on specific criteria (length, keywords, TLDs).
- Integrate with other services: Connect this workflow to domain registrars to automatically register available domains that meet your criteria.
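The batching step is the heart of the rate-limit handling. A minimal sketch of what the "Convert & Split Domains" Code node could look like is shown below; it assumes the "Set Data" node supplies a newline-separated Domains string and that each batch should be emitted as a comma-separated list, which are assumptions about the template's internals rather than a copy of its code.

```javascript
// n8n Code node (JavaScript) – sketch of splitting a domain list into batches of 200
// Assumes a newline-separated "Domains" field from the "Set Data" node; adjust names as needed.
const raw = $input.first().json.Domains ?? '';
const BATCH_SIZE = 200; // up to 200 domains per availability request

const domains = raw
  .split('\n')
  .map((d) => d.trim().toLowerCase())
  .filter((d) => d.length > 0);

const batches = [];
for (let i = 0; i < domains.length; i += BATCH_SIZE) {
  // Each item carries one batch, joined into a comma-separated list for the API request.
  batches.push({ json: { domains: domains.slice(i, i + BATCH_SIZE).join(',') } });
}

return batches;
```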

By n8n custom workflows
1203

AI-powered news monitoring & social post generator with OpenAI and Upload-Post

This automation template turns any RSS feed into ready-to-publish social content using AI. It continuously ingests articles, scores their quality and relevance, crafts platform-native posts (Twitter/X threads and LinkedIn posts), routes items for review or archiving, logs everything to Google Sheets, and can publish automatically to X, Threads, and LinkedIn.

Note: This workflow uses OpenAI models for analysis and content generation and integrates with Upload-Post for multi-platform publishing and Google Sheets for tracking. Costs depend on token usage and posting volume.

Who Is This For?
- Content Teams & Solo Creators: Ship consistent, high-signal posts without manual rewriting.
- Newsletters & Media Brands: Turn breaking stories into shareable, platform-native content.
- Agencies: Scale curation across clients with review and auto-publish paths.
- Founders & PMMs: Maintain a steady public presence with minimal effort.

What Problem Does This Workflow Solve?
Manual curation and rewriting of news is slow and inconsistent. This workflow:
- Scores Articles: Filters noise with AI quality/relevance scoring.
- Auto-Writes Posts: Generates concise X threads and business-ready LinkedIn copy.
- Routes Intelligently: Sends good items to publish/review and archives the rest.
- Logs Everything: Keeps a structured history in Google Sheets for analytics.

How It Works
1. RSS Polling: Monitors your chosen feed(s) on a schedule.
2. Scoring AI: Rates quality and relevance; extracts summary, key topics, and angle.
3. Parse & Enrich: Normalizes AI output and merges it with article metadata.
4. Quality Gates: Directs items to "publish/review" or "archive" (a sketch of this gate follows below).
5. Content Generation: Produces an X thread and a LinkedIn post with clear hooks and insights.
6. Publishing: Uploads to X, Threads, and LinkedIn via Upload-Post (optional).
7. Sheets Logging: Writes summaries, scores, and outputs to Google Sheets.

Setup
1. OpenAI API: Add your OpenAI credentials (models like gpt-4.1/gpt-4o).
2. Upload-Post Credentials: Connect the Upload-Post integration and target pages (e.g., LinkedIn org ID).
3. Google Sheets: Add OAuth credentials and point "Store Content", "New for Review", and "Archive" to your sheets.
4. RSS Feed URL: Replace the sample feed with your preferred sources.
5. Thresholds & Routing: Adjust quality/relevance filters to your standards.
6. Publishing Mode: Toggle platforms (X, Threads, LinkedIn) and decide auto vs. review-first.

Requirements
- Accounts: n8n, OpenAI, Upload-Post, Google account (Sheets).
- API Keys: OpenAI token, Upload-Post credentials, Google Sheets OAuth.
- Feeds: One or more RSS URLs for your niche.

Features
- AI Triage: Quality/relevance scoring to prioritize high-value stories.
- Platform-Native Output: Hooked X threads and professional LinkedIn posts.
- Review or Auto-Publish: Safe gating before posting live.
- Analytics-Ready Logs: Structured entries in Google Sheets.
- Modular & Extensible: Swap feeds, add Slack/Discord alerts, or plug into CMS/Notion.

Stay top-of-mind: convert fresh news into compelling, on-brand social content, automatically.
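The quality gate in step 4 amounts to comparing two AI-produced scores against thresholds and tagging each item with a route. The sketch below shows one way to express that in an n8n Code node; the field names (qualityScore, relevanceScore) and the thresholds are illustrative assumptions, so match them to whatever the scoring agent actually returns.

```javascript
// n8n Code node (JavaScript) – sketch of the quality/relevance gate
// Field names and thresholds are illustrative; tune them to the scoring AI's real output.
const QUALITY_MIN = 7;
const RELEVANCE_MIN = 6;

return $input.all().map((item) => {
  const { qualityScore = 0, relevanceScore = 0 } = item.json;
  const route =
    qualityScore >= QUALITY_MIN && relevanceScore >= RELEVANCE_MIN
      ? 'publish_or_review'
      : 'archive';
  return { json: { ...item.json, route } };
});
```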

By Juan Carlos Cavero Gracia
1190

Airline web check-in data extraction with Ollama AI, Google Sheets & Postgres Vector DB

Overview
This workflow retrieves airline web check-in URLs from Google Sheets, scrapes their content, employs an LLM to generate structured JSON data, refreshes the sheet, creates embeddings, and saves them in a Postgres vector DB for future semantic searches or question answering.

Quick Notes
- Verify that Google Sheets has accurate URLs for scraping.
- Ensure the Postgres vector DB is set up correctly for embedding storage.

Process Flow
1. Start the workflow with the Chat Trigger - Start node.
2. Retrieve airline check-in URLs using the Fetch Airline URLs node.
3. Scrape webpage data with the Scrape Airline Webpage node.
4. Extract JSON data using the Extract info with LLM node with a Chat Model.
5. Pause for a response with the Wait for Response node.
6. Update Google Sheets with the Store Extracted Data node.
7. Create embeddings with the Generate Embeddings node and store them in the Postgres vector DB with the Save to Vector DB node.
8. Break down long text with the Split Long Text node and delay the next batch with the Wait Before Next Batch node.

Getting Started
- Import the workflow into n8n and set up Google Sheets and Postgres vector DB credentials.
- Run a test with a sample URL to confirm scraping and embedding storage.

Tailored Adjustments
Tweak the Extract info with LLM node to adjust the JSON output, or modify the Fetch Airline URLs node to pull from different sheet fields.
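To make the Split Long Text step concrete, here is a minimal chunking sketch in an n8n Code node. The input field name (pageText), chunk size, and overlap are all assumptions for illustration; the template's own node may split differently.

```javascript
// n8n Code node (JavaScript) – sketch of splitting long scraped text into chunks for embedding
// Field name, chunk size, and overlap are illustrative, not the template's exact settings.
const CHUNK_SIZE = 1000;
const OVERLAP = 100;

const text = $input.first().json.pageText ?? '';
const chunks = [];

for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push({ json: { chunk: text.slice(start, start + CHUNK_SIZE), start } });
}

return chunks;
```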

By Oneclick AI Squad
1121

From sitemap crawling to vector storage: Creating an efficient workflow for RAG

This template crawls a website from its sitemap, deduplicates URLs in Supabase, scrapes pages with Crawl4AI, cleans and validates the text, then stores content + metadata in a Supabase vector store using OpenAI embeddings. It's a reliable, repeatable pipeline for building searchable knowledge bases, SEO research corpora, and RAG datasets.

Good to know
- Built-in de-duplication via a scrape_queue table (status: pending/completed/error).
- Resilient flow: waits, retries, and marks failed tasks.
- Costs depend on Crawl4AI usage and OpenAI embeddings.
- Replace any placeholders (API keys, tokens, URLs) before running.
- Respect website robots/ToS and applicable data laws when scraping.

How it works
1. Sitemap fetch & parse: Load sitemap.xml and extract all URLs.
2. De-dupe: Normalize URLs, check the Supabase scrape_queue, and insert only new ones (see the sketch after this description).
3. Scrape: Send URLs to Crawl4AI and poll the task status until completed.
4. Clean & score: Remove boilerplate/markup, detect content type, compute quality metrics, and extract metadata (title, domain, language, length).
5. Chunk & embed: Split text and create OpenAI embeddings.
6. Store: Upsert into the Supabase vector store (documents) with metadata and update the job status.

Requirements
- Supabase (Postgres + Vector extension enabled)
- Crawl4AI API key (or header auth)
- OpenAI API key (for embeddings)
- n8n credentials set for HTTP and Postgres/Supabase

How to use
1. Configure credentials (Supabase/Postgres, Crawl4AI, OpenAI).
2. (Optional) Run the provided SQL to create scrape_queue and documents.
3. Set your sitemap URL in the HTTP Request node.
4. Execute the workflow (manual trigger) and monitor Supabase statuses.
5. Query your documents table or vector store from your app/RAG stack.

Potential Use Cases
This automation is ideal for:
- Market research teams collecting competitive data
- Content creators monitoring web trends
- SEO specialists tracking website content updates
- Analysts gathering structured data for insights
- Anyone needing reliable, structured web content for analysis

Need help customizing? Contact me for consulting and support: LinkedIn
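De-duplication only works if every URL is reduced to a canonical form before it is compared against scrape_queue. The sketch below shows the general idea in an n8n Code node; the input field name (loc, from the parsed sitemap) and the specific normalization rules are assumptions, and the template's own rules may be stricter or looser.

```javascript
// n8n Code node (JavaScript) – sketch of URL normalization before the scrape_queue check
// The field name and normalization rules are illustrative, not the template's exact logic.
return $input.all().map((item) => {
  const url = new URL(item.json.loc); // <loc> entries extracted from sitemap.xml

  url.hash = '';                      // drop fragments
  url.searchParams.sort();            // make query-string order deterministic
  let normalized = url.toString().toLowerCase();
  normalized = normalized.replace(/\/$/, ''); // strip a trailing slash

  return { json: { url: normalized } };
});
```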

By Mariela Slavenova
883

Send structured logs to BetterStack from any workflow using HTTP request

Send structured logs to BetterStack from any workflow using HTTP Request

Who is this for?
This workflow is perfect for automation builders, developers, and DevOps teams using n8n who want to send structured log messages to BetterStack Logs. Whether you're monitoring mission-critical workflows or simply want centralized visibility into process execution, this reusable log template makes integration easy.

What problem is this workflow solving?
Logging failures or events across multiple workflows typically requires duplicated logic. This workflow solves that by acting as a shared log sender, letting you forward consistent log entries from any other workflow using the Execute Workflow node.

What this workflow does
- Accepts level (e.g., "info", "warn", "error") and message fields via the Execute Workflow Trigger
- Sends the structured log to your BetterStack ingestion endpoint via HTTP Request
- Uses HTTP Header Auth for secure delivery
- Includes a manual trigger for testing and a sample call to demonstrate usage
- Comes with clear sticky notes to help you get started

Setup
1. Copy your BetterStack Logs ingestion URL.
2. Create a Header Auth credential in n8n with your Authorization: Bearer YOURAPIKEY.
3. Replace the URL in the HTTP Request node with your BetterStack endpoint.
4. Optionally modify the test data or log levels for custom scenarios.
5. Use Execute Workflow in any of your workflows to send logs here.
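Calling the shared sender from another workflow just means handing the Execute Workflow node an item that carries level and message. A minimal sketch of a Code node that prepares such an item is shown below; the extra workflow and timestamp fields are optional additions for context, not requirements of this template.

```javascript
// n8n Code node (JavaScript) – sketch of preparing a log entry for the shared log-sender workflow
// Only level and message are expected by the sender; the other fields are illustrative extras.
const logEntry = {
  level: 'error',
  message: 'Order sync failed: upstream API returned 503',
  workflow: $workflow.name,            // n8n built-in: name of the calling workflow
  timestamp: new Date().toISOString(),
};

// Feed this item into the Execute Workflow node that points at the BetterStack log sender.
return [{ json: logEntry }];
```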

By Audun
865