
Build your own N8N workflows MCP server

This n8n template shows you how to turn your existing n8n workflows into an MCP server. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools. Designing agent tools for outcomes rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in short, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers! This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage and run them indirectly.

How it works
An MCP trigger is used with 4 custom workflow tools attached to discover and manage existing workflows, plus 1 custom workflow tool to execute them. We introduce the idea of "available" workflows which the agent is allowed to use. This limits the agent's choices and avoids issues that come with exposing every workflow, such as tool clashes or accidentally running non-production workflows. The n8n node is a core node which taps into your n8n instance's API and can retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp", and these are exposed through the tool "search workflows". Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add Workflow", "remove workflow" and "list workflows"; the agent should be able to manage this list autonomously. Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part is figuring out the input schema for each workflow; this is solved by pulling the schema out of the workflow's template JSON and adding it to the "available" workflow's description. To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node. When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows are preferred when answering queries or completing tasks.

How to use
First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways. Next, ensure these workflows have Subworkflow triggers with an input schema set; this is how the MCP server will run them. Set the MCP server to "active", which turns on production mode and makes it available at the production URL. Use this production URL in your MCP client. For Claude Desktop, see the instructions here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/integrating-with-claude-desktop. There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, e.g. research and report, rather than just a collection of unrelated tools.

Requirements
n8n API key to filter for selected workflows. n8n workflows with Subworkflow triggers! Redis for memory and for tracking the "available" workflows. An MCP client or agent, such as Claude Desktop - https://claude.ai/download

Customising this workflow
If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks. Managing available workflows helps if you have many workflows, some of which may be too similar for the agent to distinguish. If this isn't a problem for you, feel free to remove the concept of "available" and let the agent discover and use all workflows!
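As a rough illustration of the Redis bookkeeping described above, here is a minimal TypeScript sketch of the three management tools. The key name, hash layout, and input-schema shape are assumptions for illustration, not the template's actual node configuration:

```typescript
// Minimal sketch of the "available" workflow registry kept in Redis.
// Key name and data shape are hypothetical; the template's Redis nodes may differ.
import { createClient } from "redis";

const AVAILABLE_KEY = "mcp:available-workflows"; // hypothetical key

async function main() {
  const redis = createClient({ url: "redis://localhost:6379" });
  await redis.connect();

  // "add Workflow": mark a workflow as usable by the agent
  await redis.hSet(AVAILABLE_KEY, "workflow-123", JSON.stringify({
    name: "Research and report",
    // Input schema pulled from the workflow's template JSON, so the agent
    // knows what to pass through the Subworkflow trigger.
    inputSchema: { topic: "string", depth: "number" },
  }));

  // "list workflows": what the agent discovers before executing anything
  const available = await redis.hGetAll(AVAILABLE_KEY);
  console.log(available);

  // "remove workflow": revoke the agent's access to a workflow
  await redis.hDel(AVAILABLE_KEY, "workflow-123");

  await redis.quit();
}

main().catch(console.error);
```

Keeping the input schema inside the stored description is what lets the agent call the execute tool with correctly shaped passthrough data.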

By Jimleuk
87496

Build an MCP server with Google Calendar and custom functions

Learn how to build an MCP Server and Client in n8n with official nodes.

> ⚠ Requires n8n version 1.88.0 or higher.

In this example, we use Google Calendar and custom functions as two separate MCP Servers, demonstrating how to integrate both native and custom tools.

How it works
The AI Agent connects to two MCP Servers. Each MCP Trigger (Server) generates a URL exposing its tools. This URL is used by an MCP Client linked to the AI Agent. Whenever you make changes to the tools, there's no need to modify the MCP Client. It automatically keeps the AI Agent informed on how to use each tool, even if you change them over time. That's the power of MCP 🙌

Who is this template for
Anyone looking to use MCP with their AI Agents.

How to set up
Instructions are included within the workflow itself.

Check out my other templates 👉 https://n8n.io/creators/solomon/
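To make the client side concrete, here is a minimal TypeScript sketch of what an MCP client does with a trigger's URL, using the official @modelcontextprotocol/sdk. The endpoint URL is a placeholder, and n8n's MCP Client node handles all of this for you; this is only a sketch of the discovery step:

```typescript
// Connect to an MCP server's URL and discover its tools at runtime.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  const transport = new SSEClientTransport(
    new URL("https://your-n8n-instance/mcp/your-path/sse") // placeholder URL
  );
  const client = new Client({ name: "demo-client", version: "1.0.0" });
  await client.connect(transport);

  // Tools are discovered live, which is why editing the server-side tools
  // never requires touching the client.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```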

By Solomon
69759

Allow users to send a sequence of messages to an AI agent in Telegram

Use Case
When creating chatbots that interface through applications such as Telegram and WhatsApp, users often send multiple shorter messages in quick succession instead of a single, longer message. This workflow accounts for that behaviour.

What it Does
This workflow allows users to send several messages in quick succession, treating them as one coherent conversation instead of separate messages requiring individual responses.

How it Works
When messages arrive, they are stored in a Supabase PostgreSQL table. The system waits briefly to see if additional messages arrive. If no new messages arrive within the waiting period, all queued messages are combined and processed as a single conversation, responded to with one unified reply, and deleted from the queue.

Setup
Create a table in Supabase called messagequeue. It needs to have the following columns: userid (uint8), message (text), and message_id (uint8). Add your Telegram, Supabase, OpenAI, and PostgreSQL credentials. Activate the workflow and test by sending multiple messages to the Telegram bot in one go. Wait ten seconds, after which you will receive a single reply to all of your messages.

How to Modify it to Your Needs
Change the value of Wait Amount in the Wait 10 Seconds node to modify the buffering window. Add a System Message to the AI Agent to tailor it to your specific use case. Replace the OpenAI sub-node to use a different language model.
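A minimal TypeScript sketch of the buffering logic, assuming the messagequeue table from the setup step above (column names as described; the SQL client and "newest message wins" check are illustrative, not the workflow's exact nodes):

```typescript
import { Client } from "pg";

const WAIT_MS = 10_000; // mirrors the "Wait 10 Seconds" node

async function handleIncoming(db: Client, userId: number, messageId: number, text: string) {
  // 1. Queue the incoming message
  await db.query(
    "INSERT INTO messagequeue (userid, message, message_id) VALUES ($1, $2, $3)",
    [userId, text, messageId]
  );

  // 2. Wait briefly to see whether more messages arrive
  await new Promise((resolve) => setTimeout(resolve, WAIT_MS));

  // 3. Only the execution holding the newest message flushes the queue
  const { rows } = await db.query(
    "SELECT message, message_id FROM messagequeue WHERE userid = $1 ORDER BY message_id",
    [userId]
  );
  const newest = rows[rows.length - 1];
  if (newest && newest.message_id === messageId) {
    const combined = rows.map((r) => r.message).join("\n");
    // ...send `combined` to the AI agent, reply once, then clear the queue
    await db.query("DELETE FROM messagequeue WHERE userid = $1", [userId]);
  }
}
```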

By Chris Carr
13921

Summarize the new documents from Google Drive and save summary in Google Sheet

This workflow is created by AI developers at WeblineIndia. It streamlines content management by automatically identifying and fetching the most recently added Google Doc file from your Google Drive. It extracts the document's content, uses an AI model to generate a concise and meaningful summary, and stores the summary in a designated Google Sheet alongside relevant details like the document name and the date it was added, providing an organized and easily accessible reference for future use. This automation simplifies document handling, enhances productivity, and ensures seamless data management.

Steps:
1. Fetch the Most Recent Document from Google Drive. Action: Use the Google Drive Node. Details: List files, filter by date to fetch the most recently added .doc file, and retrieve its file ID and metadata.
2. Extract Content from the Document. Action: Use the Google Docs Node. Details: Set the operation to "Get Content", pass the file ID, and extract the document's text content.
3. Summarize the Document Using an AI Model. Action: Use an AI Model Node (e.g., OpenAI, ChatGPT). Details: Provide the extracted text to the AI model, use a prompt to generate a summary, and capture the result.
4. Store the Summarized Content in Google Sheets. Action: Use the Google Sheets Node. Details: Append a new row to the target sheet with details such as the original document name, summary, and date added.

---

About WeblineIndia
WeblineIndia specializes in delivering innovative and custom AI solutions to simplify and automate business processes. If you need any help, please reach out to us.
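For reference, a minimal TypeScript sketch of the summarization step (Step 3) using the OpenAI SDK directly; the model name and prompt are illustrative, and the workflow's AI node may be configured differently:

```typescript
import OpenAI from "openai";

async function summarize(documentText: string): Promise<string> {
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      { role: "system", content: "Summarize the document concisely in 3-5 sentences." },
      { role: "user", content: documentText },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```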

By WeblineIndia
7257

Backup your workflows to GitHub (in subfolders)

Based on Jonathan & Solomon's work.

> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags.

How it works
This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows, then loops over the data and checks GitHub to see if a file already exists for each workflow's ID. Once checked, it will: update the file on GitHub if it exists; create a new file if it doesn't exist; ignore it if it's unchanged.

Who is this for?
People wanting to back up their workflows off-server for safety, or to migrate them to another server.
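A sketch of what the added Set node effectively computes: a file path that nests each workflow under a subfolder named after its first tag. The field names are assumptions based on the n8n API's workflow shape, and the filename scheme is illustrative:

```typescript
interface Workflow {
  id: string;
  name: string;
  tags?: { name: string }[];
}

// Derive a GitHub path like "marketing/My Workflow (abc123).json";
// workflows without tags fall back to an "untagged" folder.
function backupPath(wf: Workflow): string {
  const folder = wf.tags?.[0]?.name ?? "untagged";
  return `${folder}/${wf.name} (${wf.id}).json`;
}
```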

By Nazmy
5851

Run weekly inventories on Shopify sales

This workflow runs on a weekly schedule: it gets all your Shopify orders, calculates their total sales value, stores the data in Google Sheets, and sends a notification message to a Slack channel.
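The sales calculation amounts to summing the week's order totals before writing to Google Sheets; a tiny sketch, assuming Shopify's order schema where money fields are returned as strings:

```typescript
interface ShopifyOrder {
  total_price: string; // Shopify returns money fields as strings
}

function weeklySalesValue(orders: ShopifyOrder[]): number {
  return orders.reduce((sum, order) => sum + parseFloat(order.total_price), 0);
}
```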

By Lorena
5091

Travel AI agent - AI-powered travel planner

Overview
An n8n workflow automating business travel planning via Telegram. Uses AI and APIs to find and book flights/hotels efficiently.

Prerequisites
Telegram Bot (BotFather). API keys: OpenAI (transcription), SerpAPI (flights/hotels), DeepSeek (AI processing). n8n instance with API access.

Setup Instructions
Import Workflow: upload the JSON to n8n. Configure API Credentials: set up Telegram, OpenAI, SerpAPI, and DeepSeek keys. Webhook Activation: ensure the Telegram webhook is active with HTTPS. Test: send a Telegram message and verify execution.

Workflow Operation
User Input Processing: the Telegram bot triggers the workflow and extracts text/audio. OpenAI transcribes voice messages. AI (DeepSeek) extracts key travel details (locations, dates, accommodation needs).
Travel Search: Flights: uses SerpAPI for flight options (airlines, times, prices). Hotels: fetches accommodations with a dynamic check-out date.
AI Recommendations & Customization: DeepSeek generates structured travel plans. Users can modify prompts to adjust AI responses for personalized results. Professional, well-structured responses with links.
Response Delivery: sends travel recommendations via Telegram with clear details.

Use Cases
Ideal for business professionals, executive assistants, frequent travelers, and small businesses.

Customization & Troubleshooting
Adjust memory handling and API calls. Modify prompts to refine AI output. Ensure API keys are active and the network is accessible.
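A sketch of the flight-search step against SerpAPI's Google Flights engine; the parameter names follow SerpAPI's documented query format, but treat the exact fields as an assumption and adjust to match the workflow's HTTP node:

```typescript
async function searchFlights(from: string, to: string, outbound: string, back: string) {
  const params = new URLSearchParams({
    engine: "google_flights",
    departure_id: from,      // e.g. "CDG"
    arrival_id: to,          // e.g. "JFK"
    outbound_date: outbound, // "YYYY-MM-DD"
    return_date: back,
    api_key: process.env.SERPAPI_KEY ?? "",
  });

  const res = await fetch(`https://serpapi.com/search.json?${params}`);
  return res.json(); // flight options: airlines, times, prices
}
```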

By Badr
5004

Capture leads in HubSpot from Typeform

This workflow is triggered when a typeform is submitted, then saves the sender's information into HubSpot as a new contact.

Typeform Trigger: triggers the workflow when a typeform is submitted.
Set: sets the fields for the values from Typeform.
HubSpot 1: creates a new contact with information from Typeform.
IF: filters contacts who expressed interest in business services.
HubSpot 2: updates the contact's stage to opportunity.
Gmail: sends an email to the opportunity contacts with informational material.
NoOp: takes no action for contacts who are not interested.
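A sketch of the IF + HubSpot 2 branch: interested contacts get their lifecycle stage moved to "opportunity" via HubSpot's CRM v3 API. The interest value and property are illustrative of the template's intent, not its exact configuration:

```typescript
async function promoteIfInterested(contactId: string, interest: string) {
  if (interest !== "business services") return; // the NoOp branch

  await fetch(`https://api.hubapi.com/crm/v3/objects/contacts/${contactId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ properties: { lifecyclestage: "opportunity" } }),
  });
}
```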

By Lorena
4571

Create voice assistant interface with OpenAI GPT-4o-mini and text-to-speech

Voice Assistant Interface with n8n and OpenAI
This workflow creates a voice-activated AI assistant interface that runs directly in your browser. Users click a glowing orb to speak with the AI, which responds with voice using OpenAI's text-to-speech capabilities.

Who is it for?
This template is perfect for: developers looking to add voice interfaces to their applications; customer service teams wanting to create voice-enabled support systems; content creators building interactive voice experiences; anyone interested in creating their own "Alexa-like" assistant.

How it works
The workflow consists of two main parts:
Frontend Interface: a beautiful animated orb that users click to activate voice recording.
Backend Processing: receives the audio transcription, processes it through an AI agent with memory, and returns voice responses.
The system uses: the Web Speech API for voice recognition (browser-based); OpenAI GPT-4o-mini for intelligent responses; OpenAI Text-to-Speech for voice synthesis; session memory to maintain conversation context.

Setup requirements
n8n instance (self-hosted or cloud). OpenAI API key with access to the GPT-4o-mini model and the Text-to-Speech API. Modern web browser with Web Speech API support (Chrome, Edge, Safari).

How to set up
Import the workflow into your n8n instance. Add your OpenAI credentials to both OpenAI nodes. Copy the webhook URL from the "Audio Processing Endpoint" node. Edit the "Voice Assistant UI" node and replace YOUR_WEBHOOK_URL_HERE with your webhook URL. Access the "Voice Interface Endpoint" webhook URL in your browser. Click the orb and start talking!

How to customize the workflow
Change the AI personality: edit the system message in the "Process User Query" node. Modify the visual style: customize the CSS in the "Voice Assistant UI" node. Add more capabilities: connect additional tools to the AI Agent. Change the voice: select a different voice in the "Generate Voice Response" node. Adjust memory: modify the context window length in the "Conversation Memory" node.

Demo
Watch the template in action: https://youtu.be/0bMdJcRMnZY
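A browser-side sketch of the orb's behaviour: transcribe speech with the Web Speech API, post the transcript to the n8n webhook, and play the voice reply. The request payload shape and audio response handling are assumptions; the template's bundled UI node implements its own version of this:

```typescript
// SpeechRecognition is prefixed in some browsers (hence the webkit fallback).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

function startListening(webhookUrl: string) {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";

  recognition.onresult = async (event: any) => {
    const transcript = event.results[0][0].transcript;

    // Send the transcript to the workflow's webhook (payload shape assumed)
    const res = await fetch(webhookUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: transcript }),
    });

    // Assume the workflow returns synthesized audio from OpenAI TTS
    const audio = new Audio(URL.createObjectURL(await res.blob()));
    await audio.play();
  };

  recognition.start();
}
```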

By Anderson Adelino
4123

Build a WhatsApp assistant with memory, Google Suite & multi-AI research and imaging

The "WhatsApp Productivity Assistant with Memory and AI Imaging" is a comprehensive n8n workflow that transforms your WhatsApp into a powerful, multi-talented AI assistant. It's designed to handle a wide range of tasks by understanding user messages, analyzing images, and connecting to various external tools and services. The assistant can hold natural conversations, remember past interactions using a MongoDB vector store (RAG), and decide which tool is best suited for a user's request. Whether you need to check your schedule, research a topic, get the latest news, create an image, or even analyze a picture you send, this workflow orchestrates it all seamlessly through a single WhatsApp chat interface. The workflow is structured into several interconnected components: WhatsApp Trigger & Incoming Message Processing: This is the entry point, starting when a message (text or image) is received via WhatsApp. A Route Message by Type (Image/Text) node then intelligently routes the message based on its content type. A Typing.... node sends a typing indicator to the user for a better experience. If an image is received, it's downloaded, processed via an HTTP Request, and analyzed by the Analyze image node. The Code1 node then standardizes both text and image analysis output into a single, unified input for the main AI agent. Core AI Agent: This is the brain of the operation. The AI Agent1 node receives the user's input, maintains short-term conversational memory using Simple Memory, and uses a powerful language model (gpt-oss-120b2 or gpt-oss-120b1) to decide which tool or sub-agent to use. It orchestrates all the other agents and tools. Productivity Tools Agent: This group of nodes connects the assistant to your personal productivity suite. It includes sub-agents and tools for managing Google Calendar, Google Tasks, and Gmail, allowing you to schedule events, manage to-dos, and read emails. It leverages a language model (gpt-4.1-mini or gemini-2.5-flash) for understanding and executing commands within these tools. Research Tool Agent: This agent handles all research-related queries. It has access to multiple search tools (Brave Web Search, Brave News Search, Wikipedia, Tavily, and a custom perprlexcia search) to find the most accurate and up-to-date information from the web. It uses a language model (gpt-oss-120b or gpt-4.1-nanoChat Model1) for reasoning. Long-Term Memory Webhook: A dedicated sub-workflow (Webhook2) that processes conversation history, extracts key information using Extract Memory Info, and stores it in a MongoDB Atlas Vector Store for long-term memory. This allows the AI agent to remember past preferences and facts. Image Generation Webhook: A specialized sub-workflow (Webhook3) triggered when a user asks to create an image. It uses a dedicated AI Agent with MongoDB Atlas Vector Store1 for contextual image prompt generation, Clean Prompt Text1 to refine the prompt, an HTTP Request to an external image generation API (e.g., Together.xyz), and then converts and sends the generated image back to the user via WhatsApp. --- Use Cases Personal Assistant: Schedule appointments, create tasks, read recent emails, and manage your daily agenda directly from WhatsApp. Information Retrieval: Ask any factual, news, or research-based question and get real-time answers from various web sources. Creative Content Generation: Request the AI to generate images based on your descriptions for logos, artwork, or social media content. 
Smart Communication: Engage in natural, contextual conversations with an AI that remembers past interactions. Image Analysis: Send an image and ask the AI to describe its contents or answer questions about it. --- Pre-conditions Before importing and running this template, you will need: Self-hosted n8n Instance: This template requires a self-hosted n8n instance as it uses webhooks that need public accessibility. WhatsApp Business Account: A Meta Developer Account configured for WhatsApp Business Platform API access. MongoDB Atlas Account: A MongoDB Atlas cluster with a database and collection set up for the vector store. Google Cloud Project: Configured with API access for Google Calendar, Google Tasks, and Gmail. API Keys/Accounts for: OpenWeatherMap: For weather forecasts. Groq, OpenRouter, or Vercel AI Gateway: For various Language Models (e.g., gpt-oss-120b, gpt-5-nano, gpt-4o-mini). Mistral Cloud: For embedding models (e.g., codestral-embed-2505). Brave Search: For web and news searches. Tavily API: For structured search results. Together.xyz or similar Image Generation API: For creating images. Perplexity API (or self-hosted instance): For the perprlexcia tool (the current URL http://self hoseted perplexcia/api/search implies a self-hosted or custom endpoint). Publicly Accessible URLs: Your n8n instance and any custom webhook endpoints (like perprlexcia) must be publicly accessible. --- Requirements (n8n Credentials) You will need to set up the following credentials within your n8n instance: WhatsApp OAuth account: For the WhatsApp Trigger node. WhatsApp account: For Send message2, Send message3, Download media, and Typing.... nodes. Google Palm Api account: For Analyze image, Google Gemini Chat Model, gemini-2.5-flash, and Google Gemini Chat Model5 nodes. OpenWeatherMap account: For the Get Weather Forecast node. Groq account: For gpt-oss-120b node. Google Calendar OAuth2Api account: For the Google Calendar tools. MongoDB account: For MongoDB Atlas Vector Store nodes. OpenRouter account: For gpt-5-nano and gpt-4.1-nanoChat Model1 nodes. Gmail account : For Get many messages and Get a message nodes (ensure correct Gmail OAuth2 setup for each). Google Tasks account: For the Google Tasks tools. Bearer Auth account: For HTTP Request5 (used in media download). Brave Search account: For Brave Web Search and Brave News Search nodes. Vercel Ai Gateway Api account: For gpt-4.1-mini, gpt-oss-120b, gpt-oss-120b2, and gpt-4.1-nano nodes. HTTP Header Auth account: For Tavily web search (create a new one named "Tavily API Key" with Authorization: Bearer YOURTAVILYAPI_KEY) and HTTP Request (for Together.xyz, e.g., "Together.xyz API Key"). Mistral Cloud account: For codestral-embed-2505, codestral-embed-, and codestral-embed-2506 nodes.
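A sketch of the "Route Message by Type (Image/Text)" step and the Code1 normalization: both branches end up as one unified text input for the main agent. The message shape loosely follows WhatsApp webhook payloads and is an assumption, not the workflow's exact expressions:

```typescript
interface IncomingMessage {
  type: "text" | "image";
  text?: { body: string };
  image?: { id: string }; // WhatsApp media ID
}

async function normalize(
  msg: IncomingMessage,
  analyzeImage: (mediaId: string) => Promise<string> // stands in for the vision step
): Promise<string> {
  if (msg.type === "image" && msg.image) {
    // Image branch: download + vision analysis, then treat the result as text
    const description = await analyzeImage(msg.image.id);
    return `User sent an image. Analysis: ${description}`;
  }
  // Text branch passes straight through
  return msg.text?.body ?? "";
}
```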

By Iniyavan JC
3593

Extract business contact leads from Google Maps with RapidAPI and Google Sheets

Follow me on LinkedIn for more!

Category: Lead Generation, Data Collection, Business Intelligence
Tags: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation
Difficulty Level: Intermediate
Estimated Setup Time: 15-20 minutes

Template Description

Overview
This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage.

Key Features
🔍 Keyword-Based Google Maps Scraping: search for any business type in any location using natural language queries
📧 Contact Information Extraction: automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
🚫 Smart Duplicate Prevention: two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries
📊 Google Sheets Integration: seamless data storage with automatic organization and structure
🌍 Multi-Location Support: process multiple cities, regions, or countries in a single workflow execution
⚡ Rate Limiting & Error Handling: built-in delays and error handling ensure reliable, uninterrupted execution
💰 Cost Optimization: intelligent batching and duplicate prevention minimize API usage and costs
📱 Comprehensive Data Collection: gather business names, addresses, ratings, reviews, websites, verification status, and more

Prerequisites
Required Services & Accounts: RapidAPI account with a subscription to the "Local Business Data" API; Google account for Google Sheets integration; n8n instance (cloud or self-hosted).
Required Credentials: RapidAPI HTTP Header Authentication for the Local Business Data API; Google Sheets OAuth2 for data storage and retrieval.

Setup Instructions

Step 1: RapidAPI Configuration
Create a RapidAPI account: sign up at RapidAPI.com, navigate to the "Local Business Data" API, and subscribe to a plan (the Basic plan supports 1000 requests/month).
Get API credentials: copy your X-RapidAPI-Key from the API dashboard and note the host: local-business-data.p.rapidapi.com.
Configure the n8n credential: in n8n, go to Settings → Credentials → Create New, choose type HTTP Header Auth, name it "RapidAPI Local Business Data", and add the headers X-RapidAPI-Key: YOUR_API_KEY and X-RapidAPI-Host: local-business-data.p.rapidapi.com.

Step 2: Google Sheets Setup
Enable the Google Sheets API: go to the Google Cloud Console, enable the Google Sheets API for your project, and create OAuth2 credentials.
Configure the n8n credential: in n8n, go to Settings → Credentials → Create New, choose type Google Sheets OAuth2 API, and follow the OAuth2 setup process.
Create the Google Sheet structure: create a new Google Sheet with these tabs:

keyword_searches sheet:

| select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

stores_data sheet: the workflow will automatically create columns for business data including business_id, name, phone_number, email, website, full_address, rating, review_count, linkedin, instagram, query, lat, lon, and 25+ more fields.

Step 3: Workflow Configuration
Import the workflow: copy the provided JSON and use Import from JSON in n8n.
Update placeholder values: replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID and update the credential references to match your setup.
Configure search parameters (optional): adjust limit: 1-100 results per query (default: 100); modify zoom: 10-18 search radius (default: 13); change language: EN, ES, FR, etc. (default: EN).

How It Works

Workflow Process
Load Search Criteria: reads queries marked with "X" from the keyword_searches sheet.
Load Existing Data: retrieves previously processed data for duplicate detection.
Filter New Searches: a smart merge identifies only new query+location combinations.
Process Each Location: sequential processing prevents API overload.
Configure Parameters: prepares search parameters from sheet data.
API Request: calls RapidAPI to extract business information (see the sketch after this description).
Parse Data: structures and cleans all business information.
Save Results: stores new leads in the stores_data sheet.
Rate Limiting: 10-second delay between requests.
Loop: continues until all new searches are processed.

Duplicate Prevention Logic
Search Level: compares new queries against existing data using the query+latitude combination, skipping already processed searches.
Business Level: each business receives a unique business_id to prevent duplicate entries, even across different searches.

Data Extracted
Business Information: business name, full address, phone number; website URL, Google My Business rating and review count; business type, price level, verification status; geographic coordinates (latitude/longitude); detailed location breakdown (street, city, state, country, zip).
Contact Details: email addresses (when publicly available); social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest; additional phone numbers; direct Google Maps and reviews links.
Search Metadata: original search query and parameters; extraction timestamp and geographic data; API response details for tracking.

Use Cases
Lead Generation: generate targeted prospect lists for B2B sales; build location-specific customer databases; create industry-specific contact lists; develop territory-based sales strategies.
Market Research: analyze competitor density in target markets; study business distribution
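A sketch of the API Request step, using the header names from Step 1 above. The endpoint path and query parameter names are assumptions about the Local Business Data API; check the API's dashboard for the exact request format:

```typescript
async function searchBusinesses(query: string, lat: number, lon: number) {
  const params = new URLSearchParams({
    query,                 // e.g. "Restaurants Madrid"
    lat: String(lat),
    lng: String(lon),
    limit: "100",          // the template's default
    zoom: "13",            // search radius, per the template's defaults
    language: "en",
  });

  const res = await fetch(
    `https://local-business-data.p.rapidapi.com/search?${params}`, // path assumed
    {
      headers: {
        "X-RapidAPI-Key": process.env.RAPIDAPI_KEY ?? "",
        "X-RapidAPI-Host": "local-business-data.p.rapidapi.com",
      },
    }
  );
  return res.json();
}
```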

By Javier Hita
3122

Create a document in Outline for each new GitLab release

Create a document in Outline for each new GitLab release. Depends on this PR being merged.

Copy the workflow.
Set credentials for GitLab and Outline.
Inside the HTTP Request node, set the following: collectionId, parentDocumentId (or remove it if unwanted).

Example result
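For orientation, a sketch of what the HTTP Request node sends to Outline's documents.create endpoint with the two IDs you configure. It follows Outline's public API; the release fields and placeholder IDs are illustrative, and the actual payload mapping comes from the GitLab trigger:

```typescript
async function createReleaseDoc(release: { name: string; description: string }) {
  await fetch("https://app.getoutline.com/api/documents.create", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OUTLINE_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title: release.name,         // e.g. the GitLab release name
      text: release.description,   // release notes as the document body
      collectionId: "YOUR_COLLECTION_ID",
      parentDocumentId: "YOUR_PARENT_DOCUMENT_ID", // optional, per the setup note
      publish: true,
    }),
  });
}
```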

By Manu
2931