
Build an MCP server with Airtable

Who is this for?
This template is designed for anyone who wants to integrate MCP with their AI agents using Airtable. Whether you're a developer, a data analyst, or an automation enthusiast, if you're looking to leverage the power of MCP and Airtable in your n8n workflows, this template is for you.

What problem is this workflow solving?
This template caters to MCP beginners seeking a hands-on example and to developers looking to integrate the Airtable MCP service. When integrating MCP with Airtable, manually updating AI agents after changes to Airtable data on the MCP Server is time-consuming and error-prone. This template automates the process, enabling the AI agent to instantly recognize changes made to Airtable on the MCP Server. In data management, for example, it ensures that record updates or additions in Airtable are automatically detected by the AI agent. With detailed steps, it simplifies the integration process for all users.

What this workflow does
This workflow focuses on integrating MCP with Airtable within n8n. Specifically, it allows you to build an MCP Server and Client using Airtable nodes in n8n. Any changes made to the Airtable base/table on the MCP Server are automatically recognized by the MCP Client in the workflow. This means that you can make changes to your Airtable (such as adding, deleting, or modifying records) on the MCP Server, and the MCP Client in the n8n workflow will immediately detect these changes without any manual intervention.

Setup Requirements
- An active n8n account.
- Access to the Airtable API.
- A sample base and rows in Airtable that you can use to test (see the seeding sketch below).
- An API key from your preferred LLM to power the AI agent.

Step-by-step guide
1. Create a new workflow in n8n: Log in to your n8n account and create a new workflow.
2. Add Airtable nodes: Search for and add the Airtable nodes you want the MCP Client to have access to.
3. Set up the MCP Server and Client: Use the appropriate nodes in n8n to set up the MCP Server and Client, and connect the Airtable nodes to the MCP nodes as required.
4. Activate and test the workflow: Once all credentials are configured and the table data is synced, talk to the chat trigger and try adding rows, deleting records, or finding and updating cells.

How to customize this workflow to your needs
- Modify the triggers: Change the conditions under which the MCP Client detects changes. For example, you can set it to detect changes only in specific fields or based on certain record values in Airtable.
- Integrate with other services: Add more nodes to the workflow to integrate with other services, such as sending notifications to Slack or triggering further actions based on the detected Airtable changes.

---
Need help? Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
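For testing, you can seed a few sample rows through the Airtable REST API before talking to the chat trigger. A minimal Python sketch, assuming a hypothetical table name and field names and reading the base ID and personal access token from the environment (none of these identifiers come from the template itself):

```python
import os
import requests

# Hypothetical configuration -- replace with your own base, table, and token.
BASE_ID = os.environ["AIRTABLE_BASE_ID"]
TABLE_NAME = "Tasks"          # assumed table name
TOKEN = os.environ["AIRTABLE_TOKEN"]

url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create a couple of sample records that the MCP Client should then be able to find.
payload = {"records": [
    {"fields": {"Name": "Demo task", "Status": "Todo"}},
    {"fields": {"Name": "Second task", "Status": "Done"}},
]}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print([record["id"] for record in resp.json()["records"]])
```

After seeding, ask the chat trigger to list or update these rows to confirm the MCP Client picks up the changes.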

By Aitor | 1Node
8119

Upload invoices from Gmail to Google Drive

Upload invoices from Gmail to Google Drive daily

By Jason
4729

Convert boring images to stunning photos and videos

Tutorial Video

✨ Transform ordinary photos into AI masterpieces
This powerful workflow creates a complete AI image editing system that your Telegram contacts can use with a simple message. Let users send images with creative instructions and watch as cutting-edge AI transforms their ideas into reality.

🤖 What This Workflow Does
Connect your Telegram bot to OpenAI and Replicate's advanced image models to:
- Edit photos based on text descriptions
- Generate creative variations of original images
- Deliver professional-quality results in seconds
- Handle the entire process automatically

🎨 Perfect For
- Digital creators seeking quick design iterations
- Photography enthusiasts wanting AI enhancements
- Product marketers creating concept visualizations
- Community managers offering image editing services
- Anyone looking to explore AI image capabilities

🔧 Easy Setup
Just configure your API credentials, customize a few prompts, and your Telegram bot becomes a powerful AI image editor. The workflow handles all the technical complexity, including file conversions, API communications, and asynchronous processing.

💡 Expandable Design
Built with flexibility in mind, you can easily:
- Add additional AI models
- Implement image moderation
- Create branching paths based on user requests
- Integrate with other services like cloud storage

Elevate your image editing capabilities with this seamless blend of messaging and AI technology!

By Rish
4681

Build an MCP server which answers questions with retrieval augmented generation

Build an MCP Server which has access to a semantic database to perform Retrieval Augmented Generation (RAG).

Tutorial
Click here to watch the full tutorial on YouTube.

How it works
This MCP Server has access to a local semantic database (Qdrant) and answers questions asked through the MCP Client.

AI Agent Template
Click here to navigate to the AI Agent n8n workflow which uses this MCP server.

Warning
This flow only runs locally and cannot be executed on the n8n cloud platform because of the MCP Client community node.

Installation
- Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit.
- Make sure to install Llama 3.2, and mxbai-embed-large as the embedding model.
- Activate the n8n flow.
- Run the "RAG Ingestion Pipeline" and upload some PDF documents (a sketch of this ingestion step follows below).

How to use it
Run the MCP Client workflow and ask a question. It will be answered either from the semantic database or via the search engine API.

More detailed instructions
Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data
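As an illustration of the ingestion step, here is a minimal Python sketch that chunks a text file, embeds each chunk with mxbai-embed-large via Ollama, and stores the vectors in a local Qdrant collection. The collection name, chunk size, and payload fields are assumptions for the sketch, not values taken from the template:

```python
import uuid
import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")
COLLECTION = "documents"  # assumed collection name

# mxbai-embed-large produces 1024-dimensional vectors.
client.recreate_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

text = open("sample.txt", encoding="utf-8").read()
# 500-character chunks with 50 characters of overlap (assumed sizes).
chunks = [text[i:i + 500] for i in range(0, len(text), 450)]

points = []
for chunk in chunks:
    resp = ollama.embeddings(model="mxbai-embed-large", prompt=chunk)
    points.append(
        PointStruct(id=str(uuid.uuid4()), vector=resp["embedding"], payload={"text": chunk})
    )

client.upsert(collection_name=COLLECTION, points=points)
print(f"Stored {len(points)} chunks in '{COLLECTION}'")
```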

By Thomas Janssen
3858

Build a RAG system with automatic citations using Qdrant, Gemini & OpenAI

This workflow implements a Retrieval-Augmented Generation (RAG) system that:
- Stores vectorized documents in Qdrant,
- Retrieves relevant content based on user input,
- Generates AI answers using Google Gemini,
- Automatically cites the document sources (from Google Drive).

---

Workflow Steps
1. Create Qdrant Collection: A REST API node creates a new collection in Qdrant with the specified vector size (1536) and cosine similarity.
2. Load Files from Google Drive: The workflow lists all files in a Google Drive folder, downloads them as plain text, and loops through each.
3. Text Preprocessing & Embedding: Documents are split into chunks (500 characters, with 50-character overlap). Embeddings are created using OpenAI embeddings (text-embedding-3-small assumed). Metadata (file name and ID) is attached to each chunk.
4. Store in Qdrant: All vectors, along with metadata, are inserted into the Qdrant collection.
5. Chat Input & Retrieval: When a chat message is received, the question is embedded and matched against Qdrant. The top 5 relevant document chunks are retrieved. A Gemini model is used to generate the answer based on those sources.
6. Source Aggregation & Response: File IDs and names are deduplicated. The AI response is combined with a list of cited documents (filenames). Final output: AI Response + Sources: ["Document1", "Document2"]

---

Main Advantages
- End-to-end Automation: From document ingestion to chat response generation, fully automated with no manual steps.
- Scalable Knowledge Base: Easy to expand by simply adding files to the Google Drive folder.
- Traceable Responses: Each answer includes its source files, increasing transparency and trustworthiness.
- Modular Design: Each step (embedding, storage, retrieval, response) is isolated and reusable.
- Multi-provider AI: Combines OpenAI (for embeddings) and Google Gemini (for chat), optimizing performance and flexibility.
- Secure & Customizable: Uses API credentials and configurable chunk size, collection name, etc.

---

How It Works
Document Processing & Vectorization: The workflow retrieves documents from a specified Google Drive folder. Each file is downloaded, split into chunks (using a recursive text splitter), and converted into embeddings via OpenAI. The embeddings, along with metadata (file ID and name), are stored in a Qdrant vector database under the collection negozio-emporio-verde.
Query Handling & Response Generation: When a user submits a chat message, the workflow embeds the query using OpenAI, retrieves the top 5 relevant document chunks from Qdrant, uses Google Gemini to generate a response based on the retrieved context, and aggregates and deduplicates the source file names from the retrieved chunks. The final output includes both the AI-generated response and a list of source documents (e.g., Sources: ["FAQ.pdf", "Policy.txt"]). A sketch of this retrieval and citation step follows below.

---

Set Up Steps
1. Configure Qdrant Collection: Replace QDRANTURL and COLLECTION in the "Create collection" HTTP node to initialize the Qdrant collection with vector size 1536 (the OpenAI embedding dimension) and the cosine distance metric. Ensure the "Clear collection" node is configured to reset the collection if needed.
2. Google Drive & OpenAI Integration: Link the Google Drive node to the target folder (Test Negozio in this example). Verify that the OpenAI and Google Gemini API credentials are correctly set in their respective nodes.
3. Metadata & Output Customization: Adjust the "Aggregate" and "Response" nodes if additional metadata fields are needed. Modify the "Output" node to format the response (e.g., changing Sources: {{...}} to match your preferred style).
4. Testing: Trigger the workflow manually to test document ingestion. Use the chat interface to verify that responses include accurate source attribution.

Note: Replace placeholder values (e.g., QDRANTURL) with actual endpoints before deployment.

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
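A minimal Python sketch of the retrieval and citation step described above, assuming the collection name from the template and the text-embedding-3-small model; the payload keys ("text", "file_name") and the Qdrant URL are placeholders for illustration:

```python
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI()                                  # uses OPENAI_API_KEY
qdrant = QdrantClient(url="http://localhost:6333")        # replace with your QDRANTURL
COLLECTION = "negozio-emporio-verde"

question = "What is the return policy?"
embedding = openai_client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# Retrieve the top 5 relevant chunks; each payload carries the chunk text and file metadata.
hits = qdrant.search(collection_name=COLLECTION, query_vector=embedding, limit=5)

context = "\n\n".join(hit.payload["text"] for hit in hits)          # assumed payload key
sources = sorted({hit.payload["file_name"] for hit in hits})        # deduplicated citations

# The context would be passed to Gemini for answer generation; the sources list
# is what gets appended to the final response.
print("Sources:", sources)
```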

By Davide
2661

Business model canvas AI-powered generator (LLM flexible)

👥 Who is this for?
- Startup founders validating or pitching new ideas
- Business consultants running strategy sessions
- Product teams defining business logic visually
- Agencies offering planning frameworks to clients

---

❓ What problem does this workflow solve?
Creating a Business Model Canvas manually is time-consuming and often scattered across tools. This workflow solves that by allowing users to generate a fully populated, formatted, and printable Business Model Canvas in seconds using the power of AI, all structured in a professional A4 landscape layout.

---

⚙️ What this workflow does
1. Starts with a chat input asking for your business idea
2. Sends it to 9 separate AI agents, each focused on one section: Key Partners, Key Activities, Value Proposition, Customer Relationships, Customer Segments, Key Resources, Channels, Cost Structure, Revenue Streams
3. Uses your preferred LLM (see below) to generate meaningful bullet points
4. Converts the output into a specific format
5. Merges all sections into a clean, A4-styled HTML canvas
6. Exports the result as a downloadable .html file

(A sketch of the per-section generation step follows below.)

---

🛠️ Setup
1. Import the workflow into your n8n instance
2. Start the flow from the "When chat message received" node
3. Describe your business idea when prompted (e.g., "Online bookshop with rare Persian literature")
4. Wait for AI processing to complete
5. Visit the last node, "HTML code to HTML file"
6. Click Download to get your final canvas in .html format

---

🤖 LLM Flexibility (Choose Your Model)
This template supports any AI model with a chat interface:
- Ollama (self-hosted models like LLaMA, etc.)
- OpenAI (GPT-4, GPT-3.5)
- Anything with a compatible node

You can easily change the LLM by updating the Language Model node. There is no need to modify any other logic or formatting.

---

🧪 How to customize this workflow
- Change the LLM model from the Ollama node to OpenAI, etc.
- Modify the final HTML layout in the "Turn to HTML" node
- Add a PDF export, email delivery, or Google Drive sync
- Replace the chat trigger with a webform, CRM hook, etc.

---

✅ Requirements
- A working LLM integration (Ollama or OpenAI recommended)
- n8n (self-hosted or cloud)

---

📌 Notes
- Sticky notes included for setup and instructions
- Each node is clearly named by function (e.g., "Customer Segments Generator")
- Designed for speed, structure, and professional presentation

---

📩 Need help?
For setup questions, custom features, or LLM integration support, contact: sinamirshafiee@gmail.com
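A minimal Python sketch of the per-section generation and HTML assembly, using OpenAI as the interchangeable LLM. The model name, prompt wording, and HTML styling are assumptions for illustration, not the template's exact nodes:

```python
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY; any chat-capable LLM could be swapped in
SECTIONS = [
    "Key Partners", "Key Activities", "Value Proposition",
    "Customer Relationships", "Customer Segments", "Key Resources",
    "Channels", "Cost Structure", "Revenue Streams",
]

idea = "Online bookshop with rare Persian literature"
blocks = []
for section in SECTIONS:
    # One focused prompt per canvas section, mirroring the nine AI agents.
    prompt = (
        f"For the business idea '{idea}', list 3-5 concise bullet points "
        f"for the '{section}' block of a Business Model Canvas."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    content = resp.choices[0].message.content
    blocks.append(f"<section><h2>{section}</h2><div>{content}</div></section>")

# Merge all nine sections into a single landscape-oriented HTML document.
html = "<html><body style='width:297mm'>" + "".join(blocks) + "</body></html>"
with open("business_model_canvas.html", "w", encoding="utf-8") as f:
    f.write(html)
```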

By Sina
2012

Get Real-time Stock Analysis and Rankings with Danelfin's AI Analytics API

Danelfin MCP Server

About Danelfin
Danelfin is an AI-powered stock analytics platform that helps investors find the best stocks and optimize their portfolios with explainable AI insights to make smarter, data-driven investment decisions. The platform uses AI-driven stock analytics to optimize investment strategies and uses explainable artificial intelligence to help everyone make smart, data-driven investment decisions. It provides API solutions for developers, analysts, and fintech firms wanting to integrate predictive stock data into their own applications.

Key Features
- AI Stock Picker: Advanced algorithms to identify high-potential stocks
- Portfolio Optimization: Data-driven portfolio management and optimization tools
- Explainable AI: Transparent AI insights that show the reasoning behind recommendations
- Predictive Analytics: AI Scores and forecasting capabilities
- Multi-Asset Coverage: Analysis of stocks and ETFs across various markets

MCP Server Endpoints
This Danelfin MCP (Model Context Protocol) server exposes three main endpoints that provide comprehensive market analysis capabilities:

🏆 /ranking Endpoint
GET: https://apirest.dan...
The ranking endpoint provides AI-powered stock rankings and ratings. This tool allows users to:
- Access ranked lists of stocks based on AI-generated scores
- Retrieve performance rankings across different timeframes
- Get comparative analysis of stock performance
- Access predictive AI scores for investment decision-making

🏭 /sectors Endpoint
GET: https://apirest.dan...
The sectors endpoint provides analysis for US market stocks, including AI ratings, prices, stock charts, and technical, fundamental, and sentiment analysis organized by market sectors. Features include:
- Sector-wise market analysis and performance metrics
- AI ratings and price data for sector classifications
- Technical and fundamental analysis by sector
- Sentiment analysis across different market sectors
- Comparative sector performance insights

🏢 /industries Endpoint
GET: https://apirest.dan...
The industries endpoint offers granular industry-level analysis within market sectors:
- Industry-specific stock analysis and ratings
- Performance metrics at the industry classification level
- AI-powered insights for specific industry verticals
- Comparative industry analysis within sectors
- Industry trend analysis and forecasting

Integration Benefits
This MCP server enables seamless integration of Danelfin's AI-powered stock analysis capabilities into applications and workflows. Users can:
- Access Real-time Data: Get up-to-date stock rankings, sector performance, and industry analysis
- Leverage AI Insights: Utilize explainable AI recommendations for investment decisions
- Streamline Research: Automate stock research workflows with comprehensive market data
- Enhance Decision-Making: Make data-driven investment choices with predictive analytics
- Scale Analysis: Process large-scale market analysis across multiple dimensions

Use Cases
- Investment Research: Comprehensive stock analysis and ranking for portfolio managers
- Algorithmic Trading: Integration of AI scores into trading algorithms and strategies
- Financial Advisory: Enhanced client advisory services with AI-powered insights
- Risk Management: Sector and industry analysis for portfolio risk assessment
- Market Analysis: Real-time market intelligence for institutional investors

The Danelfin MCP server bridges the gap between advanced AI stock analytics and practical investment applications, providing developers and financial professionals with powerful tools for data-driven market analysis.
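The endpoint URLs above are truncated in this listing, so the sketch below reads the base URL and API key from the environment. It is a hypothetical illustration of calling the /ranking endpoint from Python; the header name and query parameter are assumptions and should be checked against Danelfin's API documentation:

```python
import os
import requests

# Hypothetical configuration -- consult Danelfin's API docs for the real base URL,
# authentication scheme, and supported query parameters.
BASE_URL = os.environ["DANELFIN_API_BASE"]
API_KEY = os.environ["DANELFIN_API_KEY"]

resp = requests.get(
    f"{BASE_URL}/ranking",
    headers={"api-key": API_KEY},       # assumed header name
    params={"date": "2024-01-02"},      # assumed parameter
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```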

By Paul
1280

One-way sync between Pipedrive and HubSpot

This workflow synchronizes data one-way from Pipedrive to HubSpot. The Cron node schedules the workflow to run every minute. The Pipedrive and Hubspot1 nodes pull in the list of persons from Pipedrive and the list of contacts from HubSpot. The Merge node, with the Remove Key Matches option, identifies the items that exist only in Pipedrive. The Hubspot2 node takes those unique items and adds them to HubSpot.
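The Merge node's Remove Key Matches behaviour amounts to a diff on a shared key. A minimal Python sketch of that logic, assuming email is the matching key (the field names and sample data are illustrative):

```python
# Persons from Pipedrive and contacts from HubSpot, keyed by email address.
pipedrive_persons = [
    {"email": "ada@example.com", "name": "Ada Lovelace"},
    {"email": "alan@example.com", "name": "Alan Turing"},
]
hubspot_contacts = [
    {"email": "ada@example.com", "name": "Ada Lovelace"},
]

hubspot_emails = {contact["email"] for contact in hubspot_contacts}

# Items that exist only in Pipedrive -- the ones the Hubspot2 node would create.
to_create = [p for p in pipedrive_persons if p["email"] not in hubspot_emails]
print(to_create)  # [{'email': 'alan@example.com', 'name': 'Alan Turing'}]
```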

By Lorena
981

Check email via AI agent with Mailcheck Tool MCP Server

Complete MCP server exposing all Mailcheck Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Mailcheck Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Mailcheck Tool node with full error handling

📋 Available Operations (1 total)
Every possible Mailcheck Tool operation is included:
🔧 Email (1 operation)
• Check an email

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Mailcheck Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Mailcheck Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.

By David Ashby
327

Extract timesheet data with Mistral OCR & Gmail human verification

📖 Description

🔹 How it works
This workflow introduces an AI + Human-in-the-Loop pipeline for employee timesheet management. It combines the power of Google Drive, AI (OCR + LLM), and Gmail with a human review step to ensure accuracy and compliance.
1. AI-Powered File Discovery: Scans a Google Drive folder for new or updated timesheet files (PDF, Word, Excel, images).
2. AI Data Extraction: Uses OCR and an LLM (Mistral) to intelligently read and extract structured data. Supports multiple formats: PDF, Word (DOC/DOCX), Excel (XLS/XLSX), and image files (JPG, PNG, scanned documents). Creates clean JSON with file details and timesheet logs (date, hours worked, tasks, notes).
3. Smart Data Formatting: Converts the AI output into a clear HTML summary table for easy review. Flags potential anomalies (missing hours, duplicate dates, irregular entries). A sketch of this step follows below.
4. Human-in-the-Loop Verification: Sends an approval email via Gmail containing the file metadata, the AI-generated HTML summary, and a JSON attachment of the raw extracted data. HR/managers review the summary and approve or reject before final actions occur.
5. Post-Approval Automation (optional): Approved records can be saved in a separate Google Drive folder. Employees or HR receive confirmation emails.

---

⚙️ Set up steps
1. Connect Credentials: Add Google Drive and Gmail credentials in n8n. Configure Mistral (or any LLM) API credentials.
2. Configure Google Drive: In the "Search files and folders" node, replace the folderId with your company's timesheet folder ID.
3. Customize Extraction Schema: Sticky notes explain how the JSON output is structured. Adapt it for your organization's needs (e.g., overtime, project codes).
4. Set Up Human Verification Emails: Update the Gmail node recipients to your HR or approval team. Customize the email body (AI summary + JSON file attached).
5. Activate & Test: Enable the workflow and upload a sample timesheet to trigger the AI + human verification loop.

---

⚡ Result: A robust AI + Human-in-the-Loop workflow that reduces repetitive data entry, prevents payroll errors, and gives HR full confidence before final approval.
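As an illustration of the Smart Data Formatting step, a minimal Python sketch that turns extracted timesheet JSON into an HTML review table and flags missing hours and duplicate dates. The JSON field names are assumptions, not the template's exact schema:

```python
from collections import Counter

# Example of the kind of structured output the OCR/LLM step might produce.
logs = [
    {"date": "2024-03-01", "hours": 8, "tasks": "Client onboarding"},
    {"date": "2024-03-02", "hours": None, "tasks": "Internal meeting"},
    {"date": "2024-03-02", "hours": 4, "tasks": "Reporting"},
]

date_counts = Counter(entry["date"] for entry in logs)
rows = []
for entry in logs:
    flags = []
    if entry["hours"] is None:
        flags.append("missing hours")
    if date_counts[entry["date"]] > 1:
        flags.append("duplicate date")
    rows.append(
        f"<tr><td>{entry['date']}</td><td>{entry['hours'] or ''}</td>"
        f"<td>{entry['tasks']}</td><td>{', '.join(flags)}</td></tr>"
    )

# HTML summary table that would be embedded in the approval email.
html = (
    "<table><tr><th>Date</th><th>Hours</th><th>Tasks</th><th>Flags</th></tr>"
    + "".join(rows)
    + "</table>"
)
print(html)
```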

By ResilNext
320

Generate AI-powered lease renewal offers with Ollama LLM, Supabase and Gmail

📄 Automated Lease Renewal Offer by Email

✅ Features
- Automated Lease Offer Generation using AI (Ollama model)
- Duplicate File Check to avoid reprocessing the same customer
- Personalized Offer Letter creation based on customer details from Supabase
- PDF/Text File Conversion for formatted output
- Automatic Google Drive Management for storing and retrieving files
- Email Sending with the generated offer letter attached
- Seamless Integration with Supabase, Google Drive, Gmail, and an AI LLM

⚙️ How It Works
1. Trigger: The workflow starts on form submission with customer details.
2. Customer Lookup: Searches Supabase for customer data and updates customer information if needed.
3. File Search & Duplication Check: Looks for existing lease offer files in Google Drive. If a duplicate is found, the old file is deleted before proceeding.
4. AI Lease Offer Creation: Uses the LLM Chain (offerLetter) to generate a customized lease renewal letter.
5. File Conversion: Converts the AI-generated text into a downloadable file format.
6. Upload to Drive: Saves the new lease offer in Google Drive.
7. Email Preparation: Uses Basic LLM Chain-email to draft the email body, then downloads the offer file from Drive and attaches it.
8. Email Sending: Sends the renewal offer email via Gmail to the customer.

🛠 Setup Steps
- Supabase Connection: Add Supabase credentials in n8n. Ensure a customers table exists with the relevant columns.

🔜 Future Steps
- Add a specific letter template (organization template).
- PDF offer letter.

By Lakindu Siriwardana
316

Llama AI model for Google Sheets tracking

How it works:
1. Accesses a target website, searches for new PDFs, and downloads them automatically.
2. Extracts content from each PDF and sends it to an AI for summarization.
3. Delivers the AI-generated summary directly to a Discord channel.
4. Marks processed URLs in Google Sheets to avoid duplicates (see the sketch below).

Set up steps:
1. Configure the website URL in the HTTP Request node.
2. Connect to the Google Cloud API (enable Drive & Sheets) and link your spreadsheet.
3. Set up an OpenRouter API key and choose your preferred AI model.
4. Create a Discord webhook for notifications.
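A minimal Python sketch of the duplicate check against Google Sheets, assuming a gspread service account and a sheet whose first column stores processed URLs; the spreadsheet name, column layout, and sample URLs are assumptions for illustration:

```python
import gspread

# Assumes a service-account JSON key with access to the Sheets and Drive APIs.
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("Processed PDFs").sheet1  # assumed spreadsheet name

processed_urls = set(worksheet.col_values(1))

new_urls = [
    "https://example.com/report-2024-01.pdf",
    "https://example.com/report-2024-02.pdf",
]

for url in new_urls:
    if url in processed_urls:
        continue  # already summarized -- skip to avoid duplicates
    # ... download the PDF, extract its text, summarize, post to Discord ...
    worksheet.append_row([url])  # mark the URL as processed
    processed_urls.add(url)
```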

By Cristian Baño Belchí
299