Actioning your meeting next steps using transcripts and AI
This n8n workflow demonstrates how you can summarise and automate post-meeting actions from video transcripts fed into an AI Agent. Save time between meetings by letting AI handle the chores of organising follow-up meetings and invites.

How it works

This workflow scans the calendar for client or team meetings which were held online. It then attempts to fetch any recorded transcripts (see the sketch below), which are sent to the AI agent. The AI agent summarises the transcript and identifies whether any follow-on meetings are required. If so, the agent uses its Calendar Tool to create the event with the time, date and place for the next meeting, and adds the known attendees.

Requirements

- Google Calendar and the ability to fetch meeting transcripts (there is a special OAuth permission for this action!)
- OpenAI account for access to the LLM.

Customising the workflow

This example only books follow-on meetings but could be extended to generate reports or send emails.
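Transcript retrieval typically goes through the Google Meet REST API rather than the Calendar API itself, hence the special OAuth permission. A minimal sketch of such a lookup in an n8n Code node, assuming earlier nodes supply the conference record ID and an access token (field names are illustrative):

```javascript
// Minimal sketch: list transcripts for a Meet conference record.
// Assumes earlier nodes supply `conferenceRecordId` (e.g. "conferenceRecords/abc-123")
// and an OAuth2 access token carrying the meetings.space.readonly scope.
const { conferenceRecordId, accessToken } = $json;

const data = await this.helpers.httpRequest({
  url: `https://meet.googleapis.com/v2/${conferenceRecordId}/transcripts`,
  headers: { Authorization: `Bearer ${accessToken}` },
  json: true,
});

// Each entry references a transcript document; hand these on to the AI agent.
return (data.transcripts ?? []).map((t) => ({ json: t }));
```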
Generate AI-Powered LinkedIn Posts with Google Gemini and Gen-Imager
🚀 AI-Powered LinkedIn Post Automation

---

🧩 How It Works

This workflow automatically generates LinkedIn posts based on a user-submitted topic, covering both content creation and image generation, then publishes the post to LinkedIn. It's ideal for marketers, content creators, or businesses looking to streamline their social media activity without the need for manual post creation.

High-Level Workflow:

- Trigger: The workflow is triggered when a user submits a form with a topic for the LinkedIn post.
- Data Mapping: The topic is mapped and prepared for the AI model.
- AI Content Generation: Calls the Google Gemini AI model to generate engaging post content and a visual image prompt.
- Image Creation: Sends the image prompt to the external gen-imager API to generate a professional image matching the topic.
- Post Creation: Publishes the text and image to LinkedIn, automatically updating the user's feed.

---

⚙️ Set Up Steps (Quick Overview)

🕐 Estimated Setup Time: ~10–20 minutes

1. Connect Google Gemini: Set up your Google Gemini API credentials to interact with the AI model for content creation.
2. Set Up External Image API: Configure the external image generation API (gen-imager) for visual creation based on the post prompt.
3. Connect LinkedIn: Set up OAuth2 credentials to authenticate your LinkedIn account and allow publishing posts.
4. Form Submission Setup: Create a simple web form for users to submit the topic for LinkedIn posts.
5. Activate the Workflow: Once everything is connected, activate the workflow. It will trigger automatically upon receiving form submissions.

💡 Important Notes:

- The flow uses Google Gemini (PaLM) for generating content based on the user's topic.
- Text to Image: The image generation step creates a professional, LinkedIn-appropriate image based on the post's topic using the gen-imager API.
- You can customize the visual elements of the posts and adjust the tone of the generated content to your preferences.

---

🛠 Detailed Node Breakdown:

- On Form Submission (Trigger): Captures the user-submitted topic and starts the workflow.
- Mapper (Field Mapping): Maps the captured topic to a variable that is passed along for content generation.
- AI Agent (Content Generation): Calls Google Gemini to generate professional LinkedIn post content and an image prompt based on the submitted topic. Outputs content in a structured form: post text and image prompt.
- Google Gemini Chat Model: The AI model that generates actionable insights, engaging copy, and an image prompt for the LinkedIn post.
- Normalizer (Data Cleanup): Cleans the AI model's output to ensure the content and image prompt are correctly formatted for the next steps.
- Text to Image (Image Generation): Sends the image prompt to the gen-imager API, which returns a custom image based on the post's topic.
- Decoder (Base64 Decoding): Decodes the image from base64 format for uploading to LinkedIn (a minimal sketch of this step appears at the end of this section).
- LinkedIn (Post Creation): Publishes the generated text and image to LinkedIn, automatically creating a polished post for the user's feed.

---

⏱ Execution Time Breakdown:

Total Estimated Execution Time: ~15–40 seconds per workflow run.

- On Form Submission: Instant (trigger)
- Mapper (Field Mapping): ~1–2 seconds
- AI Content Generation: ~5–10 seconds (depending on server load)
- Text to Image: ~5–15 seconds (depends on the external API)
- LinkedIn Post Creation: ~2–5 seconds

---

🚀 Ready to Get Started?
Let’s get you started with automating your LinkedIn posts! Create your free n8n account and set up the workflow using this link.

---

📝 Notes & Customizations

- Form Fields: Customize the form to gather more specific information for the LinkedIn posts (like audience targeting, post category, etc.).
- Image API Customization: Adjust the image generation prompt to fit your brand's style, or change the color palette as needed.
- Content Tone: Adjust the tone by modifying the system message sent to Google Gemini for content generation.

---
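For reference, the Decoder step fits in a single n8n Code node. A minimal sketch, assuming the gen-imager response exposes the image as a base64 string under `image_base64` (a hypothetical field name; match it to your API's actual response):

```javascript
// Sketch of the base64 decoding step; `image_base64` is a hypothetical field name.
const base64 = $input.first().json.image_base64; // adjust to your gen-imager response
const buffer = Buffer.from(base64, 'base64');

return [{
  json: { filename: 'linkedin-post.png' },
  binary: {
    // prepareBinaryData wraps the buffer as n8n binary data for the LinkedIn node
    data: await this.helpers.prepareBinaryData(buffer, 'linkedin-post.png', 'image/png'),
  },
}];
```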
Transcribing bank statements to markdown using Gemini Vision AI
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document, especially when it comes to tables and complex layouts.

Multimodal parsing is better than traditional OCR because:

- It reduces complexity and overhead by avoiding the need to preprocess the document into a text format such as markdown before passing it to the LLM.
- It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
- It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!

How it works

You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing

- A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and will mistake some deposits for withdrawals.
- Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this with a tool such as Stirling PDF, which is self-hostable and therefore handy for sensitive data such as bank statements. Stirling PDF returns our PDF as a series of JPGs (one for each page) in a zipped file (a rough sketch of this call appears at the end of this section).
- We can use n8n's Decompress node to extract the images and ensure they are ordered by using the Sort node.
- Next, we resize each page using the Edit Image node to strike the right balance between resolution limits and processing speed.
- Each resized page image is then passed into the Basic LLM node, which uses our multimodal LLM of choice: Gemini 1.5 Pro. In the LLM node's options, we add a "user message" of type binary (data), which is how we add our image data as an input.
- Our prompt instructs the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for the data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
- Finally, with our markdown version of all pages, we pass this to another LLM node to extract the required data, such as deposit line items.

Requirements

- Google Gemini API for the multimodal LLM.
- Google Drive access for document storage.
- Stirling PDF instance for PDF-to-image conversion.

Customising the workflow

At time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude. If you don't need the markdown, simply asking for what to extract directly in the LLM's prompt is also acceptable and saves a few extra steps. Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
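To illustrate the PDF-to-image step, here is a rough standalone sketch of the conversion request to a Stirling PDF instance. The endpoint and form field names follow Stirling PDF's convert API but vary between versions, so verify them against your instance's Swagger docs:

```javascript
// Rough sketch (Node 18+): convert a PDF into per-page JPGs via Stirling PDF.
// Endpoint and field names are assumptions; check your instance's API docs.
import { readFile, writeFile } from 'node:fs/promises';

const pdfBuffer = await readFile('statement.pdf');
const form = new FormData();
form.append('fileInput', new Blob([pdfBuffer], { type: 'application/pdf' }), 'statement.pdf');
form.append('imageFormat', 'jpg');

const res = await fetch('http://localhost:8080/api/v1/convert/pdf/img', {
  method: 'POST',
  body: form,
});

// The response is a ZIP containing one JPG per page, which n8n's
// Decompress node unpacks before the Sort and Edit Image steps.
await writeFile('pages.zip', Buffer.from(await res.arrayBuffer()));
```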
MCP Supabase server for AI agent with RAG & multi-tenant CRUD
Supabase AI Agent with RAG & Multi-Tenant CRUD

Version: 1.0.0
n8n Version: 1.88.0+
Author: Koresolucoes
License: MIT

---

Description

A stateful AI agent workflow powered by Supabase and Retrieval-Augmented Generation (RAG). Enables persistent memory, dynamic CRUD operations, and multi-tenant data isolation for AI-driven applications like customer support, task orchestration, and knowledge management.

Key Features:

- 🧠 RAG Integration: Leverages OpenAI embeddings and Supabase vector search for context-aware responses (a minimal sketch of this lookup appears at the end of this section).
- 🗃️ Full CRUD: Manage agent_messages, agent_tasks, agent_status, and agent_knowledge in real time.
- 📤 Multi-Tenant Ready: Supports per-user/organization data isolation via dynamic table names and webhooks.
- 🔒 Secure: Role-based access control via Supabase Row Level Security (RLS).

---

Use Cases

- Customer Support Chatbots: Persist conversation history and resolve queries using institutional knowledge.
- Automated Task Management: Track and update task statuses dynamically.
- Knowledge Repositories: Store and retrieve domain-specific information for AI agents.

---

Instructions

1. Import Template: Go to n8n > Templates > Import from File and upload this workflow.
2. Configure Credentials: Add your Supabase and OpenAI API keys under Settings > Credentials.
3. Set Up Multi-Tenancy (Optional): Replace the default webhook path with /mcp/tool/supabase/:userId to enable per-user routing, and use a Set node to dynamically generate table names (e.g., agent_messages_{{userId}}).
4. Activate & Test: Enable the workflow and send test requests to the webhook URL.

---

Tags

AI Agent, RAG, Supabase, CRUD, Multi-Tenant, OpenAI, Automation

---

License

This template is licensed under the MIT License.
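To make the RAG lookup concrete, here is a minimal standalone sketch using supabase-js. The `match_agent_knowledge` RPC and its parameters are hypothetical: Supabase vector search is normally exposed through a Postgres function you define yourself over the pgvector column, with RLS keeping tenants isolated:

```javascript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY);

// Hypothetical RPC: a Postgres function ordering agent_knowledge rows by
// vector distance to the query embedding, filtered to the current tenant.
async function ragLookup(queryEmbedding, userId) {
  const { data, error } = await supabase.rpc('match_agent_knowledge', {
    query_embedding: queryEmbedding, // e.g. an OpenAI embedding of the user's question
    match_count: 5,
    p_user_id: userId,
  });
  if (error) throw error;
  return data; // top-5 knowledge snippets for the agent's context window
}
```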
Get comments from Facebook page
This workflow automates the collection of comments from posts on a Facebook Page, providing clean, structured data for analysis or further automation.

What this workflow does

- Fetches recent posts from a Facebook Page.
- Retrieves comments for each post.
- Outputs structured data of comments and posts for further use.

Setup

- Facebook Graph API: Connect your access token with the required permissions (pages_read_engagement, pages_read_user_content).
- Workflow: Set the Page ID and the number of posts to fetch in the "Set Number of Latest Posts to Fetch" node.
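Under the hood these are two Graph API calls: one for the Page's posts and one per post for its comments. A standalone sketch (replace the placeholders; the API version is illustrative):

```javascript
// Sketch (Node 18+): fetch recent Page posts, then the comments on each.
const base = 'https://graph.facebook.com/v19.0';
const PAGE_ID = 'YOUR_PAGE_ID';
const TOKEN = process.env.FB_ACCESS_TOKEN;

const posts = await fetch(`${base}/${PAGE_ID}/posts?limit=5&access_token=${TOKEN}`)
  .then((r) => r.json());

for (const post of posts.data ?? []) {
  const comments = await fetch(`${base}/${post.id}/comments?access_token=${TOKEN}`)
    .then((r) => r.json());
  console.log(post.id, (comments.data ?? []).map((c) => c.message));
}
```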
Send new YouTube channel videos to Telegram
A simple workflow to send new YouTube videos from a channel to a Telegram chat (private, group or channel).

- CheckTime: sets how often videos should be fetched from YouTube. Default is 30 minutes.
- GetVideosYT: fetches the list of videos from a given channel. Here you need to specify in "Limit" the number of videos to fetch, and in "Channel ID" the ID of the desired channel (it should be the ending part of the URL). You need Google OAuth2 credentials to make it work. A guide is available here. (If you use n8n's tunneling, you may need to adjust the OAuth callback URL on Google Cloud Platform.)
- Set: sets some variables to make working with the next nodes easier. You shouldn't edit this.
- Function: checks if the video was seen previously by the workflow, so that it won't be published a second time on Telegram (a minimal sketch of this check appears below). You shouldn't edit this.
- SendVideo: sends the message to Telegram. You need to set your bot's credentials (guide here), specify the Chat ID to send the message to (how to get it) and personalize the Text of your message.

This workflow works correctly only when it's activated. If you execute the workflow manually, it will send the latest videos every time.
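For reference, the dedupe check can be as small as remembering seen video IDs in workflow static data. A minimal sketch in the modern Code node style (the `videoId` field name depends on the upstream Set node):

```javascript
// Minimal sketch of the "seen before?" check. Note that workflow static data
// only persists for active workflows, which is why manual runs re-send videos.
const staticData = $getWorkflowStaticData('global');
staticData.seenIds = staticData.seenIds || [];

const fresh = [];
for (const item of $input.all()) {
  const id = item.json.videoId; // field name depends on the upstream Set node
  if (!staticData.seenIds.includes(id)) {
    staticData.seenIds.push(id);
    fresh.push(item);
  }
}
return fresh; // only unseen videos continue to SendVideo
```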
Create custom PDF documents from templates with Gemini & Google Drive
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

1 — What Does It Do / Which Problem Does It Solve?

This workflow turns Google Docs-based contract & form templates into ready-to-sign PDFs in minutes—all from a single chat flow.

- Automates repetitive document creation. Instead of copying a rental, sales, or NDA template and filling it by hand every time, the bot asks for the required values and fills them in.
- Eliminates human error. It lists every mandatory field so nothing is missed, and removes unnecessary clauses via conditional blocks.
- Speeds up approvals. The final draft arrives as a direct PDF link—one click to send for signing.
- One template → unlimited variations. Every new template you drop in Drive is auto-listed with zero workflow edits—it scales effortlessly.
- 100% no-code. Runs on n8n + Google Apps Script—no extra backend, self-hosted or cloud.

---

2 — How It Works (Detailed Flow)

📝 Template Discovery
📂 The TemplateList node scans the Drive folder you specify via the ?mode=meta endpoint and returns an id / title / desc list. The bot shows this list in chat.

🎯 Selection & Metadata Fetch
The user types a template name. 🔍 GetMetaData opens the chosen Doc, extracts META_JSON, placeholders, and conditional blocks, then lists mandatory & optional fields.

🗣 Data-Collection Loop
The bot asks for every placeholder value. For each conditional block it asks 🟢 Yes / 🔴 No. Answers are accumulated in a data JSON object.

✅ Final Confirmation
The bot summarizes the inputs → when the user clicks Confirm, the DocProcess sub-workflow starts.

⚙️ DocProcess Sub-Workflow

| 🔧 Step | Node | Task |
| --- | --- | --- |
| 1 | User Choice Match Check | Verifies name–ID match; throws if wrong |
| 2 | GetMetaData (renew) | Gets the latest placeholder list |
| 3 | Validate JSON Format | Checks for missing / unknown fields |
| 4 | CopyTemplate | Copies the Doc via Drive API |
| 5 | FillDocument | Apps Script fills placeholders & removes blocks |
| 6 | Generate PDF Link | Builds an export?format=pdf URL |

📎 Delivery
The master agent sends 🔗 Download PDF & ✏️ Open Google Doc links.

🚫 Error Paths
- status:"ERROR", missing:[…] → bot lists missing fields and re-asks.
- unknown:[…] → template list is outdated; rerun TemplateList.
- Any Apps Script error → the returned message is shown verbatim in chat.

---

3 — 🚀 Setup Steps (Full Checklist)

> Goal: Get a flawless PDF on the first run. Mentally tick the ☑️ in front of every line as you go.

☁️ A. Google Drive Preparation

| Step | Do This | Watch Out For |
| --- | --- | --- |
| 1 | Create a Templates/ folder → put every template Doc inside | Exactly one folder; no sub-folders |
| 2 | Placeholders in every Doc are {{UPPER_CASE}} | No Turkish chars or spaces |
| 3 | Wrap optional clauses with [[BLOCKNAME:START]]…[[BLOCKNAME:END]] | The START tag must have a blank line above |
| 4 | Add a META_JSON block at the very end | Script deletes it automatically after fill |
| 5 | Right-click Doc > Details ▸ Description = 1-line human description | Shown by the bot in the list |
| 6 | Create a second Generated/ folder (for copies) | Keeps Drive tidy |

> 🔑 Folder ID (long alphanumerical) = <TEMPLATE_PARENT_ID> — we'll paste this into the TemplateList node next.

Simple sample template → Template Link

---

🛠 B. Import the Workflow into n8n

Settings ▸ Import Workflow ▸ DocAgent.json

If nodes look Broken afterwards → it's not a community-node problem; you only need to select credentials.

---
📑 C. Customize the TemplateList Node

1. Open the Template List node ⚙️ → replace '<YOUR_PARENT_ID>' in parents with the real folder ID from the URL.
2. Right-click the node > Execute Node. Copy the entire JSON response.
3. In the editor, paste it into:
   - DocAgent → System Prompt (top)
   - User Choice Match Check → System Prompt (top)
4. Save.

> ⚠️ Why manual? Caching the list saves LLM tokens. Whenever you add a template, rerun the node and update the prompts.

---

🔗 D. Deploy the Apps Script

| Step | Screen | Note |
| --- | --- | --- |
| 1 | Open Gist files GetMetaData.gs + FillDocument.gs → File ▸ Make a copy | Both files may live in one project |
| 2 | Project Settings > enable Google Docs API ✔️ & Google Drive API ✔️ | Otherwise you'll see 403 errors |
| 3 | Deploy ▸ New deployment ▸ Web app • Execute as: Me • Who has access: Anyone | |
| 4 | On the consent screen allow scopes: …/auth/documents and …/auth/drive | Click Advanced › Go if Google warns |
| 5 | Copy the Web App URL (e.g. https://script.google.com/macros/s/ABC123/exec) | If this URL changes, update n8n |

Apps Script source code → Notion Link (a simplified sketch of the fill step appears at the end of this section)

---

🔧 E. Wire the Script URL in n8n

| Node | Field | Action |
| --- | --- | --- |
| GetMetaData | URL | <WEB_APP_URL>?mode=meta&id={{ $json["id"] }} |
| FillDocument | URL | <WEB_APP_URL> |

> 💡 Prefer using an .env file? Add GAS_WEBAPP_URL=… and reference it as {{ $env.GAS_WEBAPP_URL }}.

---

🔐 F. Add Credentials

- Google Drive OAuth2 → Drive API (v3) Full Access
- Google Docs OAuth2 → same account
- LLM key (OpenAI / Gemini)
- (Optional) Postgres Chat Memory credential for the corresponding node

---

🧪 G. First Run (Smoke Test)

1. Switch the workflow Active.
2. In the chat panel type /start.
3. Bot lists templates → pick one.
4. Fill mandatory fields, optionally toggle blocks → Confirm.
5. 🔗 Download PDF link appears → ☑️ setup complete.

---

❌ H. Common Errors & Fixes

| 🆘 Error | Likely Cause | Remedy |
| --- | --- | --- |
| 403: Apps Script permission denied | Web app access set to User | Redeploy as Anyone, re-authorize scopes |
| placeholder validation failed | Missing required field | Provide the listed values → rerun DocProcess |
| unknown placeholders: … | Template vs. agent mismatch | Check placeholder spelling (UPPER_CASE ASCII) |
| Template ID not found | Prompt list is old | Rerun TemplateList → update both prompts |
| Cannot find META_JSON | No meta block / wrong tag | Add [[META_JSON_START]] … [[META_JSON_END]], retry |

---

✅ Final Checklist

- [ ] Drive folder structure & template rules ready
- [ ] Workflow imported, folder ID set in node
- [ ] TemplateList output pasted into both prompts
- [ ] Apps Script deployed, URL set in nodes
- [ ] OAuth credentials & LLM key configured
- [ ] /start test passes, PDF link received

---

🙋‍♂️ Need Help with Customizations?

Reach out for consulting & support on LinkedIn: Özgür Karateke

Full Documentation → Notion
Simple sample template → Template Link
Apps Script source code → Notion Link
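To give a feel for what FillDocument does, here is a simplified Apps Script sketch of the placeholder-replacement step. It is an illustration only, not the Gist's actual code; conditional-block removal and META_JSON cleanup are omitted:

```javascript
// Simplified illustration of the fill step (not the actual Gist code).
function fillPlaceholders(docId, data) {
  const body = DocumentApp.openById(docId).getBody();
  for (const key of Object.keys(data)) {
    // replaceText takes a regex string, so the curly braces must be escaped.
    body.replaceText('\\{\\{' + key + '\\}\\}', data[key]);
  }
}

// e.g. fillPlaceholders(copyId, { TENANT_NAME: 'Jane Doe', RENT_AMOUNT: '1200' });
```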
Automate restaurant marketing & booking with Excel, VAPI voice agent & calendar
This n8n template demonstrates how to create a comprehensive marketing automation and booking system that combines Excel-based lead management with voice-powered customer interactions. The system uses VAPI for voice communication and Excel/Google Sheets for data management, making it ideal for restaurants seeking to automate marketing campaigns and streamline booking processes through intelligent voice AI technology.

Good to know

- Voice processing requires an active VAPI subscription with per-minute billing.
- Excel operations are handled in real time with immediate data synchronization.
- The system can handle multiple simultaneous voice calls and lead processing.
- All customer data is stored securely in Excel with proper formatting and validation.
- Marketing campaigns can be scheduled and automated based on lead data.

How it works

Lead Management & Marketing Automation Workflow

1. New Lead Trigger: Excel triggers capture new leads when customers are added to the lead management spreadsheet.
2. Lead Preparation: The system processes and formats lead data, extracting relevant details (name, phone, preferences, booking history).
3. Campaign Loop: An automated loop processes multiple leads for batch marketing campaigns.
4. Voice Marketing Call: VAPI initiates personalized voice calls to leads with tailored restaurant offers and booking invitations (a sketch of this call appears at the end of this section).
5. Response Tracking: All call results and lead responses are logged back to Excel for campaign analysis.

Booking & Order Processing Workflow

1. Voice Response Capture: A VAPI webhook triggers when customers respond to marketing calls or make direct booking requests.
2. Response Storage: Customer responses and booking preferences are immediately saved to Excel sheets.
3. Information Extraction: The system processes natural language responses to extract booking details (party size, preferred times, special requests).
4. Calendar Integration: Booking information is automatically scheduled in restaurant management systems.
5. Confirmation Loop: Automated follow-up voice messages confirm bookings and provide additional restaurant information.

Excel Sheet Structure

Lead Management Sheet

| Column | Description |
|--------|-------------|
| lead_id | Unique identifier for each lead |
| customer_name | Customer's full name |
| phone_number | Primary contact number |
| email | Customer email address |
| last_visit_date | Date of last restaurant visit |
| preferred_cuisine | Customer's food preferences |
| party_size_typical | Usual number of guests |
| preferred_time_slot | Preferred dining times |
| marketing_consent | Permission for marketing calls |
| lead_source | How the customer was acquired |
| lead_status | Current status (new, contacted, converted, inactive) |
| last_contact_date | Date of last marketing contact |
| notes | Additional customer information |
| created_at | Lead creation timestamp |

Booking Responses Sheet

| Column | Description |
|--------|-------------|
| response_id | Unique response identifier |
| customer_name | Customer's name from call |
| phone_number | Contact number used for call |
| booking_requested | Whether customer wants to book |
| party_size | Number of guests requested |
| preferred_date | Requested booking date |
| preferred_time | Requested time slot |
| special_requests | Dietary restrictions or special occasions |
| call_duration | Length of VAPI call |
| call_outcome | Result of marketing call |
| follow_up_needed | Whether additional contact is required |
| booking_confirmed | Final booking confirmation status |
| created_at | Response timestamp |

Campaign Tracking Sheet
| Column | Description |
|--------|-------------|
| campaign_id | Unique campaign identifier |
| campaign_name | Descriptive campaign title |
| target_audience | Lead segments targeted |
| total_leads | Number of leads contacted |
| successful_calls | Calls that connected |
| bookings_generated | Number of bookings from campaign |
| conversion_rate | Percentage of leads converted |
| campaign_cost | Total VAPI usage cost |
| roi | Return on investment |
| start_date | Campaign launch date |
| end_date | Campaign completion date |
| status | Campaign status (active, completed, paused) |

How to use

1. Setup: Import the workflow into your n8n instance and configure VAPI credentials.
2. Excel Configuration: Set up Excel/Google Sheets with the required sheet structure provided above.
3. Lead Import: Populate the Lead Management sheet with customer data from various sources.
4. Campaign Setup: Configure marketing message templates in the VAPI nodes to match your restaurant's branding.
5. Testing: Test voice commands such as "I'd like to book a table for tonight" or "What are your specials?"
6. Automation: Enable triggers to automatically process new leads and schedule marketing campaigns.
7. Monitoring: Track campaign performance through the Campaign Tracking sheet and adjust strategies accordingly.

The system can handle multiple concurrent voice calls and scales with your restaurant's marketing needs.

Requirements

- VAPI account for voice processing and natural language understanding
- Excel/Google Sheets for storing lead, booking, and campaign data
- n8n instance with Excel/Sheets and VAPI integrations enabled
- Valid phone numbers for lead contact and compliance with local calling regulations

Customising this workflow

- Multi-location Support: Adapt the voice AI automation for restaurant chains with location-specific offers.
- Seasonal Campaigns: Try popular use cases such as holiday promotions, special event marketing, or loyalty program outreach.
- Integration Options: The workflow can be extended to include CRM integration, SMS follow-ups, and social media campaign coordination.
- Advanced Analytics: Add nodes for detailed campaign performance analysis and customer segmentation.
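As an illustration of the Voice Marketing Call step, an outbound VAPI call boils down to one HTTP request. A standalone sketch with placeholder IDs; check VAPI's API reference for the exact fields your account requires:

```javascript
// Sketch (Node 18+): start an outbound VAPI call to a lead. IDs are placeholders.
const lead = { phone_number: '+15551234567', customer_name: 'Jane Doe' }; // from the sheet

const res = await fetch('https://api.vapi.ai/call', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    assistantId: 'YOUR_ASSISTANT_ID',      // the voice agent with your offer script
    phoneNumberId: 'YOUR_PHONE_NUMBER_ID', // the number the call is placed from
    customer: { number: lead.phone_number, name: lead.customer_name },
  }),
});

const call = await res.json();
// Log call.id back to the Lead Management sheet for response tracking.
```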
Split in batches node noItemsLeft example
This workflow demonstrates how to use noItemsLeft to check if there are items left to be processed by the SplitInBatches node.

- Function node: Generates mock data for the workflow (example below). Replace it with the node whose data you want to split into batches.
- SplitInBatches node: Splits the data with a batch size equal to 1. Set the value of Batch Size based on your use case.
- IF node: Checks whether all the data from the SplitInBatches node has been processed. It uses the expression {{$node["SplitInBatches"].context["noItemsLeft"]}}, which returns a boolean value: false if there is data yet to be processed, otherwise true.
- Set node: Prints the message No Items Left.

Based on your use case, connect the false output of the IF node to the input of the node you want to execute after the data is processed by the SplitInBatches node.
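A minimal example of mock data for the Function node; any item shape works, these three items simply give SplitInBatches something to iterate over:

```javascript
// Mock data: three items to be processed one per batch (Batch Size = 1).
return [
  { json: { id: 1, name: 'Alpha' } },
  { json: { id: 2, name: 'Beta' } },
  { json: { id: 3, name: 'Gamma' } },
];
```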
🤖 Instagram MCP AI agent – read, reply & manage comments with GPT-4o
🤖 Instagram AI Agent with MCP Server – Built for Smart Engagement and Automation

Hi! I’m Amanda 🥰 I build intelligent automations with n8n and Make. This powerful workflow was designed to help teams automatically handle Instagram interactions with AI. Using the Meta Graph API, LangChain, MCP Server, and GPT-4o, it allows your AI agent to search for posts, read captions, fetch comments, and even reply or message followers, all through structured tools.

---

🔧 What the workflow does

- Searches for recent media using the Instagram ID and access token
- Reads and extracts captions or media URLs
- Fetches comments and specific replies from each post
- Replies to comments automatically with GPT-generated responses (see the sketch at the end of this section)
- Sends direct messages to followers who commented
- Maps user input and session to keep memory context via LangChain
- Communicates via Server-Sent Events (SSE) using your MCP Server URL

---

🧰 Nodes & Tech Used

- LangChain Agent + Chat Model with GPT-4o
- Memory Buffer for session memory
- toolHttpRequest to search media, comments, and send replies
- MCP Trigger and MCP Tool (custom SSE connection)
- Set node for input and variable assignment
- Webhook and JSON for the Instagram API structure

---

⚙️ Setup Instructions

1. Create your Instagram App in the Meta Developer Portal
2. Add your Instagram ID and Access Token in the Set node
3. Update the MCP Server Tool URL in the MCP Instagram node
4. Use your n8n server URL (e.g. https://yourdomain.com/mcp/server/instagram/sse)
5. Trigger the workflow using the included LangChain Chat Trigger
6. Interact via text to ask the agent to:
   - “Get latest posts”
   - “Reply to comment X with this message”
   - “Send DM to this user about...”

---

👥 Who this is for

- Social media teams managing multiple comments
- Brands automating engagement with followers
- Agencies creating smart, autonomous digital assistants
- Developers building conversational Instagram bots

---

✅ Requirements

- Meta Graph API access
- Instagram Business account
- n8n instance (Cloud or Self-hosted)
- MCP Server configured (SSE endpoint enabled)
- OpenAI API Key (for GPT-4o + LangChain)

---

🌐 Want to use this workflow?

❤️ Buy workflows: https://iloveflows.com
☁️ Try n8n Cloud: https://n8n.partnerlinks.io/amanda
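For orientation, the reply tool ultimately issues a Graph API request like the following standalone sketch (the API version and variable values are illustrative):

```javascript
// Sketch (Node 18+): reply to an Instagram comment via the Graph API.
const commentId = 'IG_COMMENT_ID'; // obtained from a prior "fetch comments" call
const message = 'Thanks for reaching out! 🙌';

const res = await fetch(
  `https://graph.facebook.com/v19.0/${commentId}/replies` +
    `?message=${encodeURIComponent(message)}&access_token=${process.env.IG_ACCESS_TOKEN}`,
  { method: 'POST' },
);
console.log(await res.json()); // { id: "<new reply comment id>" } on success
```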
🗲 Creating a Secure Webhook - MUST HAVE
How it works

This workflow demonstrates a fundamental pattern for securing a webhook by requiring an API key. It acts as a gatekeeper, checking for a valid key in the request header before allowing the request to proceed.

1. Incoming Request: The Secured Webhook node receives an incoming POST request. It expects an API key to be sent in the x-api-key header.
2. API Key Verification: The Check API Key node takes the key from the incoming request's header. It then makes an internal HTTP request to a second webhook (Get API Key) which acts as a mock database. This second webhook retrieves a list of registered API keys (from the Registered API Keys node) and filters it to find a match for the key that was provided.
3. Conditional Response: If a match is found, the API Key Identified node routes the execution to the "success" path, returning a 200 OK response with the identified user's ID. If no match is found, it routes to the "unauthorized" path, returning a 401 Unauthorized error.

This pattern separates the public-facing endpoint from the data source, which is a good security practice.

Set up steps

Setup time: ~2 minutes

This workflow is designed to be a self-contained example.

1. Set up Credentials: This workflow uses "Header Auth" for its internal communication. Go to Credentials and create a new Header Auth credential. You can use any name and value (e.g., Name: X-N8N-Auth, Value: my-secret-password). Select this credential in all four webhook/HTTP Request nodes.
2. Add Your API Keys: Open the Registered API Keys node. This is your mock database. Edit the array to include the user_id and api_key pairs you want to authorize.
3. Activate the workflow.
4. Test it: Use the Test Secure Webhook node to send a request. Try it with a valid key from your list to see the success response. Change the x-api-key header to an invalid key to see the 401 Unauthorized error. (A sketch of such a test request follows below.)
5. For Production: Replace the mock database part of this workflow (the Get API Key webhook and Registered API Keys node) with a real database node like Supabase, Postgres, or Baserow to look up keys.
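A quick way to exercise both paths from outside n8n; the URL is a placeholder for your webhook's production URL:

```javascript
// Sketch (Node 18+): call the secured webhook with a key in the x-api-key header.
const res = await fetch('https://your-n8n.example.com/webhook/secured', {
  method: 'POST',
  headers: {
    'x-api-key': 'my-valid-key', // swap in an invalid key to see the 401 path
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ hello: 'world' }),
});
console.log(res.status, await res.json()); // 200 + user id, or 401 Unauthorized
```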
Create a table in Postgres and insert data
Companion workflow for Postgres node docs