Daily news digest: summarize RSS feeds with OpenAI and deliver to WhatsApp
This n8n workflow collects and summarizes news from multiple RSS feeds, using OpenAI to generate a concise summary that can be sent to WhatsApp or other destinations. Perfect for automating your daily news digest.

**Workflow Breakdown**

- **Schedule Trigger**: starts the workflow on your desired schedule (daily, hourly, etc.). Note: set the trigger however you wish.
- **RSS Feeds (My RSS 01-04)**: fetches articles from four different RSS sources. Note: you can add as many RSS feeds as you want.
- **Edit Fields (Edit Fields 1-3)**: normalizes RSS fields (title, link, etc.) to ensure consistency across different sources (see the sketch after this list).
- **Merge (append mode)**: combines the RSS items into a single unified list.
- **Filter**: optionally filters articles by keywords, date, or categories.
- **Limit**: restricts the analysis to the 10 most recent articles. Note: this keeps the result concise and avoids overloading the summary.
- **Aggregate**: prepares the selected news for summarization by combining it into a single content block.
- **OpenAI (Message Assistant)**: summarizes the aggregated news items in a clean, readable format.
- **Send Summary to WhatsApp**: sends the AI-generated summary to a WhatsApp endpoint via webhook (yoururlapi.com). Note: you can replace this with your WhatsApp API, an email service, Google Drive, or any other destination.
- **No Operation (End)**: final placeholder to safely close the workflow. You may expand from here if needed.
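For illustration, here is a minimal sketch of what the Edit Fields normalization boils down to, written as a single n8n Code node. The primary field names follow n8n's RSS Read node output; the fallbacks (headline, url, description) are assumptions for feeds that deviate, so adjust them to your sources.

```javascript
// Minimal sketch: normalize items from heterogeneous RSS feeds into one shape.
// Primary fields follow n8n's RSS Read node output; fallbacks are assumptions.
return $input.all().map((item) => {
  const d = item.json;
  return {
    json: {
      title: d.title ?? d.headline ?? '',
      link: d.link ?? d.url ?? '',
      date: d.isoDate ?? d.pubDate ?? null,
      summary: d.contentSnippet ?? d.description ?? '',
    },
  };
});
```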
Manipulate PDF with Adobe developer API
**Adobe Developer API**

Did you know that Adobe provides an API to perform all sorts of manipulations on PDF files:

- Split PDF, combine PDF
- OCR
- Insert, delete, replace, and reorder pages
- Content extraction (text content, tables, pictures)
- ...

The free tier allows up to 500 PDF operations per month. As it comes directly from Adobe, it often works better than other alternatives.

Adobe documentation:
- https://developer.adobe.com/document-services/docs/overview/pdf-services-api/howtos/
- https://developer.adobe.com/document-services/docs/overview/pdf-extract-api/gettingstarted/

**What does this workflow do**

The API is a bit painful to use. To perform a transformation on a PDF, it requires you to:

1. Authenticate and get a temporary token
2. Register a new asset (file)
3. Upload your PDF to the registered asset
4. Perform a query according to the requested transformation
5. Wait for the query to be processed by the Adobe backend
6. Download the result

This workflow is a generic wrapper that performs all these steps for any transformation endpoint. I usually call it from other workflows with an Execute Workflow node. Examples are given in the workflow, and a sketch of the raw API calls follows below.

**Example use case**

This service is useful, for example, to clean PDF data for an AI / RAG system. My favorite use case is to extract tables as images and forward the images to an AI for image recognition / description, which is often more accurate than feeding raw tabular data to an LLM.
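For orientation, here is a hedged sketch of the six raw API calls the wrapper performs, using the PDF Extract operation as an example. Endpoint paths and field names follow Adobe's documentation linked above, but verify them there before relying on this; it is a sketch, not the template's implementation.

```javascript
// Sketch of the six steps the workflow wraps (PDF Extract as the example).
// Verify endpoints and response fields against Adobe's docs linked above.
const BASE = 'https://pdf-services.adobe.io';

async function extractPdf(clientId, clientSecret, pdfBuffer) {
  // 1. Authenticate and get a temporary token
  const tokenRes = await fetch(`${BASE}/token`, {
    method: 'POST',
    body: new URLSearchParams({ client_id: clientId, client_secret: clientSecret }),
  });
  const { access_token } = await tokenRes.json();
  const auth = { Authorization: `Bearer ${access_token}`, 'x-api-key': clientId };

  // 2. Register a new asset (file)
  const assetRes = await fetch(`${BASE}/assets`, {
    method: 'POST',
    headers: { ...auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ mediaType: 'application/pdf' }),
  });
  const { uploadUri, assetID } = await assetRes.json();

  // 3. Upload the PDF to the registered asset
  await fetch(uploadUri, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/pdf' },
    body: pdfBuffer,
  });

  // 4. Request the transformation
  const jobRes = await fetch(`${BASE}/operation/extractpdf`, {
    method: 'POST',
    headers: { ...auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ assetID, elementsToExtract: ['text', 'tables'] }),
  });
  const statusUrl = jobRes.headers.get('location');

  // 5. Poll until the Adobe backend has processed the job
  let job;
  do {
    await new Promise((r) => setTimeout(r, 2000));
    job = await (await fetch(statusUrl, { headers: auth })).json();
  } while (job.status === 'in progress');

  // 6. Download the result (field name per the extract operation's docs)
  return fetch(job.content.downloadUri);
}
```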
Automate delivery confirmation with Telegram Bot, Google Drive and Gmail
Tags: Supply Chain Management, Logistics, Transportation

**Context**

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and founder of LogiGreen Consulting. We design tools to help small and medium businesses improve their logistics processes using data analytics and automation.

> Let's use N8N to make supply chains more efficient and sustainable!

For business inquiries, you can connect with me on LinkedIn.

**Who is this template for?**

This workflow template is designed for logistics operations that cannot rely on a Transportation Management System to record proofs of delivery.

**What is a delivery confirmation?**

This workflow uses a Telegram bot to automatically notify logistics teams by email when a shipment is delivered. Drivers (equipped with their smartphones) can record their arrival with all the information necessary for accurate distribution planning.

**How do we notify the delivery?**

Let us imagine a truck driver arriving at the destination; he can contact the bot to be guided through recording the delivery:

- User Guide: the bot's first message is a brief explanation of the process
- Record Shipment Number: the bot asks the driver to share the shipment number and records it
- Collect GPS Location: the bot asks the driver to share their GPS location and records it
- Picture of the Shipment: the bot asks for a picture of the shipment and saves it in Google Drive
- Send Confirmation: after data collection, the bot proposes to send a confirmation to the logistics management team

An email is then automatically sent by the N8N workflow including all the information recorded by the flow and a picture of the shipment. (A sketch of how these fields map from the Telegram update follows below.)

**Prerequisites**

This workflow does not require any additional paid subscription.

- A Google Drive account with a folder including a Google Sheet
- API credentials: Google Drive API, Google Sheets API, and Gmail API
- A Telegram bot with its API token from BotFather
- A Google Sheet to store the shipment records, with these six columns prepared: shipmentNumber, recordTime, gpsLattitude, gpsLongitude, cargoPicture, deliveryTime

**Next Steps**

Follow the sticky notes to set up the parameters inside each node and get ready to improve your logistics operations! I have detailed the steps in a short tutorial: [Check My Tutorial](https://youtu.be/9NS4RYaOwJ8)

Interested in applications of N8N for Logistics & Supply Chain Management? Let's connect on LinkedIn.

**Notes**

- This workflow can be adapted to add more functionalities. I explain how in the video.
- The bot can handle multiple drivers at the same time.
- If you want to learn more about the original tool designed with Python, see my blog article about the Telegram Shipment Tracking Bot.

This workflow has been created with N8N 1.82.1. Submitted: March 17th, 2025
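As a hedged illustration of the data the bot records, here is how the relevant fields map from a Telegram update in an n8n Code node. The field paths follow the Telegram Bot API; the output keys match the template's sheet columns (gpsLattitude is spelled as in the template), and the mapping itself is an assumption about how the template wires things up.

```javascript
// Sketch: pull the recorded fields out of a Telegram update (n8n Code node).
// Field paths follow the Telegram Bot API; output keys match the sheet columns.
const msg = $json.message ?? {};
return [{
  json: {
    shipmentNumber: msg.text ?? '',                 // driver replies with the shipment number
    recordTime: new Date(msg.date * 1000).toISOString(), // Telegram dates are Unix seconds
    gpsLattitude: msg.location?.latitude ?? null,   // (sic) column name from the template
    gpsLongitude: msg.location?.longitude ?? null,
    // Telegram sends photo sizes smallest-to-largest; the last file_id is the
    // full-size cargo picture, later downloaded and stored in Google Drive.
    cargoPicture: msg.photo?.at(-1)?.file_id ?? null,
  },
}];
```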
Exponential backoff for Google APIs
n8n Workflow: Exponential Backoff for Google APIs

**Overview**

This n8n workflow implements an Exponential Backoff mechanism to handle retries when interacting with Google APIs. It ensures that failed API requests are retried with increasing delays, up to a specified maximum retry count. This approach helps mitigate transient errors (e.g., rate limits or temporary network issues) while maintaining workflow efficiency.

**Key Features**

- Exponential Backoff Logic: dynamically increases the wait time between retries based on the retry count.
- Error Handling: stops the workflow and raises an error after a specified number of retries.
- Dynamic Waiting: waits for a calculated duration before each retry.
- Scalable Design: modular nodes for easy debugging and customization.

---

**Workflow Details**

Nodes in the workflow:

1. Trigger (When clicking "Test Workflow"): manually starts the workflow for testing.
2. Loop Over Items: iterates over multiple input items to process Google API requests row by row.
3. Google API Node (example: Update Sheet): sends a request to a Google API endpoint (e.g., updating a row in Google Sheets). On success, it moves to the next item in the loop; on error, it passes the error to the Exponential Backoff node.
4. Exponential Backoff: calculates the delay for the next retry based on the retry count:

```javascript
const retryCount = $json["retryCount"] || 0;
const maxRetries = 5;
const initialDelay = 1; // in seconds

if (retryCount < maxRetries) {
  const currentDelayInSeconds = initialDelay * Math.pow(2, retryCount);
  return {
    json: {
      retryCount: retryCount + 1,
      waitTimeInSeconds: currentDelayInSeconds,
      status: 'retrying',
    }
  };
} else {
  return {
    json: {
      error: 'Max retries exceeded',
      retryCount: retryCount,
      status: 'failed'
    }
  };
}
```

5. Wait: dynamically waits for the waitTimeInSeconds value calculated in the Exponential Backoff node. Configuration: Resume = After Time Interval, Wait Amount = {{ $json["waitTimeInSeconds"] }}, Unit = Seconds.
6. Check Max Retries: evaluates whether the retry count has exceeded the maximum limit and routes the workflow. True: passes to the Stop and Error node. False: loops back to the Google API node for a retry.
7. Stop and Error: stops the workflow and logs the error when the maximum retry count is reached.

---

**Parameters**

Configurable settings:

- Max Retries: defined in the Exponential Backoff node (const maxRetries = 5). Adjust this value based on your requirements.
- Initial Delay: the starting wait time for retries, defined as 1 second.
- Google API Configuration: ensure your Google API node is properly authenticated and configured with the desired endpoint and parameters.

---

**How to Use**

1. Import the workflow: copy the workflow JSON and import it into your n8n instance.
2. Configure the Google API node: set it up with your credentials and target API endpoint (e.g., Google Sheets, Gmail, etc.).
3. Test the workflow: manually trigger it and observe the retry behavior in case of errors.
4. Monitor logs: use the console logs in the Exponential Backoff node to debug retry timings and status.

---

**Example Scenarios**

- Scenario 1: Successful execution. The Google API processes all requests without errors; the workflow completes without triggering the retry logic.
- Scenario 2: Transient API errors. The Google API returns an error (e.g., 429 Too Many Requests); the workflow retries the request with increasing wait times.
- Scenario 3: Maximum retries exceeded. The workflow reaches the maximum retry count (e.g., 5 retries); an error is raised and the workflow stops.
---

**Considerations**

- Jitter: this workflow does not implement jitter (randomized delay), since it is not required for low-volume use cases. If needed, jitter can be added to the exponential backoff calculation; a sketch is included at the end of this section.
- Retry storms: if multiple workflows run simultaneously, ensure your API quotas can handle the potential retries.
- Error handling beyond max retries: customize the Stop and Error node to notify stakeholders or log errors in a centralized system.

---

**Customization Options**

- Adjust the maximum retry limit and delay calculation to suit your use case.
- Add additional logic to handle specific error codes differently.
- Extend the workflow to notify stakeholders when an error occurs (e.g., via Slack or email).

---

**Troubleshooting**

- Retry not triggering: ensure the retryCount variable is passed correctly between nodes, and confirm that the error output from the Google API node flows to the Exponential Backoff node.
- Incorrect wait time: verify the Wait node is referencing the correct field for waitTimeInSeconds.

---

**Request for Feedback**

We are always looking to improve this workflow. If you have suggestions, improvements, or ideas for additional features, please feel free to share them. Your feedback helps us refine and enhance this solution!
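As noted under Considerations, jitter is easy to add. Here is a hedged sketch (not part of the shipped template) of the same Code node using "full jitter", which draws a uniform random delay up to the exponential ceiling and helps spread out retry storms across concurrent workflows:

```javascript
// Jitter variant of the Exponential Backoff node (a sketch, not the template's code).
// "Full jitter" draws a uniform delay in [0, exponentialDelay].
const retryCount = $json["retryCount"] || 0;
const maxRetries = 5;
const initialDelay = 1; // seconds

if (retryCount < maxRetries) {
  const exponentialDelay = initialDelay * Math.pow(2, retryCount);
  const waitTimeInSeconds = Math.random() * exponentialDelay; // full jitter
  return { json: { retryCount: retryCount + 1, waitTimeInSeconds, status: 'retrying' } };
}
return { json: { error: 'Max retries exceeded', retryCount, status: 'failed' } };
```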
Inventory ABC & Pareto analysis with Google Sheets for supply chain
Tags: Supply Chain, Inventory Management, ABC Analysis, Pareto Principle, Demand Variability, Automation, Google Sheets

**Context**

Hi! I'm Samir Saci, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies optimise inventory and logistics operations by combining data analytics and workflow automation.

This workflow is part of our inventory optimisation toolkit, allowing businesses to perform ABC classification and Pareto analysis directly from their transactional sales data.

> Automate inventory segmentation with n8n!

For business inquiries, feel free to connect with me on LinkedIn.

**Who is this template for?**

This workflow is designed for supply chain analysts, demand planners, and inventory managers who want to:

- Identify their top-performing items (Pareto 80/20 principle)
- Classify products into ABC categories based on sales contribution
- Evaluate demand variability (XYZ classification support)

Imagine you have a Google Sheet where daily sales transactions are stored. The workflow aggregates sales by item, calculates cumulative contribution, and assigns A, B, or C classes. It also computes the mean, standard deviation, and coefficient of variation (CV) to highlight demand volatility.

**How does it work?**

This workflow automates the process of ABC & Pareto analysis from raw sales data:

- Google Sheets input provides daily transactional sales
- Aggregation and code nodes compute sales, turnover, and cumulative shares
- ABC class mapping assigns items into A/B/C buckets
- Demand variability metrics (XYZ) are calculated
- Results are appended into dedicated Google Sheets tabs for reporting

[Watch My Tutorial](https://www.youtube.com/watch?v=YbAA-cq9X_E)

Steps:

1. Load daily sales records from Google Sheets
2. Filter out items with zero sales
3. Aggregate sales by store, item, and day
4. Perform Pareto analysis to calculate the cumulative turnover share
5. Compute demand variability (mean, stdev, CV)
6. Assign ABC classes based on cumulative share thresholds
7. Append results into the ABC XYZ and Pareto output sheets

A sketch of the classification logic follows below.

**What do I need to get started?**

You'll need:

- A Google Sheet with sales transactions (date, item, quantity, turnover); a test sheet is available
- A Google Sheets account connected in n8n
- Basic knowledge of inventory analysis (ABC/XYZ)

**Next Steps**

Use the sticky notes in the n8n canvas to:

- Add your Google Sheets credentials
- Replace the Sheet ID with your own sales dataset
- Run the workflow and check the output tabs: ABC XYZ, Pareto, and Store Sales

This template was built using n8n v1.107.3. Submitted: September 15, 2025
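To make the classification step concrete, here is a minimal sketch of the Pareto/ABC/CV logic as one n8n Code node. It assumes items have already been aggregated upstream to the shape { item, turnover, dailyQuantities }, and uses the common 80%/95% ABC cut-offs; the template's actual field names and thresholds may differ.

```javascript
// Sketch: Pareto cumulative share, ABC class, and demand-variability metrics.
// Assumes upstream aggregation to { item, turnover, dailyQuantities: number[] }.
const items = $input.all().map((i) => i.json);
items.sort((a, b) => b.turnover - a.turnover); // highest turnover first

const total = items.reduce((s, r) => s + r.turnover, 0);
let cumulative = 0;

return items.map((r) => {
  cumulative += r.turnover;
  const cumShare = cumulative / total;
  const abcClass = cumShare <= 0.8 ? 'A' : cumShare <= 0.95 ? 'B' : 'C';

  // XYZ support: coefficient of variation of daily sales quantities
  const q = r.dailyQuantities;
  const mean = q.reduce((s, x) => s + x, 0) / q.length;
  const stdev = Math.sqrt(q.reduce((s, x) => s + (x - mean) ** 2, 0) / q.length);
  const cv = mean ? stdev / mean : null;

  return { json: { ...r, cumShare, abcClass, mean, stdev, cv } };
});
```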
PostgreSQL conversational agent with Claude & DeepSeek (Multi-KPI, Secure)
**Conversational PostgreSQL Agent**

Enable AI-driven conversations with your PostgreSQL database using a secure, visual-free agent powered by n8n's Model Context Protocol (MCP). This template allows users to ask for multiple KPIs in a single message and returns consolidated insights, making it more efficient than the original Conversing with Data template.

---

**Why This Template**

Unlike the Conversing with Data workflow, which handles one KPI per message, this version:

- ✅ Supports multi-KPI questions
- ✅ Returns structured, human-readable reports
- ✅ Uses fewer AI calls, making it faster and cheaper
- ✅ Avoids raw SQL execution for enhanced security

Estimated cost per full multi-request run: ~$0.01

This template is optimized for efficiency. Each message can return 2-4 KPIs (you can raise the agent's Max Iterations setting for more; it is currently set to 30 iterations) using a single Claude 3.5 Haiku session and DeepSeek-based SQL generation, balancing speed, reasoning, and affordability.

---

**Sample Use Case**

User: "Can you show product performance, revenue trends, and top 5 customers?"

Agent:

- Uses ListTables and GetTableSchema
- Generates three SQL queries using getqueryand_data
- Returns:
  - Product Performance: High-Waist Jeans (10 units, $1,027 revenue), Denim Jacket (10 units, $783 revenue)
  - Sales Trends: peak month January 2024 (32 units, $2,378); average monthly units 10-16
  - Customer Insights: Bob Brown ($1,520 spent), Diana Wilson ($925 spent)

All from one natural prompt.

---

**What's Inside**

| Node | Purpose |
|------|---------|
| MCP Server Trigger | Receives user queries via /mcp/... |
| AI Agent + Memory | Understands and plans multi-step queries |
| Think Tool | Breaks down the user's question into structured goals |
| getqueryand_data | Generates SQL securely from natural language |
| ListTables, GetSchema | AI tools to explore the DB safely |
| Read/Insert/Update Tools | Execute structured operations (never raw SQL) |
| checkdatabase Subflow | Validates SQL, formats the response as clean text |

---

**Model Selection Recommendations**

This template uses two models, selected for cost-performance balance and role alignment:

- Claude 3.5 Haiku (Anthropic), for the MCP agent: the main conversational agent uses Claude 3.5 Haiku, a natural fit for MCP since Anthropic created the MCP standard. It's fast, affordable, and performs excellently in tool-calling and reasoning tasks.
- DeepSeek, for the SQL subworkflow: the subworkflow that turns natural language into SQL uses DeepSeek, one of the most affordable and performant models available today for structured outputs like SQL, making it a perfect fit for utility logic.

This setup provides top-tier reasoning plus low-cost execution.

---

**Security Benefits**

- No raw SQL is accepted from the user or the LLM
- All queries are parameterized (see the sketch at the end of this section)
- The schema is dynamically retrieved
- The final output is clean, safe, and human-readable

---

**Try a Prompt**

> "Show me the top 5 products by units sold and revenue, total monthly sales trend, and top 5 customers by spending."

In one message, the agent will:

- Generate and run multiple queries
- Use the schema to validate logic
- Return a single, comprehensive answer

---

**How to Use**

1. Upload both workflow files into your n8n instance: BuildyourownPostgreSQLMCPserverNovisuals.json and checkdatabase.json
2. Set up PostgreSQL credentials (e.g. "Postgres account 3")
3. Confirm the model setup: Claude 3.5 Haiku for the main agent, DeepSeek for the subflow
4. Use the /mcp/... URL from the MCP Server Trigger to connect your frontend or chatbot
5. Ask questions naturally; the agent takes care of planning, querying, and formatting

---

**Customization Ideas**

- Swap Claude or DeepSeek for OpenAI, Mistral, Gemini, etc.
- Export insights to Slack, Notion, or Google Sheets
- Add Switch nodes to control access to specific tables
- Integrate with any front-end app, internal dashboard, or bot

---

**What's Included**

- BuildyourownPostgreSQLMCPserverNovisuals.json: the MCP agent logic
- checkdatabase.json: the SQL generation and formatting utility workflow

Both files must be uploaded into your n8n workspace for the template to function.

---

**Comparison: Conversing with Data vs This Workflow**

| Feature | Conversing with Data | This Workflow |
|---------|----------------------|---------------|
| Handles multi-KPI questions | ❌ No | ✅ Yes |
| Secure query execution | ✅ Yes | ✅ Yes |
| Structured response | ⚠️ JSON / raw | ✅ Clean natural language |
| Cost-efficiency | ⚠️ More calls | ✅ Optimized with fewer calls |
| Endpoint support | ❌ Manual interaction | ✅ MCP-ready (/mcp/...) |

Prefer something more lightweight and cost-sensitive? Try the original Conversing with Data template (single KPI + chart support): Conversing with Data: Transforming Text into SQL Queries and Visual Curves.

> I used this version for over 3 months and only spent $0.80 total, making it a great entry point if you're just getting started or on a limited budget.

---

**More from the Same Creator**

Looking for a different kind of AI reporting workflow? Explore: Customer Feedback Analysis with AI, QuickChart & HTML Report Generator: automatically analyze customer input and generate full reports with insights and charts.
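To illustrate the parameterization point above, here is a hedged sketch (not taken from the template) using the node-postgres client. Values travel separately from the SQL text, so user or LLM input cannot change the query's structure; the table and column names are illustrative.

```javascript
// Hedged illustration of parameterized queries with node-postgres.
// Table/column names are hypothetical, not from the template.
const { Client } = require('pg');

async function topCustomers(client, limit) {
  // Unsafe alternative (never do this): string interpolation lets crafted
  // input rewrite the query, e.g. `... LIMIT ${limit}` with limit = "5; DROP ...".

  // Safe: the driver sends $1 as a bound parameter, separate from the SQL text.
  const res = await client.query(
    'SELECT customer, SUM(amount) AS spent FROM orders GROUP BY customer ORDER BY spent DESC LIMIT $1',
    [limit],
  );
  return res.rows;
}
```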
Generate short-form clips from YouTube videos with GPT-4o, Grok & Airtable
This n8n template demonstrates how to automate YouTube content repurposing using AI. Upload a video to Google Drive and automatically generate transcriptions, A/B-testable titles, AI thumbnails, short-form clips with captions, and YouTube descriptions with chapter timestamps.

Use cases include: content creators who publish 1-2 long-form videos per week and need to extract 5-10 short-form clips, YouTube agencies managing multiple channels, and automation consultants building content systems for clients.

**Good to know**

- Processing time is approximately 10-15 minutes per video depending on length
- Cost per video is roughly $1.00 (transcription $0.65, AI generation $0.35)
- YouTube captions take 10-60 minutes to generate after upload; the workflow includes automatic polling to check when captions are ready (see the sketch after this section)
- Manual steps are still required: video clipping (using the provided timestamps), social media posting, and YouTube A/B test setup

**How it works**

1. When a video is uploaded to Google Drive, the workflow automatically triggers and creates an Airtable record
2. The video URL is sent to AssemblyAI (via Apify) for transcription with H:MM:SS.mmm timestamps
3. GPT-4o-mini analyzes the transcript and generates 3 title variations optimized for A/B testing
4. When you click "Generate thumbnail" in Airtable, your prompt is optimized and sent to Kie.ai's Nano Banana Pro model with 2 reference images for consistent branding
5. After uploading to YouTube, the workflow polls YouTube's API every 5 minutes to check if auto-generated captions are ready
6. Once captions are available, click "Generate clips" and Grok 4.1 Fast analyzes the transcript to identify 3-8 elite clips (45+ seconds each) with proper start/end boundaries and action-oriented captions
7. GPT-4o-mini generates a YouTube description with chapter timestamps based on the transcript
8. All outputs are saved to Airtable: titles, thumbnail, clip timestamps with captions, and description

**How to use**

1. Duplicate the provided Airtable base template and connect it to your n8n instance
2. Create a Google Drive folder for uploading edited videos
3. After activating the workflow, copy the webhook URLs and paste them into Airtable button formulas and automations
4. Upload your edited video to the designated Google Drive folder to trigger the system
5. The workflow automatically generates titles and begins transcription
6. Add your thumbnail prompt and 2 reference images to Airtable, then click "Generate thumbnail"
7. Upload the video to YouTube as unlisted, paste the video ID into Airtable, and check the box to trigger clip generation
8. Use the provided timestamps to manually clip videos in your editor
9. Copy titles, thumbnail, clips, and description from Airtable to publish across platforms

**Requirements**

- Airtable account (Pro plan recommended for automations)
- Google Drive for video upload monitoring
- Apify account for video transcription via the AssemblyAI actor
- OpenAI API key for title and description generation (GPT-4o-mini)
- OpenRouter API key for clip identification (Grok 4.1 Fast)
- Kie.ai account for AI thumbnail generation (Nano Banana Pro model)
- YouTube Data API credentials for caption polling

**Customising this workflow**

- Tailor the system prompts to your content niche by asking Claude to adjust them without changing the core structure
- Modify the clip identification criteria (length, caption style, number of clips) in the Grok prompt
- Adjust the thumbnail generation style by updating the image prompt optimizer
- Add custom fields to Airtable for tracking performance metrics or additional metadata
- Integrate with additional platforms like TikTok or Instagram APIs for automated posting
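For reference, the caption-polling step amounts to a check like the following against the YouTube Data API v3 captions endpoint. This is a hedged sketch: it assumes an OAuth2 access token (as the captions endpoint requires) and compares trackKind case-insensitively for safety; the template's actual node configuration may differ.

```javascript
// Sketch of the caption-polling check (YouTube Data API v3 captions.list).
// The workflow re-runs a check like this every 5 minutes until YouTube's
// auto-generated (ASR) track is ready.
async function captionsReady(videoId, accessToken) {
  const url = `https://www.googleapis.com/youtube/v3/captions?part=snippet&videoId=${videoId}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
  const { items = [] } = await res.json();
  // Auto-generated tracks report trackKind "ASR"; status "serving" means ready.
  return items.some(
    (c) => c.snippet.trackKind.toLowerCase() === 'asr' && c.snippet.status === 'serving',
  );
}
```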
AI-optimized travel itinerary generator with Skyscanner, Booking.com and Gmail
**Introduction**

Automates travel planning by aggregating flights, hotels, activities, and weather via APIs, then uses AI to generate professional itineraries delivered through Gmail and Slack.

**How It Works**

A webhook receives requests, the workflow searches APIs (Skyscanner, Booking.com, Kiwi, Viator, weather) and merges the data, AI builds the itinerary and scores the options, and HTML emails are generated and delivered via Gmail/Slack.

**Workflow Template**

Webhook → Extract → Parallel Searches (Flights/Hotels/Activities/Weather) → Merge → Build Itinerary → AI Processing → Score → Generate HTML → Gmail → Slack → Response

**Workflow Steps**

1. Trigger & Extract: receives the destination, dates, and preferences, and extracts parameters.
2. Data Gathering: parallel API calls fetch flights, hotels, activities, and weather, then the responses are merged.
3. AI Processing: analyzes the data, creates the itinerary, and ranks recommendations (a hypothetical scoring sketch follows below).
4. Delivery: generates the HTML email, sends it via Gmail/Slack, and confirms completion.

**Setup Instructions**

- API Configuration: add keys for Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, and OpenRouter.
- Communication: connect Gmail OAuth2 and the Slack webhook.
- Customization: adjust endpoints, AI prompts, the HTML template, and scoring criteria.

**Prerequisites**

- API keys: Skyscanner, Booking.com, Kiwi, Viator, OpenWeatherMap, OpenRouter
- Gmail account
- Slack workspace
- n8n instance

**Use Cases**

- Corporate travel planning
- Vacation itinerary generation
- Group trip coordination

**Customization**

- Add sources (Airbnb, TripAdvisor)
- Filter by budget preferences
- Add PDF generation
- Customize the Slack format

**Benefits**

- Saves 3-5 hours per trip
- Real-time pricing aggregation
- AI-powered personalization
- Automated multi-channel delivery
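The template does not publish its scoring formula, so here is a purely hypothetical sketch of what the Score step could look like; the weights, budget handling, and field names are all illustrative assumptions, not the template's implementation.

```javascript
// Hypothetical scoring sketch: rank merged options by price, rating, and
// weather fit. All weights and field names are illustrative assumptions.
const prefs = { budget: 800 };
const options = [
  { name: 'Hotel A', price: 450, rating: 4.5, rainProbability: 0.2 },
  { name: 'Hotel B', price: 700, rating: 4.8, rainProbability: 0.6 },
];

function scoreOption(opt, prefs) {
  const priceScore = 1 - Math.min(opt.price / prefs.budget, 1); // cheaper is better
  const ratingScore = (opt.rating ?? 0) / 5;                    // normalize to 0..1
  const weatherScore = 1 - (opt.rainProbability ?? 0.5);        // drier days score higher
  return 0.5 * priceScore + 0.3 * ratingScore + 0.2 * weatherScore;
}

const ranked = options
  .map((o) => ({ ...o, score: scoreOption(o, prefs) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked);
```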
WhatsApp support bot with keyword routing & GPT-4.1-mini responses
Description: Deliver instant answers and automate customer support on WhatsApp with this intelligent n8n workflow template! The system routes incoming messages using keyword-based logic and provides dynamic, AI-powered responses for greetings, FAQs, and complex queries, ensuring your customers always get the right reply without manual effort.

This automation is designed for businesses, service providers, and support teams who want to streamline WhatsApp engagement, reduce manual workload, and provide consistent, conversational answers that scale with demand.

**What This Template Does (Step-by-Step)**

1. Capture Incoming WhatsApp Messages: triggers on every new message received via the WhatsApp API.
2. Keyword-Based Routing: sequential IF conditions check for predefined keywords (e.g., "hi", "pricing", "support"); a sketch of this logic follows below.
3. Send Tailored Keyword Responses: returns fast, pre-written responses for greetings, FAQs, or common scenarios.
4. AI-Powered Fallback with the OpenAI Chat Model: for advanced or unrecognized queries, the workflow generates context-aware, conversational answers using AI.
5. Deliver Automated Replies in Real Time: replies are instantly sent back to WhatsApp for seamless customer communication.
6. Optional Conversation Logging: extend the template to log chats in Notion, Airtable, or your CRM for tracking and insights.

**Perfect For**

- Customer support teams handling repetitive queries
- Businesses wanting instant replies for FAQs & greetings
- Service providers delivering personalized, scalable engagement
- Anyone looking to combine rule-based automation with AI intelligence

**Built With**

- WhatsApp API (message triggers & replies)
- n8n IF node (keyword routing)
- OpenAI Chat Model (AI fallback for complex queries)
- Extendable storage (Notion, Google Sheets, Airtable, etc.)

**Key Benefits**

- Faster, automated customer support on WhatsApp
- Accurate, human-like replies for complex questions
- Hybrid system: keyword rules + AI intelligence
- Centralized chat logging for insights (optional)
- 100% no-code and customizable in n8n
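For clarity, here is the keyword routing expressed as a single n8n Code node rather than chained IF nodes; the keywords, canned replies, and input field path are illustrative assumptions, not the template's exact configuration.

```javascript
// Sketch of the keyword routing as one Code node. In the template this is
// implemented with sequential IF nodes; keywords and replies are examples.
const text = ($json.message?.text ?? '').toLowerCase();

const routes = [
  { keywords: ['hi', 'hello', 'hey'], reply: 'Hi there! How can we help you today?' },
  { keywords: ['pricing', 'price', 'cost'], reply: 'You can find our plans at /pricing.' },
  { keywords: ['support', 'help'], reply: 'A support agent will join this chat shortly.' },
];

const match = routes.find((r) => r.keywords.some((k) => text.includes(k)));

// No keyword matched: flag the item for the OpenAI fallback branch.
return [{ json: { reply: match?.reply ?? null, useAiFallback: !match } }];
```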
Voice-driven AI assistant using VAPI and GPT-4.1-mini with memory
Send VAPI voice requests into n8n with memory and OpenAI for conversational automation.

This template shows how to capture voice interactions from VAPI (Voice AI Platform), send them into n8n via a webhook, process them with OpenAI, and maintain context with memory. The result is a conversational AI agent that responds back to VAPI with short, business-focused answers.

---

**What this template does**

- Listens for POST requests from VAPI containing the session ID and user query
- Extracts the session ID and query for consistent conversation context
- Uses OpenAI (GPT-4.1-mini) to generate conversational replies
- Adds a Memory Buffer Window so each VAPI session maintains history
- Returns results to VAPI in the correct JSON response format (see the sketch at the end of this section)

---

**Who's it for**

- Developers and consultants building voice-driven assistants
- Businesses wanting to connect VAPI calls into automation workflows
- Anyone who needs a scalable voice → AI → automation pipeline

---

**How it works**

1. A Webhook node catches incoming VAPI requests
2. A Set node extracts session_id and user_query from the request body
3. The OpenAI Agent generates short, conversational replies with your business context
4. A Memory node keeps conversation history across turns
5. Respond to Webhook sends results back to VAPI in the required JSON schema

---

**Setup instructions**

Step 1: Create a Function Tool in VAPI

- In your VAPI dashboard, create a new Function Tool
- Name: sendton8n
- Description: Send user query and session data to n8n workflow
- Parameters: session_id (string, required), a unique session identifier; user_query (string, required), the user's question
- Server URL: https://your-n8n-instance.com/webhook/vapi-endpoint

Step 2: Configure the Webhook in n8n

- Add a Webhook node
- Set the HTTP method to POST
- Path: /webhook/vapi-endpoint
- Save, activate, and copy the webhook URL
- Use this URL in your VAPI Function Tool configuration

Step 3: Create a VAPI Assistant

- In VAPI, create a new Assistant
- Add the sendton8n Function Tool
- Configure the assistant to call this tool on user requests
- Test by making a voice query; you should see n8n respond

---

**Requirements**

- An OpenAI API key stored in n8n credentials
- A VAPI account with access to Function Tools
- A self-hosted or cloud n8n instance with webhook access

---

**Customization**

- Update the system prompt in the OpenAI Agent node to reflect your brand's voice
- Swap GPT-4.1-mini for another OpenAI model if you need longer or cheaper responses
- Extend the workflow by connecting to CRMs, Slack, or databases

---

**Contact**

Need help customizing this (e.g., filtering by campaign, connecting to CRMs, or formatting reports)?

- Email: rbreen@ynteractive.com
- LinkedIn: https://www.linkedin.com/in/robert-breen-29429625/
- Website: https://ynteractive.com
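Here is a hedged sketch of preparing the Respond to Webhook payload. The response shape ({ results: [{ toolCallId, result }] }) and the toolCalls path follow VAPI's tool-calling documentation at the time of writing; confirm both against VAPI's current docs before relying on them.

```javascript
// Sketch (n8n Code node before Respond to Webhook): build the JSON that
// VAPI expects back from a server tool call. Payload paths are assumptions
// based on VAPI's documented tool-call webhook; verify against their docs.
const toolCallId = $json.body?.message?.toolCalls?.[0]?.id;
const answer = $json.aiReply; // hypothetical field: text produced by the OpenAI Agent node

return [{
  json: {
    results: [
      { toolCallId, result: answer },
    ],
  },
}];
```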
Generate AI voiceovers from scripts with Gemini TTS and upload to Google Drive
Generate AI Voiceovers from Scripts and Upload to Google Drive

This is the final piece of the AI content factory. This workflow takes your text-based video scripts and automatically generates high-quality audio voiceovers for each one, turning your text into ready-to-use audio assets for your video ads. Go from a spreadsheet of text to a folder of audio files, completely on autopilot.

**⚠️ Critical Requirements (Read First!)**

This is an advanced, self-hosted workflow that requires specific local setup:

- Self-hosted n8n only: this workflow uses the Execute Command and Read/Write Files nodes, which require you to run your own instance of n8n. It will not work on n8n Cloud.
- FFmpeg installation: you must have FFmpeg installed on the same machine where your n8n instance is running. It is used to convert the audio files to a standard format (a sketch of the conversion follows below).

**What it does**

This is Part 3 of the AI marketing series. It connects to the Google Sheet where you generated your video scripts (in Part 2). For each script that hasn't been processed, it:

1. Uses the Google Gemini Text-to-Speech (TTS) API to generate a voiceover
2. Saves the audio file to your local computer
3. Uses FFmpeg to convert the raw audio into a standard .wav file
4. Uploads the final .wav file to your Google Drive
5. Updates the original Google Sheet with a link to the audio file in Drive and marks the script as complete

**Who's it for?**

- Video creators & marketers: mass-produce voiceovers for video ads, tutorials, or social media content without hiring voice actors
- Automation power users: a powerful example of how n8n can bridge cloud APIs with local machine commands
- Agencies: drastically speed up the production of audio assets for client campaigns

**How to set up**

IMPORTANT: This workflow is Part 3 of a series and requires the output from Part 2 ("Generate AI Video Ad Scripts"). If you need Part 1 ("Generate a Strategic Plan") or Part 2 ("Generate Video Scripts"), you can find them for free on my n8n Creator Profile.

1. Connect to your scripts sheet: in the "Getting Video Scripts" node, connect your Google Sheets account and provide the URL of the sheet containing your generated video scripts from Part 2.
2. Configure AI voice generation (HTTP Request): in the "HTTP Request To Generate Voice" node, go to the Query Parameters and replace INSERT YOUR API KEY HERE with your Google Gemini API key. In the JSON body, you can customize the voice prompt (e.g., change <INSERT YOUR DESIRED ACCENT HERE>).
3. Set your local file path: in the first "Read/Write Files from Disk" node, update the File Name field to a valid directory on your local machine where n8n has permission to write files. Replace /Users/INSERTYOURLOCALSTORAGEHERE/.
4. Connect Google Drive: in the "Uploading Wav File" node, connect your Google Drive account and choose the folder where your audio files will be saved.
5. Update your tracking sheet: in the final "Uploading Google Drive Link..." node, ensure it's connected to the same Google Sheet from step 1. This node will update your sheet with the results.
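For reference, the FFmpeg conversion amounts to the command below, shown here as a Node.js child_process call for illustration (the template uses an Execute Command node instead). The input flags assume Gemini TTS returns raw 16-bit, 24 kHz, mono PCM, which matches its documented output at the time of writing; verify the format against the TTS docs before relying on these flags.

```javascript
// Sketch of the FFmpeg conversion step. Assumes raw s16le 24 kHz mono PCM
// input (Gemini TTS's documented output format); verify before relying on it.
const { execFileSync } = require('child_process');

function pcmToWav(inputPath, outputPath) {
  execFileSync('ffmpeg', [
    '-f', 's16le',   // raw signed 16-bit little-endian samples
    '-ar', '24000',  // 24 kHz sample rate
    '-ac', '1',      // mono
    '-i', inputPath,
    outputPath,      // .wav container inferred from the file extension
  ]);
}

pcmToWav('/tmp/voiceover.pcm', '/tmp/voiceover.wav'); // example paths
```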
Create AI intelligence briefs from newsletters with Gemini, Slack, and Notion
This n8n template automatically processes your industry newsletters and creates AI-powered intelligence briefs that filter signal from noise. Perfect for busy professionals who need to stay informed without information overload, it delivers structured insights directly to Slack while optionally saving content questions to Notion.

**Who's it for**

Busy executives, product managers, and content creators at growing companies who subscribe to multiple industry newsletters but lack the time to read them all. Ideal for teams that need to spot trends, generate content ideas, and share curated insights without drowning in information.

**How it works**

The workflow runs daily to fetch labeled emails from Gmail, combines all newsletter content, and sends it to an AI agent for intelligent analysis. The AI filters developments through your specific business lens, identifies only operationally relevant insights, and generates thought-provoking questions for content creation. Results are formatted as rich Slack messages using Block Kit (a sketch follows below), with optional Notion integration for tracking content ideas.

**Requirements**

- Gmail account with a newsletter labeling system
- OpenRouter API key for AI analysis (costs approximately $0.01-0.05 per run), or an API key for a specific LLM
- Slack workspace with bot permissions for message posting
- Notion account with a database set up (optional, for content question tracking)
- Perplexity API key (optional, for additional AI research capabilities)

**How to set up**

1. Connect your Gmail, OpenRouter, and Slack credentials through n8n's secure credential system.
2. Create a Gmail label for the newsletters you want analyzed and set it in the "Get Labeled Newsletters" node.
3. Update the Slack channel ID in the "Send to Slack" node.

The template comes pre-configured with sample settings for tech companies, so you can run it immediately after credential setup.

**How to customize the workflow**

Edit the "Configuration" node to match your industry and audience: change the 13 pre-defined fields including target audience, business context, relevance filters, and content pillars. Adjust the cron expression in the trigger node for your timezone. Modify the Slack formatting code to change the output appearance, or add additional destination nodes for email, Teams, or Discord. Remove the Notion nodes if you only need Slack output. The AI analysis framework is fully customizable through the Configuration node, allowing you to adapt from the default tech company focus to any industry including healthcare, finance, marketing, or consulting.
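As a hedged illustration of the Slack formatting step, here is a minimal Block Kit payload builder for an n8n Code node. The fields on brief (title, insights, questions) are assumptions about the AI agent's output shape; the block types themselves (header, section, divider) are standard Block Kit.

```javascript
// Sketch: turn the AI brief into Slack Block Kit blocks for chat.postMessage.
// Input shape { title, insights: string[], questions: string[] } is assumed.
const brief = $json;

const blocks = [
  { type: 'header', text: { type: 'plain_text', text: brief.title } },
  {
    type: 'section',
    text: { type: 'mrkdwn', text: brief.insights.map((i) => `• ${i}`).join('\n') },
  },
  { type: 'divider' },
  {
    type: 'section',
    text: {
      type: 'mrkdwn',
      text: `*Content questions*\n${brief.questions.map((q) => `• ${q}`).join('\n')}`,
    },
  },
];

return [{ json: { blocks } }];
```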