Automate Instagram & Facebook posting with Meta Graph API & System User Tokens
This template automates posting to Instagram Business and Facebook Pages using the Meta Graph API. It supports both short-lived and long-lived tokens, with a secure approach using System User tokens for reliable, ongoing automation. Includes detailed guidance for authentication, token refresh logic, and API use.

Features:
📸 Publish to Instagram via /media + /media_publish (sketched below)
📘 Post to Facebook Pages via /photos
🔐 Long-lived token support via Meta Business System User
♻️ Token refresh support using staticData in n8n
🧠 In-line sticky note instructions

Use Cases:
Schedule and publish branded social media content
Automate marketing flows with CRM + social sync
Empower internal teams or clients to post without manual steps

Tags: Instagram, Facebook, Meta Graph API, Social Media, Token Refresh, Long-Lived Token, Marketing Automation, System User
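For orientation, here is a minimal sketch of the two-step /media + /media_publish flow that the workflow's HTTP Request nodes perform against the Graph API; the IG user ID, image URL, and token are placeholders, and the API version is an assumption.

```javascript
// Minimal sketch of the two-step Instagram publish flow (Graph API v19.0 assumed).
// igUserId, imageUrl, and accessToken are placeholders for your own values.
const GRAPH = "https://graph.facebook.com/v19.0";

async function publishToInstagram(igUserId, imageUrl, caption, accessToken) {
  // Step 1: create a media container via /media
  const createRes = await fetch(`${GRAPH}/${igUserId}/media`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image_url: imageUrl, caption, access_token: accessToken }),
  });
  const { id: creationId } = await createRes.json();

  // Step 2: publish the container via /media_publish
  const publishRes = await fetch(`${GRAPH}/${igUserId}/media_publish`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ creation_id: creationId, access_token: accessToken }),
  });
  return publishRes.json(); // { id: "<published-media-id>" }
}
```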
Send daily weather forecasts from OpenWeatherMap to Telegram with smart formatting
🌤️ Daily Weather Forecast Bot

A comprehensive n8n workflow that fetches detailed weather forecasts from OpenWeatherMap and sends beautifully formatted daily summaries to Telegram.

📋 Features
📊 Daily Overview: Complete temperature range, rainfall totals, and wind conditions
⏰ Hourly Forecast: Weather predictions at key times (9AM, 12PM, 3PM, 6PM, 9PM)
🌡️ Smart Emojis: Context-aware weather icons and temperature indicators
💡 Smart Recommendations: Contextual advice (umbrella alerts, clothing suggestions, sun protection)
🌪️ Enhanced Details: Feels-like temperature, humidity levels, wind speed, UV warnings
📱 Rich Formatting: HTML-formatted messages with emojis for excellent readability
🕐 Timezone-Aware: Proper handling of Luxembourg timezone (CET/CEST)

🛠️ What This Workflow Does
Triggers daily at 7:50 AM to send morning weather updates
Fetches the 5-day forecast from the OpenWeatherMap API at 3-hour intervals
Processes and analyzes weather data with smart algorithms
Formats a comprehensive report with HTML styling and emojis
Sends it to Telegram with professional formatting and actionable insights

⚙️ Setup Instructions
OpenWeatherMap API
Sign up at OpenWeatherMap
Get your free API key (1000 calls/day included)
Replace API_KEY in the HTTP Request node URL
Telegram Bot
Message @BotFather on Telegram
Send the /newbot command and follow the instructions
Copy the bot token to n8n credentials
Get your chat ID by messaging the bot, then visiting: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
Update the chatId parameter in the Telegram node
Location Configuration
Default location: Strassen, Luxembourg
To change: modify q=Strassen in the HTTP Request URL
Format: q=CityName,CountryCode (e.g., q=Paris,FR)

🎯 Technical Details
API Source: OpenWeatherMap 5-day forecast
Schedule: Daily at 7:50 AM (configurable)
Format: HTML with rich emoji formatting
Error Handling: 3 retry attempts with 5-second delays
Rate Limits: Uses only 1 API call per day
Timezone: Europe/Luxembourg (handles CET/CEST automatically)

📊 Weather Data Analyzed
Temperature ranges and "feels like" temperatures
Precipitation forecasts and accumulation
Wind speed and conditions
Humidity levels and comfort indicators
Cloud coverage and visibility
UV index recommendations
Time-specific weather patterns

💡 Smart Features
Conditional Recommendations: Only shows relevant advice
Night/Day Awareness: Different emojis for time of day
Temperature Context: Color-coded temperature indicators
Weather Severity: Appropriate icons for weather intensity
Humidity Comfort: Comfort level indicators
Wind Analysis: Descriptive wind condition text

🔧 Customization Options
Schedule: Modify the trigger time in the Schedule node
Location: Change the city in the HTTP Request URL
Forecast Hours: Adjust the desiredHours array in the code (see the sketch below)
Temperature Thresholds: Modify emoji temperature ranges
Recommendation Logic: Customize advice triggers

📱 Sample Output
🌤️ Weather Forecast for Strassen, LU
📅 Monday, 2 June 2025
📊 Daily Overview
🌡️ Range: 12°C - 22°C
💧 Comfortable (65%)
⏰ Hourly Forecast
🕒 09:00 ☀️ 15°C
🕒 12:00 🌤️ 20°C
🕒 15:00 ☀️ 22°C (feels 24°C)
🕒 18:00 ⛅ 19°C
🕒 21:00 🌙 16°C
📡 Data from OpenWeatherMap | Updated: 07:50 CET

🚀 Getting Started
Import this workflow to your n8n instance
Add your OpenWeatherMap API key
Set up Telegram bot credentials
Test manually first
Activate for daily automated runs

📋 Requirements
n8n instance (cloud or self-hosted)
Free OpenWeatherMap API account
Telegram bot token
Basic understanding of n8n workflows

---

Perfect for: Daily weather updates, team notifications, personal weather tracking, smart home automation triggers.
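For the Forecast Hours customization mentioned above, here is a minimal sketch of how a desiredHours selection over OpenWeatherMap's 3-hour forecast list can work; the field names follow OpenWeatherMap's documented 5-day forecast response, while the selection logic itself is illustrative rather than the template's exact code.

```javascript
// Sketch: pick the forecast entries closest to the desired local hours.
// "list" follows OpenWeatherMap's 5-day/3-hour forecast response; desiredHours
// mirrors the configurable array mentioned under Customization Options.
const desiredHours = [9, 12, 15, 18, 21];

function pickHourly(list, timeZone = "Europe/Luxembourg") {
  const today = list.slice(0, 8); // the first 8 three-hour slots roughly cover today
  return desiredHours.map((hour) => {
    let best = today[0];
    let bestDiff = Infinity;
    for (const entry of today) {
      const localHour = parseInt(
        new Date(entry.dt * 1000).toLocaleString("en-GB", {
          timeZone, hour: "2-digit", hour12: false,
        }),
        10
      );
      const diff = Math.abs(localHour - hour);
      if (diff < bestDiff) { bestDiff = diff; best = entry; }
    }
    return {
      hour,
      temp: Math.round(best.main.temp),
      feelsLike: Math.round(best.main.feels_like),
      conditions: best.weather[0].description,
    };
  });
}
```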
Convert a date from one format to another
Companion workflow for Date & Time node docs
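As a point of reference, the same conversion can also be done in a Code node using Luxon's DateTime, which n8n exposes in Code nodes; the input and output format strings here are illustrative.

```javascript
// Illustrative date conversion in an n8n Code node (n8n exposes Luxon's DateTime globally).
const parsed = DateTime.fromFormat("25/12/2025", "dd/MM/yyyy");
return [{ json: { converted: parsed.toFormat("yyyy-MM-dd") } }]; // "2025-12-25"
```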
Scrape jobs from Indeed, LinkedIn & more with RapidAPI to Google Sheets
Active Job Scraper Workflow Using the RapidAPI Jobs Search Realtime Data API

This powerful Active Job Scraper workflow uses the RapidAPI Jobs Search Realtime Data API to fetch real-time job listings from leading job boards like Indeed, LinkedIn, ZipRecruiter, and Glassdoor.

---

Overview
Leverage the Jobs Search Realtime Data API on RapidAPI to gather fresh job data from Indeed, LinkedIn, ZipRecruiter, and Glassdoor. This n8n workflow lets you:
Search jobs by location, keywords, job type, and remote options across these major platforms.
Collect detailed job information including descriptions and metadata.
Automatically save the scraped results into Google Sheets for easy tracking and analysis.

---

Why Choose This Workflow?
By integrating the RapidAPI Jobs Search Realtime Data API, you can scrape job listings from the most popular job sites—Indeed, LinkedIn, ZipRecruiter, and Glassdoor—all in one place. Customize your search parameters and get results tailored to your needs.

---

Workflow Components

| Node | Description |
|------------------|-----------------------------------------------------------------|
| Form Trigger | Collects input such as location, search term, job type, and remote status. |
| HTTP Request | Calls the RapidAPI Jobs Search Realtime Data API to fetch jobs from Indeed, LinkedIn, ZipRecruiter, and Glassdoor. |
| Code Node | Processes and formats the API response data. |
| Google Sheets | Appends the extracted job listings to your spreadsheet. |

---

🔑 How to Get an API Key for the Jobs Search Realtime Data API
Follow these steps to get your API key and start using it in your workflow:
Visit the API Page: 👉 Click here to open Jobs Search Realtime Data API on RapidAPI
Log in or Sign Up: Use your Google, GitHub, or email account to sign in. If you're new, complete a quick sign-up.
Subscribe to a Pricing Plan: Go to the Pricing tab on the API page. Select a plan (free or paid, depending on your needs). Click Subscribe.
Access Your API Key: Navigate to the Endpoints tab. Look for the X-RapidAPI-Key under Request Headers. Copy the value shown — this is your API key.
Use the Key in Your Workflow: In your n8n workflow (HTTP Request node), replace `"x-rapidapi-key": "your key"` with `"x-rapidapi-key": "YOUR_ACTUAL_API_KEY"`.
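To make the HTTP Request node concrete, here is a sketch of the same call as plain JavaScript; the host and query parameters are assumptions based on this description, so copy the exact values from the API's Endpoints tab on RapidAPI.

```javascript
// Sketch of the HTTP Request node's call as plain fetch. The host and query
// parameters are assumptions; verify them on the API's Endpoints tab.
const url = new URL("https://jobs-search-realtime-data.p.rapidapi.com/search");
url.searchParams.set("query", "data engineer");
url.searchParams.set("location", "Berlin");

const res = await fetch(url, {
  headers: {
    "x-rapidapi-key": "YOUR_ACTUAL_API_KEY",
    "x-rapidapi-host": "jobs-search-realtime-data.p.rapidapi.com",
  },
});
const jobs = await res.json(); // pass to the Code node for formatting
```

---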
AI-powered automated job search & application
Unleash the power of AI to automate your job search, tailor your applications, and boost your chances of landing your dream job! This comprehensive workflow handles everything from finding relevant job postings to generating personalized resumes and cover letters.

Use cases are many:
Automate your entire job application process: Spend less time searching and more time preparing for interviews.
Tailor your resume and cover letter for every application: Maximize your ATS compatibility and stand out to recruiters.
Efficiently track your applications: Keep all your job search activities organized in one place.
Discover new job opportunities: Leverage the Adzuna API to find relevant listings.

---

Good to know:
Free Adzuna API: This workflow utilizes the free Adzuna API, making job search capabilities accessible without initial cost.
OpenRouter Chat Model Costs: AI model usage (for resume rewriting and cover letter generation) will incur costs based on the OpenRouter pricing model. Please check OpenRouter's official website for updated pricing information.
Model Availability: The AI models used may have geo-restrictions. If you encounter a "model not found" error, it might not be available in your country or region.

---

How it works:
Webhook Trigger: The workflow is initiated via a webhook, allowing you to trigger it manually or integrate it with other systems (e.g., a form submission with your desired job title and resume).
Resume Extraction: Your uploaded resume (e.g., PDF) is automatically extracted into readable text.
Job Search (Adzuna API): Using the provided job title, the workflow queries the Adzuna API to fetch relevant job postings (see the request sketch below).
Job Filtering: Duplicate job listings are filtered out to ensure you receive unique opportunities.
Job Info Extraction: Key details like job description, company name, and job URL are extracted from each posting.
Skills Extraction (AI): An AI model (OpenRouter) analyzes the job description to identify the top skills and qualifications required.
Resume Match Scoring (AI): Your resume is compared against the extracted job skills by an AI model, generating a compatibility score (1-5).
Conditional Resume & Cover Letter Generation: If the resume match score is satisfactory (≥ 3):
Tailored Resume Generation (AI): An AI model rewrites your resume, specifically highlighting the skills and experience most relevant to the target job, in an ATS-friendly and human-readable JSON/HTML format.
Personalized Cover Letter Generation (AI): A custom cover letter is drafted by AI, uniquely tailored to the job description and your newly optimized resume, generated as well-formatted HTML.
Google Sheets Integration: The generated cover letter, tailored resume, job URL, and application status are automatically updated in your designated Google Sheet for easy tracking.
Gmail Notification: A personalized email containing the generated cover letter, tailored resume, and a direct link to the job posting on Adzuna is sent to your specified email address.
Webhook Response: A final text response is sent back via the webhook, summarizing the sent application materials.

---

How to use:
Manual Trigger: The workflow is set up with a manual trigger (Webhook) for initial testing and demonstration. You can easily replace this with an n8n form, a scheduled trigger, or integrate it into your existing tools.
Input: Provide your desired job search keyword and your resume (e.g., as a PDF) to the webhook.
Review & Apply: Review the AI-generated cover letter and tailored resume sent to your email, then proceed to apply for the job using the provided Adzuna link.

---

Requirements:
n8n Instance: A running n8n instance (self-hosted or cloud).
Adzuna API Key: A free Adzuna API key (easily obtainable from their developer portal).
OpenRouter Account: For AI model access (costs apply based on usage).
Google Sheets Account: To store and track your job applications.
Gmail Account: To send automated application emails.

---

Customizing this workflow:
This workflow is highly customizable. You can:
Integrate with other job boards (e.g., LinkedIn, Indeed) using their APIs.
Add more sophisticated AI models or custom prompts for even finer control over resume and cover letter generation.
Connect to other services for CRM, calendar management, or applicant tracking.
Implement different filtering criteria for job postings.
Expand the data stored in your Google Sheet (e.g., interview dates, feedback).

Start automating your job search today and streamline your path to career success!
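For reference, a minimal sketch of the Adzuna search request behind the Job Search step; the endpoint shape follows Adzuna's public API docs, and the country code, keyword, and credentials are placeholders.

```javascript
// Sketch of the Adzuna job search the workflow performs. Endpoint shape follows
// Adzuna's public docs; country, keyword, and credentials are placeholders.
const country = "gb"; // e.g. "us", "de"
const url = new URL(`https://api.adzuna.com/v1/api/jobs/${country}/search/1`);
url.searchParams.set("app_id", "YOUR_APP_ID");
url.searchParams.set("app_key", "YOUR_APP_KEY");
url.searchParams.set("what", "frontend developer");
url.searchParams.set("results_per_page", "20");

const res = await fetch(url);
const { results } = await res.json(); // each result: title, company, description, redirect_url, ...
```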
Automated book summarization with DeepSeek AI, Qdrant Vector DB & Google Drive
📚 AI Book Summarizer with Vector Search – n8n Automation

Overview
This n8n workflow automates the process of summarizing books uploaded to Google Drive using vector databases and LLMs. It uses Cohere for embeddings, Qdrant for storage and retrieval, and DeepSeek (or your preferred LLM) for summarization and Q&A. Designed for researchers, students, and productivity enthusiasts!

Result Example

---

Problem 🛠️
⏳ Reading full books or papers to extract core ideas can take hours.
🧠 Manually summarizing or searching inside long documents is inefficient and overwhelming.

---

Solution ✅
Use this workflow to:
Upload a book to Google Drive 📥
Auto-split and embed the content into Qdrant 🔍
Summarize it using DeepSeek or another LLM 🤖
Store the final summary back to Google Drive 📤
Clean up the vector store afterward 🧹 (see the sketch below)

---

🔥 Result
⚡ Instant AI-generated book summary
💡 Ability to perform semantic search and question-answering
📁 Summary saved back to your cloud
🧠 Enhanced productivity for learning and research

---

Setup ⚙️ (4–8 minutes)
Google Drive Setup
🔗 Connect Google Drive credentials
📁 Create an input folder (e.g., book_uploads)
📁 Create an output folder (e.g., book_summaries)
⚡ Trigger: Use the File Created node to monitor book_uploads
📥 The summary will be saved in book_summaries
LLM & Embeddings Setup
🔑 Create and test API keys for:
DeepSeek/OpenAI for summarization
Cohere for embeddings
Qdrant for vector storage
🧪 Ensure all credentials are added in n8n

---

How It Works 🌟
📂 A file is uploaded to Google Drive
⬇️ The file is downloaded
🧱 It's processed, split into chunks, and sent to Qdrant using Cohere embeddings
❓ A Q&A chain with a vector retriever performs information extraction
🧠 A DeepSeek AI Agent analyzes and summarizes the book
📤 The summary is saved to your Drive
🧽 The Qdrant vector collection is deleted (clean-up)

---

What's Included 📦
✅ Google Drive integration (input/output)
✅ File chunking and embedding using Cohere
✅ Vector storage with Qdrant
✅ Q&A with vector retrieval
✅ Summarization via DeepSeek or another LLM
✅ Clean-up for minimal storage overhead

---

Customization 🎨
You can tailor it to your use case:
🧑🏫 Adjust the summarization prompt for study notes or executive summaries
🌍 Add a translation node for multilingual support
🔍 Enable long-term memory by skipping vector deletion
📨 Send summaries to Notion, Slack, or Email
🧩 Use other LLM providers (OpenAI, Claude, Gemini, etc.)

---

🌐 Explore more workflows
❤️ Buy more workflows at: adamcrafts
🦾 Custom workflows at: adamcrafts@cloudysoftwares.com | adamaicrafts@gmail.com

> Build once, customize endlessly, and scale your content like never before. 🚀
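As a reference for the clean-up step, here is a minimal sketch of dropping the Qdrant collection over its REST API once the summary is saved; the host, API key, and collection name are placeholders.

```javascript
// Sketch of the vector-store clean-up: drop the Qdrant collection after the
// summary is written back to Drive. Host, API key, and collection name are placeholders.
const res = await fetch("https://YOUR-QDRANT-HOST:6333/collections/book_chunks", {
  method: "DELETE",
  headers: { "api-key": "YOUR_QDRANT_API_KEY" },
});
const body = await res.json();
console.log(body.result); // true when the collection was removed
```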
Automated web browsing & extraction with Airtop and AI-prompted queries
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🤖 Autonomous Web Interaction with Airtop (via MCP Trigger)
This workflow uses Airtop to perform fully automated web interactions—triggered by an AI agent through the native MCP Server Trigger in n8n.

> 💡 Perfect for browser automation, intelligent data extraction, and agent-based workflows.

---

✨ Features
✅ Triggered via the native MCP Server Trigger (no need for external LangChain services)
🚀 Automates full browser sessions: open window, load page, scroll, click, fill forms
🧠 Supports AI-prompt-based extraction from web content
📷 Captures screenshots and waits for downloads when needed
🧼 Cleans up with session and window termination
🔄 Fully adaptable to agent-based automation flows

---

🧰 Workflow Breakdown
Trigger: The native MCP Server Trigger receives instructions
Create Session & Window: Starts browser automation in Airtop
Load Web Page: Loads the target URL
Page Interaction: Click elements, scroll inside containers, fill forms with dynamic data
Extract Content: Query using prompts, with paginated extraction
Wait & Capture: Waits for downloadable content and takes a screenshot
Cleanup: Closes windows and terminates the session

---

📦 Requirements
✅ n8n 1.90+ with MCP Server Trigger
✅ Active Airtop account with API credentials
✅ Installed Airtop Tool node (n8n-nodes-base)
🧠 External agent (optional) to send prompts/data to the MCP endpoint

---

🔍 Use Cases
🤖 Agents that browse and extract data from the web
📝 Fill and submit forms from structured data
🔎 Intelligent page querying using prompt-based automation
🧪 Visual testing and screenshot capturing for QA workflows

---

🔧 Nodes Used
MCP Server Trigger (native)
Airtop Tool (native): session creation and termination; window control (open, close, screenshot); interaction (click, scroll, fill); extraction (query, pagination); download waiters

---

🧠 AI-Automation Ready
This workflow is designed to be controlled by external AI agents or orchestration tools. Combined with prompt-based querying and DOM control, it brings a human-like browsing experience into automated pipelines.

---

🔗 License
Open-source under MIT. Commercial usage allowed with attribution.
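If you want to smoke-test the trigger from outside n8n, here is a sketch of a minimal external MCP client using the official TypeScript SDK; the endpoint URL and tool name are placeholders that you should copy from your own MCP Server Trigger node, and the SSE transport assumption should be checked against your n8n version.

```javascript
// Sketch of a minimal external MCP client for smoke-testing the trigger, using
// the @modelcontextprotocol/sdk package. Endpoint URL and tool name are
// placeholders; copy the real values from your MCP Server Trigger node.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(new URL("https://YOUR-N8N-URL/mcp/airtop/sse"));
const client = new Client({ name: "airtop-test-client", version: "1.0.0" });
await client.connect(transport);

console.log(await client.listTools()); // tools exposed by the workflow

const result = await client.callTool({
  name: "airtop_extract", // placeholder tool name
  arguments: { url: "https://example.com", prompt: "List all product names on the page" },
});
console.log(result);
```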
Automate LinkedIn job postings from Recrutei ATS with GPT-4o content generation
Overview: Automated LinkedIn Job Posting with AI

This workflow automates the publication of new job vacancies on LinkedIn immediately after they are created in the Recrutei ATS (Applicant Tracking System). It leverages a Code node to pre-process the job data and a powerful AI model (GPT-4o-mini, configured via the OpenAI node) to generate compelling, marketing-ready content. This template is designed for Recruitment and Marketing teams aiming to ensure consistent, timely, and high-quality job postings while saving significant operational time.

Workflow Logic & Steps
Recrutei Webhook Trigger: The workflow is instantly triggered when a new job vacancy is published in the Recrutei ATS, sending all relevant job data via a webhook.
Data Cleaning (Code Node 1): The first Code node standardizes boolean fields (like remote, fixed_remuneration) from 0/1 to descriptive text ('yes'/'no'); see the sketch below.
Prompt Transformation (Code Node 2): The second, crucial Code node receives the clean job data and:
- Maps the original data keys (e.g., title, description) to user-friendly labels (e.g., Job Title, Detailed Description).
- Cleans and sanitizes the HTML description into readable Markdown format.
- Generates a single, highly structured prompt containing all job details, ready for the AI model.
AI Content Generation (OpenAI): The AI model receives the structured prompt and acts as a 'Marketing Copywriter' to create a compelling, engaging post specifically optimized for the LinkedIn platform.
LinkedIn Post: The generated text is automatically posted to the configured LinkedIn profile or Company Page.
Internal Logging (Google Sheets): The workflow concludes by logging the event (Job Title, Confirmation Status) into a Google Sheet for internal tracking and auditing.

Setup Instructions
To implement this workflow successfully, you must configure the following:
Credentials:
- Configure OpenAI (for the Content Generator).
- Configure LinkedIn (for the Post action).
- Configure Google Sheets (for the logging).
Node Configuration:
- Set up the Webhook URL in your Recrutei ATS settings.
- Replace YOUR_SHEET_ID_HERE in the Google Sheets Logging node with your sheet's ID.
- Select the correct LinkedIn profile/company page in the Create a post node.
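A minimal sketch of what Code Node 1's boolean standardization can look like; the field names come from the description above, and the rest should be adjusted to your actual Recrutei payload.

```javascript
// Code Node 1 sketch: map 0/1 flags from the Recrutei webhook to 'yes'/'no'.
// Field names follow the description; extend the list to match your payload.
const booleanFields = ["remote", "fixed_remuneration"];

return items.map((item) => {
  const job = { ...item.json };
  for (const field of booleanFields) {
    if (field in job) job[field] = job[field] ? "yes" : "no";
  }
  return { json: job };
});
```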
Automated YouTube publishing from Drive with GPT & Gemini metadata generation
How it works
This workflow turns a Google Drive folder into a fully automated YouTube publishing pipeline. Whenever a new video file is added to the folder, the workflow generates all YouTube metadata using AI, uploads the video to your YouTube channel, deletes the original file from Drive, sends a Telegram confirmation, and can optionally post to Instagram and Facebook using permanent system tokens.

High-level flow:
Detects new video uploads in a specific Google Drive folder.
Downloads the file and uses AI to generate:
• a polished first-person YouTube description
• an SEO-optimized YouTube title
• high-ranking YouTube tags
Uploads the video to YouTube with the generated metadata (see the parsing sketch below).
Deletes the original Drive file after upload.
Sends a Telegram notification with video details.
(Optional) Posts to Instagram & Facebook using permanent system user tokens.

Set up steps
Setup usually takes a few minutes.
Add Google Drive OAuth2 credentials for the trigger and download/delete nodes.
Add your OpenAI (or Gemini) API credentials for title/description/tag generation.
Add YouTube OAuth2 credentials in the YouTube Upload node.
Add Facebook/Instagram Graph API credentials if enabling cross-posting.
Replace placeholder IDs (Drive folder ID, Page ID, IG media endpoint).
Review the sticky notes in the workflow—they contain setup guidance and token info.
Activate the Google Drive trigger to start automated uploads.
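One practical detail worth sketching: the AI's output has to be turned into the structured fields the YouTube Upload node expects. The snippet below assumes the prompt asks the model for a JSON object with title, description, and tags keys; that schema is this sketch's convention, not anything fixed by the template.

```javascript
// Code node sketch (mode: "Run Once for Each Item"): normalize the model's JSON
// reply into fields for the YouTube node. Assumes the prompt requested
// {"title": ..., "description": ..., "tags": [...]}; this is the sketch's convention.
const raw = $json.message?.content ?? $json.text; // where the reply lives depends on the AI node
const meta = JSON.parse(raw);

return {
  json: {
    title: meta.title.slice(0, 100),          // YouTube caps titles at 100 characters
    description: meta.description.slice(0, 5000),
    tags: meta.tags.slice(0, 15).join(","),   // comma-separated list for the upload node
  },
};
```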
Altcoin news to LinkedIn posts with DeepSeek AI & Google Sheets
Altcoin News to LinkedIn Posts with DeepSeek AI & Google Sheets

Automate your daily crypto content creation and publishing using this n8n workflow. This automation fetches the latest altcoin news from an RSS feed, rewrites it into LinkedIn-style posts using DeepSeek AI, stores it in Google Sheets, and schedules it for publishing—completely hands-free.

🔧 Prerequisites
n8n instance (cloud or self-hosted)
DeepSeek AI or OpenAI API key
Google account with Sheets API enabled
RSS feed URL of your preferred altcoin news source
(Optional) LinkedIn API or scheduling tool

⚙️ Workflow Breakdown
Fetch RSS Feed – Retrieve altcoin news articles.
Filter Articles – Only select posts from today that contain keywords like "altcoin", "crypto", "defi", etc. (see the sketch below).
Rewrite with DeepSeek – Convert each article into a LinkedIn-style post using a prompt template (includes Hook, Body, CTA).
Store in Google Sheets – Save the generated content into your sheet.
Post to LinkedIn (optional) – Schedule or publish via the LinkedIn API or an integration.

Setup
Connect RSS Feed: Set up the CoinDesk RSS feed to be fetched automatically.
API Keys: Add your DeepSeek AI API key to enable content generation.
Google Sheets: Link your Google Sheet to store both raw RSS data and AI-generated LinkedIn posts.
Scheduling: Set the time intervals for when the posts will be scheduled (e.g., every 6 hours).

Customization Instructions
Change RSS Feed URL: Open the Fetch RSS Feed node and replace the current URL in the "URL" field.
Change the Model: Open the DeepSeek Chat Model node, find the "Model" field, and select your preferred model (e.g., gpt-4).
Change the Prompt: Open the DeepSeek Chat Model node and edit the prompt text in the "Prompt" field to customize the tone, content, or keywords.
Change the Wait Time: Open the Schedule Tweet node and adjust the "Wait Time" (in seconds) to your desired interval:
6 hours: 21600
1 hour: 3600
30 minutes: 1800
15 minutes: 900
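For the Filter Articles step, here is a sketch of what the keyword-and-date filter amounts to in a Code node; the keyword list comes from the description, the RSS field names follow n8n's RSS Read output, and the date handling is simplified.

```javascript
// Sketch of the Filter Articles step: keep only today's items matching crypto keywords.
// Field names (title, contentSnippet, isoDate) follow n8n's RSS Read node output.
const keywords = ["altcoin", "crypto", "defi"];
const today = new Date().toDateString();

return items.filter((item) => {
  const { title = "", contentSnippet = "", isoDate } = item.json;
  const text = `${title} ${contentSnippet}`.toLowerCase();
  const isToday = isoDate && new Date(isoDate).toDateString() === today;
  return isToday && keywords.some((k) => text.includes(k));
});
```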
Live airport delay dashboard with FlightStats, Google Sheets & Slack alerts
Live Airport Delay Dashboard with FlightStats & Team Alerts

Description
Automates live monitoring of airport delays using the FlightStats API. Stores and displays delay data, with Slack alerts to operations/sales teams for severe delays.

Essential Information
Runs on a scheduled trigger (e.g., hourly or daily).
Fetches real-time delay data from the FlightStats API.
Stores data in Google Sheets and alerts teams via Slack for severe delays.

System Architecture
Delay Monitoring Pipeline:
Set Schedule: Triggers the workflow hourly or daily via Cron.
FlightStats API: Retrieves live airport delay data.
Data Management Flow:
Set Output Data: Prepares data for storage or display.
Merge API Data: Combines and processes delay data.
Alert and Display:
Send Response via Slack: Alerts ops/sales for severe delays (see the threshold sketch below).
No Action for Minor Delays: Skips minor delays with no action.

Implementation Guide
Import the workflow JSON into n8n.
Configure the Cron node for the desired schedule (e.g., every 1 hr).
Set up FlightStats API credentials and endpoint (e.g., https://api.flightstats.com).
Configure Google Sheets or Notion for data storage/display.
Test with a sample API call and verify Slack alerts.
Adjust delay severity thresholds as needed.

Technical Dependencies
Cron service for scheduling.
FlightStats API for real-time delay data.
Google Sheets API or Notion API for data storage/display.
Slack API for team notifications.
n8n for workflow automation.

Database & Sheet Structure
Delay Tracking Sheet (e.g., AirportDelays):
Columns: airport_code, delay_status, delay_minutes, timestamp, alert_sent
Example: JFK, Severe, 120, 2025-07-29T20:28:00Z, Yes

Customization Possibilities
Adjust the Cron schedule for different frequencies (e.g., every 30 min).
Modify FlightStats API parameters to track specific airports.
Customize Slack alert messages in the Send Response via Slack node.
Integrate with a dashboard tool (e.g., Google Data Studio) for a live display.
Add email alerts for additional notification channels.
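The severity gate is essentially a threshold check; a minimal sketch follows, where the 60- and 120-minute cutoffs are illustrative defaults you would tune under "Adjust delay severity thresholds".

```javascript
// Sketch of the delay-severity gate. The 60/120-minute thresholds are
// illustrative; tune them per the customization note above.
function classifyDelay(delayMinutes) {
  if (delayMinutes >= 120) return "Severe";   // triggers the Slack alert branch
  if (delayMinutes >= 60) return "Moderate";
  return "Minor";                             // "No Action for Minor Delays" branch
}

const status = classifyDelay($json.delay_minutes);
return [{ json: { ...$json, delay_status: status, alert: status === "Severe" } }];
```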
Build an enterprise RAG system with Google Gemini File Search & Retell AI voice
🧠 Enterprise RAG System with Google Gemini File Search + Retell AI Voice Agent

Build a complete enterprise-grade RAG pipeline using Google Gemini's brand-new File Search API, combined with a powerful Retell AI voice agent (JARVIS) as the conversational front end. This workflow is designed for AI automation agencies, SMBs, enterprise teams, and internal AI copilots.

---

📌 Who Is This For?
Enterprise teams building internal search copilots
AI automation agencies delivering RAG products to clients
SMBs wanting automated knowledge lookup
Anyone needing a production-ready, zero-Pinecone RAG workflow

---

🚧 Problem This Solves
Traditional RAG requires:
Vector DB setup
Embedding jobs
Chunking pipelines
Custom search APIs
Gemini File Search eliminates all of this — you simply create a store and upload files. Indexing, chunking, and embeddings are fully automated. This workflow turns that into a plug-and-play enterprise template.

---

🧩 What This Workflow Does (High-Level)
1️⃣ Create a Gemini File Search Store
Calls the fileSearchStores API (see the sketch below)
Creates a persistent embedding store
Automatically saved to Google Sheets for future retrieval
2️⃣ Auto-Upload Documents from Google Drive
When a new file is added: Download → Start resumable upload → Upload actual bytes
Gemini auto-indexes the document for retrieval
3️⃣ Chat-Based Retrieval (Chat Trigger)
User question → Gemini File Search → Short, precise answer returned.
4️⃣ Voice Search (Retell AI Agent)
Your Gemini RAG can now be searched by voice.

---

🎙️ Retell AI (JARVIS) Voice Agent – Integration Steps

🔧 Step 1 — Paste This Prompt Into Retell AI
You are JARVIS, an advanced AI assistant designed to help the user with their daily tasks. Always call the user "Sir". You remember the user's name and important details to improve the experience.
Whenever the user asks for information that requires external lookup:
Make a short, witty remark related to their request.
Immediately call the n8n tool — do NOT repeat the question back.
Be concise, professional, and efficient.
n8n tool call: Use this tool for all knowledge-based or RAG lookups. It sends the user's query to the n8n workflow.
JSON Schema:

```json
{
  "type": "object",
  "properties": {
    "query": {
      "type": "string",
      "description": "The user's full request for JARVIS to process."
    }
  },
  "required": ["query"]
}
```

🔧 Step 2 — Add This URL to Retell (YOUR WEBHOOK)
Paste the webhook URL from your Respond to Webhook node:
https://YOUR-N8N-URL/webhook/Gemini ← replace with your actual webhook ID
This is the endpoint Retell calls every time the user speaks.

🔧 Step 3 — End-to-End Flow
User speaks to JARVIS
Retell sends the query → n8n
n8n forwards the query to Gemini using File Search
Gemini returns an answer
Retell speaks the response out loud
You now have a voice-powered enterprise RAG agent.
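A minimal sketch of the store-creation call from step 1️⃣; the endpoint path and fields follow the fileSearchStores API named above, but since this API is new, verify them against the current Gemini docs before relying on this.

```javascript
// Sketch of step 1: create a Gemini File Search store. Endpoint path and field
// names are assumptions based on the fileSearchStores API named above; verify
// against current Gemini docs. Keep the returned store name for uploads/retrieval.
const res = await fetch(
  "https://generativelanguage.googleapis.com/v1beta/fileSearchStores",
  {
    method: "POST",
    headers: {
      "x-goog-api-key": "YOUR_GEMINI_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ displayName: "enterprise-docs" }),
  }
);
const store = await res.json(); // e.g. { name: "fileSearchStores/..." }, saved to Sheets by the workflow
```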
---

📦 Requirements
Google Gemini File Search API access
Google Drive folder for document uploads
Retell AI agent
n8n instance
(Optional) Google Sheets for storing store IDs

---

📝 Estimated Setup Time
⏱️ 25–30 minutes (end-to-end)

---

👨‍💻 Template Author
Sandeep Patharkar
Founder – FastTrackAI
AI Automation Architect | Enterprise Workflow Designer
🔗 Website: https://fasttrackaimastery.com
🔗 LinkedIn: https://www.linkedin.com/in/sandeeppatharkar/
🔗 Skool Community: https://www.skool.com/aic-plus
🔗 YouTube: https://www.youtube.com/@FastTrackAIMastery

---

🏁 Summary
This template gives you a full enterprise RAG infrastructure:
Automatic document indexing
Gemini File Search retrieval
Chat + Voice interfaces
Zero vector-database setup
Seamless Retell AI integration
Fully production-ready

Perfect for creating internal AI copilots, employee knowledge assistants, client-facing search apps, and enterprise RAG systems.