Learn n8n basics in 3 easy steps ✨
New to n8n? This simple tutorial is the perfect way to get started. In just a few minutes, you'll build your first automation that runs on a schedule, fetches fresh data from the internet and delivers it straight to your inbox.

What you'll do
- Run the workflow to grab a random inspirational quote.
- See how data flows through each node as it moves from an API call to processing results.
- Connect Gmail and send the quote directly to your email.

What you'll learn
- How to trigger workflows manually and on a schedule ⏰
- How to connect to external APIs and fetch data 🌐
- How to use the Set node to structure and map data 🔧
- How to connect Gmail to send the data 📩

Why it matters
This workflow shows you the n8n basics step by step - no code required. By the end, you'll know how to build, test, and share automations that run on their own, giving you the confidence to explore more advanced use cases. 🚀
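To give a sense of what the Set node step produces, the quote could be mapped into a small JSON item like this before it is emailed (the field names and values are illustrative assumptions, not part of the template itself):

```json
{
  "quote": "The best way to get started is to quit talking and begin doing.",
  "author": "Walt Disney",
  "fetchedAt": "2024-05-01T08:00:00Z"
}
```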
Gmail AI Auto-Responder: Create Draft Replies to incoming emails
This workflow automatically generates draft replies in Gmail. It's designed for anyone who manages a high volume of emails or often faces writer's block when crafting responses. Since it doesn't send the generated message directly, you're still in charge of editing and approving emails before they go out.

How It Works
- Email Trigger: activates when new emails reach the Gmail inbox.
- Assessment: uses OpenAI GPT-4o and a JSON parser to determine whether a response is necessary.
- Reply Generation: crafts a reply with OpenAI GPT-4 Turbo.
- Draft Integration: after converting the text to HTML, it places the draft into the Gmail thread as a reply to the first message.

Setup Overview (~10 minutes)
OAuth Configuration (follow the n8n instructions here):
- Set up Google OAuth in the Google Cloud console. Make sure to add the Gmail API with the modify scope.
- Add Google OAuth credentials in n8n. Make sure to add the n8n redirect URI to the Google Cloud Console consent screen settings.
OpenAI Configuration: add your OpenAI API key in the credentials.
Tweaking the prompt: edit the system prompt in the "Generate email reply" node to suit your needs.

Detailed Walkthrough
Check out this blog post where I go into more detail on how I built this workflow. Reach out to me here if you need help building automations for your business.
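As an illustration of the assessment step, the JSON parser could constrain the model's output to a small object along these lines (the field names here are hypothetical; check the structured output parser in the template for the actual schema):

```json
{
  "needs_reply": true,
  "reason": "The sender asks a direct question about pricing and expects an answer."
}
```

Only messages judged to need a reply are passed on to the reply-generation step.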
Generate Text-to-Speech Using Elevenlabs via API
🎉 Do you want to master AI automation, so you can save time and build cool stuff? I've created a welcoming Skool community for non-technical yet resourceful learners. 👉🏻 Join the AI Atelier 👈🏻

---

This workflow provides an API endpoint to generate speech from text using Elevenlabs.io, a popular text-to-speech service.

Step 1: Configure Custom Credentials in n8n
To set up your credentials in n8n, create a new custom authentication entry with the following JSON structure:

```json
{
  "headers": {
    "xi-api-key": "your-elevenlabs-api-key"
  }
}
```

Replace "your-elevenlabs-api-key" with your actual Elevenlabs API key.

Step 2: Send a POST Request to the Webhook
Send a POST request to the workflow's webhook endpoint with these two parameters:
- voice_id: the ID of the voice from Elevenlabs that you want to use.
- text: the text you want to convert to speech.

This workflow has been a significant time-saver in my video production tasks. I hope it proves just as useful to you! Happy automating!

The n8Ninja
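As an illustration, the JSON body of that POST request could look like the following (the values are placeholders; use a real voice ID from your Elevenlabs account):

```json
{
  "voice_id": "your-voice-id",
  "text": "Hello from n8n! This sentence will be converted to speech."
}
```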
Building a RAG chatbot for movie recommendations with Qdrant and OpenAI
Create a recommendation tool without hallucinations based on RAG with the Qdrant Vector database. This example is based on movie recommendations on the IMDB-top1000 dataset. You can provide your wishes and your "big no's" to the chatbot, for example: "A movie about wizards but not Harry Potter", and get top-3 recommendations.

How it works (a video with the full design process)
- Upload the IMDB-1000 dataset to the Qdrant Vector Store, embedding movie descriptions with OpenAI;
- Set up an AI agent with a chat. This agent will call a workflow tool to get movie recommendations based on a request written in the chat;
- Create a workflow which calls Qdrant's Recommendation API to retrieve the top-3 movie recommendations based on your positive and negative examples.

Set Up Steps
- You'll need to create a free tier Qdrant Cluster (Qdrant can also be used locally; it's open source) and set up API credentials.
- You'll need OpenAI credentials.
- You'll need GitHub credentials & to upload the IMDB Kaggle dataset to your GitHub.
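For reference, Qdrant's Recommendation API is called with a POST to the collection's points/recommend endpoint. A minimal request body might look like the sketch below; the collection name and point IDs are illustrative, and in this workflow the positive and negative examples come from the chat request:

```json
{
  "positive": [101, 245],
  "negative": [733],
  "limit": 3,
  "with_payload": true
}
```

The response contains the three closest matches, whose payloads (movie titles and descriptions) the agent then turns into its recommendation.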
Extract text from PDFs and images into CSV using Vertex AI (Gemini)
Case Study
I'm too lazy to record every transaction for my expense tracking. Since all my expenses are digital, I just extract the transactions from bank PDF statements and screenshots into CSV to import into my budgeting software.

Click here to watch the Youtube tutorial

What this workflow does
- Upload your PDF or screenshots into Google Drive.
- It then passes the PDF/image to Vertex Gemini to do some AI image recognition.
- It then sends the transactions as CSV and stores it in another Google Drive folder.

Setup
- Set up 2 Google Drive folders: 1 for uploading and 1 for the output.
- Input your Google Drive credentials.
- Input your Vertex Gemini credentials.

How to adjust it to your needs
- You can upload other types of documents for information extraction.
- You can extract any text data from any image or PDF.
- You can adjust the AI prompt to do different things.
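To make the extraction step concrete, here is a purely illustrative sketch of the kind of structured output Gemini could be asked to return before it is converted to CSV (the field names are assumptions, not the template's actual schema):

```json
[
  { "date": "2024-03-02", "description": "Grocery Store", "amount": -54.20, "currency": "USD" },
  { "date": "2024-03-03", "description": "Salary", "amount": 3200.00, "currency": "USD" }
]
```

Each object then becomes one row of the CSV file stored in the output folder.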
Invoice processor & validator with OCR, AI & Google Sheets
📝 Say goodbye to manual invoice checking! This smart workflow automates your entire invoice processing pipeline using AI, OCR, and Google Sheets.

---

⚙️ What This Workflow Does:
📥 1. Reads an invoice PDF — Select a local PDF invoice from your machine.
🔍 2. Extracts raw text using OCR — Converts scanned or digital PDFs into readable text.
🧠 3. AI Agent processes the text — Transforms messy raw text into clean JSON using natural language understanding.
🧱 4. Structures and refines the JSON — Converts AI output into a structured, usable format.
🔄 5. Splits item-wise data — Extracts individual invoice line items with all details.
🆔 6. Generates unique keys — Creates a unique identifier for each item for tracking.
📊 7. Updates Google Sheet — Adds extracted items to your designated sheet automatically.
📂 8. Fetches master item data — Loads your internal product master to validate against.
✅ 9. Validates item name & cost — Compares extracted items with your official records to verify accuracy.
📌 10. Updates results per item — Marks each item as Valid or Invalid in the sheet based on matching.

---

💼 Use Case:
Perfect for businesses, freelancers, or operations teams who receive invoices and want to automate validation, detect billing errors, and log everything seamlessly in Google Sheets — all using the power of AI + n8n.

> 🔁 Fast. Accurate. Zero manual work.

---

OCR AI Invoices Automation.
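As a purely hypothetical illustration of steps 4-6, a single extracted line item could end up structured like this before it is appended to the sheet (the field names and key format are assumptions made for illustration, not the template's actual output):

```json
{
  "invoice_number": "INV-2024-0042",
  "item_name": "Blue widget",
  "quantity": 10,
  "unit_cost": 4.50,
  "line_total": 45.00,
  "item_key": "INV-2024-0042_blue-widget"
}
```

The validation steps then compare item_name and unit_cost against the master sheet and mark the row Valid or Invalid accordingly.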
Automate lead qualification with RetellAI Phone Agent, OpenAI GPT & Google Sheets
👉 Build a Phone Agent to qualify outbound leads and schedule inbound calls

Who is this for?
This workflow is designed for sales teams, call centers, and businesses handling both outbound and inbound lead calls who want to automate their qualification, follow-up, and call documentation process without manual intervention. It's ideal for teams using Google Sheets, RetellAI, OpenAI, and Gmail as part of their tech stack.

---

Real-World Use Cases
🛍 E-commerce – Instantly handle product FAQs and order status checks, 24/7.
🏬 Retail Stores – Share store hours, directions, and return policies without lifting a finger.
🍽 Restaurants – Take reservations or answer menu questions automatically.
💼 Service Providers – Book appointments or consultations while you focus on your craft.
📞 Any Local Business – Deliver friendly, consistent phone support — no live agent required.

---

What problem is this workflow solving?
Managing lead calls at scale can be chaotic—between scheduling outbound qualification calls, handling inbound appointment requests, and making sure every call is documented and followed up. This workflow automates the entire process, reducing human error and saving time by:
✅ Sending reminders to reps for outbound calls
✅ Automatically placing calls with RetellAI
✅ Handling inbound calls and checking caller details
✅ Generating and emailing call summaries automatically

---

What this workflow does
This n8n template connects Google Sheets, RetellAI, OpenAI, and Gmail into a seamless workflow:

Outbound Lead Qualification Workflow
- Triggers when a new lead is added to Google Sheets
- Sends an SMS notification to remind the rep to call in 5 minutes
- (Optional) Waits 5 minutes
- Initiates an automated call to the lead via RetellAI

Inbound Call Appointment Scheduler
- Receives inbound calls from RetellAI (via webhook)
- Checks if the caller's number exists in Google Sheets
- Responds to RetellAI with a success or error message

Post-Call Workflow
- Receives post-call data from RetellAI
- Filters only analyzed calls
- Updates the lead's record in Google Sheets
- Uses OpenAI to generate a call summary
- Emails the summary to a team inbox or rep

---

Setup
✅ You need an active RetellAI API key: sign up for RetellAI, create an agent, and set the webhook URLs (n8n_call for call events). Purchase a Twilio phone number and link it to the agent.
✅ Your Google Sheet must have a column for phone numbers (e.g., "Phone")
✅ Gmail account connected and authorized in n8n
✅ OpenAI API key added to your environment variables or credentials

- Configure your Google Sheets node with the correct spreadsheet ID and range
- Add your RetellAI API key to the HTTP request nodes
- Connect your Gmail account in the Gmail node
- Add your OpenAI key in the OpenAI node

👉 See the full setup guide here: Notion Documentation

---

How to customize this workflow to your needs
- Change SMS content: edit the text in the "Send SMS reminder" node to match your team's tone
- Modify call wait time: enable and adjust the "Wait 5 minutes" node to any delay you prefer
- Add CRM integration: replace or extend the Google Sheets node to update your CRM instead of a spreadsheet
- Customize call summary prompts: edit the prompt sent to OpenAI to change the summary style or add extra insights
- Send email to different recipients: change the recipient address in the Gmail node or make it dynamic from the lead record

---

Need help customizing?
Contact me for consulting and support: Linkedin
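To illustrate the inbound branch, the webhook response confirming whether the caller was found could be a small JSON object like the one below. This shape is an illustrative assumption rather than RetellAI's required schema, so adapt it to whatever your agent expects:

```json
{
  "status": "success",
  "phone": "+15551234567",
  "lead_found": true,
  "message": "Caller found in the lead sheet, proceed with scheduling."
}
```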
Generate audio from text using OpenAI and Webhook | Text to speech workflow
Who is this for?
This workflow is ideal for developers, content creators, or customer support teams looking to automate text-to-speech conversion using OpenAI.

What problem does this solve?
It automates the process of converting text inputs into speech, reducing manual effort and enhancing productivity.

What this workflow does
This workflow triggers when a text input is received via a webhook, converts it into audio using the OpenAI API, and sends the generated speech back through a webhook response.

Setup
- Ensure you have an OpenAI API key (you can get it from the OpenAI website).
- Set up the webhook URL and parameters.
- Configure the OpenAI node with your API key (Create New Credentials).
- Set up the Respond to Webhook node.
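For context, OpenAI's text-to-speech endpoint (POST https://api.openai.com/v1/audio/speech) accepts a JSON body along these lines. The n8n OpenAI node builds the equivalent request for you, so this is only a sketch of the underlying call, and the model and voice shown are example choices:

```json
{
  "model": "tts-1",
  "input": "The text received by the webhook goes here.",
  "voice": "alloy"
}
```

The response is binary audio (MP3 by default), which the Respond to Webhook node can return directly to the caller.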
Convert image to text using GROQ LLaVA V1.5 7B
What this template does
This template uses the GROQ LLaVA V1.5 7B API, which offers fast inference for multimodal models with vision capabilities for understanding and interpreting visual data from images. Users send an image and get a description of the image from the model.

Setup
- Open the Telegram app and search for the BotFather user (@BotFather)
- Start a chat with the BotFather
- Type /newbot to create a new bot
- Follow the prompts to name your bot and get a unique API token
- Save your access token and username

Once you set up your bot, you can send an image and get its description.
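Under the hood, Groq exposes an OpenAI-compatible chat completions endpoint, so the image-description request the workflow sends is roughly of the following shape. The model identifier and field values are illustrative assumptions; check Groq's current model list before relying on them:

```json
{
  "model": "llava-v1.5-7b-4096-preview",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Describe this image." },
        { "type": "image_url", "image_url": { "url": "https://example.com/photo.jpg" } }
      ]
    }
  ]
}
```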
Set a light to a specific color on any update in a GitHub repository
This workflow turns a light red when an update is made to a GitHub repository. By default, updates include pull requests, issues, and pushes, to name a few.

Prerequisites
- GitHub credentials.
- Home Assistant credentials.

How it works
- Triggers on the On any update in repository node.
- Uses Home Assistant to turn on a light and then configures the light to turn red.
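For reference, the Home Assistant node effectively calls the light.turn_on service with service data along these lines; the entity ID below is a placeholder for whichever light you pick:

```json
{
  "entity_id": "light.office_lamp",
  "rgb_color": [255, 0, 0]
}
```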
Push JSON data into an app or a spreadsheet file
This workflow template shows how to load JSON data into a workflow and push that data into an app or convert it into a spreadsheet file. Specifically, this workflow shows how to make a generic API request that returns JSON. It then shows how to load that data into a Google Sheets spreadsheet, or convert it to the .csv file format. However, you can use the general pattern to load data into any app or convert to any spreadsheet file format (such as .xlsx).
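As a simple illustration (the fields are placeholders), an API response like the one below maps naturally onto a spreadsheet: each object becomes a row and each key becomes a column header:

```json
[
  { "id": 1, "name": "Ada Lovelace", "email": "ada@example.com" },
  { "id": 2, "name": "Alan Turing", "email": "alan@example.com" }
]
```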
Process large documents with OCR using SubworkflowAI and Gemini
Working with Large Documents In Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service to help AI developers work with documents too large for context windows and runtime memory.

Prerequisites
You'll need a Subworkflow.ai API key to use the Subworkflow.ai service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

How it Works
- Import your document into your n8n workflow.
- Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100mb. Once uploaded, this triggers an Extract job on the service's side, and the response is a "job" record to track progress.
- Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use the "IF" node looping back onto itself to achieve this in n8n.
- Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems API to retrieve whatever you need to complete your AI task. In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown, a well-known approach when data tables or graphics need to be parsed.

How to use
Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents of up to 100mb and as many as 5000 pages.

Customising the workflow
Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

Need Help?
Official API documentation: https://docs.subworkflow.ai/category/api-reference
Join the discord: https://discord.gg/RCHeCPJnYw
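To illustrate the polling step, the job record returned after the upload typically carries a status field the IF node can branch on. The shape below is a hypothetical sketch, not Subworkflow.ai's documented schema, so check the API reference linked above for the real field names:

```json
{
  "id": "job_abc123",
  "type": "extract",
  "status": "processing",
  "dataset_id": null
}
```

Once the status reports completion, the workflow stops looping and fetches the pages through the Datasets and DatasetItems endpoints.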