18 templates found

Post RSS feed items from yesterday to Slack

This workflow collects the previous day's RSS feed items and posts them to a Slack channel. To use it, add your credentials to the Slack node, select the channel for notifications, and enter the URL of the RSS feed.

By Jonathan
4003

Analyze images, videos, documents & audio with Gemini Tools and Qwen LLM Agent

📁 Analyze uploaded images, videos, audio, and documents with specialized tools — powered by a lightweight language-only agent.

🧭 What It Does
This workflow enables multimodal file analysis using Google Gemini tools connected to a text-only LLM agent. Users can upload images, videos, audio files, or documents via a chat interface. The workflow will:
- Upload each file to Google Gemini and obtain an accessible URL.
- Dynamically generate contextual prompts based on the file(s) and user message.
- Allow the agent to invoke Gemini tools for specific media types as needed.
- Return a concise, helpful response based on the analysis.

🚀 Use Cases
- Customer support: let users upload screenshots, documents, or recordings and get helpful insights or summaries.
- Multimedia QA: review visual, audio, or video content for correctness or compliance.
- Educational agents: interpret content from PDFs, diagrams, or audio recordings on the fly.
- Low-cost multimodal assistants: achieve multimodal functionality without relying on large vision-language models.

🎯 Why This Architecture Matters
Unlike end-to-end multimodal LLMs (such as Gemini 1.5 or GPT-4o), this template:
- Uses a text-only LLM (Qwen 32B via Groq) for reasoning.
- Delegates media analysis to specialized Gemini tools.

✅ Advantages
| Feature | Benefit |
| --- | --- |
| 🧩 Modular | LLM and tools are decoupled; they can be updated independently |
| 💸 Cost-Efficient | No need to pay for full multimodal models; tools are used only when needed |
| 🔧 Tool-based Reasoning | Agent invokes tools on demand, similar to OpenAI's Toolformer setup |
| ⚡ Fast | Groq LLMs offer ultra-fast, low-latency responses |
| 📚 Memory | Includes a context buffer for multi-turn chats (15 messages) |

🧪 How It Works
- Input via Chat: users submit a message and (optionally) files via the chatTrigger.
- File Handling: if no files are attached, the prompt is passed directly to the agent. If files are included, they are split, uploaded to Gemini (to get public URLs), and their metadata (name, type, URL) is collected and embedded into the prompt.
- Prompt Construction: a new chatInput is dynamically generated containing the user message plus Media: [array of file data] (a minimal sketch appears at the end of this description).
- Agent Reasoning: the LangChain Agent receives the enriched prompt, file URLs, memory context (15 turns), and access to four Gemini tools: IMG (analyze image), VIDEO (analyze video), AUDIO (analyze audio), and DOCUMENT (analyze document). The agent autonomously decides whether and how to use tools, then responds with concise output.

🧱 Nodes & Services
| Category | Node / Tool | Purpose |
| --- | --- | --- |
| Chat Input | chatTrigger | User interface with file support |
| File Processing | splitOut, splitInBatches | Process each uploaded file |
| Upload | googleGemini | Uploads each file to Gemini, gets URL |
| Metadata | set, aggregate | Builds structured file info |
| AI Agent | LangChain Agent | Receives context + file data |
| Tools | googleGeminiTool | Analyze media with Gemini |
| LLM | lmChatGroq (Qwen 32B) | Text reasoning, high speed |
| Memory | memoryBufferWindow | Maintains session context |

⚙️ Setup Instructions
🔑 Required credentials:
- Groq API key (for the Qwen 32B model)
- Google Gemini API key (PaLM / Gemini 1.5 tools)

🧩 Nodes that need setup (replace the existing credentials):
- Upload a file
- Each Gemini tool (IMG, VIDEO, AUDIO, DOCUMENT)
- lmChatGroq

⚠️ File size & format considerations: some Gemini tools have file size or format restrictions. You may add validation nodes before uploading if needed.

🛠️ Optional Improvements
- Add logging and error handling (e.g., for upload failures).
- Add MIME-type filtering to choose the right tool explicitly.
- Extend with OCR or transcription services before analysis.
- Integrate with Slack, Telegram, or WhatsApp for chat delivery.

🧪 Example Use Case
> "Hola, ¿qué dice este PDF?" ("Hi, what does this PDF say?")
The user uploads a document → the agent routes it to the Gemini DOCUMENT tool → receives the extracted content → the LLM summarizes it in Spanish.

🧰 Tags
multimodal, agent, langchain, groq, gemini, image analysis, audio analysis, document parsing, video analysis, file uploader, chat assistant, LLM tools, memory, AI tools

📂 Files
This template is ready to use as-is in n8n. No external webhooks or integrations are required.
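As an illustration of the prompt-construction step, here is a minimal hedged sketch; the variable names and file-metadata shape are assumptions for illustration, not the template's actual node code:

```javascript
// Hypothetical sketch of the prompt-construction step; names and data shapes
// are illustrative, not the template's actual node code.
const userMessage = "What does this file say?";
const uploadedFiles = [
  { name: "report.pdf", mimeType: "application/pdf", url: "https://example.com/files/abc123" },
];

// If no files were uploaded, the agent receives the raw message; otherwise the
// file metadata is embedded after a "Media:" marker, as described above.
const chatInput = uploadedFiles.length === 0
  ? userMessage
  : `${userMessage}\n\nMedia: ${JSON.stringify(uploadedFiles)}`;

console.log(chatInput);
```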

By Mauricio Perera
3661

Host your own JWT authentication system with Data Tables and token management

Description
A production-ready authentication workflow implementing secure user registration, login, token verification, and refresh token mechanisms. Perfect for adding authentication to any application without running a separate auth service.

What it does
This template provides a complete authentication backend using n8n workflows and Data Tables:
- User Registration: creates accounts with secure password hashing (SHA-512 + unique salts)
- Login System: generates access tokens (15 min) and refresh tokens (7 days) using JWT
- Token Verification: validates access tokens for protected endpoints
- Token Refresh: issues new access tokens without requiring re-login
- Security Features: HMAC-SHA256 signatures, hashed refresh tokens in the database, protection against rainbow table attacks

Why use this template
- No external services: everything runs in n8n, with no Auth0, Firebase, or third-party dependencies
- Production-ready security: industry-standard JWT implementation with proper token lifecycle management
- Easy integration: simple REST API endpoints that work with any frontend framework
- Fully customizable: adjust token lifespans, add custom user fields, implement your own business logic
- Well-documented: extensive inline notes explain every security decision and implementation detail

How to set up
Prerequisites:
- n8n instance (cloud or self-hosted)
- n8n Data Tables feature enabled

Setup steps:
1. Create Data Tables:
   - users table: id, email, username, password_hash, refresh_token
   - refresh_tokens table: id, user_id, token_hash, expires_at
2. Generate secret keys. Run this command to generate a random secret, and generate two different secrets for ACCESS_SECRET and REFRESH_SECRET:
   node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
3. Configure secrets: update the three "SET ACCESS AND REFRESH SECRET" nodes with your generated keys, or migrate to n8n Variables for better security (instructions in the workflow notes).
4. Connect Data Tables: open each Data Table node and select your created tables from the dropdown.
5. Activate the workflow: save and activate it, then note your webhook URLs.

API Endpoints
- Register: POST /webhook/register-user with body { "email": "user@example.com", "username": "username", "password": "password123" }
- Login: POST /webhook/login with body { "email": "user@example.com", "password": "password123" }; returns { "accessToken": "...", "refreshToken": "...", "user": {...} }
- Verify Token: POST /webhook/verify-token with body { "access_token": "your_access_token" }
- Refresh: POST /webhook/refresh with body { "refresh_token": "your_refresh_token" }

Frontend Integration Example (Vue.js/React)
Login flow:

```javascript
const response = await fetch('https://your-n8n.app/webhook/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email, password })
});
const { accessToken, refreshToken } = await response.json();
localStorage.setItem('accessToken', accessToken);
```

Make authenticated requests:

```javascript
const data = await fetch('https://your-api.com/protected', {
  headers: { 'Authorization': `Bearer ${accessToken}` }
});
```

Key Features
- Secure password storage: never stores plain-text passwords; uses SHA-512 with unique salts
- Two-token system: short-lived access tokens (security) + long-lived refresh tokens (convenience)
- Database token revocation: refresh tokens can be revoked for logout-all-devices functionality
- Duplicate prevention: checks username and email availability before account creation
- Error handling: generic error messages prevent information leakage
- Extensive documentation: 30+ sticky notes explain every security decision

Use Cases
- SaaS applications needing user authentication
- Mobile app backends
- Internal tools requiring access control
- MVP/prototype authentication without third-party costs
- Learning JWT and auth system architecture

Customization
- Token lifespan: modify expiration times in the "Create JWT Payload" nodes
- User fields: add custom fields to registration and the user profile
- Password rules: update validation in the "Validate Registration Request" node
- Token rotation: implement refresh token rotation for enhanced security (notes included)

Security Notes
⚠️ Important:
- Change the default secret keys before production use
- Use HTTPS for all webhook endpoints
- Store secrets in n8n Variables (not hardcoded)
- Regularly rotate secret keys in production
- Consider rate limiting for login endpoints

Support & Documentation
The workflow includes comprehensive documentation: a complete authentication flow overview, security explanations for every decision, a troubleshooting guide, setup instructions, and an FAQ covering common issues. Perfect for developers who want full control over their authentication system without the complexity of managing separate auth infrastructure.

Tags: authentication, jwt, login, security, user-management, tokens, password-hashing, api, backend
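To round out the frontend example above, here is a minimal hedged sketch of the client-side refresh flow. The /webhook/refresh path follows the endpoints listed in this template; the response field names are an assumption:

```javascript
// Client-side refresh sketch. The /webhook/refresh path comes from the template
// above; the exact response field names are an assumption.
async function refreshAccessToken(baseUrl) {
  const refresh_token = localStorage.getItem('refreshToken');
  const res = await fetch(`${baseUrl}/webhook/refresh`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ refresh_token }),
  });
  if (!res.ok) throw new Error('Refresh failed; user must log in again');
  const { accessToken } = await res.json(); // assumed response shape
  localStorage.setItem('accessToken', accessToken);
  return accessToken;
}
```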

By Luka Zivkovic
3421

Website lead management: Send contact form submissions to WhatsApp & Google Sheets

Who's it for
Business owners, marketers, and web developers who want to respond instantly to website contact form submissions and maintain organized lead records without manual monitoring.

What it does
This workflow automatically processes contact form submissions from your website, sending immediate WhatsApp notifications with formatted lead details while simultaneously logging all data to Google Sheets for organized lead management and follow-up tracking.

How it works
When someone submits your website contact form, the webhook instantly receives the data, formats it into a professional WhatsApp message with emojis and structure, sends the notification to your phone, and logs all details (name, email, phone, service, message, timestamp) to a Google Sheets database for permanent storage and analysis.

Requirements
- WhatsApp Business API credentials
- Google Sheets API access with a spreadsheet containing these columns: date (timestamp), name (contact's full name), email (contact's email address), phone (contact's phone number), service (requested service/interest), message (contact's message/inquiry)
- Website contact form that can POST to the webhook URL with fields: name, email, phone, service, message (see the example request after this section)
- n8n instance (self-hosted or cloud)

Google Sheets Setup
Create a new Google Sheet with the following column headers in row 1:
- Column A: date
- Column B: name
- Column C: email
- Column D: phone
- Column E: service
- Column F: message
The workflow will automatically populate these columns with each form submission and use the date column for duplicate checking.

How to set up
1. Credentials: configure WhatsApp Business API credentials in the WhatsApp node, then set up the Google Sheets API connection and grant the necessary permissions.
2. Configuration: update the recipient phone number in the WhatsApp node (format: +1234567890), replace the Google Sheets document ID with your spreadsheet ID, and ensure your sheet has the required column structure mentioned above.
3. Integration: copy the webhook URL from the Contact Form Trigger node and configure your website form to POST data to this endpoint with field names: name, email, phone, service, message.
4. Testing: submit a sample form entry, then verify that the WhatsApp notification is received and the data appears in Google Sheets.

How to customize the workflow
- Message format: modify the WhatsApp message template in the Format Lead Data node
- Additional fields: add more form fields by updating both the Code node and the Google Sheets mapping
- Email notifications: include email alerts by adding an Email node after the Format Lead Data node
- Conditional logic: set up different notifications for high-priority services or VIP customers
- Data validation: add filtering rules in the Code node to handle spam or invalid submissions
- Multiple recipients: configure the WhatsApp node to send alerts to multiple team members
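For reference, a form submission to the workflow's webhook could look like the following hedged sketch; the webhook URL is a placeholder you would copy from your own Contact Form Trigger node:

```javascript
// Hypothetical example of POSTing a contact-form submission to the workflow's
// webhook. Replace the URL with the one copied from your Contact Form Trigger node.
const submission = {
  name: 'Jane Doe',
  email: 'jane@example.com',
  phone: '+1234567890',
  service: 'Web design',
  message: 'I would like a quote for a new website.',
};

fetch('https://your-n8n.app/webhook/contact-form', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(submission),
}).then((res) => console.log('Webhook responded with status', res.status));
```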

By Roshan Ramani
1307

Build a comprehensive multimodal assistant on Telegram with OpenAI, SERP and Vector Store

J.A.R.V.I.S. Multimodal AI assistant on Telegram with OpenAI
This workflow transforms your Telegram bot into J.A.R.V.I.S., a powerful multimodal AI assistant. It can understand and process text, voice messages, images, and documents. The assistant can search the web, scrape websites, generate images, perform calculations, and reference uploaded documents to provide comprehensive, context-aware responses in either text or audio format.

🧑‍💻 Who's it for
This workflow is for developers, AI enthusiasts, and businesses who want to create an advanced, interactive AI assistant on Telegram. It's perfect for automating customer support, creating a personal AI helper, or exploring the capabilities of multimodal large language models (LLMs) in a practical application.

⚙️ How it works
The workflow begins when a message is received by your Telegram bot. A Switch node then routes the data based on the message type:
- Text: the message is formatted and sent directly to the main AI agent.
- Voice: the audio file is downloaded from Telegram and transcribed into text using the OpenAI API.
- Image: the image is downloaded and analyzed by an OpenAI vision model to understand its content.
- Document: the file is downloaded and its content is stored in a temporary vector store, making it searchable for the AI.
The processed input is then passed to the core "J.A.R.V.I.S." Agent node. This agent uses an OpenAI model, conversational memory, and a suite of tools (Google Search, Web Scraper, Image Generator, Calculator, and the document vector store) to formulate a response. Finally, the workflow checks whether the initial message was a voice note; if so, it generates an audio response. Otherwise, it sends the answer back to the user as a text message.

🛠️ How to set up
1. Telegram: create a bot with @BotFather and obtain your bot token. Add Telegram API credentials in n8n with your bot token to the Receive Message trigger node and all other Telegram nodes. In the Receive Message node, enter the chatId of the user or group authorized to interact with the bot.
2. OpenAI: add your OpenAI API credentials to all OpenAI, AI Agent, and AI tool nodes.
3. SerpAPI: add your SerpAPI credentials to the Basic Google Search node to enable web search functionality.
4. Jina AI: add your Jina AI API key to the Setup node; the key is used by the Webpage Scraper node.

✅ Requirements
- Telegram Bot API credentials and bot token
- OpenAI API credentials
- SerpAPI API credentials
- Jina AI API credentials

🎨 How to customize the workflow
- Change the AI model: select a different OpenAI model in the OpenAI Chat Model node (e.g., switch from gpt-4.1 to gpt-4o) or in the Analyze Image and Transcribe nodes.
- Modify the AI's personality: edit the system prompt in the J.A.R.V.I.S. Agent node to change its name, tone, instructions, or default language.
- Expand its tools: connect more tools to the J.A.R.V.I.S. Agent node to extend its capabilities, such as a database or another third-party API.
- Adjust the response format: modify the If Audio Response node to change the conditions for sending text or audio messages. For example, you could configure it to always respond with text.

💬 Need help? Join the Discord or ask in the Forum.

By FabioInTech
1125

Automate web research & analysis with Oxylabs & GPT for comprehensive reports

Fully automate deep research from start to finish: scrape Google Search results, select relevant sources, scrape and analyze each source in parallel, and generate a comprehensive research report.

Who is this for?
This workflow is for anyone who needs to research topics quickly and thoroughly: content creators, marketers, product managers, researchers, journalists, students, or anyone seeking deep insights without spending hours browsing websites. If you find yourself opening dozens of browser tabs to piece together information, this template automates that entire process and delivers comprehensive reports in minutes.

How it works
1. Submit your research questions through n8n's chat interface (include as much context as you need).
2. AI generates strategic search queries to explore different angles of your topic (customize the number of queries as needed).
3. Oxylabs scrapes Google Search results for each query (up to 50 results per query).
4. AI evaluates the results and selects the most relevant and authoritative sources.
5. Content extraction runs in parallel as Oxylabs scrapes each source and AI extracts key insights.
6. Summaries are collected in n8n's data table for final processing.
7. AI synthesizes everything into a comprehensive research report with actionable insights.
See the complete step-by-step tutorial on the n8n blog.

Requirements
- Oxylabs AI Studio API key – get a free API key with 1000 credits
- OpenAI API key (or alternatives such as Claude, Gemini, or local Ollama LLMs)

Setup
1. Install Oxylabs AI Studio as shown on this page.
2. Set your API keys: Oxylabs AI Studio and OpenAI.
3. Create a data table and select the table name in each data table node.
4. Create a sub-workflow: select the three nodes (Scrape content, Summarize content, Insert row), right-click, and choose "Convert 3 nodes to sub-workflow".
5. Edit the sub-workflow settings for parallel execution: set Mode to "Run once for each item", then under Options → Add Option disable "Wait For Sub-Workflow Completion".

Once you finish all these setup steps, you can run the workflow through n8n's chat interface. For example, send the following message:
"I'm planning to build a wooden summer house and would appreciate guidance on the process. What are the key considerations I should keep in mind from planning through completion? I'm particularly interested in the recommended construction steps and which materials will ensure long-term durability and quality."

Customize this workflow for your needs
Feel free to modify the workflow to fit the scale and final output your project requires:
- To reuse this workflow, clear the data table after the final analysis by adding a Data table node with the Delete row(s) action.
- Scale up by processing more search queries, increasing results per query beyond 10, and selecting additional relevant URLs.
- Enable JavaScript rendering in the Oxylabs AI Studio (Scraper) node to ensure all content is gathered.
- Adjust the system prompts in the LLM nodes to fit your specific research goals.
- Explore other AI Studio apps, such as Browser Agent for interactive browser control or Crawler for mapping entire websites.
- Connect other nodes such as Google Sheets, Notion, Airtable, or webhooks to route results where you need them.

By Vytenis
947

Check which AI models are used in your workflows

How it works
1. Fetch all workflows from your n8n instance.
2. Filter workflows that contain nodes with a modelId setting (see the sketch after this section).
3. Extract the node names, model IDs, model names, workflow names, and workflow URLs.
4. Save the extracted information into a connected Google Sheet.

Set up steps
1. Connect your n8n API credentials.
2. Connect your Google Sheets account.
3. Replace "Your n8n domain" with your actual domain URL.
4. Use this Google Sheet template to create a new sheet for results.
Setup typically takes 5 minutes. Be cautious: if you have over 100 workflows, performance may be impacted.

Notes
- Sticky notes inside the workflow provide extra guidance.
- This workflow clears old sheet data before writing new results.
- Make sure your n8n instance allows API access.

Result example

Updates
- The workflow originally didn't detect AI models inside tool nodes; this is now fixed.
- 2025-04-29: supports n8n 1.91.0 with opening the node directly; the workflow URL now includes the node ID.
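Conceptually, the fetch-and-filter steps work like the following hedged sketch against the n8n public REST API. The endpoint and header are standard for the public API, but the exact location of modelId inside each node's parameters varies by node type and n8n version, so treat the traversal as illustrative:

```javascript
// Illustrative sketch only: fetch all workflows via the n8n public REST API and
// report nodes that define a modelId parameter. The exact location of modelId
// inside node.parameters can vary by node type and n8n version.
const N8N_URL = 'https://your-n8n-domain.com'; // your instance URL
const headers = { 'X-N8N-API-KEY': process.env.N8N_API_KEY };

const res = await fetch(`${N8N_URL}/api/v1/workflows`, { headers });
const { data: workflows } = await res.json();

for (const wf of workflows) {
  for (const node of wf.nodes ?? []) {
    const modelId = node.parameters?.modelId;
    if (modelId) {
      console.log({
        workflow: wf.name,
        node: node.name,
        modelId,
        url: `${N8N_URL}/workflow/${wf.id}`, // deep link to the workflow
      });
    }
  }
}
```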

By darrell_tw
839

Convert n8n tags into folders and move workflows

n8n recently introduced folders, a big improvement for workflow management on top of tags. Existing workflows, however, still need to be moved into folders manually. The simplest approach is to convert the current tags into folders and move every workflow carrying a given tag into the corresponding folder. This workflow assumes the tag name will be used as the folder name.

To note
For workflows that have more than one tag, the workflow will be assigned to the folder of the last tag processed.

How does it work
The setup has been kept simple and beginner-friendly:
1. Copy and paste this workflow into your n8n canvas. You must have existing workflows and tags before you can run it.
2. Set your n8n login details in the "set Credentials" node with the n8n URL, username, and password.
3. Set up your n8n API credentials on the "get workflows" n8n node.
4. Run the workflow. This opens a form where you can select the number of tags to move and click submit.
5. The workflow responds with the number of workflows that were successfully moved.

Read more about the template. Built by Zacharia Kimotho - Imperol

By Zacharia Kimotho
685

Website SEO health analytics with Google Sheets, PDF reports & Gmail alerts

Automated SEO Health Monitoring & Reporting
This workflow automatically monitors the SEO health of websites stored in a Google Sheet. It fetches each website's HTML, analyzes key SEO metrics (title, meta description, H1 count, canonical, robots, performance score, etc.) and updates the results back into Google Sheets. If performance is poor (score < 50), it sends an alert email. For healthy sites, it generates a detailed PDF report and stores it in Google Drive.

Who's it for
- Digital marketing teams
- SEO agencies
- Website administrators who want automated SEO health checks
- Businesses with multiple websites or landing pages to monitor

How it works
1. Daily Trigger: runs every day at 9 AM.
2. Fetch Website List: reads website URLs from Google Sheets.
3. Crawl Websites: uses HTTP requests to fetch each website's HTML.
4. SEO Analysis: extracts SEO-related metadata (title, meta description, H1, etc.); see the sketch at the end of this description.
5. Health Check: scores SEO performance based on predefined rules.
6. Decision Node: if score < 50, send an alert email; otherwise, generate a full SEO report.
7. Update Logs: logs results back into Google Sheets.
8. Generate PDF Reports: converts HTML reports into PDFs via the PDF.co API.
9. Save to Drive: stores the PDF reports in Google Drive for long-term access.

How to set up
1. Open n8n and import the workflow.
2. Configure your Google Sheets credentials and specify the sheet containing your website URLs.
3. Add your Gmail account to allow automated alert emails.
4. Set up your Google Drive credentials for storing PDF reports.
5. Obtain an API key from PDF.co and configure the HTTP Request node.
6. Adjust the Schedule Trigger to the time that works best (default: 9 AM daily).
7. Test the workflow with a sample domain list.

Requirements
- n8n instance (self-hosted or cloud)
- Google Sheets account (to store website URLs and logs)
- Gmail account (for sending alerts)
- Google Drive account (to save SEO reports)
- PDF.co API key (for HTML-to-PDF conversion)

How to customize
- Change the performance threshold: modify the IF node condition (default < 50).
- Custom SEO rules: edit the "SEO Health Check" Function node to add or remove checks (e.g., missing schema tags, page load times).
- Different output storage: replace Google Drive with Dropbox, S3, or OneDrive.
- Alternate notification channels: swap Gmail for Slack, Microsoft Teams, or Telegram.

Add-ons
- Send Slack/Teams notifications for low scores.
- Add the PageSpeed Insights API for performance scoring.
- Generate weekly summary reports per domain.
- Integrate with Notion/Confluence to log SEO health history.

Use case examples
- An SEO agency monitors 100+ client websites daily and sends alerts when a site shows poor SEO signals.
- A company's marketing manager gets a daily SEO health PDF report stored in Drive.
- A SaaS product team automatically logs performance changes for each release.

Common troubleshooting
| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Workflow fails at HTTP Crawl | Website blocks requests / timeout | Increase the timeout in the Set Config node or add retry logic. |
| Always returns https://example.com | Missing canonical / OG tags in HTML | The enhanced code now infers from JSON-LD or domain detection; update the analyzer. |
| PDF not generated | Invalid API key or wrong endpoint in the PDF.co node | Verify the PDF.co API key and endpoint URL. |
| Email not sending | Gmail credentials not set or blocked | Reconnect Gmail in the n8n credentials manager. |
| Google Sheet not updating | Wrong column mapping in the Update Sheet node | Check node mapping: domain column vs. performance/date columns. |
| Google Drive upload fails | Missing folder permissions | Ensure the correct Drive folder ID and credentials. |

Need help?
If you'd like assistance setting up, customizing, or scaling this workflow for your use case, our n8n automation team at WeblineIndia can help you:
- Tailor SEO rules for your industry.
- Connect additional APIs (Ahrefs, Semrush, PageSpeed).
- Automate weekly/monthly reporting with summary dashboards.
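To make the SEO Analysis step concrete, here is a minimal hedged sketch of the kind of extraction an n8n Code/Function node might perform. The function name and the regex-based checks are illustrative assumptions, not the template's actual node code:

```javascript
// Illustrative sketch (not the template's actual Function node): extract basic
// SEO signals from raw HTML with regular expressions. Real pages vary in
// attribute order, so a production version might use an HTML parser instead.
function analyzeSeo(html) {
  const pick = (re) => (html.match(re) || [])[1]?.trim() ?? null;
  return {
    title: pick(/<title[^>]*>([\s\S]*?)<\/title>/i),
    metaDescription: pick(/<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i),
    canonical: pick(/<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']*)["']/i),
    robots: pick(/<meta[^>]+name=["']robots["'][^>]+content=["']([^"']*)["']/i),
    h1Count: (html.match(/<h1[\s>]/gi) || []).length, // number of H1 headings
  };
}

// Example: a page with a title and one H1 but no meta description
console.log(analyzeSeo('<html><head><title>Acme</title></head><body><h1>Hi</h1></body></html>'));
```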

By WeblineIndia
547

Process Thai documents with TyphoonOCR & AI to Google Sheets (multi-page PDF)

⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package, pdfseparate from poppler-utils, and custom command execution. Make sure to install all required dependencies locally.

Who is this for?
This template is designed for developers, back-office teams, and automation builders (especially in Thailand or Thai-speaking environments) who need to process multi-file, multi-page Thai PDFs and automatically export structured results to Google Sheets. It is ideal for:
- Government and enterprise document processing
- Thai-language invoices, memos, and official letters
- AI-powered automation pipelines that require Thai OCR

What problem does this solve?
Typhoon OCR is one of the most accurate OCR tools for Thai text, but integrating it into an end-to-end workflow usually requires manual scripting and handling of multi-page PDFs. This template solves that by:
- Splitting PDFs into individual pages
- Running Typhoon OCR on each page
- Aggregating the text back into a single file
- Using AI to extract structured fields
- Automatically saving the structured data into Google Sheets

What this workflow does
1. Trigger: manual execution or any n8n trigger node
2. Load Files: read PDFs from a local doc/multipage folder
3. Split PDF Pages: use pdfinfo and pdfseparate to break PDFs into pages (see the sketch at the end of this description)
4. Typhoon OCR: run OCR on each page via Execute Command
5. Aggregate: combine the per-page OCR text
6. LLM Extraction: use AI (e.g., GPT-4, OpenRouter) to extract fields into JSON
7. Parse JSON: convert the structured JSON into tabular format
8. Google Sheets: append one row per file to a Google Sheet
9. Cleanup: delete the temporary split pages and move processed PDFs into a Completed folder

Setup
1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - poppler-utils: provides pdfinfo and pdfseparate
   - qpdf: backup page counting
2. Create folders:
   - /doc/multipage for incoming files
   - /doc/tmp for split pages
   - /doc/multipage/Completed for processed files
3. Google Sheet: create a sheet with column headers like:
   book_id | date | subject | to | attach | detail | signed_by | signed_by2 | contact_phone | contact_email | contact_fax | download_url
4. API keys: export your TYPHOON_OCR_API_KEY and OPENAI_API_KEY (or use credentials in n8n)

How to customize this workflow
- Replace the LLM provider in the "Structure Text to JSON with LLM" node (supports OpenRouter, OpenAI, etc.)
- Adjust the JSON schema and parsing logic to match your documents
- Update the Google Sheets mapping to fit your desired fields
- Add trigger nodes (Dropbox, Google Drive, Webhook) to automate file ingestion

About Typhoon OCR
Typhoon is a multilingual LLM and NLP toolkit optimized for Thai. It includes typhoon-ocr, an open-source Python OCR package designed for Thai-centric documents. It is highly accurate and works well in automation pipelines, making it perfect for government paperwork, PDF reports, and multi-language documents in Southeast Asia.

Deployment option
You can also deploy this workflow easily using the Docker image provided in my GitHub repository: https://github.com/Jaruphat/n8n-ffmpeg-typhoon-ollama. This Docker setup already includes n8n, ffmpeg, Typhoon OCR, and Ollama combined, so you can run the whole environment without installing each dependency manually.
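As an illustration of the page-splitting step, here is a minimal hedged sketch of what an Execute Command-style script might do with poppler-utils. The input filename is hypothetical and the paths follow the folder layout above; this is not the template's exact command:

```javascript
// Illustrative page-splitting sketch using poppler-utils (pdfinfo, pdfseparate).
// The input file is a hypothetical example; paths follow the folder layout above.
const { execSync } = require('node:child_process');

const input = '/doc/multipage/letter.pdf';

// pdfinfo prints a "Pages: N" line; parse the page count from it.
const info = execSync(`pdfinfo "${input}"`).toString();
const pages = Number(info.match(/^Pages:\s+(\d+)/m)[1]);

// pdfseparate expands %d to the page number in the output pattern,
// producing one single-page PDF per page.
execSync(`pdfseparate "${input}" /doc/tmp/page-%d.pdf`);

console.log(`Split ${input} into ${pages} single-page PDFs in /doc/tmp`);
```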

By Jaruphat J.
489

Track an event in Segment

No description available.

By tanaypant
451

Automate employee reimbursement workflow with Gmail, Google Drive & AI validation

Reimbursements used to be a headache. Employees submitted receipts through emails, managers got stuck in approval chains, and finance teams spent hours checking for duplicates, updating sheets, and sending follow-up emails. So we automated it. Using n8n, we built a smart Employee Reimbursement Workflow that handles everything in just a few clicks. Here's how it works:

When an employee uploads a receipt, the workflow first checks for duplicates. If the file is new, it's uploaded to Google Drive instantly. Next, a unique tracking ID is generated—no manual typing, no mistakes. Then, all the details are logged in Google Sheets in real time, ready for records. Finally, the finance team gets an email notification with everything they need to process the payment—no chasing, no missing info.

The impact? We've cut processing time by over 70%, reduced errors to nearly zero, and made the entire process stress-free for employees and finance alike. This isn't just automation—it's giving people their time back.

By Pramod Rathoure
411