19 templates found

Download a file from Google Drive

Companion workflow for Google Drive node docs

By amudhan
20472

AI orchestrator: dynamically selects models based on input type

This workflow intelligently routes user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to send each request to a specialized AI model, optimizing response quality, performance, and cost-efficiency in AI-driven conversations.

---

Benefits

- Smart Model Routing: Reduces costs by using lighter models for general tasks and reserving heavier models for complex needs.
- Scalability: Easily expandable by adding more request types or LLMs.
- Maintainability: Clear logic separation between classification, model routing, and execution.
- Personalization: Can be integrated with session IDs for per-user memory, enabling personalized conversations.
- Speed Optimization: Fast models like GPT-4.1 mini or Gemini Flash are chosen for tasks where speed is a priority.

---

How It Works

1. Input Handling: The workflow starts with the "When chat message received" node, which triggers the process when a chat message arrives. The input includes the chat message (chatInput) and a session ID (sessionId).
2. Request Classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories: general (general queries), reasoning (reasoning-based questions), coding (code-related requests), or search (queries requiring search tools). The classification is structured using the "Structured Output Parser" node, which enforces a consistent output format.
3. Model Selection: The "Model Selector" node routes the request to one of four AI models based on the classification: Opus 4 (Claude 4 Sonnet) for coding requests, Gemini Thinking Pro for reasoning requests, GPT 4.1 mini for general requests, and Perplexity for search requests. (A sketch of this contract and routing table follows below.)
4. AI Processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations.
5. Output: The final response is generated by the chosen model and returned to the user.

---

Set Up Steps

1. Configure Trigger: Ensure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs.
2. Define Classification Logic: Adjust the prompt in the "Request Type" node to refine classification accuracy, and verify that the output schema in the "Structured Output Parser" node matches the expected categories (general, reasoning, coding, search).
3. Connect AI Models: Link each model node (Opus 4, Gemini Thinking Pro, GPT 4.1 mini, Perplexity) to the "Model Selector" node, and ensure credentials (API keys) for each model are correctly configured in their respective nodes.
4. Set Up Memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention.
5. Test Workflow: Send test inputs to verify classification and model routing. Check intermediate outputs (e.g., request_type) to ensure correct model selection.
6. Activate Workflow: Toggle the workflow to "Active" in n8n after testing.

---

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
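A minimal sketch of the classification contract and routing table described above, written as an n8n Code-node snippet. The exact schema wording is an assumption; the `request_type` field matches the intermediate output named in the testing step, and the model mapping mirrors the description:

```javascript
// Assumed shape of the Structured Output Parser schema (four categories).
const schema = {
  type: "object",
  properties: {
    request_type: {
      type: "string",
      enum: ["general", "reasoning", "coding", "search"],
    },
  },
  required: ["request_type"],
};

// Routing table applied by the Model Selector step, per the description.
const modelFor = {
  coding: "Opus 4",
  reasoning: "Gemini Thinking Pro",
  general: "GPT 4.1 mini",
  search: "Perplexity",
};

const { request_type } = $json; // parsed classifier output
return [{ json: { request_type, model: modelFor[request_type] ?? "GPT 4.1 mini" } }];
```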

By Davide
10236

Set credentials dynamically using expressions

How it works

This workflow shows how to set credentials dynamically using expressions. It accepts an API key via a form, then uses it in the NASA node to authenticate a request.

Setup steps

First, set up your NASA credential:

1. Create a new NASA credential.
2. Hover over API Key.
3. Toggle Expression on.
4. In the API Key field, enter {{ $json["Enter your NASA API key"] }} (see the sketch below for how this lookup resolves).

Then, test the workflow:

1. Get an API key from NASA.
2. Select Test workflow.
3. Enter your key using the form.

The workflow runs and sends you to the NASA picture of the day. For more information on expressions, refer to n8n documentation | Expressions.
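A minimal sketch of what the credential expression does at run time, assuming the form node outputs one item whose JSON keys are the form field labels:

```javascript
// The expression {{ $json["Enter your NASA API key"] }} performs exactly
// this lookup against the form submission's JSON.
const apiKey = $json["Enter your NASA API key"];
if (!apiKey) {
  throw new Error("No API key was submitted through the form");
}
return [{ json: { apiKey } }];
```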

By Deborah
5198

Automate outbound sales calls to qualified leads with VAPI.ai and Google Sheets

This workflow automates outbound calls to qualified leads using VAPI.ai and Google Sheets. Here's how it works and how to set it up.

How It Works

1. Read Leads: The workflow starts by reading leads from a Google Sheet where the "AI call status" is marked as "NO".
2. Batch Processing: Leads are processed one at a time (batch size = 1) to ensure proper sequencing.
3. Variable Setup: Extracts the phone number and row number from each lead record.
4. Trigger VAPI Call: Makes an API call to VAPI.ai to initiate an AI-powered outbound call (see the request sketch below).
5. Update Status: Marks the lead as "YES" in the Google Sheet after the call is triggered to prevent duplicate calls.

Detailed Setup Guide

Prerequisites

- n8n instance (self-hosted or cloud)
- Google Sheets account with OAuth2 credentials
- VAPI.ai account with API access

Step 1: Google Sheets Setup

1. Create a Google Sheet with your leads data.
2. Ensure you have these columns (adjust if needed): Phone number (column E in the current setup) and AI call status (column F in the current setup).
3. Mark all leads you want to call with "NO" in the status column.

Step 2: Google Sheets Credentials

1. In n8n, go to Credentials > Add New.
2. Select "Google Sheets OAuth2 API".
3. Follow the prompts to authenticate with your Google account.
4. Name it (e.g., "Google Sheets account 3" as in the example).

Step 3: VAPI.ai Setup

1. Get your VAPI.ai API credentials.
2. In n8n, go to Credentials > Add New.
3. Select "HTTP Header Auth".
4. Add your VAPI authorization header (typically "Bearer YOURAPIKEY").
5. Name it (e.g., "Header Auth account 4" as in the example).
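A hedged sketch of the call-trigger payload as an n8n Code-node snippet. The endpoint and field names (assistantId, phoneNumberId, customer.number) follow VAPI's publicly documented outbound-call API, but verify them against your own VAPI dashboard; the placeholder IDs are assumptions:

```javascript
// Body for the HTTP Request node that triggers the outbound call.
const payload = {
  assistantId: "YOUR_ASSISTANT_ID",  // the VAPI assistant that speaks on the call
  phoneNumberId: "YOUR_NUMBER_ID",   // the VAPI-provisioned caller number
  customer: {
    number: $json.phone,             // phone number extracted from the sheet row
  },
};

// The HTTP Request node then POSTs this to https://api.vapi.ai/call,
// authenticated via the "Header Auth" credential (Authorization: Bearer ...).
return [{ json: payload }];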

By David Olusola
1806

Write all Linear tickets to Google Sheets

Use Case

Track all Linear tickets in Google Sheets. Useful if you want to do some custom analysis that Linear's paid Plus features (Linear Insights) don't cover, or that you don't want to pay for.

Setup

1. Add Linear API header key.
2. Add Google Sheets credentials.
3. Update which teams to get tickets from in the GraphQL nodes (a query sketch follows below).
4. Update which Google Sheets page to write all the tickets to. You only need to add one column, id, in the sheet; the Google Sheets node in automatic mapping mode will handle adding the rest of the columns.
5. Set any custom data on each ticket.
6. Activate workflow 🚀

How to adjust this template

Add any custom fields you want to extract from each ticket; that's quick to do in n8n.
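A sketch of the kind of GraphQL body sent to Linear's API (https://api.linear.app/graphql). The team key "ENG" and the selected fields are illustrative assumptions; cursor pagination via `after`/`endCursor` follows Linear's standard scheme:

```javascript
// Build the GraphQL request body for the HTTP/GraphQL node.
const query = `
  query Issues($after: String) {
    team(id: "ENG") {
      issues(first: 50, after: $after) {
        nodes { id title createdAt state { name } assignee { name } }
        pageInfo { hasNextPage endCursor }
      }
    }
  }`;

return [{ json: { query, variables: { after: $json.endCursor ?? null } } }];
```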

By Mutasem
1282

Automated server log cleanup via email alerts with SSH - Nginx, Docker, System

This n8n workflow monitors email alerts for disk utilization exceeding 80%, extracts the server IP, logs into the server, and purges logs from Nginx, PM2, Docker, and system files to clear disk space.

Key Insights

- Ensure email alerts are consistently formatted with server IP details.
- SSH access must be properly configured to avoid authentication failures.

Workflow Process

1. Initiate the workflow with the Check Disk Alert Emails node when an email triggers on high disk usage.
2. Parse the email to extract the server IP using the Extract Server IP from Email node.
3. Set up SSH credentials and paths manually with the Prepare SSH Variables node.
4. Execute cleanup commands to delete logs from Nginx, PM2, Docker, and system files using the Run LogCleanup Commands via SSH node (a command sketch follows below).

Usage Guide

1. Import the workflow into n8n and configure email and SSH credentials.
2. Test with a sample email alert to verify IP extraction and log deletion.

Prerequisites

- Email service (e.g., IMAP or API) for alert monitoring
- SSH access with valid credentials

Customization Options

Modify the Prepare SSH Variables node to target specific log directories or adjust cleanup commands for different server setups.
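A sketch of the IP extraction plus the kind of cleanup command the SSH node might run, as an n8n Code-node snippet. The `emailBody` field name and the log paths are common-default assumptions; adjust both to your alert format and servers:

```javascript
// Pull the first IPv4 address out of the alert email body.
const ip = ($json.emailBody.match(/\b\d{1,3}(?:\.\d{1,3}){3}\b/) || [])[0];
if (!ip) throw new Error("No server IP found in the alert email");

// Chain the per-service cleanup steps into one SSH command string.
const cleanupCommand = [
  "sudo truncate -s 0 /var/log/nginx/*.log", // empty Nginx access/error logs
  "pm2 flush",                               // clear PM2 process logs
  "docker system prune -f",                  // remove dangling Docker data
  "sudo journalctl --vacuum-size=200M",      // shrink the system journal
].join(" && ");

return [{ json: { host: ip, command: cleanupCommand } }];
```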

By Oneclick AI Squad
1110

Convert PDF articles to audio podcasts with Google TTS & Cloudflare R2

Convert PDF Articles to Podcast

Workflow Name: Convert PDF Articles to Podcast
Author: Devjothi Dutta
Category: Productivity, Content Creation, Automation
Complexity: Medium
Setup Time: 45-60 minutes

---

📖 Description

Transform any PDF article, research paper, or document into a high-quality audio podcast automatically. This workflow extracts text from PDFs, converts it to natural-sounding speech using Google Cloud Text-to-Speech, stores the audio files in cloud storage, and generates an RSS feed compatible with all major podcast apps (Apple Podcasts, Spotify, Pocket Casts, etc.).

Perfect for consuming long-form content while commuting, exercising, or multitasking. Turn your reading list into a personal podcast feed.

👥 Who's it for

For Professionals:
- Convert industry reports and whitepapers to audio
- Listen to research papers during commutes
- Stay updated with long-form articles hands-free

For Students:
- Turn textbooks and study materials into audio
- Create audio versions of lecture notes
- Study while exercising or commuting

For Content Creators:
- Repurpose written content into audio format
- Create podcast episodes from blog posts
- Reach audio-focused audiences

For Busy Readers:
- Convert saved articles to a personal podcast
- Listen to newsletters and essays on the go
- Build a private audio library

✨ Key Features

- 📄 PDF Text Extraction - Automatically extracts text from any PDF file
- 🎙️ Natural Voice Synthesis - High-quality WaveNet voices from Google Cloud TTS
- ☁️ Cloud Storage - Files hosted on Cloudflare R2 (S3-compatible) with public URLs
- 📻 RSS Feed Generation - Full iTunes-compatible podcast feed with metadata
- 📧 Email Notifications - Instant alerts when new episodes are ready
- 🎨 Custom Branding - Configurable podcast name, artwork, and descriptions
- ⚙️ Modular Configuration - Easy-to-update centralized config node
- 🔄 Automated Workflow - Set it and forget it, a fully automated pipeline

🛠️ Requirements

Required Services:
- n8n (self-hosted or cloud) - Workflow automation platform
- Google Cloud Platform - Text-to-Speech API access (free tier: 1 million characters/month for WaveNet voices; paid: $16 per 1 million characters)
- Cloudflare R2 - Object storage for audio files and RSS feed (free tier: 10GB storage, unlimited egress)
- Email Service - SMTP or email service for notifications

Required Community Nodes:
- Cloudflare R2 Storage (n8n-nodes-cloudflare-r2-storage). Install via Settings → Community Nodes → Install and search for n8n-nodes-cloudflare-r2-storage. Important: Install this BEFORE importing the workflow.

Optional:
- Custom domain for podcast feed URLs
- Podcast artwork (3000x3000px recommended)

📦 What's Included

This workflow package includes:
- Complete n8n workflow JSON (ready to import)
- Comprehensive setup guide
- Architecture documentation
- Configuration templates
- Credentials setup instructions
- Testing and validation checklist
- RSS feed customization guide
- Troubleshooting documentation

🚀 Quick Start

1. Install community node (required): go to Settings → Community Nodes → Install, search for n8n-nodes-cloudflare-r2-storage, click Install, and wait for completion.
2. Import the workflow into your n8n instance.
3. Configure credentials: Google Cloud TTS API key, Cloudflare R2 credentials (Access Key ID + Secret), SMTP email credentials.
4. Update the Workflow Config node with your settings: R2 bucket name and public URL, podcast name and description, artwork URL, email recipient.
5. Test with a sample PDF to verify setup.
6. Add the RSS feed URL to your podcast app.

📊 Workflow Stats

- Nodes: 10
- Complexity: Medium
- Execution Time: ~2-5 minutes per PDF (depends on length)
- Monthly Cost: $0-20 (depending on usage and free tiers)
- Maintenance: Minimal (set and forget)

🎨 Customization Options

- Change TTS voice (20+ English voices available)
- Adjust speech speed and pitch
- Customize RSS feed metadata
- Add custom intro/outro audio
- Configure file retention policies
- Set up webhook triggers for remote submission

🔧 How it Works

1. User uploads a PDF to n8n.
2. Text is extracted from the PDF.
3. Text is sent to the Google TTS API and an audio file (.mp3) is generated (see the request sketch below).
4. Files are uploaded to R2 storage: the original PDF and the generated MP3 audio.
5. The RSS feed is generated/updated with: episode title (from the PDF filename), audio URL, duration and file size, publication date, and a rich HTML description.
6. The RSS feed is uploaded to R2.
7. An email notification is sent with episode details.

💡 Pro Tips

- Voice Selection: Test different WaveNet voices to find your preferred style
- Batch Processing: Process multiple PDFs by running the workflow multiple times
- Quality vs Cost: WaveNet voices sound better but cost more than Standard voices
- Storage Management: Set up R2 lifecycle rules to auto-delete old episodes
- Custom Domains: Use Cloudflare for custom podcast feed URLs

🔗 Related Workflows

- PDF to Email Summary
- Document Translation to Audio
- Blog RSS to Podcast
- Multi-language Audio Generation

📧 Support & Feedback

For questions, issues, or feature requests:
- GitHub: PDF-to-Podcast---N8N Repository
- n8n Community Forum: Tag @devdutta
- Email: devjothi@gmail.com

📄 License

MIT License - Free to use, modify, and distribute

---

⭐ If you find this workflow useful, please share your feedback and star the workflow!
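A sketch of the synthesize request per the Google Cloud TTS REST API; the voice name "en-US-Wavenet-D" is just an example, and the `extractedText` field name is an assumption standing in for the PDF-extraction output:

```javascript
// Body for the HTTP Request node that calls
// POST https://texttospeech.googleapis.com/v1/text:synthesize
const body = {
  input: { text: $json.extractedText },  // text pulled from the PDF
  voice: { languageCode: "en-US", name: "en-US-Wavenet-D" },
  audioConfig: { audioEncoding: "MP3", speakingRate: 1.0 },
};

// The response carries base64 `audioContent`, which is decoded into the
// episode .mp3 before upload to R2.
return [{ json: body }];
```

Note that the synthesize endpoint caps input length per request (around 5,000 characters), so long PDFs are typically split into chunks and the resulting audio segments concatenated.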

By Dev Dutta
1002

🤖📝 Auto-document workflows with GPT-4o-mini sticky notes

🤖📝 Auto-Document Workflows with GPT-4o-mini Sticky Notes

Skip the tedious part of writing documentation and turn your n8n workflows into clear, shareable blueprints, fully automated. This workflow takes any workflow JSON, parses its nodes, generates structured sticky notes (both per-node and a general overview), and arranges them neatly on your canvas. No more messy layouts or missing documentation: everything is handled in one click. It's perfect if you want to publish to the n8n marketplace, onboard teammates quickly, or just keep your own automations easy to understand months later.

---

💡 What this workflow does

✅ Loads your existing workflow from a JSON file
🔍 Parses and unwraps real nodes (ignoring old stickies)
🤖 Uses AI to create concise sticky notes for each node
📝 Adds a general overview sticky with goals, flow, parameters, and gotchas
📐 Arranges all nodes + stickies (node above, sticky below, right-to-left)
💾 Saves a new documented workflow JSON, ready to reuse or share

---

⚙️ Step-by-step setup

1. Prepare your workflow file: export your n8n workflow JSON or point to an existing file path.
2. Configure the "Load Workflow" node: update the file selector to your JSON path, e.g. /workflows/myflow.json.
3. Add your OpenAI credentials: in the OpenAI API nodes (Node Sticky Notes + Overall Sticky Note), insert your API key.
4. Run the workflow: trigger manually with the Execute Workflow node. The script will parse your nodes, generate stickies, and align them on the canvas.
5. Save the result: the "Save Documented Workflow" node writes a new file, e.g. /workflows/myflow-with-sticky.json.

---

🛠 Customization

- Sticky layout: Adjust spacing, colors, and alignment in the Layout Blocks RTL node (tweak GAPX, GAPY, or STICKY_COLOR); see the layout sketch below.
- Word count & style: Edit prompts inside the OpenAI nodes to make notes shorter, longer, or more technical.
- Overview focus: Customize the Your Workflow Description node to pass context (e.g., project goals, intended audience).
- File outputs: Save to a new path/version for version control of your documentation.

---

⚠️ Limitations / Gotchas

- A maximum of ~50 nodes are summarized in the overview for brevity.
- Old sticky notes are removed and replaced; you can't preserve them unless you fork the workflow.
- Complex nodes (large Code / AI prompts) may require manual edits for clarity.
- Ensure n8n has read/write access to your workflow JSON paths.

---

🎯 Expected result

After execution, you'll get a fully documented workflow JSON where each node is paired with a clean sticky note, plus an overview note neatly placed on the canvas. You can open this new file in n8n, share it, or submit it directly to the marketplace.

---

📬 Contact & Feedback

Need help customizing this? Have ideas for improvement?
📩 Luis.acosta@news2podcast.com
🐦 DM me on Twitter @guanchehacker

If you're working on advanced workflow documentation + AI, let's talk; this template can be a foundation for more powerful tools.
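A speculative sketch of the right-to-left layout pass. GAPX, GAPY, and STICKY_COLOR mirror the names called out in the Layout Blocks RTL node, but the values and node-JSON fields here (including `docNote`) are assumptions for illustration:

```javascript
const GAPX = 320;        // horizontal distance between node/sticky columns
const GAPY = 220;        // vertical distance from a node to its sticky
const STICKY_COLOR = 4;  // n8n sticky note color index

const nodes = $json.nodes; // real nodes, old stickies already filtered out

// Pair each node with a sticky placed directly below it, marching leftwards.
const laidOut = nodes.flatMap((node, i) => {
  const x = -i * GAPX;
  const positioned = { ...node, position: [x, 0] };      // node on top
  const sticky = {
    type: "n8n-nodes-base.stickyNote",                   // n8n's sticky note type
    position: [x, GAPY],                                 // sticky below its node
    parameters: { content: node.docNote ?? "", color: STICKY_COLOR },
  };
  return [positioned, sticky];
});

return [{ json: { nodes: laidOut } }];
```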

By Luis Acosta
960

Register users to an event on Demio via Typeform

This workflow allows you to register your audience to an event on Demio via a Typeform submission.

- Typeform Trigger node: triggers the workflow when a form response is submitted. Based on your use case, you may use a different platform; replace the Typeform Trigger node with a node for that platform.
- Demio node: registers a user for an event, taking the user's details from the Typeform response.

By Harshil Agrawal
683

Monitor solar energy production & send alerts with Gmail, Google Sheets, and Slack

Solar Energy Production Monitoring Alert Workflow

This workflow automatically monitors solar energy production every 2 hours by fetching data from the Energidataservice API. If the energy output falls below a predefined threshold, it instantly notifies users via email. Otherwise, it logs the data into a Google Sheet and posts a daily summary to Slack.

Who's It For

- Renewable energy teams monitoring solar output.
- Facility managers and power plant supervisors.
- ESG compliance officers tracking sustainability metrics.
- Developers or analysts automating solar energy reporting.

How It Works

1. Trigger: The workflow starts every 2 hours using a Schedule Trigger.
2. Data Fetch: An HTTP Request node fetches solar energy production data from the Energidataservice API.
3. Processing: A Code node filters out entries with production below the minimum threshold (see the filter sketch below).
4. Decision Making: An If node checks whether any low-production entries are present.
5. Alerts: If low production is detected, an email is sent via the Gmail node.
6. Logging: If all entries are valid, they are logged into a Google Sheet.
7. Slack Summary: A Slack node posts the summary sheet data for end-of-day visibility.

How to Set Up

1. Schedule Trigger: Configure to run every 2 hours.
2. HTTP Request node: Method: GET; URL: https://api.energidataservice.dk/dataset/YourDatasetHere; add necessary headers and params as required by the API.
3. Code node: Define logic to filter entries where solarenergyproduction < required_threshold.
4. If node: Use items.length > 0 to check for low-production entries.
5. Gmail node: Authenticate with Gmail credentials; customize recipient and message template.
6. Google Sheets node: Connect to a spreadsheet; map appropriate columns.
7. Slack node: Use Slack OAuth2 credentials; specify channel and message content.

Requirements

- n8n Cloud or self-hosted instance.
- Access to the Energidataservice API.
- Gmail account (with n8n OAuth2 integration).
- Google Sheets account & sheet ID.
- Slack workspace and app with appropriate permissions.

How to Customize

- Change Frequency: Adjust the Schedule Trigger interval (e.g., every hour or 4x per day).
- Threshold Tuning: Modify the value in the Code node to change the minimum acceptable solar production.
- Alert Routing: Update Gmail recipients or replace Gmail with Microsoft Outlook/SendGrid.
- Sheet Format: Add or remove columns in the Google Sheet based on extra metrics (e.g., wind or nuclear data).
- Slack Posting: Customize Slack messages using Markdown for improved readability.

Add-ons

- Telegram node: Send alerts to a Telegram group instead of email.
- Discord webhook: Push updates to a Discord channel.
- n8n Webhook Trigger: Extend it to receive external production update notifications.
- Integromat/Make or Zapier: For multi-platform integration with CRMs or ticketing tools.

Use Case Examples

- Utility Companies: Automatically detect and act on solar underperformance to maintain grid stability.
- Solar Farm Operators: Log clean production data for auditing and compliance reports.
- Sustainability Teams: Track daily performance and anomalies without manual checks.
- Home Solar System Owners: Get notified if solar generation drops below expected levels.

Common Troubleshooting

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| HTTP Request fails | API key missing or URL is incorrect | Check API endpoint, parameters, and authentication headers |
| Gmail not sending alerts | Missing or invalid Gmail credentials | Re-authenticate Gmail OAuth2 in n8n credentials |
| No data getting logged in Google Sheet | Incorrect mapping or sheet permissions | Ensure the sheet exists, columns match, and credentials are correct |
| Slack node fails | Invalid token or missing channel ID | Reconnect Slack credentials and check permissions |
| Code node returns empty | Filter logic may be too strict | Validate data format and relax the threshold condition |

Need Help?

Need help setting this up or customizing it for your own solar or energy monitoring use case?
✅ Set it up on your n8n Cloud or self-hosted instance
✅ Customize it for your own API or data source
✅ Modify alerts to suit your internal tools (Teams, Discord, SMS, etc.)
👉 Just reach out to our n8n automation team at WeblineIndia, we'll be happy to help.
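A sketch of the Code-node filter described in the setup steps. The `records` envelope matches Energidataservice's usual response shape, and the field name follows the setup step above; the threshold value is an assumption to tune for your plant or region:

```javascript
const REQUIRED_THRESHOLD = 100; // minimum acceptable production, adjust to taste

// Energidataservice responses arrive as { total, records: [...] }.
const records = $json.records ?? [];
const low = records.filter((r) => r.solarenergyproduction < REQUIRED_THRESHOLD);

// The downstream If node checks items.length > 0 on this output.
return low.map((r) => ({ json: r }));
```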

By WeblineIndia
517

Enhance AI chatbot responses with InfraNodus knowledge graph reasoning

Augment AI chatbot prompts with a knowledge graph reasoning ontology and improve the quality of responses with Graph RAG.

In this workflow, we augment the original prompt using the InfraNodus GraphRAG system, which extracts a reasoning ontology from a graph that you create (or that you can copy from our repository of public graphs). This additional reasoning logic improves the user's prompt, making it more descriptive and more closely aligned with the logic you want to use. As the next step, you can send it back to the same graph to generate a high-quality response using Graph RAG, or to another graph (or AI model) to apply one type of knowledge in a completely different field.

How it works

1. Receives a request from a user (via n8n or a publicly available URL chatbot; you can also connect it to Telegram).
2. Sends the request to the knowledge graph in your InfraNodus account that contains a reasoning ontology represented as a knowledge graph.
3. Reformulates the original prompt to include the reasoning logic provided.
4. Sends the request to a knowledge graph in your InfraNodus account (the same as the previous one, or a new one for cross-disciplinary research) to retrieve a high-quality response using GraphRAG.

Special sauce: InfraNodus will build a graph from your augmented prompt, then overlap it on the knowledge graph you want to inquire, traverse this graph based on the overlapped parts and extended relations, then retrieve the necessary part of the graph and include it in the context to improve the quality of your response. This helps InfraNodus grasp the relations and nuances that are not usually available through standard RAG.

How to use

- Just get an InfraNodus API key and add it to your Prompt Augmentation and Knowledge Base InfraNodus HTTP nodes for authentication.
- Then provide the names of the graphs you want to use for prompt augmentation and retrieval. Note: these can be two different graphs if you want to apply the reasoning logic from one domain in another (e.g. machine learning in biology, or philosophy in electrical engineering).

Support

If you want to create your own reasoning ontology graphs, please refer to this article on generating your own knowledge graph ontologies. You may also be interested in this video, which explains the logic of this approach in detail: https://www.youtube.com/watch?v=jhqBb3nuyAY. Help article on the same topic: Using knowledge graphs as reasoning experts.

By InfraNodus
506

Expense Logging with Telegram to Google Sheets using AI Voice & Text Parsing

Use Cases

- Personal or family budget tracking
- Small business expense logging via Telegram
- Hands-free logging (using voice messages)

---

How it works

- The trigger receives text or voice.
- An optional branch transcribes audio to text.
- AI parses the message into a structured array (the Structured Output Parser enforces the schema; see the sketch below).
- Split Out produces one item per expense.
- Loop Over Items appends rows sequentially with a Wait, preventing missed writes.
- In parallel, Item Lists (Aggregate) builds a single summary string; Merge (Wait for Both) releases one final Telegram confirmation.

---

Setup Instructions

1. Connect credentials: Telegram, Google, OpenAI.
2. Sheets: Create a sheet with headers Date, Category, Merchant, Amount, Note. Copy the Spreadsheet ID + sheet name. Map columns in Append to Google Sheet.
3. Pick models: set the Chat model (e.g., gpt-4o-mini) and Whisper for transcription if using audio.
4. Wait time: keep 500–1000 ms to avoid API race conditions.
5. Run: Send a Telegram message like: Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90.

---

Customization ideas

- Add a categories map (Memory/Set) for consistent labeling.
- Add currency detection/formatting.
- Add an error-to-Telegram path for invalid schema.
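A sketch of the structured array the parser should return for the sample message above. The keys mirror the sheet headers (Date, Category, Merchant, Amount, Note); the dates and category labels are illustrative assumptions:

```javascript
// What the Structured Output Parser is expected to emit for:
// "Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90"
const parsed = [
  { date: "2024-05-01", category: "Transport", merchant: "Gas",       amount: 34.67, note: "" },
  { date: "2024-05-01", category: "Food",      merchant: "Groceries", amount: 82.45, note: "" },
  { date: "2024-05-01", category: "Food",      merchant: "Coffee",    amount: 6.25,  note: "" },
  { date: "2024-05-01", category: "Food",      merchant: "Lunch",     amount: 14.90, note: "" },
];

// Split Out then emits one item per element, so the Sheets append runs per row.
return parsed.map((e) => ({ json: e }));
```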

By Calvin Cunningham
426