
Conversational Telegram bot with GPT-5/GPT-4o for text and voice messages

This n8n workflow uses a Telegram Message Trigger to activate an intelligent AI Agent capable of processing both text and voice messages. When a user sends a message in text or voice format, the workflow captures and transcribes it (if necessary), then passes it to the AI Agent for understanding and response generation. To enhance the user experience, the bot also displays a typing indicator while processing requests, simulating a natural, human-like interaction.

Key Features
- Multi-Modal Input: supports both text messages and voice notes from users.
- Real-Time Interaction: shows a "typing…" action in Telegram while the AI processes the input.
- AI Agent Integration: provides intelligent, context-aware, and conversational responses.
- Seamless Feedback Loop: replies are sent directly back to the user within Telegram for smooth interaction.

How It Works
1. The workflow triggers whenever a message or voice note is received on Telegram.
2. If the input is a voice note, the workflow transcribes it into text.
3. The text input is sent to the AI Agent for processing.
4. While processing, the bot sends a typing indicator to the user.
5. Once the AI generates a response, the workflow sends it back to the user in Telegram.

Setup Instructions
1. Create a Telegram bot: use @BotFather to create a bot and obtain your bot token.
2. Configure n8n credentials: add Telegram API credentials in n8n with your bot token, and add credentials for any speech-to-text service used for voice transcription (e.g., OpenAI's Transcribe a Recording).
3. Import the workflow: import this workflow into your n8n instance and update all credential nodes to use your Telegram and transcription service credentials.
4. Set webhook URLs: ensure the Telegram webhook is set properly for your bot to receive messages, and make sure your n8n instance is publicly accessible for Telegram callbacks.
5. Test the workflow: send text messages and voice notes to your Telegram bot and observe the AI responses.

Customization Guidance
- Add new message handlers: extend the workflow to handle additional message types (images, documents, etc.).
- Improve transcription: swap or add speech-to-text services for better accuracy or language support.
- Enhance the AI Agent: customize prompts and context management to tailor the AI's personality and responses.
- AI Model Flexibility: swap between different AI models (e.g., GPT-5, GPT-4, Claude, or custom LLMs) based on task type, cost, or performance preferences. By default, this template uses GPT-4o; you can switch to the latest GPT-5 model in the OpenAI Chat Model node, which lists all available models to choose from.
- Tool-Based Control: add custom tools to the AI Agent, such as calendar access, Notion, Google Sheets, web search, database queries, or custom APIs, allowing for dynamic, multi-functional agents.

Security and Implementation Notes
- The Telegram node manages message reception and sending but does not directly handle AI processing.
- Voice transcription requires integration with external APIs; secure those credentials in n8n and monitor usage.
- To simulate typing, the workflow uses Telegram's sendChatAction API method, providing users with feedback that the bot is processing (see the sketch below).
- Ensure your AI API keys and Telegram tokens are securely stored in n8n credentials and not exposed in workflows or logs.

Benefits
- Handles natural conversational inputs with text or voice.
- Provides a smooth, engaging user experience via typing indicators.
- Easy integration of advanced AI conversational agents with Telegram.
- Flexible for personal assistants, helpdesks, or interactive chatbots.
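
For illustration, here is a minimal sketch of the typing-indicator call described above, using Telegram's sendChatAction method. The bot token and chat ID placeholders are assumptions; inside the workflow, the Telegram node makes this call for you.

```javascript
// Minimal sketch of Telegram's sendChatAction call (the "typing…" indicator).
// BOT_TOKEN and chatId are placeholders; in n8n the Telegram node does this.
const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN;

async function showTyping(chatId) {
  // Telegram displays "typing…" for roughly 5 seconds per call
  await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendChatAction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, action: "typing" }),
  });
}
```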

By Roninimous
87881

Anthropic AI agent: Claude Sonnet 4 and Opus 4 with Think and Web Search tool

This workflow dynamically chooses between two powerful Anthropic models, Claude Opus 4 and Claude Sonnet 4, to handle user queries based on their complexity and nature, maintaining scalability and context awareness with Anthropic's web search function and Think tool.

---

Key Advantages

🔁 Dynamic Model Selection
Automatically routes each user query to either Claude Sonnet 4 (for routine tasks) or Claude Opus 4 (for complex reasoning), ensuring optimal performance and cost-efficiency.

🧠 AI Agent with Tool Use
The AI agent can use a web search tool to retrieve up-to-date information and a Think tool for complex reasoning, improving response quality.

📎 Memory Integration
Uses session-based memory to maintain conversational context, making interactions more coherent and human-like.

🧮 Built-in Calculation Tool
Handles numeric queries with an integrated calculator tool, reducing the need for external processing.

📤 Structured Output Parser
Ensures outputs are always well-structured JSON, which improves consistency and downstream integrations.

🌐 Web Search Capability
Supports real-time information retrieval for current events, statistics, or details not available in the AI's base knowledge.

---

Components Overview
- Trigger: listens for new chat messages.
- Routing Agent: analyzes the message and returns the best model to use.
- AI Agent: handles the conversation and decides when to use tools.
- Tools: web_search for internet queries, Think for reasoning, Calculator for math tasks.
- Models used:
  - claude-sonnet-4-20250514: optimized for general and business-logic tasks.
  - claude-opus-4-20250514: best for deep, strategic, and analytical queries.

---

How It Works

Dynamic Model Selection
The workflow begins when a chat message is received. The Anthropic Routing Agent analyzes the user's query to determine the most suitable model (Claude Sonnet 4 or Claude Opus 4) based on the query's complexity and requirements. The routing agent uses predefined criteria:
- Claude Sonnet 4: best for standard tasks like real-time workflow routing, data validation, and routine business logic.
- Claude Opus 4: reserved for complex scenarios requiring deep reasoning, advanced analysis, or high-impact decisions.

Query Processing and Response Generation
The selected model processes the query, leveraging tools like web_search for real-time information retrieval, Think for internal reasoning, and Calculator for numerical tasks. The AI Agent coordinates these tools, ensuring the response is accurate and context-aware. A Simple Memory node retains session context for coherent multi-turn conversations. The final response is formatted and returned to the user without intermediate steps or metadata.

---

Set Up Steps

Node Configuration
- Trigger: configure the "When chat message received" node to handle incoming user queries.
- Routing Agent: set up the "Anthropic Routing Agent" with the system message defining the model-selection logic. Ensure it outputs a JSON object with prompt and model fields.
- AI Model Nodes: link the "Sonnet 4 or Opus 4" node to dynamically use the selected model. The "Sonnet 3.7" node powers the routing agent itself.

Tool Integration
- Attach the "web_search" HTTP tool to enable internet searches, ensuring the API endpoint and headers (e.g., anthropic-version) are correctly configured.
- Connect the auxiliary tools (Think, Calculator) to the "AI Agent" for extended functionality.
- Add the "Simple Memory" node to maintain conversation history.

Credentials
Provide an Anthropic API key to all nodes requiring authentication (e.g., model nodes, web search).

Testing
Activate the workflow and test with sample queries to verify:
- Correct model selection (e.g., Sonnet for simple queries, Opus for complex ones).
- Proper tool usage (e.g., web searches trigger when needed).
- Memory retention across chat turns.

Deployment
Once validated, set the workflow to active for live interactions.

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
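
For clarity, here is a plausible example of the JSON object the routing agent returns for a complex analytical query. The field names (prompt, model) match the setup above; the values are purely illustrative.

```json
{
  "prompt": "Compare our three expansion scenarios and recommend one, with risks quantified.",
  "model": "claude-opus-4-20250514"
}
```

A routine query would instead carry "claude-sonnet-4-20250514" in the model field, which is how the downstream "Sonnet 4 or Opus 4" node selects the model dynamically.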

By Davide
8903

Process large documents with OCR using SubworkflowAI and Gemini

Working with Large Documents in Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service that helps AI developers work with documents too large for context windows and runtime memory.

Prerequisites
You'll need a Subworkflow.ai API key to use the service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

How it Works
1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100 MB. The upload triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use the "IF" node looping back on itself to achieve this in n8n.
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems APIs to retrieve whatever you need to complete your AI task. In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown, a well-known approach when parsing data tables or graphics is required.

How to use
Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, from 100 MB+ up to 5,000 pages.

Customising the workflow
Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

Need Help?
Official API documentation: https://docs.subworkflow.ai/category/api-reference
Join the discord: https://discord.gg/RCHeCPJnYw
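
To make the upload-and-poll pattern concrete, here is a hedged sketch of the same flow in plain JavaScript. The base URL, paths, and field names (status, datasetId) are assumptions for illustration only; check the official API reference above for the actual contract.

```javascript
// Sketch of the upload-then-poll pattern described above. Endpoint paths and
// response fields are ASSUMPTIONS; see https://docs.subworkflow.ai for the
// real API contract.
const API_KEY = process.env.SUBWORKFLOW_API_KEY;
const BASE = "https://api.subworkflow.ai/v1"; // assumed base URL

async function extractDocument(fileBlob) {
  // 1. Upload via the Extract API; the response is a job record
  const form = new FormData();
  form.append("file", fileBlob);
  const jobRes = await fetch(`${BASE}/extract`, {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}` }, // header auth credential
    body: form,
  });
  let job = await jobRes.json();

  // 2. Poll the Jobs endpoint until the extract job finishes
  while (job.status !== "finished") {
    await new Promise((r) => setTimeout(r, 5000)); // wait between polls
    const poll = await fetch(`${BASE}/jobs/${job.id}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    job = await poll.json();
  }

  // 3. Retrieve pages via the DatasetItems API (e.g. a page range)
  const items = await fetch(
    `${BASE}/datasets/${job.datasetId}/items?page=1&limit=50`,
    { headers: { Authorization: `Bearer ${API_KEY}` } }
  );
  return items.json();
}
```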

By Jimleuk
8424

Rate limiting and waiting for external events

Task: Control your data flow with rate limits and external cues.

Main use cases:
- Control the rate at which items flow into one or more services in your workflow
- Wait for external events to occur before continuing with the rest of the workflow

By Jonathan
6607

Send a message via a Lark Bot

What this workflow does
This n8n workflow demonstrates how to send a message in Lark using a Lark bot. It begins with a manual trigger and then retrieves the necessary Lark token via a POST request. The token is used to authenticate and send a message to a specific chat using the Lark API. The Input node provides the required app_id, app_secret, chat_id, and message content. After obtaining the token, the message is sent with the Lark API's message/v4/send/ endpoint.

Who This Is For
This n8n workflow is ideal for organizations, teams, and developers who need to automate message sending within Lark, especially those managing notifications, alerts, or team reminders. It helps users reduce manual messaging by leveraging a Lark bot to deliver messages at specific intervals or based on particular conditions, enhancing team communication and responsiveness.

Setup
1. Fill the Input node with your values.
2. Replace the bearer token in the Send Message node with your token.
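
As a sketch of the two HTTP calls the workflow performs (token retrieval, then send), here is the equivalent in plain JavaScript. The endpoints follow Lark's open platform API as described above; all values are placeholders, so verify them against Lark's docs for your region.

```javascript
// Sketch of the token-then-send sequence the workflow runs. Values are
// placeholders; the message/v4/send/ endpoint is the one named above.
const BASE = "https://open.larksuite.com/open-apis";

async function sendLarkMessage(appId, appSecret, chatId, text) {
  // 1. Exchange app credentials for a tenant access token
  const tokenRes = await fetch(`${BASE}/auth/v3/tenant_access_token/internal`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ app_id: appId, app_secret: appSecret }),
  });
  const { tenant_access_token } = await tokenRes.json();

  // 2. Send a text message to the target chat
  await fetch(`${BASE}/message/v4/send/`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${tenant_access_token}`,
    },
    body: JSON.stringify({ chat_id: chatId, msg_type: "text", content: { text } }),
  });
}
```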

By Hiroshi
4048

2-way sync Notion and Google Calendar

This workflow syncs multiple Notion databases to your Google Calendar, and it works both ways.

What events are supported?
Everything except recurring events. All-day events, multi-day events, start and end dates… these are all supported. You set them in Notion and they stay in sync with Google, and vice versa.

Why doesn't it support recurring events?
Notion doesn't support recurring events yet. So when you create a recurring event in Google, it will only consider the first date, ignoring future occurrences of the event.

Can I connect more than one Notion database?
Yes. You can have many Notion databases synced to one Google Calendar account. You can see how to do it in the workflow instructions. It is recommended that you create more calendars in your account, so that you can link each calendar to a different database in Notion. But that's a choice.

What happens if I delete an event or page?
- Notion page deleted → deletes the event in Google
- Notion date property cleared → deletes the event in Google
- Google event deleted → clears the date property in Notion but keeps the page, so you don't lose your work

Does it update the events?
Yes. When you update an event in Google or in Notion, it syncs both ways.

How can I know which Notion item is linked to an event?
Either by the name or by clicking the hyperlink in the event description that says: 👉 View in Notion.

When I create a new event in Google, does it add an item to Notion?
Yes. When you create an event inside one of your calendars, the item is synced to the corresponding Notion database.

Does it sync event descriptions?
No. The event description will always be "View in Notion". Even if you change it in Google Calendar, it will be overwritten when you make a change to the Notion page.

🎉 When you buy this template you receive step-by-step instructions on how to set it up.

Check out my other templates 👉 https://n8n.io/creators/solomon/

By Solomon
3019

Search and compare flights with DeepSeek AI and Google Flights API

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
- Takes departure city, destination, and travel dates from the user.
- Searches multiple airlines for flight options and compares price, duration, and stops.
- Suggests flexible travel dates for better deals.
- Tracks selected flights and sends real-time price alerts.
- Provides 24/7 AI-powered travel recommendations.

Set up steps
1. Add credentials for your chosen Chat Model (DeepSeek in this case) and SerpAPI (Google Flights).
2. In the AI Agent node, link:
   - Chat Model → DeepSeek Chat Model node
   - Memory → Simple Memory node (for conversation context)
   - Tool → Google_flights search in SerpApi node
3. In the SerpApi node, set engine=google_flights and map input fields for departure, destination, and travel dates.
4. Test the workflow by providing a sample itinerary request in the Chat node's input.
5. Review the AI responses to ensure the agent searches, compares, and returns relevant flight options.
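
For reference, here is a minimal sketch of the underlying SerpApi request the Google Flights tool makes. The parameter names follow SerpApi's google_flights engine as configured above; the airport codes and dates are placeholders.

```javascript
// Sketch of the SerpApi google_flights query behind the agent's tool.
// Airport codes and dates are placeholders; api_key comes from SerpApi.
const params = new URLSearchParams({
  engine: "google_flights",
  departure_id: "JFK",         // departure airport code
  arrival_id: "LHR",           // destination airport code
  outbound_date: "2025-07-01",
  return_date: "2025-07-10",
  api_key: process.env.SERPAPI_KEY,
});

const res = await fetch(`https://serpapi.com/search?${params}`);
const data = await res.json();
// best_flights holds ranked options with price, duration, and stops
console.log(data.best_flights?.map((f) => f.price));
```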

By Fakhar Khan
1735

Build a WhatsApp assistant for text, audio & images using GPT-4o & Evolution API

Build an intelligent WhatsApp assistant that automatically responds to customer messages using AI. This template uses the Evolution API community node for WhatsApp integration and OpenAI for natural language processing, with built-in conversation memory powered by Redis to maintain context across messages.

> ⚠️ Self-hosted requirement: This workflow uses the Evolution API community node, which is only available on self-hosted n8n instances. It will not work on n8n Cloud.

What this workflow does
- Receives incoming WhatsApp messages via the Evolution API webhook
- Filters and processes text, audio, and image messages
- Transcribes audio messages using OpenAI Whisper
- Analyzes images using GPT-4 Vision
- Generates contextual responses with conversation memory
- Sends replies back through WhatsApp

Who is this for?
- Businesses wanting to automate customer support on WhatsApp
- Teams needing 24/7 automated responses with AI
- Developers building multimodal chat assistants
- Companies looking to reduce response time on WhatsApp

Setup instructions
1. Evolution API: install and configure Evolution API on your server. Create an instance and obtain your API key and instance name.
2. Redis: set up a Redis instance for conversation memory. You can use a local installation or a cloud service like Redis Cloud.
3. OpenAI: get your API key from platform.openai.com with access to GPT and Whisper models.
4. Webhook: configure your Evolution API instance to send webhooks to your n8n webhook URL.

Customization options
- Modify the system prompt in the AI node to change the assistant's personality and responses
- Adjust the Redis TTL to control how long conversation history is retained
- Add additional message type handlers for documents, locations, or contacts
- Integrate with your CRM or database to personalize responses

Credentials required
- Evolution API credentials (self-hosted)
- OpenAI API key
- Redis connection
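
As an illustration of the filtering step, here is a hedged sketch of how the text/audio/image split can look. The exact Evolution API webhook payload shape varies by version, so the field names below are assumptions, not the template's guaranteed contract.

```javascript
// Hedged sketch of the message-type branching described above. The payload
// field names (data.message.*) are ASSUMPTIONS modeled on common
// WhatsApp webhook shapes; inspect your own Evolution API payloads.
function classifyMessage(webhookBody) {
  const msg = webhookBody?.data?.message ?? {};
  if (msg.conversation || msg.extendedTextMessage) return "text";
  if (msg.audioMessage) return "audio";  // route to Whisper transcription
  if (msg.imageMessage) return "image";  // route to GPT-4 Vision
  return "unsupported";                  // ignore documents, stickers, etc.
}
```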

By Antonio Gasso
1686

Find & verify business emails automatically with OpenRouter, Serper & Prospeo

Who is this template for?
Growth teams, SDRs, recruiters, or anyone who routinely hunts for hard-to-find business emails and would rather spend time reaching out than guessing formats.

What problem does this workflow solve?
Manually piecing together email patterns, cross-checking them in a verifier, and updating a tracking sheet is slow and error-prone. This template automates the entire loop (research, guess, verify, and log) so you hit Start and watch rows fill up with ready-to-send addresses.

What this workflow does
1. Pull fresh leads: grabs only the rows in your Google Sheet where Status = FALSE.
2. Find the company pattern: queries Serper.dev for snippets and feeds them to Gemini Flash (via OpenRouter) to spot the dominant email format.
3. Build the address: constructs a likely email for every first/last name (see the sketch after this entry).
4. Verify in real time: pings Prospeo by default (API) or lets you bulk-clean in Sparkle.io.
5. Write it back: updates the sheet with pattern, email, confidence, and verification status, and flips Status to TRUE.
6. Loop until done: runs batch-by-batch so you never hit API limits.

---

🆓 Free-tier magic (up to ~2,500 contacts/month)

| Service | Free allowance | How this template uses it |
| --- | --- | --- |
| Serper.dev | 2,500 searches/mo | Scrapes three public email snippets per domain to learn the pattern |
| Sparkle.io | 10,000 bulk verifications/day | Manual upload/download option, perfect to clean your first 2.5k emails at zero cost |
| Prospeo | 75 API calls/mo | Built-in if you prefer fully automated verification |

Quick Sparkle workflow:
1. Let the template generate emails.
2. Export the "Email" column to CSV → upload to Sparkle.io.
3. Download the results and paste the "verification_status" back into the sheet (or add a small n8n import sub-flow).

---

Setup (5 minutes)
1. Copy the Google Sheet linked in the sticky note and paste its ID into the Get Rows and Update Rows nodes.
2. Add credentials for Google Sheets, Serper (X-API-KEY), OpenRouter, and optionally Prospeo.
3. Hit Execute Workflow. That's it.

---

How to customise
- Prefer Sparkle for volume: skip the Prospeo node, export emails in one click, bulk-verify in Sparkle, and re-import the results.
- Swap the search source: replace the Get Email Pattern HTTP node with Bing, Brave, etc.
- Extend enrichment: add phone look-ups or LinkedIn scrapers before the Update Rows node.
- Auto-run: replace the Manual Trigger with a Cron node so the sheet cleans itself every morning.

---

Additional resources

| Tool | Purpose | Link |
| --- | --- | --- |
| Prospeo: API-ready email verification (special offer: 20% free credits for the first 3 months on any plan using this link!) | Real-time, single-call mailbox validation | prospeo.io |
| Sparkle.io: high-volume bulk verifier (manual upload) | Free daily quota of 10,000 verifications | app.sparkle.io/sign-up |
| OpenRouter: API gateway for Gemini Flash & other LLMs | One key unlocks multiple frontier models | openrouter.ai |
| Serper.dev: Google Search API | 2,500 searches/month on the free tier | serper.dev |

Add the relevant keys or signup details from these links, drop them into the matching n8n credentials, and you're all set to enrich your first 2,500 contacts at zero cost. Happy building!
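
To illustrate the "build the address" step, here is a minimal sketch of applying a detected pattern to a contact. The pattern tokens ({first}, {last}, {f}, {l}) are assumptions; align them with whatever format your LLM prompt returns.

```javascript
// Sketch of building an address from a detected pattern. The token names
// are illustrative assumptions, not the template's exact convention.
function buildEmail(pattern, first, last, domain) {
  const local = pattern
    .replaceAll("{first}", first.toLowerCase())
    .replaceAll("{last}", last.toLowerCase())
    .replaceAll("{f}", first[0].toLowerCase())
    .replaceAll("{l}", last[0].toLowerCase());
  return `${local}@${domain}`;
}

// e.g. buildEmail("{f}{last}", "Ada", "Lovelace", "example.com")
// → "alovelace@example.com"
```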

By Naveen Choudhary
1527

Enrich CRM contact data with LinkedIn profiles using GPT & multi-CRM support

⚠️ DISCLAIMER: This workflow uses the AnySite LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview
This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. The workflow integrates with both Pipedrive and HubSpot CRMs, finding LinkedIn profiles that match your contacts and updating your CRM with valuable information about their professional background and recent activities.

Key Features
- Multi-CRM Support: works with both Pipedrive and HubSpot
- AI-Powered Data Enrichment: uses an advanced AI agent to analyze and summarize professional information
- Automated Triggers: activates when new contacts are added or when enrichment is requested
- Comprehensive Profile Analysis: captures LinkedIn profile summaries and post activity

How It Works

Triggers
The workflow activates when a new contact is created in the CRM, or when a contact is updated in the CRM with an enrichment flag.

LinkedIn Data Collection Process
1. Email Lookup: first tries to find the LinkedIn profile using the contact's email
2. Advanced Search: if the email lookup fails, uses name and company details to find potential matches
3. Profile Analysis: collects comprehensive profile information
4. Post Analysis: gathers and analyzes the contact's recent LinkedIn activity

CRM Updates
The workflow updates your CRM with:
- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity

Setup Instructions

Requirements
- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- AnySite API key: https://AnySite.io

Installation Steps
1. Install the HDW LinkedIn node: npm install n8n-nodes-hdw (detailed instructions: https://www.npmjs.com/package/n8n-nodes-hdw)
2. Configure credentials:
   - OpenAI: add your OpenAI API key
   - Pipedrive: connect your Pipedrive account (if using)
   - HubSpot: connect your HubSpot account (if using)
   - AnySite LinkedIn: add your API key from https://AnySite.io
3. CRM custom fields setup.

   For Pipedrive: go to Settings → Data Fields → Contact Fields → + Add Field and create the following custom fields:
   - LinkedIn Profile: field type Large text
   - Profile Summary: field type Large text
   - LinkedIn Posts Summary: field type Large text
   - Need Enrichment: field type Single option (Yes/No)
   Detailed instructions for creating custom fields in Pipedrive: https://support.pipedrive.com/en/article/custom-fields

   For HubSpot: go to Settings → Properties → Create property and create the following properties for the Contact object:
   - linkedin_url: field type Single-line text
   - profile_summary: field type Multi-line text
   - linkedin_posts_summary: field type Multi-line text
   - need_enrichment: field type Checkbox (Boolean)
   Detailed instructions for creating properties in HubSpot: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. Import the workflow: import the "HDW_CRM_Enrichment.json" file into your n8n instance
5. Activate webhooks: enable the webhook triggers for your CRM so the workflow activates correctly

Customization Options

AI Agent Prompts
You can modify the system prompts in the "Data Enrichment AI Agent" nodes to:
- Change the focus of profile analysis
- Adjust the tone and detail level of summaries
- Customize what information is extracted from posts

CRM Field Mapping
The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in:
- the "Update data in Pipedrive" nodes
- the "Update data in HubSpot" node

Troubleshooting

Common Issues
- LinkedIn profile not found: check whether the contact's email is their work email; consider adjusting the search parameters
- Webhook not triggering: verify the webhook configuration in your CRM
- Missing custom fields: ensure all required custom fields are created in your CRM with the correct names

Rate Limits
- Be aware of LinkedIn API rate limits (managed by the AnySite LinkedIn node)
- Consider implementing delays if processing large batches of contacts

Best Practices
- Use enrichment flags to selectively update contacts rather than enriching all contacts
- Review and clean contact data in your CRM before enrichment
- Periodically review the AI-generated summaries to ensure quality and relevance

By Andrey
1020

Automatically send a direct message (DM) to new followers on Bluesky using Baserow

Send personalized welcome messages to your new Bluesky followers automatically, helping you maintain engagement while saving time. This workflow monitors your follower list and sends customized direct messages, creating a warm welcome for new connections without manual intervention.

How it works
- Checks your Bluesky followers list daily at 9 AM
- Identifies new followers by comparing against a database
- Extracts the follower's first name when available
- Sends a personalized welcome message with an optional link
- Prevents duplicate messages through double-verification
- Maintains a record of sent messages to avoid repetition

Set up steps (10-15 minutes)
1. Create a Bluesky account if you haven't already
2. Generate an app password in your Bluesky settings
3. Enter your Bluesky handle and app password in the "Set Bluesky Credentials" node
4. Set up your database (Baserow, or adapt for Airtable/Google Sheets)
5. Customize your welcome message in the "Create Welcome Message" node
6. Optional: adjust the regular check time (default: 9 AM)

Features
- Personalized messaging using the follower's first name when available
- Database tracking to prevent duplicate messages
- Basic rate-limiting protection to stay within API limits
- Customizable welcome message with clickable links
- Ability to handle up to 100 new followers per check

Perfect for creators who want to
- Welcome new followers consistently
- Save time on manual messaging
- Build early engagement with followers
- Share important links or resources
- Maintain a professional presence
- Scale their community management

Suggested enhancements
- Add message templates for different follower types
- Include email/Slack notifications for errors
- Add analytics tracking for message success rates
- Implement dynamic timing based on follower activity
- Create A/B testing for different welcome messages
- Add follower segmentation based on profile data
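
For context, here is a minimal sketch of the daily follower check using standard AT Protocol XRPC calls. The handle and app password are placeholders; the DM send itself is omitted because it goes through Bluesky's separate chat service.

```javascript
// Sketch of the follower check the workflow runs each morning, using the
// public XRPC endpoints. Credentials are placeholders.
const PDS = "https://bsky.social/xrpc";

async function getFollowers(handle, appPassword) {
  // 1. Create a session with your handle + app password
  const auth = await fetch(`${PDS}/com.atproto.server.createSession`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ identifier: handle, password: appPassword }),
  }).then((r) => r.json());

  // 2. Fetch up to 100 followers to diff against your database
  const res = await fetch(
    `${PDS}/app.bsky.graph.getFollowers?actor=${auth.did}&limit=100`,
    { headers: { Authorization: `Bearer ${auth.accessJwt}` } }
  );
  const { followers } = await res.json();
  return followers; // each item has did, handle, displayName
}
```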

By Gareth B. Davies
782

Upload & rename videos to Google Drive via Apps Script from URL

📄 Google Apps Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose: This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the uploaded file's Drive URL. This is useful for large video/audio files that n8n cannot handle directly via HTTP Download nodes.

---

🚀 Setup Steps:

Create a New Script Project
- Go to https://script.google.com
- Click "New Project"
- Rename the project to something like: DriveUploader

---

Paste the Script Code
Replace the default Code.gs content with the following:

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

---

Generate & Set Up Secret Key
To ensure that only authorized POST requests reach your script, generate a secret key from any reliable key generator. You can head over to acte, click Generate, and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click Save:

```javascript
const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
```

Replace Folder ID in Code
- Open the target Drive folder in your browser
- The folder ID is the part of the URL after /folders/
- Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn
- Paste that ID in the script:

```javascript
const folderId = '1Xabc12345678defGHIJklmn';
```

---

Set Up Deployment as Web App
- Click "Deploy" > "Manage Deployments" > "New Deployment"
- Under Select type, choose Web app
- Description: Upload from URL to Drive
- Execute as: Me
- Who has access: Anyone
- Click Deploy
- Authorize the script when prompted
- Copy the Web App URL

---

📤 How to Use in n8n

HTTP Request Node
- Method: POST
- URL: (your web app URL)
- Body Content Type: JSON
- Body (the secret key is sent inside the JSON body):

```json
{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}
```

videoUrl: the file download URL. secret: the secret key you generated and set in the script.

Rename Node
A simple Drive update node to rename the file, using the Drive file URL returned by the script.

---

By Joseph
754