11 templates found

WhatsApp to Chatwoot message forwarder with media support

Description

Automates the forwarding of messages from WhatsApp (via Evolution API) to Chatwoot, enabling seamless integration between external WhatsApp users and internal Chatwoot agents. It supports both text and media messages, ensuring that customer conversations are centralized and accessible for support teams.

What Problem Does This Solve?

Managing conversations across multiple platforms can lead to fragmented support and lost context. This subworkflow bridges the gap between WhatsApp and Chatwoot, automatically forwarding messages received via the Evolution API to a Chatwoot inbox. It simplifies communication flow, centralizes conversations, and enhances the support team's productivity.

Features
- Support for plain text messages
- Support for media messages: images, videos, documents, and audio
- Automatic media upload to Chatwoot with proper attachment rendering
- Automatic contact association using the WhatsApp number and the Chatwoot API
- Designed to work with Evolution API webhooks or any message source

Prerequisites

Before using this automation, make sure you have:
- Evolution API credentials with an incoming message webhook configured
- A Chatwoot instance with an access token and API endpoint
- An existing Chatwoot inbox (preferably an API channel)
- A configured HTTP Request node in n8n for Chatwoot API calls

Suggested Usage

This subworkflow should be attached to a parent workflow that receives WhatsApp messages via the Evolution API webhook. Ideal for:
- Centralized customer service operations
- WhatsApp-to-CRM/chat routing
- Hybrid automation workflows where human agents need to reply from Chatwoot

It ensures that all incoming WhatsApp messages are properly converted and forwarded to Chatwoot, preserving message content and structure.
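The forwarding step boils down to mapping an Evolution API webhook event onto Chatwoot's create-message endpoint. A minimal sketch of that mapping in Python: the Evolution keys used here ("type", "text", "caption", "mediaUrl") are illustrative and depend on your webhook version, while "content", "message_type", and "private" follow Chatwoot's message fields.

```python
def build_chatwoot_message(evo_msg: dict) -> dict:
    """Map a simplified Evolution API message event to a Chatwoot
    create-message payload (field names on the Evolution side are
    illustrative; adjust them to your webhook payload version)."""
    payload = {"message_type": "incoming", "private": False}
    if evo_msg.get("type") == "text":
        payload["content"] = evo_msg.get("text", "")
    else:
        # For media, Chatwoot expects the file itself uploaded alongside
        # the message; this sketch only carries the URL the workflow
        # downloads from before re-uploading.
        payload["content"] = evo_msg.get("caption", "")
        payload["media_url"] = evo_msg.get("mediaUrl")
    return payload

# The payload would then be POSTed (by the HTTP Request node) to
# {CHATWOOT_URL}/api/v1/accounts/{account_id}/conversations/{conversation_id}/messages
# with the api_access_token header set.
```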

By Thiago Vazzoler Loureiro
2581

Convert newsletters into AI podcasts with GPT-4o Mini and ElevenLabs

🎧 Convert Unread Newsletters into Conversational AI Podcasts

Turn email overload into audio insights — automatically.

This workflow transforms unread newsletters sitting in your inbox into engaging, human-like audio conversations between two AI voices. It’s perfect for listening during your commute, workout, or while multitasking. Inspired by Google's NotebookLM, this automation brings long-form content to life by summarizing dense text into a natural dialogue using OpenAI and generating high-quality voice narration with ElevenLabs. The result? A dynamic audio file sent right back to your inbox — hands-free, screen-free, and stress-free.

💡 What this workflow does
✅ Connects to your Gmail inbox to fetch unread newsletters
🤖 Uses GPT-4o Mini to summarize and rephrase content as a conversation
🗣️ Sends the dialogue to ElevenLabs to generate voice clips (voice1 + voice2)
🔁 Merges all audio segments into a single podcast-like MP3 using FFmpeg
📬 Emails the final audio back to you for easy listening

🛠️ What you'll need
- A Gmail account with IMAP enabled
- An OpenAI API key (GPT-4o Mini recommended for cost/performance)
- An ElevenLabs API key + selected voice IDs
- A self-hosted or local n8n instance with FFmpeg installed
- Basic knowledge of binary data and audio handling in n8n

✨ Use cases
- Convert long newsletters into hands-free listening experiences
- Repurpose Substack or Beehiiv content for podcast-like distribution
- Build an internal voice dashboard for teams who prefer audio updates

🙌 Want to go further?
This workflow is modular and extensible. You can add steps to:
- Upload the final audio to Spotify, SoundCloud, or Telegram
- Publish to a private podcast RSS feed
- Create a daily audio digest from multiple newsletters

📬 Contact & Feedback
Need help customizing it? Have ideas or feedback? Feel free to reach out: 📩 Luis.acosta@news2podcast.com
If you're building something more advanced with audio + AI, like automated podcast publishing to Spotify — let me know and I’ll figure out how I can help you!
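The merge step relies on FFmpeg's concat demuxer, which reads a list file naming the segments in order. A small helper that writes such a list (segment file names are illustrative):

```python
def ffmpeg_concat_list(segments: list[str]) -> str:
    """Build the contents of an FFmpeg concat-demuxer list file.

    Each line must be: file '<path>'. Single quotes inside paths are
    escaped in the '\\'' style FFmpeg's quoting accepts.
    """
    lines = []
    for path in segments:
        escaped = path.replace("'", r"'\''")
        lines.append(f"file '{escaped}'")
    return "\n".join(lines) + "\n"

# Written to e.g. list.txt, the merge command would be:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy podcast.mp3
```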

By Luis Acosta
1708

Manage group members in Bitwarden automatically

This workflow allows you to create a group, add members to the group, and get the members of the group.
- Bitwarden node: This node will create a new group called documentation in Bitwarden.
- Bitwarden1 node: This node will get all the members from Bitwarden.
- Bitwarden2 node: This node will update all the members in the group that we created earlier.
- Bitwarden3 node: This node will get all the members in the group that we created earlier.

By Harshil Agrawal
1093

Manage Odoo CRM with natural language using OpenAI and MCP Server

Odoo CRM MCP Server Workflow

📖 Overview
This workflow connects an AI Agent with Odoo CRM using the Model Context Protocol (MCP). It allows users to manage CRM data in Odoo through natural language chat commands. The assistant interprets the user’s request, selects the appropriate Odoo action, and executes it seamlessly.

🔹 Key Features
- Contacts Management: Create, update, delete, and retrieve contacts.
- Opportunities Management: Create, update, delete, and retrieve opportunities.
- Notes Management: Create, update, delete, and retrieve notes.
- Conversational AI Agent: Understands natural language and maps requests to Odoo actions.
- Model Used: OpenAI Chat Model.

This makes it easy for end-users to interact with Odoo CRM without needing technical commands—just plain language instructions.

▶️ Demo Video
Watch the full demo here: 👉 YouTube Demo Video

⚙️ Setup Guide
Follow these steps to set up and run the workflow:

Prerequisites
- An Odoo instance configured with CRM enabled.
- An n8n or automation platform account where MCP workflows are supported.
- An OpenAI API key with access to GPT models.
- MCP Server installed and running.

Import the Workflow
- Download the provided workflow JSON file.
- In your automation platform (n8n, Langflow, or other MCP-enabled tool), choose Import Workflow.
- Select the JSON file and confirm.

Configure MCP Server
- Go to your MCP Server Trigger node in the workflow.
- Configure it to connect with your Odoo instance: set the API endpoint and provide authentication credentials (API key).
- Test the connection to ensure the MCP server can reach Odoo.

Configure the OpenAI Model
- Select the OpenAI Chat Model node in the workflow.
- Enter your OpenAI API Key.
- Choose the model (e.g., gpt-5 or gpt-5-mini).

AI Agent Setup
- The AI Agent node links the Chat Model, Memory, and MCP Client.
- Ensure the MCP Client is mapped to the correct Odoo tools (Contacts, Opportunities, Notes).
- The System Prompt defines assistant behavior—use the tailored system prompt provided earlier.

Activate and Test
- Turn the workflow ON (toggle Active).
- Open chat and type:
  "Create a contact named John Doe with email john@example.com."
  "Show me all opportunities."
  "Add a note to John Doe saying 'Follow-up scheduled for Friday'."
- Verify the results in your Odoo CRM.

✅ Next Steps
- Extend functionality with Tasks, Stages, Companies, and Communication Logs for a complete CRM experience.
- Add confirmation prompts for destructive actions (delete contact/opportunity/note).
- Customize the AI Agent’s system prompt for your organization’s workflows.
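Under the hood, each MCP tool maps a conversational intent onto an Odoo model operation. A sketch of that routing: res.partner and crm.lead are real Odoo models, but the intent names and the mapping itself are illustrative of how the agent's tools could be wired, not the template's actual tool list.

```python
# Illustrative mapping from MCP tool intents to (Odoo model, method) pairs.
INTENT_MAP = {
    "create_contact": ("res.partner", "create"),
    "update_contact": ("res.partner", "write"),
    "delete_contact": ("res.partner", "unlink"),
    "list_opportunities": ("crm.lead", "search_read"),
}

def route_intent(intent: str):
    """Return the Odoo (model, method) a tool call should execute, or None."""
    return INTENT_MAP.get(intent)

# Via Odoo's external XML-RPC API, create_contact would become roughly:
# models.execute_kw(db, uid, password, "res.partner", "create",
#                   [{"name": "John Doe", "email": "john@example.com"}])
```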

By Rohit Dabra
873

Daily RAG research paper hub with arXiv, Gemini AI, and Notion

Fetch user-specific research papers from arXiv on a daily schedule, process and structure the data, and create or update entries in a Notion database, with support for data delivery.

- Paper Topic: single query keyword
- Update Frequency: daily updates, with fewer than 20 entries expected per day

Tools
- Platform: n8n, for end-to-end workflow configuration
- AI Model: Gemini-2.5-Flash, for daily paper summarization and data processing
- Database: Notion, with two tables — Daily Paper Summary and Paper Details
- Messaging: Feishu (IM bot notifications), Gmail (email notifications)

Data Retrieval

arXiv API
arXiv provides a public API that allows users to query research papers by topic or by predefined categories (see the arXiv API User Manual).

Key Notes:
- Response Format: The API returns data as a typical Atom response.
- Timezone & Update Frequency: The arXiv submission process operates on a 24-hour cycle. Newly submitted articles become available in the API only at midnight after they have been processed. Feeds are updated daily at midnight Eastern Standard Time (EST), so a single request per day is sufficient.
- Request Limits: The maximum number of results per call (max_results) is 30,000. Results must be retrieved in slices of at most 2,000 at a time, using the max_results and start query parameters.
- Time Format: The expected format is [YYYYMMDDTTTT+TO+YYYYMMDDTTTT], where TTTT is given in 24-hour time to the minute, in GMT.

Scheduled Task
- Execution Frequency: daily
- Execution Time: 6:00 AM
- Time Parameter Handling (JS): according to arXiv’s update rules, the scheduled task should query the previous day’s (T-1) submittedDate data.

Data Extraction

Data cleaning rules (convert to standard JSON):
- Remove the header; keep only the <entry>…</entry> blocks representing paper items.
- Each <entry>…</entry> represents a single item.

Field processing rules:
- <id> ➡️ id: extract the content. Example: <id>http://arxiv.org/abs/2409.06062v1</id> → http://arxiv.org/abs/2409.06062v1
- <updated> ➡️ updated: convert the timestamp to yyyy-mm-dd hh:mm:ss
- <published> ➡️ published: convert the timestamp to yyyy-mm-dd hh:mm:ss
- <title> ➡️ title: extract the text content
- <summary> ➡️ summary: keep the text, remove line breaks
- <author> ➡️ author: combine all authors into an array, e.g. ["Ernest Pusateri", "Anmol Walia"] (for a Notion multi-select field)
- <arxiv:comment> ➡️ ignore / discard
- <link type="text/html"> ➡️ html_url: extract the URL
- <link type="application/pdf"> ➡️ pdf_url: extract the URL
- <arxiv:primary_category term="cs.CL"> ➡️ primarycategory: extract the term value
- <category> ➡️ category: merge all <category> values into an array, e.g. ["eess.AS", "cs.SD"] (for a Notion multi-select field)
- Add empty fields: github, huggingface

Data Processing
Analyze and summarize paper data using AI, then standardize the output as JSON:
- single-paper basic information analysis and enhancement
- daily paper summary and multilingual translation

Data Storage: Notion
- Create a corresponding database in Notion with the same predefined field names.
- In Notion, create an integration under Integrations and grant it access to the database. Obtain the corresponding Secret Key.
- Use the Notion "Create a database page" node to configure the field mapping and store the data.

Notes
- "Create a database page" only adds new entries; existing data will not be updated.
- The updated and published timestamps of arXiv papers are in UTC.
- Notion single-select and multi-select fields only accept arrays; they do not automatically parse comma-separated strings. Format them as proper arrays.
- Notion does not accept null values, which causes a 400 error.

Data Delivery
Set up two channels for message delivery, EMAIL and IM, and define the message format and content.

Email: Gmail
- Gmail OAuth 2.0 – Official Documentation: configure your OAuth consent screen.
- Steps: enable the Gmail API, create the OAuth consent screen, create OAuth client credentials, and add Test users under Audience while in Testing status.
- Message format: HTML (Model: OpenAI GPT — used to design an HTML email template)

IM: Feishu (Lark)
- Bots in groups: use bots in groups.
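Per the update rules above, the daily job queries the previous day's (T-1) submittedDate window and pages with start/max_results. A sketch of building that request URL; the "RAG" keyword and the 200-result page size are illustrative:

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def arxiv_query_url(keyword: str, day: datetime,
                    start: int = 0, max_results: int = 200) -> str:
    """Build an arXiv API URL for papers submitted on the given day.

    submittedDate uses GMT in [YYYYMMDDTTTT+TO+YYYYMMDDTTTT] format;
    results must be paged with start/max_results (<= 2,000 per slice).
    """
    d0 = day.strftime("%Y%m%d") + "0000"
    d1 = day.strftime("%Y%m%d") + "2359"
    search = f"all:{keyword} AND submittedDate:[{d0} TO {d1}]"
    params = urlencode({"search_query": search,
                        "start": start, "max_results": max_results})
    return "http://export.arxiv.org/api/query?" + params

# Query for yesterday (T-1), as the 6:00 AM scheduled task would:
url = arxiv_query_url("RAG", datetime(2024, 9, 10) - timedelta(days=1))
```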

By dongou
696

Automate loan document analysis with Mistral OCR and GPT for underwriting decisions

LOB Underwriting with AI

This template ingests borrower documents from OneDrive, extracts text with OCR, classifies each file (ID, paystub, bank statement, utilities, tax forms, etc.), aggregates everything per borrower, and asks an LLM to produce a clear underwriting summary and decision (plus next steps).

Good to know
- AI and OCR usage consume credits (OpenAI + your OCR provider).
- Folder lookups by name can be ambiguous—use a fixed folderId in production.
- Scanned image quality drives OCR accuracy; bad scans yield weak text.
- This flow handles PII—mask sensitive data in logs and control access.
- Start small: batch size and pagination keep costs/memory sane.

How it works
- Import & locate docs: A manual trigger kicks off a OneDrive folder search (e.g., “LOBs”) and lists the files inside.
- Per-file loop: Download each file → run OCR → classify the document type using the filename + extracted text.
- Aggregate: Combine per-file results into a borrower payload (make BorrowerName dynamic).
- LLM analysis: Feed the payload to an AI Agent (OpenAI model) to extract underwriting-relevant facts and produce a decision + next steps.
- Output: Return a human-readable summary (and optionally structured JSON for systems).

How to use
- Start with the Manual Trigger to validate end-to-end on a tiny test folder.
- Once stable, swap in a Schedule/Cron or Webhook trigger.
- Review the generated underwriting summary; handle only flagged exceptions (unknown/unreadable docs, low confidence).

Setup steps
- Connect accounts: Add credentials for OneDrive, OCR, and OpenAI.
- Configure inputs: In Search a folder, point to your borrower docs (prefer folderId; otherwise tighten the name query). In Get items in a folder, enable pagination if the folder is large. In Split in Batches, set a conservative batch size to control costs.
- Wire the file path: Download a file must receive the current file’s id from the folder listing. Make sure the OCR node receives binary input (PDFs/images).
- Classification: Update keyword rules to match your region/lenders/utilities/tax forms. Keep a fallback Unknown class and log it for review.
- Combine: Replace the hard-coded BorrowerName with a Set node field, a form input, or parsing from folder/file naming conventions.
- AI Agent: Set your OpenAI model/credentials. Ask the model to output JSON first (structured fields) and Markdown second (readable summary). Keep temperature low for consistent, audit-friendly results.
- Optional outputs: Persist JSON/Markdown to Notion/Docs/DB or write to storage.

Customize if needed
- Doc types: add/remove categories and keywords without touching core logic.
- Error handling: add IF paths for empty folders, failed downloads, empty OCR, or Unknown class; retry transient API errors.
- Privacy: redact IDs/account numbers in logs; restrict execution visibility.
- Scale: add MIME/size filters, duplicate detection, and multi-borrower folder patterns (parent → subfolders).
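The classification step described above can be a plain keyword ruleset over filename plus OCR text, with an Unknown fallback. A sketch; the keywords and category names are illustrative and should be tuned to your region, lenders, and tax forms:

```python
# Illustrative keyword rules per document type; first match wins.
RULES = [
    ("paystub", ["pay stub", "paystub", "gross pay", "net pay"]),
    ("bank_statement", ["bank statement", "account summary", "ending balance"]),
    ("id", ["driver license", "passport", "identification"]),
    ("tax_form", ["w-2", "1099", "form 1040"]),
    ("utility", ["utility", "electric bill", "water bill"]),
]

def classify_document(filename: str, ocr_text: str) -> str:
    """Classify a borrower document from its filename + extracted text.

    Falls back to "Unknown", which the workflow should log for review.
    """
    haystack = (filename + " " + ocr_text).lower()
    for doc_type, keywords in RULES:
        if any(k in haystack for k in keywords):
            return doc_type
    return "Unknown"
```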

By Vinay Gangidi
471

Smart shipping prioritization with Google Gemini and Google Sheets

How it works
This template waits for an external button to be pressed via webhook, then reads a Google Sheet with pending shipments. The sheet contains the columns idEnvio, fechaOrden, nombre, direccion, detalle, and enviado. It determines the next shipment using Google Gemini Flash 2.5, considering not only the date but also the customer’s comments. Once the next shipment is selected, the column “enviado” is updated with an X, and the shipping information is forwarded to Unihiker’s n8n Terminal.

Setup
1. Create a new Google Sheet and name it "Shipping".
2. Add the following column headers in the first row: idEnvio, fechaOrden, nombre, direccion, detalle, and enviado.
3. Connect your Google Sheets and Google Gemini credentials.
4. In your n8n workflow, select the Shipping sheet in the Google Sheets node.
5. Copy the webhook URL and paste it into the .ino code for your Unihiker n8n Terminal. 🚀

By Roni Bandini
320

Extract & organize email invoices with Gmail, Drive & OpenAI GPT

Who’s it for
This template is for founders, finance teams, and solo operators who receive lots of invoices by email and want them captured automatically in a single, searchable source of truth. If you’re tired of hunting through your inbox for invoice PDFs or “that one receipt from three months ago,” this is for you.

What it does / How it works
The workflow polls your Gmail inbox on a schedule and fetches new messages, including their attachments. A JavaScript Code node restructures all attachments, and a PDF extraction node reads any attached PDFs. An AI “Invoice Recognition Agent” then analyzes the email body and attachments to decide whether the email actually contains an invoice. If not, the workflow stops. If it is an invoice, a second AI “Invoice Data Extractor” pulls structured fields such as dateemail, dateinvoice, invoicenr, description, provider, netamount, vat, gross_amount, label (saas/hardware/other), and currency. Depending on whether the invoice is in an attachment or directly in the email text, the workflow either uploads the invoice file to Google Drive or records a direct link to the email, then appends/updates a row in Google Sheets with all invoice parameters plus a Drive link, and finally marks the Gmail message as read.

How to set up
1. Add and authenticate: Gmail credentials, Google Sheets credentials, Google Drive credentials, and OpenAI (or compatible) credentials for the AI nodes.
2. Create or select a Google Sheet with the expected columns (dateemail, dateinvoice, invoicenr, description, provider, netamount, vat, gross_amount, label, currency, link).
3. Create or select a Google Drive folder where invoices/docs should be stored.
4. Adjust the Gmail Trigger filters (labels, search query, polling interval) to match the mailbox you want to process.
5. Update node credentials and resource IDs (Sheet, Drive folder) via the node UIs, not hardcoded in HTTP nodes.

Requirements
- n8n instance (cloud or self-hosted)
- Gmail account with OAuth2 set up
- Google Drive and Google Sheets access
- OpenAI (or compatible) API key configured in n8n
- Sufficient permissions to read emails, read/write Drive files, and edit the target Sheet

How to customize the workflow
- Change invoice categories: Extend the label enum (e.g., add “services”, “subscriptions”) in the extraction schema and adjust any downstream logic.
- Refine invoice detection: Tweak the AI prompts to be more or less strict about what counts as an invoice or receipt.
- Add notifications: After updating the Sheet, send a Slack/Teams message or email summary for high-value invoices.
- Filter by sender or subject: Narrow the Gmail Trigger to specific vendors, labels, or keywords.
- Extend the data model: Add fields (e.g., cost center, project code) to the extractor prompt and Sheet mapping to fit your bookkeeping setup.
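Since the extractor's output feeds a Sheet with a fixed column set and an enum label, it pays to sanity-check each record before writing it. A sketch using the field names listed above (the check itself is an assumption, not part of the template):

```python
# Field names from the template's extraction schema.
REQUIRED = ["dateemail", "dateinvoice", "invoicenr", "description", "provider",
            "netamount", "vat", "gross_amount", "label", "currency"]
LABELS = {"saas", "hardware", "other"}

def validate_invoice(record: dict) -> list[str]:
    """Return a list of problems with an extracted invoice record.

    An empty list means the record is safe to append to the Sheet.
    """
    problems = [f"missing {f}" for f in REQUIRED
                if record.get(f) in (None, "")]
    if record.get("label") not in LABELS:
        problems.append(f"label must be one of {sorted(LABELS)}")
    return problems
```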

By Feras Dabour
290

Automate sales order prioritization with ERP-WMS-TMS integration based on SLA tiers

This n8n workflow automates the prioritization and scheduling of sales orders based on customer SLAs, urgency, and profitability. It ensures that high-priority and SLA-critical orders are picked, packed, and dispatched first—improving fulfillment speed, customer satisfaction, and operational efficiency across warehouses and logistics.

Key Features
- Automated Order Fetching: Periodically retrieves all pending or confirmed sales orders from ERP systems.
- Dynamic SLA-Based Prioritization: Calculates order priority scores using urgency, customer tier, order value, and profit margin.
- Intelligent SLA Monitoring: Identifies SLA breaches and automatically flags them for immediate handling.
- Automated Task Creation: Generates urgent picking tasks and alerts warehouse managers in real time.
- Smart Scheduling: Optimizes picking and dispatch timelines based on urgency and capacity.
- Seamless ERP & TMS Integration: Updates order statuses and schedules dispatches automatically.
- Operational Transparency: Sends end-of-cycle summary reports via email to operations teams.

Workflow Process
1. Schedule Trigger: Runs every 15 minutes so orders are frequently evaluated, maintaining real-time responsiveness without overloading systems.
2. Fetch Pending Orders: Retrieves all orders in pending or confirmed state from the ERP API, with a configurable limit (e.g., 100 orders per run) for controlled processing.
3. Fetch Customer SLA Data: Collects SLA tiers, delivery timeframes, and customer-specific priorities from the ERP or CRM API. Supports dynamic customer segmentation (Gold, Silver, Bronze tiers).
4. Calculate Priority Scores: Assigns weighted priority scores — urgency 40%, tier 30%, order value 20%, profit margin 10% — producing a composite score used for sorting and scheduling.
5. Check if SLA Critical: Detects whether any order is close to or past its SLA deadline and routes SLA-breached orders for immediate escalation.
6. Create Urgent Picking Task: Generates warehouse picking tasks for critical orders and assigns them to senior pickers or rapid-response teams.
7. Alert Warehouse Manager: Sends instant SMS and email alerts for SLA-critical or high-priority orders to ensure immediate managerial attention.
8. Batch Normal Orders: Groups non-critical orders into batches of 10 for optimized processing, reducing load on the WMS while maintaining steady throughput.
9. Generate Picking Schedule: Creates smart picking schedules based on urgency: high priority same-day, normal within 1 day, low within 2–3 days.
10. Create Bulk Picking Tasks: Pushes picking tasks into the WMS (Warehouse Management System) using auto-assignment and route-optimization logic.
11. Generate Dispatch Schedule: Builds dispatch timelines according to delivery method (Express, Priority, or Standard) and syncs with transport capacity data.
12. Schedule Dispatches in TMS: Sends dispatch requests to the TMS (Transport Management System), booking carriers and reserving capacity for each batch.
13. Update Order Statuses: Updates the ERP with new order statuses, schedules, and expected delivery dates, maintaining a unified view across systems.
14. Generate Summary Report: Creates a summary JSON report including total orders processed, SLA-critical cases, dispatch breakdowns, and next deadlines.
15. Send Summary Notification: Emails the operations team with the execution summary and performance metrics for team alignment and SLA visibility.

Industries That Benefit
- E-commerce & Retail: prioritize same-day or express deliveries for VIP customers.
- Logistics & 3PL Providers: meet tight SLAs across multiple clients and delivery tiers.
- Manufacturing & B2B Distribution: ensure high-value or contractual orders are prioritized.
- Pharma & Healthcare: critical for time-sensitive and compliance-driven deliveries.
- Consumer Goods & FMCG: manage high-volume dispatch with smart scheduling.

Prerequisites
- ERP system with API access (e.g., SAP, Odoo, NetSuite).
- WMS and TMS integrations with order/task APIs.
- SMTP and SMS gateway credentials for notifications.
- n8n instance with HTTP, Function, Email, and Scheduler nodes installed.

Modification Options
- Customize priority-scoring weights per business type.
- Integrate AI for predictive SLA-breach forecasting.
- Add Slack/Teams channels for real-time operational alerts.
- Implement escalation routing for unassigned urgent tasks.
- Extend reports to include OTIF (On-Time-In-Full) metrics.

Explore More AI-Powered Workflows: Contact us for customized supply chain and order management automation.
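The weighted scoring step can be sketched directly from the weights the template lists (urgency 40%, tier 30%, order value 20%, profit margin 10%); normalizing each input to a 0–100 scale is an assumption, so adapt it to your data:

```python
# Weights per the template: urgency 40%, tier 30%, value 20%, margin 10%.
WEIGHTS = {"urgency": 0.40, "tier": 0.30, "order_value": 0.20, "profit_margin": 0.10}

def priority_score(urgency: float, tier: float,
                   order_value: float, profit_margin: float) -> float:
    """Composite priority score from four inputs normalized to 0-100."""
    parts = {"urgency": urgency, "tier": tier,
             "order_value": order_value, "profit_margin": profit_margin}
    return round(sum(WEIGHTS[k] * v for k, v in parts.items()), 2)

# An SLA-critical Gold-tier order scores near the top:
priority_score(100, 90, 50, 40)  # 40 + 27 + 10 + 4 = 81.0
```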

By Oneclick AI Squad
149

Create Dev.to articles with OpenAI/Gemini - AI-generated content with images

AI Blog Publisher Workflow for Dev.to

Turn a simple idea into a complete blog article with images, ready to publish — fully automated.

How It Works
This workflow takes a single input (your article idea) and transforms it into a polished blog post without manual effort. It begins with a topic, entered directly in the Set node. For more automation, you can connect it to Google Sheets, a webhook, or even a chatbot that collects ideas from you or your team. From there, the workflow does all the heavy lifting:
- The AI creates a structured plan for the article, including outline, section goals, and image suggestions.
- Image prompts are generated and sent to Gemini (or ChatGPT), which returns high-quality visuals that match the content.
- These are uploaded to your Cloudinary account so they’re instantly available online.
- The article is written in clean Markdown by AI, weaving text and images together in a natural way.
- Finally, the post is automatically published as a draft on Dev.to (or another platform of your choice, such as Medium, WordPress, or Ghost).

Instead of dealing with multiple tools or outsourcing to a copywriter, this workflow handles the entire pipeline — from idea to draft — in one seamless process.

Setup Steps
Getting started takes only a few minutes:
1. Connect your OpenAI account for the AI writer and planner.
2. Add your Dev.to API key so the workflow can publish drafts.
3. Provide your Cloudinary account name and set up an unsigned upload preset for hosting images.
4. (Optional) Add your Gemini API key, or switch to ChatGPT for image generation.
5. Enter your first idea into the Set input data/credentials node, then run the workflow manually, on a schedule (Cron), or automatically via Google Sheets or a webhook.

Once configured, the workflow runs on autopilot — generating, illustrating, and publishing content without your input.

What You Get
Think of it as having a 24/7 content team working in the background.
- Complete blog articles, written in a professional and natural tone.
- Images that fit directly into the text, giving your content visual appeal.
- Ready-to-publish drafts delivered straight to Dev.to (or your chosen platform).
- A modular workflow that you can easily extend — whether you want new inputs (e.g. Slack, chat), new outputs (e.g. Medium, WordPress), or new AI models.

This isn’t just a template. It’s a fully operational content engine you can plug into your business.

Results You Can Expect
Publishing consistently online builds trust, visibility, and authority. With this workflow, you’ll:
- Maintain a strong presence with regular articles — even when you’re not writing.
- Increase conversions by showing up more often in searches and recommendations.
- Build credibility by consistently sharing insights and solutions in your niche.
- Save money and time, replacing the need for a copywriter with scalable AI-driven automation.

Who This Is For
- Developers who want to showcase projects without spending hours writing.
- Marketers looking to scale content strategies without hiring writers.
- Agencies and SaaS teams who need regular publishing for SEO and community presence.
- Solopreneurs who want their personal brand to grow online while they focus on building their product.

With this workflow, your blog becomes fully automated. All you need is an idea — the system takes care of everything else.
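The Cloudinary step uses an unsigned upload preset, which means no API secret is needed in the request: only the cloud name (in the URL) and the preset name (as a form field). A sketch of building that request; "demo" and "blog_images" are placeholders:

```python
def cloudinary_unsigned_upload(cloud_name: str, upload_preset: str):
    """Build the endpoint and form fields for a Cloudinary unsigned upload.

    Unsigned uploads need only the preset name; the image itself is sent
    as the 'file' form field (binary data or a remote URL).
    """
    endpoint = f"https://api.cloudinary.com/v1_1/{cloud_name}/image/upload"
    fields = {"upload_preset": upload_preset}
    return endpoint, fields

# The n8n HTTP Request node would POST multipart form data to `endpoint`
# with `fields` plus the generated image as 'file'.
url, fields = cloudinary_unsigned_upload("demo", "blog_images")
```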

By LukaszB
105

Slack auto translator (JA ⇄ EN) with GPT-4o-mini

🧠 How it works
This workflow enables automatic translation in Slack using n8n and OpenAI. When a user types /trans followed by text, n8n detects the language and replies with the translated version via Slack.

⚙️ Features
- Detects the input language automatically
- Translates between Japanese ↔ English using GPT-4o-mini (temperature 0.2 for stability)
- Sends a quick “Translating...” acknowledgement to avoid Slack’s 3s timeout
- Posts the translated text back to Slack (public or private selectable)
- Supports overrides like en: こんにちは or ja: hello

💡 Perfect for
- Global teams communicating in Japanese and English
- Developers learning how to connect Slack + OpenAI + n8n

🧩 Notes
- Use sticky notes inside the workflow for setup details.
- Duplicate and modify it to support mentions, group messages, or other language pairs.
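The override syntax (an en: or ja: prefix forcing the target language) can be parsed before the text reaches the model. A sketch of that pre-processing step, as it might run in a Code node:

```python
def parse_trans_input(text: str):
    """Split an optional 'en:'/'ja:' target-language override from /trans input.

    Returns (target_lang or None, remaining_text). With no override, the
    workflow lets GPT-4o-mini detect the language and translate to the
    opposite one (JA <-> EN).
    """
    stripped = text.strip()
    for lang in ("en", "ja"):
        prefix = lang + ":"
        if stripped.lower().startswith(prefix):
            return lang, stripped[len(prefix):].strip()
    return None, stripped
```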

By Tomohiro Goto
102