48 templates found

AI-powered WhatsApp chatbot for text, voice, images, and PDF with RAG

Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI via WhatsApp messaging.

What these workflows do
Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response via WhatsApp
- Listens for incoming WhatsApp user messages, supporting various types:
  - Text messages: plain text queries from users.
  - Audio messages: voice notes transcribed into text for processing.
  - Image messages: photos or screenshots analyzed to provide contextual answers.
  - Document messages: PDFs, spreadsheets, or other files parsed for relevant content.
- Converts incoming queries to vector embeddings and performs similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers.
- Maintains conversation context across multiple turns using a memory buffer node.
- Routes different message types to appropriate processing nodes to maximize answer quality.

Setup
Setting up vector embeddings
1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index below for reference.

Setting up chat
1. Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending.
2. Connect the MongoDB collection containing the embedded product documentation to the MongoDB Vector Search node used for similarity queries.
3. Set up the system prompt in the Knowledge Base Agent node to reflect your company's tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search index example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
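Under the hood, the MongoDB Vector Search node runs an Atlas Search k-NN query against the index above. A minimal sketch of the equivalent aggregation, assuming the knnVector index shown (newer Atlas deployments use a vectorSearch-type index with $vectorSearch instead); the collection name and `k` value are illustrative:

```javascript
// Minimal sketch of the similarity query behind the MongoDB Vector Search
// node, assuming the knnVector index defined above. The database/collection
// names ("support"/"docs") and k are illustrative assumptions.
const { MongoClient } = require("mongodb");

async function searchChunks(queryVector) {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const chunks = client.db("support").collection("docs");

  const results = await chunks.aggregate([
    {
      $search: {
        index: "data_index", // must match the index name configured in n8n
        knnBeta: { vector: queryVector, path: "embedding", k: 5 },
      },
    },
    { $project: { text: 1, source: 1, doc_id: 1 } },
  ]).toArray();

  await client.close();
  return results; // top chunks handed to the Knowledge Base Agent as context
}
```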

By NovaNode
157516

Create & upload AI-generated ASMR YouTube Shorts with Seedance, Fal AI, and GPT-4

ASMR AI Workflow

Who is this for?
Content creators, YouTube automation enthusiasts, and AI hobbyists looking to autonomously generate and publish unique, satisfying ASMR-style YouTube Shorts without manual effort.

What problem does this solve?
This workflow solves the creative bottleneck and time-consuming nature of daily content creation. It fully automates the entire production pipeline, from brainstorming trendy ideas to publishing a finished video, turning your n8n instance into a 24/7 content factory.

What this workflow does
1. Two-stage AI ideation & planning: an initial AI agent brainstorms a short, viral ASMR concept based on current trends. A second "Planning" AI agent then takes this concept and expands it into a detailed, structured production plan, complete with a viral-optimized caption, hashtags, and descriptions for the environment and sound (see the sketch after this description).
2. Multi-modal asset generation:
   - Video: feeds detailed scene prompts to the ByteDance Seedance text-to-video model (via Wavespeed AI) to generate high-quality video clips.
   - Audio: simultaneously calls the Fal AI text-to-audio model to create custom, soothing ASMR sound effects that match the video's theme.
   - Assembly: automatically sequences the video clips and sound into a single, cohesive final video file using an FFMPEG API call.
3. Closed-loop publishing & logging:
   - Logging: initially logs the new idea to a Google Sheet with a status of "In Progress".
   - Publishing: automatically uploads the final, assembled video directly to your YouTube channel, setting the title and description from the AI's plan.
   - Updating: finds the original row in the Google Sheet and updates its status to "Done", adding a direct link to the newly published YouTube video.
   - Notifications: sends real-time alerts to Telegram and/or Gmail with the video title and link, confirming the successful publication.

Setup
Credentials: you will need to create credentials in your n8n instance for the following services:
- OpenAI API
- Wavespeed AI API (for Seedance)
- Fal AI API
- Google OAuth credential (enable the YouTube Data API v3 and Google Sheets API in your Google Cloud project)
- Telegram bot credential
- (Optional) Gmail OAuth credential

Configuration: this is an advanced workflow; the initial setup should take approximately 15-20 minutes.
- Google Sheet: create a Google Sheet with these columns: idea, caption, productionstatus, youtubeurl. Add the sheet ID to the Google Sheets nodes in the workflow.
- Node configuration: in the Telegram Notification node, enter your own chat ID; in the Gmail Notification node, update the recipient email address.
- Activate: once configured, save and set the workflow to "Active" to let it run on its schedule.

How to customize
- Creative direction: to change the style or theme of the videos (e.g., from kinetic sand to soap cutting), simply edit the systemMessage in the "2. Enrich Idea into Plan" and "Prompts AI Agent" nodes.
- Initial ideas: to influence the AI's starting concepts, modify the prompt in the "1. Generate Trendy Idea" node.
- Video & sound: to change the video duration or sound style, adjust the parameters in the "Create Clips" and "Create Sounds" nodes.
- Notifications: add or remove notification channels (like Slack or Discord) after the "Upload to YouTube" node.
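The hand-off between the two agents is easier to picture with a concrete plan object. A minimal sketch of what the "2. Enrich Idea into Plan" agent might return; every field name here is an illustrative assumption, since the template defines its own schema:

```javascript
// Hypothetical shape of the structured production plan; field names are
// assumptions for clarity, not the template's actual schema.
const plan = {
  idea: "Slicing iridescent kinetic sand cubes",
  caption: "The most satisfying cube slice you'll see today ✨ #asmr",
  hashtags: ["#asmr", "#satisfying", "#shorts"],
  environment: "Macro shot, soft studio light, matte black table",
  sound: "Gentle granular crunch with slow knife glides",
  scenes: [
    { prompt: "Extreme close-up: knife entering a shimmering sand cube", seconds: 5 },
    { prompt: "Top-down: cube collapsing into layered ribbons", seconds: 5 },
  ],
};
```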

By Bilel Aroua
145319

Local chatbot with retrieval augmented generation (RAG)

Build a 100% local RAG with n8n, Ollama and Qdrant. This agent uses a semantic database (Qdrant) to answer questions about PDF files.

Tutorial
Click here to view the YouTube tutorial.

How it works
Build a chatbot that answers based on documents you provide it (retrieval-augmented generation). You can upload as many PDF files as you want to the Qdrant database. The chatbot uses its retrieval tool to fetch the relevant chunks and answers questions from them.

Installation
- Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit.
- Make sure to install Llama 3.2, with mxbai-embed-large as the embeddings model.

How to use it
1. First run the "Data Ingestion" part and upload as many PDF files as you want.
2. Run the chatbot and start asking questions about the documents you uploaded.
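Behind the n8n nodes, the ingestion path boils down to two local HTTP calls: embed a chunk with Ollama, then upsert it into Qdrant. A minimal sketch, assuming the default starter-kit ports and an existing Qdrant collection (the collection name "docs" and chunk id are illustrative):

```javascript
// Embed a PDF chunk with the local Ollama model, then upsert it into Qdrant.
async function ingestChunk(id, text) {
  // 1. Embed the chunk (mxbai-embed-large produces 1024-dim vectors).
  const embRes = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: text }),
  });
  const { embedding } = await embRes.json();

  // 2. Upsert the vector plus the raw text as payload into Qdrant.
  await fetch("http://localhost:6333/collections/docs/points", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      points: [{ id, vector: embedding, payload: { text } }],
    }),
  });
}
```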

By Thomas Janssen
97400

Automate & publish video ad campaigns with NanoBanana, Seedream, GPT-4o, Veo 3

💥 Automate video ads with NanoBanana, Seedream 4, ChatGPT Image and Veo 3

Who is this for?
This template is designed for marketers, content creators, and e-commerce brands who want to automate the creation of professional ad videos at scale. It's ideal for teams looking to generate consistent, high-quality video ads for social media without spending hours on manual design, editing, or publishing.

What problem is this workflow solving? / Use case
Creating video ads usually requires multiple tools and a lot of time: writing scripts, designing product visuals, editing videos, and publishing them across platforms. This workflow automates the entire pipeline, from idea to ready-to-publish ad video, so brands can quickly test campaigns and boost engagement without production delays.

What this workflow does
- Generates ad ideas from Telegram input (text + product image).
- Creates product visuals using multiple AI image engines:
  - 🌊 Seedream 4.0 (realistic visuals)
  - 🍌 NanoBanana (image editing & enhancement)
  - 🤖 ChatGPT Image / GPT-4o (creative variations)
- Produces cinematic video ads with Veo 3 based on AI-generated scripts.
- Merges multiple short clips into a polished final ad.
- Publishes automatically to multiple platforms (TikTok, Instagram, LinkedIn, X, Threads, Facebook, Pinterest, Bluesky, YouTube) via Blotato.
- Stores metadata and results in Google Sheets & Google Drive for easy tracking.
- Notifies you via Telegram with the video link and copy.

Setup
1. Connect your accounts in n8n:
   - Telegram API (for input and notifications)
   - Google Drive + Google Sheets (storage & tracking)
   - Kie AI API (Seedream + Veo 3)
   - Fal.ai API (NanoBanana + video merging)
   - OpenAI (for script and prompt generation)
   - Blotato API (for social publishing)
2. Prepare a Google Sheet with brand info and settings (product name, category, features, offer, website URL); see the example row after this description.
3. Deploy the workflow and connect your Telegram bot to start sending ad ideas (photo + caption).
4. Run the workflow: it will automatically generate images, create videos, and publish to your chosen channels.

How to customize this workflow to your needs
- Brand customization: adjust the Google Sheet values to reflect your brand's offers and product features.
- Platforms: enable/disable specific Blotato nodes depending on which platforms you want to publish to.
- Video style: edit the AI agent's system prompt to control tone, format, and transitions (cinematic, playful, modern, etc.).
- Notifications: adapt the Telegram nodes to send updates to different team members or channels.
- Storage: change the Google Drive folder IDs to store generated videos and images in your preferred location.

This workflow lets you go from idea → images → cinematic ad video → auto-published content in minutes, fully automated.

📄 🎥 Watch This Tutorial: Step by Step
📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
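To make the brand sheet concrete, here is an illustrative settings row matching the columns listed in the setup; the exact header spelling is an assumption, since the template defines its own:

```javascript
// Hypothetical brand-settings row for the Google Sheet; header names are
// assumptions based on the fields the setup describes.
const brandRow = {
  product_name: "AeroBrew Travel Press",
  category: "Coffee gear",
  features: "Leak-proof, 350 ml, brews in 90 seconds",
  offer: "20% off this week",
  website_url: "https://example.com/aerobrew",
};
```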

By Dr. Firas
11531

🗼 AI-powered supply chain control tower with BigQuery and GPT-4o

Tags: Supply Chain, Logistics, Control Tower

Context
Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We design tools to help companies improve their logistics processes using data analytics, AI, and automation, to reduce costs and minimize environmental impact.

> Let's use N8N to build smarter and more sustainable supply chains!

📬 For business inquiries, you can add me on LinkedIn.

Who is this template for?
This workflow template is designed for logistics operations that need a monitoring solution for their distribution chains. Connected to your Transportation Management Systems, this AI agent can answer any question about the shipments handled by your distribution teams.

How does it work?
The workflow is connected to a Google BigQuery table that stores outbound order data (customer deliveries). Here's what the AI agent does:
- 🤔 Receives a user question via chat.
- 🧠 Understands the request and generates the correct SQL query.
- ✅ Executes the SQL query using a BigQuery node.
- 💬 Responds to the user in plain English.

Thanks to the chat memory, users can ask follow-up questions to dive deeper into the data.

What do I need to get started?
This workflow requires no advanced programming skills. You'll need:
- A Google BigQuery account with an SQL table storing transactional records.
- An OpenAI API key (GPT-4o) for the chat model.

Next steps
Follow the sticky notes in the workflow to configure each node and start using AI to support your supply chain operations.

🎥 Watch my tutorial: https://www.loom.com/share/50271f9d50214d7184830985497a75ec?sid=d0c410dc-29f1-488f-b89a-4011de0ded07

🚀 Curious how N8N can transform your logistics operations?

Notes
- The chat trigger can easily be replaced with Teams, Telegram, or Slack for a better user experience. You can also connect this to a customer chat window using a webhook.
- This workflow was built using N8N version 1.82.1.

Submitted: March 24, 2025
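The "generate SQL, then execute it" step is the heart of the agent. A minimal sketch of the execution half, assuming the standard BigQuery Node.js client; the dataset, table, and column names are illustrative, since the agent writes the SQL text from the user's chat question:

```javascript
// Sketch of the agent's "execute SQL" step against an outbound-orders table.
const { BigQuery } = require("@google-cloud/bigquery");

async function runAgentQuery() {
  const bigquery = new BigQuery();

  // e.g. generated from: "How many late deliveries did we ship last week?"
  const query = `
    SELECT COUNT(*) AS late_deliveries
    FROM \`logistics.outbound_orders\`
    WHERE delivery_status = 'LATE'
      AND ship_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
  `;

  const [rows] = await bigquery.query({ query });
  return rows; // handed back to the LLM to phrase a plain-English answer
}
```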

By Samir Saci
7689

CoinMarketCap Telegram price bot

Get real-time cryptocurrency prices directly in Telegram! This workflow integrates the CoinMarketCap API with Telegram, allowing users to request live crypto prices simply by sending a message to the bot. Ideal for crypto traders, analysts, and enthusiasts who need quick and easy access to market data.

How It Works
1. A Telegram bot listens for user input (e.g., "BTC" for Bitcoin).
2. The workflow sends a request to the CoinMarketCap API to fetch the latest price.
3. The response is processed by an AI-powered language model (GPT-4o-mini) for structured messaging.
4. The workflow logs session data using a memory buffer for better response tracking.
5. The latest price is sent back to the user via Telegram.

Set Up Steps
1. Create a Telegram bot: use @BotFather on Telegram to create a bot and obtain an API token.
2. Get a CoinMarketCap API key: sign up at CoinMarketCap and retrieve your API key.
3. Configure API credentials in n8n: add the CoinMarketCap API key under HTTP Header Auth, and add your Telegram bot token under the Telegram API credentials.
4. Deploy and test: send a message (e.g., "BTC") to your Telegram bot and receive live price updates instantly!

Automate your crypto price tracking today with this powerful Telegram bot!
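The price lookup itself is a single authenticated request. A minimal sketch, assuming the standard CoinMarketCap quotes endpoint; the symbol comes straight from the Telegram message:

```javascript
// Fetch the latest USD price for a ticker symbol from CoinMarketCap.
async function getPrice(symbol) {
  const url =
    "https://pro-api.coinmarketcap.com/v1/cryptocurrency/quotes/latest" +
    `?symbol=${encodeURIComponent(symbol)}&convert=USD`;

  const res = await fetch(url, {
    headers: { "X-CMC_PRO_API_KEY": process.env.CMC_API_KEY },
  });
  const data = await res.json();

  // e.g. data.data.BTC.quote.USD.price for symbol "BTC"
  return data.data[symbol].quote.USD.price;
}
```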

By Don Jayamaha Jr
6468

Automatic Gmail categorization and labeling with AI

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who is this for?
If your inbox is full of unread emails, this workflow is for you. Instead of reading through them one by one, let AI do the sorting. It reads your emails and flags only what needs action.

What does it solve?
This workflow reads your unread Gmail emails and uses AI to decide what's important and what's not. It labels emails that need your attention, identifies receipts, and trashes everything else. No more manual reading, just an inbox that uses AI to take care of itself.

How it works
Every hour, the workflow runs automatically and searches for unread emails in your Gmail inbox. For each email:
- It extracts the content and sends it to OpenAI.
- The AI returns one of four labels: Action, Receipt, Informational, or Spam.
- Based on the label, the email is marked with the appropriate label, or moved to trash if it is spam.
- It marks the email as read once processed.

How to set up?
- Connect these services in your n8n credentials: Gmail (OAuth2) and OpenAI (API key).
- Create the Gmail labels: in your Gmail account, create these labels exactly as written: Action, Receipt, and Informational. The workflow will apply these labels based on the AI classification.

How to customize this workflow to your needs
- Change the AI prompt to detect more types of emails, like Meeting or Newsletter.
- Add more branches to the Switch node to apply custom logic.
- Change the schedule to fit your workflow. By default, it runs every hour, but you can update this in the Schedule Trigger node.
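The classification step amounts to constraining the model to the four labels and falling back safely on anything unexpected. A minimal sketch, assuming a chat-completions call; the prompt wording and model are illustrative stand-ins for the template's own configuration:

```javascript
// Classify an email into one of the four labels the workflow routes on.
const LABELS = ["Action", "Receipt", "Informational", "Spam"];

async function classifyEmail(subject, body) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: `Classify the email. Reply with exactly one word from: ${LABELS.join(", ")}.`,
        },
        { role: "user", content: `Subject: ${subject}\n\n${body}` },
      ],
    }),
  });
  const data = await res.json();
  const label = data.choices[0].message.content.trim();
  return LABELS.includes(label) ? label : "Informational"; // safe fallback
}
```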

By Matt Chong | n8n Creator
5178

Telegram bot inline keyboard with dynamic menus & rating system

🤖 Telegram Bot with Dynamic Menus & Rating System

What It Does
This n8n workflow creates an interactive Telegram bot with:
- Dynamic inline keyboards that respond to user clicks
- A 5-star rating system for collecting feedback
- Personalized responses using the user's actual name
- Multi-level menu navigation (Main → Settings → Profile, etc.)
- Real-time message updates when buttons are clicked

How It Works
1. Receives messages via the Telegram webhook trigger node
2. Extracts user data (name, ID, message type)
3. Builds dynamic menus based on user actions
4. Sends/updates messages with inline keyboards
5. Handles button clicks without a page refresh

🚀 Setup Instructions
1. Get your bot credentials.
2. Configure the workflow: open the "Set Bot Token" node, replace the token with yours, then save and activate the workflow (Active).
3. Test your bot: message your bot on Telegram, click the buttons to navigate the menus, and try the rating system on Feature 1.

🎨 Customization Guide
Add new menu items: in the "Prepare Response" Function node, add new cases:

```javascript
case 'your_feature':
  responseText = 'Your feature description';
  keyboard = [
    [{ text: '🎯 Button 1', callback_data: 'action1' }],
    [{ text: '🔙 Back', callback_data: 'main' }]
  ];
  break;
```

Modify rating options: change the star buttons to numbers or emojis:

```javascript
// Current: ⭐⭐⭐
// Alternative: 1️⃣ 2️⃣ 3️⃣ or 👎 👍
```

Change bot responses:
- Edit responseText for the message content
- Modify the keyboard arrays for the button layout
- Add HTML formatting: <b>bold</b>, <i>italic</i>

💡 Key Features Demonstrated
- HTTP Request workaround for dynamic keyboards (an n8n Telegram node limitation)
- Callback query handling to prevent loading animations
- Message editing vs. sending new messages
- User data extraction from the Telegram API
- Switch-case menu routing for scalable navigation

⚠️ Important Notes
Limitation: n8n's native Telegram node doesn't support dynamic inline keyboards, which is why this workflow uses HTTP nodes.
Solution demonstrated: use an HTTP Request node with the Telegram Bot API directly.
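The HTTP Request workaround boils down to two standard Telegram Bot API calls per button press: acknowledge the callback query (which clears the loading spinner), then edit the existing message in place. A minimal sketch; the handler signature is illustrative:

```javascript
// Answer a callback query, then swap the menu by editing the same message.
const API = `https://api.telegram.org/bot${process.env.BOT_TOKEN}`;

async function handleCallback(cb, responseText, keyboard) {
  // Acknowledge the button press so Telegram stops the loading animation.
  await fetch(`${API}/answerCallbackQuery`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ callback_query_id: cb.id }),
  });

  // Replace the old menu in place instead of sending a new message.
  await fetch(`${API}/editMessageText`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chat_id: cb.message.chat.id,
      message_id: cb.message.message_id,
      text: responseText,
      parse_mode: "HTML",
      reply_markup: { inline_keyboard: keyboard },
    }),
  });
}
```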

By Ruslan Elishev
3352

Create e-mail responses with Fastmail and OpenAI

Workflow Description
This n8n workflow automates the drafting of email replies for Fastmail using OpenAI's GPT-4 model. Here's the overall process:
1. Email monitoring: the workflow continuously monitors a specified IMAP inbox for new, unread emails.
2. Email data extraction: when a new email is detected, it extracts relevant details such as the sender, subject, email body, and metadata.
3. AI response generation: the extracted email content is sent to OpenAI's GPT-4, which generates a personalized draft response.
4. Get Fastmail session and mailbox IDs: connects to the Fastmail API to retrieve the necessary session details and mailbox IDs.
5. Draft identification: identifies the "Drafts" folder in the mailbox.
6. Draft preparation: compiles all the information needed to create the draft, including the generated response, original email details, and the specified recipient.
7. Draft uploading: uploads the prepared draft email to the "Drafts" folder in the Fastmail mailbox.

Prerequisites
- IMAP email account: you need to configure an IMAP email account in n8n to monitor incoming emails.
- Fastmail API credentials: a Fastmail account with the JMAP API enabled. Set up HTTP Header authentication in n8n with your Fastmail API credentials.
- OpenAI API key: an API key from OpenAI to access GPT-4. Make sure to configure the OpenAI credentials in n8n.

Configuration Steps
- Email Trigger (IMAP) node: provide your email server settings and credentials to monitor emails.
- HTTP Request nodes for Fastmail: set up HTTP Header authentication in n8n using your Fastmail API credentials, and replace the httpHeaderAuth credential IDs with your configured credential IDs.
- OpenAI node: configure the OpenAI API key in n8n and replace the openAiApi credential ID with your configured credential ID.

By following these steps and setting up the necessary credentials, you can seamlessly automate the creation of email drafts in response to new emails using AI-generated content. This workflow helps improve productivity and ensures timely, personalized communication.
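The Fastmail steps are plain JMAP calls over HTTP. A minimal sketch of the draft-upload request, assuming the session (GET https://api.fastmail.com/jmap/session) has already supplied the apiUrl and accountId, and Mailbox/get has identified the Drafts folder; the method shape follows RFC 8621, while the reply object is illustrative:

```javascript
// Create a draft email in the Drafts mailbox via a JMAP Email/set call.
async function createDraft(token, apiUrl, accountId, draftsMailboxId, reply) {
  const res = await fetch(apiUrl, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      using: ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
      methodCalls: [
        ["Email/set", {
          accountId,
          create: {
            draft1: {
              mailboxIds: { [draftsMailboxId]: true },
              keywords: { $draft: true },
              to: [{ email: reply.to }],
              subject: reply.subject,
              bodyValues: { body1: { value: reply.text } },
              textBody: [{ partId: "body1", type: "text/plain" }],
            },
          },
        }, "0"],
      ],
    }),
  });
  return res.json();
}
```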

By Vitali
3298

Search and download torrents using transmission-daemon

Ok google, download "movie name"

I developed this automation to improve my quality of life when handling torrents on my media center.

Goal
Automate the search for a movie based on its name and trigger a download using your transmission-daemon.

Setup
Prerequisites
- Transmission daemon up and running, with its authentication method configured
- n8n self-hosted, with the possibility to add npm packages (ideally via docker-compose.yaml)
- Telegram bot credentials [optional]

Configuration
Create a folder where your docker-compose.yaml belongs (n8n_dir) and install the node package:

```bash
cd ~/n8n_dir
npm i torrent-search-api
```

Configure your docker-compose.yaml file this way. You must include all the dependencies of torrent-search-api; this lets you run the new torrent search node presented in this workflow.

```yaml
version: '3.3'
services:
  n8n:
    container_name: n8n
    ports:
      - '5678:5678'
    restart: always
    volumes:
      - '~/n8n_dir/.n8n:/home/node/.n8n'
      - '~/n8n_dir/node_modules/@tootallnate:/usr/local/lib/node_modules/@tootallnate'
      - '~/n8n_dir/node_modules/accepts:/usr/local/lib/node_modules/accepts'
      - '~/n8n_dir/node_modules/agent-base:/usr/local/lib/node_modules/agent-base'
      - '~/n8n_dir/node_modules/ajv:/usr/local/lib/node_modules/ajv'
      - '~/n8n_dir/node_modules/ansi-styles:/usr/local/lib/node_modules/ansi-styles'
      - '~/n8n_dir/node_modules/asn1:/usr/local/lib/node_modules/asn1'
      - '~/n8n_dir/node_modules/assert:/usr/local/lib/node_modules/assert'
      - '~/n8n_dir/node_modules/assert-plus:/usr/local/lib/node_modules/assert-plus'
      - '~/n8n_dir/node_modules/ast-types:/usr/local/lib/node_modules/ast-types'
      - '~/n8n_dir/node_modules/asynckit:/usr/local/lib/node_modules/asynckit'
      - '~/n8n_dir/node_modules/aws-sign2:/usr/local/lib/node_modules/aws-sign2'
      - '~/n8n_dir/node_modules/aws4:/usr/local/lib/node_modules/aws4'
      - '~/n8n_dir/node_modules/base64-js:/usr/local/lib/node_modules/base64-js'
      - '~/n8n_dir/node_modules/batch:/usr/local/lib/node_modules/batch'
      - '~/n8n_dir/node_modules/bcrypt-pbkdf:/usr/local/lib/node_modules/bcrypt-pbkdf'
      - '~/n8n_dir/node_modules/bluebird:/usr/local/lib/node_modules/bluebird'
      - '~/n8n_dir/node_modules/boolbase:/usr/local/lib/node_modules/boolbase'
      - '~/n8n_dir/node_modules/brotli:/usr/local/lib/node_modules/brotli'
      - '~/n8n_dir/node_modules/bytes:/usr/local/lib/node_modules/bytes'
      - '~/n8n_dir/node_modules/caseless:/usr/local/lib/node_modules/caseless'
      - '~/n8n_dir/node_modules/chalk:/usr/local/lib/node_modules/chalk'
      - '~/n8n_dir/node_modules/cheerio:/usr/local/lib/node_modules/cheerio'
      - '~/n8n_dir/node_modules/cloudscraper:/usr/local/lib/node_modules/cloudscraper'
      - '~/n8n_dir/node_modules/co:/usr/local/lib/node_modules/co'
      - '~/n8n_dir/node_modules/color-convert:/usr/local/lib/node_modules/color-convert'
      - '~/n8n_dir/node_modules/color-name:/usr/local/lib/node_modules/color-name'
      - '~/n8n_dir/node_modules/combined-stream:/usr/local/lib/node_modules/combined-stream'
      - '~/n8n_dir/node_modules/component-emitter:/usr/local/lib/node_modules/component-emitter'
      - '~/n8n_dir/node_modules/content-disposition:/usr/local/lib/node_modules/content-disposition'
      - '~/n8n_dir/node_modules/content-type:/usr/local/lib/node_modules/content-type'
      - '~/n8n_dir/node_modules/cookiejar:/usr/local/lib/node_modules/cookiejar'
      - '~/n8n_dir/node_modules/core-util-is:/usr/local/lib/node_modules/core-util-is'
      - '~/n8n_dir/node_modules/css-select:/usr/local/lib/node_modules/css-select'
      - '~/n8n_dir/node_modules/css-what:/usr/local/lib/node_modules/css-what'
      - '~/n8n_dir/node_modules/dashdash:/usr/local/lib/node_modules/dashdash'
      - '~/n8n_dir/node_modules/data-uri-to-buffer:/usr/local/lib/node_modules/data-uri-to-buffer'
      - '~/n8n_dir/node_modules/debug:/usr/local/lib/node_modules/debug'
      - '~/n8n_dir/node_modules/deep-is:/usr/local/lib/node_modules/deep-is'
      - '~/n8n_dir/node_modules/degenerator:/usr/local/lib/node_modules/degenerator'
      - '~/n8n_dir/node_modules/delayed-stream:/usr/local/lib/node_modules/delayed-stream'
      - '~/n8n_dir/node_modules/delegates:/usr/local/lib/node_modules/delegates'
      - '~/n8n_dir/node_modules/depd:/usr/local/lib/node_modules/depd'
      - '~/n8n_dir/node_modules/destroy:/usr/local/lib/node_modules/destroy'
      - '~/n8n_dir/node_modules/dom-serializer:/usr/local/lib/node_modules/dom-serializer'
      - '~/n8n_dir/node_modules/domelementtype:/usr/local/lib/node_modules/domelementtype'
      - '~/n8n_dir/node_modules/domhandler:/usr/local/lib/node_modules/domhandler'
      - '~/n8n_dir/node_modules/domutils:/usr/local/lib/node_modules/domutils'
      - '~/n8n_dir/node_modules/ecc-jsbn:/usr/local/lib/node_modules/ecc-jsbn'
      - '~/n8n_dir/node_modules/ee-first:/usr/local/lib/node_modules/ee-first'
      - '~/n8n_dir/node_modules/emitter-component:/usr/local/lib/node_modules/emitter-component'
      - '~/n8n_dir/node_modules/enqueue:/usr/local/lib/node_modules/enqueue'
      - '~/n8n_dir/node_modules/enstore:/usr/local/lib/node_modules/enstore'
      - '~/n8n_dir/node_modules/entities:/usr/local/lib/node_modules/entities'
      - '~/n8n_dir/node_modules/error-inject:/usr/local/lib/node_modules/error-inject'
      - '~/n8n_dir/node_modules/escape-html:/usr/local/lib/node_modules/escape-html'
      - '~/n8n_dir/node_modules/escape-string-regexp:/usr/local/lib/node_modules/escape-string-regexp'
      - '~/n8n_dir/node_modules/escodegen:/usr/local/lib/node_modules/escodegen'
      - '~/n8n_dir/node_modules/esprima:/usr/local/lib/node_modules/esprima'
      - '~/n8n_dir/node_modules/estraverse:/usr/local/lib/node_modules/estraverse'
      - '~/n8n_dir/node_modules/esutils:/usr/local/lib/node_modules/esutils'
      - '~/n8n_dir/node_modules/extend:/usr/local/lib/node_modules/extend'
      - '~/n8n_dir/node_modules/extsprintf:/usr/local/lib/node_modules/extsprintf'
      - '~/n8n_dir/node_modules/fast-deep-equal:/usr/local/lib/node_modules/fast-deep-equal'
      - '~/n8n_dir/node_modules/fast-json-stable-stringify:/usr/local/lib/node_modules/fast-json-stable-stringify'
      - '~/n8n_dir/node_modules/fast-levenshtein:/usr/local/lib/node_modules/fast-levenshtein'
      - '~/n8n_dir/node_modules/file-uri-to-path:/usr/local/lib/node_modules/file-uri-to-path'
      - '~/n8n_dir/node_modules/forever-agent:/usr/local/lib/node_modules/forever-agent'
      - '~/n8n_dir/node_modules/form-data:/usr/local/lib/node_modules/form-data'
      - '~/n8n_dir/node_modules/format-parser:/usr/local/lib/node_modules/format-parser'
      - '~/n8n_dir/node_modules/formidable:/usr/local/lib/node_modules/formidable'
      - '~/n8n_dir/node_modules/fs-extra:/usr/local/lib/node_modules/fs-extra'
      - '~/n8n_dir/node_modules/ftp:/usr/local/lib/node_modules/ftp'
      - '~/n8n_dir/node_modules/get-uri:/usr/local/lib/node_modules/get-uri'
      - '~/n8n_dir/node_modules/getpass:/usr/local/lib/node_modules/getpass'
      - '~/n8n_dir/node_modules/graceful-fs:/usr/local/lib/node_modules/graceful-fs'
      - '~/n8n_dir/node_modules/har-schema:/usr/local/lib/node_modules/har-schema'
      - '~/n8n_dir/node_modules/har-validator:/usr/local/lib/node_modules/har-validator'
      - '~/n8n_dir/node_modules/has-flag:/usr/local/lib/node_modules/has-flag'
      - '~/n8n_dir/node_modules/htmlparser2:/usr/local/lib/node_modules/htmlparser2'
      - '~/n8n_dir/node_modules/http-context:/usr/local/lib/node_modules/http-context'
      - '~/n8n_dir/node_modules/http-errors:/usr/local/lib/node_modules/http-errors'
      - '~/n8n_dir/node_modules/http-incoming:/usr/local/lib/node_modules/http-incoming'
      - '~/n8n_dir/node_modules/http-outgoing:/usr/local/lib/node_modules/http-outgoing'
      - '~/n8n_dir/node_modules/http-proxy-agent:/usr/local/lib/node_modules/http-proxy-agent'
      - '~/n8n_dir/node_modules/http-signature:/usr/local/lib/node_modules/http-signature'
      - '~/n8n_dir/node_modules/https-proxy-agent:/usr/local/lib/node_modules/https-proxy-agent'
      - '~/n8n_dir/node_modules/iconv-lite:/usr/local/lib/node_modules/iconv-lite'
      - '~/n8n_dir/node_modules/inherits:/usr/local/lib/node_modules/inherits'
      - '~/n8n_dir/node_modules/ip:/usr/local/lib/node_modules/ip'
      - '~/n8n_dir/node_modules/is-browser:/usr/local/lib/node_modules/is-browser'
      - '~/n8n_dir/node_modules/is-typedarray:/usr/local/lib/node_modules/is-typedarray'
      - '~/n8n_dir/node_modules/is-url:/usr/local/lib/node_modules/is-url'
      - '~/n8n_dir/node_modules/isarray:/usr/local/lib/node_modules/isarray'
      - '~/n8n_dir/node_modules/isobject:/usr/local/lib/node_modules/isobject'
      - '~/n8n_dir/node_modules/isstream:/usr/local/lib/node_modules/isstream'
      - '~/n8n_dir/node_modules/jsbn:/usr/local/lib/node_modules/jsbn'
      - '~/n8n_dir/node_modules/json-schema:/usr/local/lib/node_modules/json-schema'
      - '~/n8n_dir/node_modules/json-schema-traverse:/usr/local/lib/node_modules/json-schema-traverse'
      - '~/n8n_dir/node_modules/json-stringify-safe:/usr/local/lib/node_modules/json-stringify-safe'
      - '~/n8n_dir/node_modules/jsonfile:/usr/local/lib/node_modules/jsonfile'
      - '~/n8n_dir/node_modules/jsprim:/usr/local/lib/node_modules/jsprim'
      - '~/n8n_dir/node_modules/koa-is-json:/usr/local/lib/node_modules/koa-is-json'
      - '~/n8n_dir/node_modules/levn:/usr/local/lib/node_modules/levn'
      - '~/n8n_dir/node_modules/lodash:/usr/local/lib/node_modules/lodash'
      - '~/n8n_dir/node_modules/lodash.assignin:/usr/local/lib/node_modules/lodash.assignin'
      - '~/n8n_dir/node_modules/lodash.bind:/usr/local/lib/node_modules/lodash.bind'
      - '~/n8n_dir/node_modules/lodash.defaults:/usr/local/lib/node_modules/lodash.defaults'
      - '~/n8n_dir/node_modules/lodash.filter:/usr/local/lib/node_modules/lodash.filter'
      - '~/n8n_dir/node_modules/lodash.flatten:/usr/local/lib/node_modules/lodash.flatten'
      - '~/n8n_dir/node_modules/lodash.foreach:/usr/local/lib/node_modules/lodash.foreach'
      - '~/n8n_dir/node_modules/lodash.map:/usr/local/lib/node_modules/lodash.map'
      - '~/n8n_dir/node_modules/lodash.merge:/usr/local/lib/node_modules/lodash.merge'
      - '~/n8n_dir/node_modules/lodash.pick:/usr/local/lib/node_modules/lodash.pick'
      - '~/n8n_dir/node_modules/lodash.reduce:/usr/local/lib/node_modules/lodash.reduce'
      - '~/n8n_dir/node_modules/lodash.reject:/usr/local/lib/node_modules/lodash.reject'
      - '~/n8n_dir/node_modules/lodash.some:/usr/local/lib/node_modules/lodash.some'
      - '~/n8n_dir/node_modules/lru-cache:/usr/local/lib/node_modules/lru-cache'
      - '~/n8n_dir/node_modules/media-typer:/usr/local/lib/node_modules/media-typer'
      - '~/n8n_dir/node_modules/methods:/usr/local/lib/node_modules/methods'
      - '~/n8n_dir/node_modules/mime:/usr/local/lib/node_modules/mime'
      - '~/n8n_dir/node_modules/mime-db:/usr/local/lib/node_modules/mime-db'
      - '~/n8n_dir/node_modules/mime-types:/usr/local/lib/node_modules/mime-types'
      - '~/n8n_dir/node_modules/monotonic-timestamp:/usr/local/lib/node_modules/monotonic-timestamp'
      - '~/n8n_dir/node_modules/ms:/usr/local/lib/node_modules/ms'
      - '~/n8n_dir/node_modules/negotiator:/usr/local/lib/node_modules/negotiator'
      - '~/n8n_dir/node_modules/netmask:/usr/local/lib/node_modules/netmask'
      - '~/n8n_dir/node_modules/nth-check:/usr/local/lib/node_modules/nth-check'
      - '~/n8n_dir/node_modules/oauth-sign:/usr/local/lib/node_modules/oauth-sign'
      - '~/n8n_dir/node_modules/object-assign:/usr/local/lib/node_modules/object-assign'
      - '~/n8n_dir/node_modules/on-finished:/usr/local/lib/node_modules/on-finished'
      - '~/n8n_dir/node_modules/optionator:/usr/local/lib/node_modules/optionator'
      - '~/n8n_dir/node_modules/pac-proxy-agent:/usr/local/lib/node_modules/pac-proxy-agent'
      - '~/n8n_dir/node_modules/pac-resolver:/usr/local/lib/node_modules/pac-resolver'
      - '~/n8n_dir/node_modules/parseurl:/usr/local/lib/node_modules/parseurl'
      - '~/n8n_dir/node_modules/performance-now:/usr/local/lib/node_modules/performance-now'
      - '~/n8n_dir/node_modules/prelude-ls:/usr/local/lib/node_modules/prelude-ls'
      - '~/n8n_dir/node_modules/process-nextick-args:/usr/local/lib/node_modules/process-nextick-args'
      - '~/n8n_dir/node_modules/promise-polyfill:/usr/local/lib/node_modules/promise-polyfill'
      - '~/n8n_dir/node_modules/proxy-agent:/usr/local/lib/node_modules/proxy-agent'
      - '~/n8n_dir/node_modules/proxy-from-env:/usr/local/lib/node_modules/proxy-from-env'
      - '~/n8n_dir/node_modules/psl:/usr/local/lib/node_modules/psl'
      - '~/n8n_dir/node_modules/punycode:/usr/local/lib/node_modules/punycode'
      - '~/n8n_dir/node_modules/qs:/usr/local/lib/node_modules/qs'
      - '~/n8n_dir/node_modules/querystring:/usr/local/lib/node_modules/querystring'
      - '~/n8n_dir/node_modules/raw-body:/usr/local/lib/node_modules/raw-body'
      - '~/n8n_dir/node_modules/readable-stream:/usr/local/lib/node_modules/readable-stream'
      - '~/n8n_dir/node_modules/request:/usr/local/lib/node_modules/request'
      - '~/n8n_dir/node_modules/request-promise:/usr/local/lib/node_modules/request-promise'
      - '~/n8n_dir/node_modules/request-promise-core:/usr/local/lib/node_modules/request-promise-core'
      - '~/n8n_dir/node_modules/request-x-ray:/usr/local/lib/node_modules/request-x-ray'
      - '~/n8n_dir/node_modules/safe-buffer:/usr/local/lib/node_modules/safe-buffer'
      - '~/n8n_dir/node_modules/safer-buffer:/usr/local/lib/node_modules/safer-buffer'
      - '~/n8n_dir/node_modules/selectn:/usr/local/lib/node_modules/selectn'
      - '~/n8n_dir/node_modules/setprototypeof:/usr/local/lib/node_modules/setprototypeof'
      - '~/n8n_dir/node_modules/sliced:/usr/local/lib/node_modules/sliced'
      - '~/n8n_dir/node_modules/smart-buffer:/usr/local/lib/node_modules/smart-buffer'
      - '~/n8n_dir/node_modules/socks:/usr/local/lib/node_modules/socks'
      - '~/n8n_dir/node_modules/socks-proxy-agent:/usr/local/lib/node_modules/socks-proxy-agent'
      - '~/n8n_dir/node_modules/source-map:/usr/local/lib/node_modules/source-map'
      - '~/n8n_dir/node_modules/sshpk:/usr/local/lib/node_modules/sshpk'
      - '~/n8n_dir/node_modules/statuses:/usr/local/lib/node_modules/statuses'
      - '~/n8n_dir/node_modules/stealthy-require:/usr/local/lib/node_modules/stealthy-require'
      - '~/n8n_dir/node_modules/stream-to-string:/usr/local/lib/node_modules/stream-to-string'
      - '~/n8n_dir/node_modules/string-format:/usr/local/lib/node_modules/string-format'
      - '~/n8n_dir/node_modules/string_decoder:/usr/local/lib/node_modules/string_decoder'
      - '~/n8n_dir/node_modules/superagent:/usr/local/lib/node_modules/superagent'
      - '~/n8n_dir/node_modules/superagent-proxy:/usr/local/lib/node_modules/superagent-proxy'
      - '~/n8n_dir/node_modules/supports-color:/usr/local/lib/node_modules/supports-color'
      - '~/n8n_dir/node_modules/toidentifier:/usr/local/lib/node_modules/toidentifier'
      - '~/n8n_dir/node_modules/torrent-search-api:/usr/local/lib/node_modules/torrent-search-api'
      - '~/n8n_dir/node_modules/tough-cookie:/usr/local/lib/node_modules/tough-cookie'
      - '~/n8n_dir/node_modules/tslib:/usr/local/lib/node_modules/tslib'
      - '~/n8n_dir/node_modules/tunnel-agent:/usr/local/lib/node_modules/tunnel-agent'
      - '~/n8n_dir/node_modules/tweetnacl:/usr/local/lib/node_modules/tweetnacl'
      - '~/n8n_dir/node_modules/type-check:/usr/local/lib/node_modules/type-check'
      - '~/n8n_dir/node_modules/type-is:/usr/local/lib/node_modules/type-is'
      - '~/n8n_dir/node_modules/universalify:/usr/local/lib/node_modules/universalify'
      - '~/n8n_dir/node_modules/unpipe:/usr/local/lib/node_modules/unpipe'
      - '~/n8n_dir/node_modules/uri-js:/usr/local/lib/node_modules/uri-js'
      - '~/n8n_dir/node_modules/util:/usr/local/lib/node_modules/util'
      - '~/n8n_dir/node_modules/util-deprecate:/usr/local/lib/node_modules/util-deprecate'
      - '~/n8n_dir/node_modules/uuid:/usr/local/lib/node_modules/uuid'
      - '~/n8n_dir/node_modules/vary:/usr/local/lib/node_modules/vary'
      - '~/n8n_dir/node_modules/verror:/usr/local/lib/node_modules/verror'
      - '~/n8n_dir/node_modules/word-wrap:/usr/local/lib/node_modules/word-wrap'
      - '~/n8n_dir/node_modules/wrap-fn:/usr/local/lib/node_modules/wrap-fn'
      - '~/n8n_dir/node_modules/x-ray:/usr/local/lib/node_modules/x-ray'
      - '~/n8n_dir/node_modules/x-ray-crawler:/usr/local/lib/node_modules/x-ray-crawler'
      - '~/n8n_dir/node_modules/x-ray-parse:/usr/local/lib/node_modules/x-ray-parse'
      - '~/n8n_dir/node_modules/x-ray-scraper:/usr/local/lib/node_modules/x-ray-scraper'
      - '~/n8n_dir/node_modules/xregexp:/usr/local/lib/node_modules/xregexp'
      - '~/n8n_dir/node_modules/yallist:/usr/local/lib/node_modules/yallist'
      - '~/n8n_dir/node_modules/yieldly:/usr/local/lib/node_modules/yieldly'
    image: 'n8nio/n8n:latest-rpi'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=username
      - N8N_BASIC_AUTH_PASSWORD=your_secret_n8n_password
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=120
      - EXECUTIONS_TIMEOUT=300
      - EXECUTIONS_TIMEOUT_MAX=500
      - GENERIC_TIMEZONE=Europe/Berlin
      - NODE_FUNCTION_ALLOW_EXTERNAL=torrent-search-api
```

Once configured this way, run n8n and create a new workflow, copying the one proposed.

Configure the workflow
Transmission
To send commands to Transmission you must pass its Basic Auth. Open the Start download node and edit the Credentials, then perform the same operation, choosing the new credentials, in the Start download new token node. In this automation we call Transmission twice: a security mechanism in Transmission prevents single-click commands from being triggered, and performing the request twice bypasses it (see https://en.wikipedia.org/wiki/Cross-site_request_forgery). We use the X-Transmission-Session-Id provided by the first request to authenticate the second request; a sketch of this handshake follows below.

Telegram
To make the workflow work as expected you must create a Telegram bot and configure the nodes (Torrent not found and Telegram1) to send your message once the workflow is complete. Here's an easy guide to follow: https://docs.n8n.io/nodes/n8n-nodes-base.telegram/
In those nodes you should also configure the Chat ID; you may use your Telegram username, or use a bot to retrieve your ID. You can chat with useridinfobot, which sends you your ID.

Ok google automation
Since right now we do not have an n8n client for mobile that can trigger automations using Google Assistant, I decided to use an IFTTT automation to trigger the webhook. I connect my IFTTT account with Google Assistant and pick the trigger "Say a phrase with a text ingredient", as in the picture below, and configure the trigger this way: scarica $ -> download $, or metti in download $ -> put in download $, or some other phrase you may want. Then configure your server to trigger the n8n webhook.

Conclusion
In conclusion, we provide a fully working automation that integrates a node library into n8n and provides an easy trigger to perform a complex operation.

Security concerns
Giving the ability to trigger a download may be problematic, since it opens the door to unwanted torrent malware downloads, so you may decide to authenticate the webhook request by passing another field in the body with a token shared between the two endpoints.
Moreover, the torrent-search-api library and its dependencies have some vulnerabilities that you may want to avoid on your own media center; this will hopefully be patched in a further release of the library. This is just an interesting proof of concept.

Quality of the download
You may want to introduce another block between the torrent search and the webhook trigger to search for a movie based on the words detected by Google Assistant; sometimes it misinterprets them and you may end up downloading potentially copyrighted material. Please use this automation only for free and open source movies and music.
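For reference, the double-request handshake described above looks like this in plain code. A minimal sketch, assuming Transmission's default RPC endpoint; host, port, and credentials are illustrative:

```javascript
// Two-step Transmission RPC call: the first request is rejected with HTTP 409
// but returns the X-Transmission-Session-Id header, which authenticates the
// retry (Transmission's CSRF protection).
async function startDownload(magnetLink) {
  const url = "http://localhost:9091/transmission/rpc";
  const auth = "Basic " + Buffer.from("user:password").toString("base64");
  const body = JSON.stringify({
    method: "torrent-add",
    arguments: { filename: magnetLink },
  });

  // First call: expected to fail with 409 and hand us the session id.
  const first = await fetch(url, { method: "POST", headers: { Authorization: auth }, body });
  const sessionId = first.headers.get("X-Transmission-Session-Id");

  // Second call: same request, now carrying the session id header.
  const second = await fetch(url, {
    method: "POST",
    headers: { Authorization: auth, "X-Transmission-Session-Id": sessionId },
    body,
  });
  return second.json();
}
```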

By Danger
3077

Smart Gmail cleaner with AI validator & Telegram alerts

Automatically clean up your Gmail inbox by deleting unwanted emails, validated by Gemini AI. Ideal for anyone tired of manual inbox cleanup, this workflow helps you save time while staying in control, with full transparency via Telegram alerts.

How it works
- Scans your Gmail inbox in adjustable 2-week batches
- Uses Gemini AI to decide whether an email should be deleted or skipped
- Applies a label to skipped emails to avoid rechecking them in future runs
- Deletes unwanted emails and sends a Telegram message with the AI's reasoning
- Also notifies you of skipped emails, with the explanation included

Set up steps
- Connect your Gmail, Gemini AI, and Telegram accounts
- Adjust the AI baseline to control sensitivity (e.g., how strict the filtering should be)
- Set your batch range (default: last 2 weeks, adjustable)
- Define your Telegram chat/channel for notifications

Note: thanks to n8n's modular design, you can easily swap Gemini for another AI model (like OpenAI, Claude, etc.) or replace Telegram with Discord, Slack, or even email. No code changes needed, just swap the nodes.
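The delete/skip decision with its reasoning can be pictured as a single Gemini call returning a small JSON verdict. A minimal sketch, assuming Gemini's public REST API; the model name, prompt wording, and JSON shape are illustrative stand-ins for the template's own configuration:

```javascript
// Ask Gemini for a delete/skip verdict plus a reason for the Telegram alert.
async function shouldDelete(emailText) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${process.env.GEMINI_API_KEY}`;

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [{
          text:
            'Decide if this email is junk. Answer only as JSON: ' +
            '{"action":"delete"|"skip","reason":"..."}.\n\n' + emailText,
        }],
      }],
    }),
  });
  const data = await res.json();
  // The reason string is what the workflow forwards to Telegram.
  return JSON.parse(data.candidates[0].content.parts[0].text);
}
```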

By Reyhan
2799

Generate personalized marketing emails from Google Sheets with Llama AI

An AI-powered email marketing automation workflow that generates personalized marketing emails using data from Google Sheets and delivers them directly to clients. This workflow combines the power of AI content generation with spreadsheet-based campaign management for seamless email marketing automation.

What's the Goal?
- Automatically pull marketing offer details from Google Sheets (Sheet 1)
- Fetch client information from Google Sheets (Sheet 2)
- Use AI to generate compelling, personalized marketing content
- Format emails with professional structure and personalization
- Send targeted marketing emails directly to clients
- Enable scalable email marketing campaigns with minimal manual effort

By the end, you'll have a fully automated email marketing system that creates and sends personalized campaigns based on your spreadsheet data.

Why Does It Matter?
Manual email marketing is labor-intensive and lacks personalization at scale. Here's why this workflow is a game changer:
- Zero manual drafting: AI generates unique content for each recipient
- Data-driven personalization: leverages spreadsheet data for targeted messaging
- Scalable campaigns: handle hundreds of clients with a single workflow execution
- Consistent quality: AI ensures professional, engaging content every time
- Time efficiency: transform hours of work into minutes of automation
- Cost-effective: reduce marketing team workload while increasing output

Think of it as your intelligent marketing assistant that creates personalized campaigns at enterprise scale.

How It Works
Step 1: Track Offer Updates
- Node: Track Offer Sheet Updates (Sheet 1)
- Function: monitor Google Sheets for new marketing offers or updates
- Trigger: automatically activates when new data is added to Sheet 1

Step 2: Generate Marketing Content
- Node: Generate Marketing Content with AI
- Function: process offer details through the AI model (Llama 3.2)
- Process: creates compelling marketing copy based on the offer parameters

Step 3: Fetch Client Information
- Node: Fetch Client List (Sheet 2)
- Function: retrieve client names and email addresses from Sheet 2
- Data: pulls clientname and clientemail for personalization

Step 4: Content Personalization
- Node: Format Personalized Email
- Function: combine AI-generated content with client-specific data
- Output: creates a personalized email for each recipient (see the sketch after this description)

Step 5: Email Delivery
- Node: Send Marketing Email to Client
- Function: deliver personalized emails directly to client inboxes
- Method: uses the Gmail integration for professional delivery

How to Use the Workflow
Prerequisites
- Google Sheets setup: create two sheets with the required column structure
- n8n account: access to the n8n workflow platform
- Gmail API: a Gmail account with API access configured
- AI model access: Llama 3.2 API credentials

Importing the Workflow in n8n
Step 1: Obtain the workflow JSON. Download the workflow file or copy the JSON code, and ensure you have the complete workflow configuration.
Step 2: Access the n8n workflow editor. Log in to your n8n instance (Cloud or self-hosted), navigate to the Workflows section, and click "Add Workflow" to create a new workflow.
Step 3: Import the workflow.
- Option A, import from clipboard: click the three dots (⋯) in the top-right corner, select "Import from Clipboard", paste the JSON code into the text box, and click "Import" to load the workflow.
- Option B, import from file: click the three dots (⋯) in the top-right corner, select "Import from File", choose the .json file from your computer, and click "Open" to import the workflow.

Configuration
Google Sheets integration
- Authenticate Google Sheets: connect your Google account in n8n
- Configure Sheet 1: set the spreadsheet ID and range for marketing offers
- Configure Sheet 2: set the spreadsheet ID and range for client information

AI model configuration
- Set API credentials: configure the Llama 3.2 API key and endpoint
- Customize prompts: adjust the AI prompts for your brand voice and style
- Set content parameters: define content length, tone, and structure

Gmail integration
- Gmail API setup: enable the Gmail API in the Google Cloud Console
- OAuth configuration: set up OAuth credentials for email sending
- Sender configuration: configure the sender name and email address

Content customization
- Email templates: customize the email structure and branding
- Personalization fields: map spreadsheet columns to email variables
- Brand guidelines: set company colors, fonts, and messaging tone

Workflow Execution
- Manual execution: click "Execute Workflow" in the n8n interface, monitor execution progress through each node, and review the generated content and delivery status.
- Automated execution: set up triggers based on sheet updates, configure scheduling for regular campaign runs, and enable webhook triggers for real-time processing.

Best Practices
- Data management: keep spreadsheet data clean and consistently formatted, validate the email addresses in Sheet 2 regularly, and update offer details promptly in Sheet 1.
- Content quality: review AI-generated content periodically, adjust prompts based on campaign performance, and maintain a consistent brand voice across campaigns.
- Deliverability: monitor email bounce rates and engagement metrics, maintain clean email lists with valid addresses, and follow email marketing best practices and regulations.
- Performance optimization: batch process large client lists for efficiency, monitor workflow execution times, and implement error handling and retry mechanisms.

Troubleshooting
Common issues
- Authentication errors: verify API credentials and permissions
- Sheet access: ensure proper sharing permissions for the Google Sheets
- Email delivery: check Gmail API quotas and sending limits
- AI processing: monitor API rate limits and response times

Error handling
- Implement retry logic for failed operations
- Set up notification systems for workflow failures
- Maintain backup data sources for critical campaigns

Security Considerations
- Use environment variables for API keys and credentials
- Implement proper access controls for sensitive data
- Run regular security audits of connected services
- Comply with data protection regulations (GDPR, CAN-SPAM)

Conclusion
This smart email marketing generator transforms your marketing campaigns from manual, time-consuming tasks into automated, intelligent processes. By leveraging AI and spreadsheet data, you can create personalized, engaging campaigns that scale with your business needs while maintaining professional quality and consistency. The workflow represents a significant advancement in marketing automation, combining the accessibility of spreadsheet-based data management with the power of AI-driven content generation and automated delivery systems.
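The Step 4 merge is easiest to picture as a small mapping function over the Sheet 2 rows. A minimal sketch, assuming the clientname/clientemail columns named above; the {{clientname}} placeholder convention is an illustrative assumption:

```javascript
// Merge AI-generated copy with one client row from Sheet 2.
const clients = [{ clientname: "Ada", clientemail: "ada@example.com" }];
const generatedCopy = "Hi {{clientname}}, our spring offer ends Friday!";

function formatEmail(aiContent, client) {
  return {
    to: client.clientemail,
    subject: `A special offer for you, ${client.clientname}!`,
    body: aiContent.replace(/{{\s*clientname\s*}}/g, client.clientname),
  };
}

// One personalized email object per row fetched from Sheet 2.
const emails = clients.map((c) => formatEmail(generatedCopy, c));
```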

By Oneclick AI Squad
1465