9 templates found

Create or update a post in WordPress

No description available.

By Harshil Agrawal
11792

Forward filtered Gmail notifications to Telegram chat

This workflow automatically forwards incoming Gmail emails to a Telegram chat only if the email subject contains specific keywords (like "Urgent" or "Server Down"). The workflow extracts key details such as the sender, subject, and message body, and sends them as a formatted message to a specified Telegram chat. This is useful for real-time notifications, security alerts, or monitoring important emails directly from Telegram while filtering out unnecessary emails.

Prerequisites
Before setting up the workflow, ensure the following:
- The Gmail API is enabled.
- Create a bot using @BotFather and obtain the API key.
- Retrieve the Telegram chat ID (for personal or group messages).
- Set up OAuth2 for Gmail and use the Bot Token for Telegram.

Customisation Options
- Modify the subject keywords in the IF node to change the filtering criteria.
- Customise how the email details appear in Telegram (bold subject, italic body, etc.).
- Extend the workflow to include email attachments in Telegram.

Steps
Step 1: Gmail Trigger Node (On Message Received)
- Select "Gmail Trigger" and add it to the workflow.
- Authenticate with your Google Account.
- Set Trigger Event to "Message Received".
- (Optional) Add filters for specific senders, labels, or subjects.
- Click "Execute Node" to test the connection, then click "Save".

Step 2: IF Node (Conditional Filtering)
- Add an "IF" node after the Gmail Trigger.
- Configure the condition to check whether the email subject contains specific keywords (e.g., "Urgent", "Server Down", "Alert"). A minimal sketch of this filter and the Telegram formatting follows after this description.
- If the condition is true, proceed to the next step; if false, stop or route the email elsewhere (optional).

Step 3: Telegram Node (Send Message Action)
- Click "Add Node" and search for Telegram.
- Select "Send Message" as the action.
- Authenticate using your Telegram Bot Token.
- Set the chat ID (personal or group chat).
- Format the message using the email details received from the trigger node and set it as the message text.

Step 4: Connect & Test the Workflow
- Link Gmail Trigger → IF node → Telegram Send Message.
- Save and execute the workflow manually.
- Send a test email to your Gmail account.
- Verify that the email details appear in your Telegram chat.

About the Creator, WeblineIndia
This workflow was created by the agentic business process automation developers at WeblineIndia. We build automation and AI-driven tools that make life easier for your team. If you're looking to hire dedicated developers who can customize workflows around your business, we're just a click away.
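A minimal sketch of the keyword filter and message formatting described above, assuming Node 18+ with built-in fetch and the standard Telegram Bot API sendMessage endpoint. BOT_TOKEN and CHAT_ID are hypothetical environment variables; the template itself does this through the IF and Telegram nodes rather than custom code.

```typescript
// Filter an email by subject keywords and forward a formatted summary to Telegram.
interface Email {
  from: string;
  subject: string;
  body: string;
}

const KEYWORDS = ["Urgent", "Server Down", "Alert"];

function matchesKeywords(subject: string): boolean {
  // mirrors the IF node's "subject contains keyword" condition
  return KEYWORDS.some((k) => subject.toLowerCase().includes(k.toLowerCase()));
}

async function forwardToTelegram(email: Email): Promise<void> {
  if (!matchesKeywords(email.subject)) return; // filtered out, nothing sent

  const text =
    `*${email.subject}*\n` +        // bold subject (Markdown parse mode)
    `From: ${email.from}\n\n` +
    `_${email.body.slice(0, 500)}_`; // italic, truncated body

  await fetch(`https://api.telegram.org/bot${process.env.BOT_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chat_id: process.env.CHAT_ID, // personal or group chat ID
      text,
      parse_mode: "Markdown",
    }),
  });
}
```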

By WeblineIndia
2821

Create a branded AI chatbot for websites with Flowise multi-agent chatflows

This workflow integrates Flowise multi-agent chatflows into a custom-branded n8n chatbot, enabling real-time interaction between users and AI agents powered by large language models (LLMs).

Key Advantages
✅ Easy Integration with Flowise: Uses a low-code HTTP node to send user questions to Flowise's API (/api/v1/prediction/FLOWISE_ID) and receive intelligent responses. Supports multi-agent chatflows, allowing for complex, dynamic interactions.
🎨 Customizable Chatbot UI: Includes pre-built JavaScript for embedding the n8n chatbot into any website. Provides customization options such as welcome messages, branding, placeholder text, chat modes (e.g., popup or embedded), and language support.
🔐 Secure & Configurable: Authorization via Bearer token headers for Flowise API access. Clearly marked notes in the workflow for setting variables like FLOWISE_URL and FLOWISE_ID.

How It Works
1. Chat Trigger: The workflow starts with the "When chat message received" node, which acts as a webhook to receive incoming chat messages from users.
2. HTTP Request to Flowise: The received message is forwarded to the Flowise node, which sends a POST request to the Flowise API endpoint (https://FLOWISE_URL/api/v1/prediction/FLOWISE_ID). The request includes the user's input as a JSON payload ({"question": "{{ $json.chatInput }}"}) and uses HTTP header authentication (e.g., Authorization: Bearer FLOWISE_API). A minimal sketch of this request follows after this description.
3. Response Handling: The response from Flowise is passed to the "Edit Fields" node, which maps the output ($json.text) for further processing or display.

Set Up Steps
1. Configure Flowise Integration: Replace FLOWISE_URL and FLOWISE_ID in the HTTP Request node with your Flowise instance URL and flow ID. Ensure the Authorization header is set correctly in the credentials (e.g., Bearer FLOWISE_API).
2. Embed n8n Chatbot: Use the provided JavaScript snippet in the sticky notes to embed the n8n chatbot on your website. Replace YOUR_PRODUCTION_WEBHOOK_URL with the webhook URL generated by the "When chat message received" node. Customize the chatbot's appearance and behavior (e.g., welcome messages, language, UI elements) using the createChat configuration options.
3. Optional Branding: Adjust the sticky note examples to include branding details, such as custom messages, colors, or metadata for the chatbot.
4. Activate Workflow: Toggle the workflow to "Active" in n8n and test the chat functionality end-to-end.

Ideal Use Cases
- Embedding branded AI assistants into websites.
- Connecting Flowise-powered agents with customer support chatbots.
- Creating dynamic, smart conversational flows with LLMs via n8n automation.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
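A minimal sketch of the Flowise prediction call the HTTP Request node performs, assuming the placeholders from the workflow notes (FLOWISE_URL, FLOWISE_ID, FLOWISE_API) are provided as environment variables. The real workflow builds this request declaratively in n8n rather than in code.

```typescript
// Send a user question to a Flowise chatflow and return the answer text.
async function askFlowise(question: string): Promise<string> {
  const res = await fetch(
    `https://${process.env.FLOWISE_URL}/api/v1/prediction/${process.env.FLOWISE_ID}`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.FLOWISE_API}`, // header credential
      },
      // same payload shape the n8n HTTP node sends
      body: JSON.stringify({ question }),
    }
  );
  const data = await res.json();
  return data.text; // the field mapped by the "Edit Fields" node
}
```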

By Davide
2554

Build persistent chat memory with GPT-4o-mini and Qdrant vector database

🧠 Long-Term Memory System for AI Agents with Vector Database
Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

🎯 What This Template Does
This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

🔑 Key Features
- Persistent Context Storage: Automatically stores all conversations in a vector database for permanent retrieval
- Semantic Memory Search: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords
- Intelligent Reranking: Employs Cohere's reranking model to ensure the most relevant memories are used for context
- Structured Data Management: Formats and stores conversations with metadata for optimal retrieval
- Scalable Architecture: Handles unlimited conversations and users with consistent performance
- No Context Window Limitations: Effectively bypasses LLM token limits through intelligent retrieval

💡 Use Cases
- Customer Support Bots: Remember customer history, preferences, and previous issues
- Personal AI Assistants: Maintain user preferences and conversation continuity over months or years
- Knowledge Management Systems: Build accumulated knowledge bases from user interactions
- Educational Tutors: Track student progress and adapt teaching based on history
- Enterprise Chatbots: Maintain context across departments and long-term projects

🛠️ How It Works
1. User Input: Receives messages through n8n's chat interface
2. Memory Retrieval: Searches the vector database for relevant past conversations
3. Context Integration: The AI agent uses retrieved memories to generate contextual responses
4. Response Generation: Creates informed responses based on historical context
5. Memory Storage: Stores new conversation data for future retrieval
(A minimal sketch of the retrieval step appears after this description.)

📋 Requirements
- OpenAI API Key: For embeddings and chat completions
- Qdrant Instance: Cloud or self-hosted vector database
- Cohere API Key: Optional, for enhanced retrieval accuracy
- n8n Instance: Version 1.0+ with LangChain nodes

🚀 Quick Setup
1. Import this workflow into your n8n instance
2. Configure credentials for OpenAI, Qdrant, and Cohere
3. Create a Qdrant collection named 'ltm' with 1024 dimensions
4. Activate the workflow and start chatting!

📊 Performance Metrics
- Response Time: 2-3 seconds average
- Memory Recall Accuracy: 95%+
- Token Usage: 50-70% reduction compared to full context inclusion
- Scalability: Tested with 100k+ stored conversations

💰 Cost Optimization
- Uses GPT-4o-mini for an optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

📖 Learn More
For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

🤝 Support
- Documentation: Full setup guide in the article above
- Community: Share your experiences and get help in the n8n community forums
- Issues: Report bugs or request features on the workflow page

Tags: AI LangChain VectorDatabase LongTermMemory RAG OpenAI Qdrant ChatBot MemorySystem ArtificialIntelligence
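A minimal sketch of the memory-retrieval step, assuming an OpenAI text-embedding-3-small model truncated to 1024 dimensions (to match the 'ltm' collection) and Qdrant's REST search endpoint. The workflow itself wires this through n8n LangChain nodes, so the model choice and raw HTTP calls here are assumptions, not the template's exact configuration.

```typescript
// Embed the incoming chat message, then search the 'ltm' collection for
// semantically similar past conversations to inject into the agent prompt.
async function recallMemories(query: string, topK = 5) {
  // 1. Embed the incoming chat message
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-embedding-3-small", // assumed model
      input: query,
      dimensions: 1024,                // matches the collection size
    }),
  });
  const vector: number[] = (await embRes.json()).data[0].embedding;

  // 2. Semantic search against Qdrant
  const searchRes = await fetch(
    `${process.env.QDRANT_URL}/collections/ltm/points/search`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": process.env.QDRANT_API_KEY ?? "",
      },
      body: JSON.stringify({ vector, limit: topK, with_payload: true }),
    }
  );
  return (await searchRes.json()).result; // memories with payload + score
}
```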

By Einar César Santos
1566

Enrich new Discourse members with Clearbit then notify in Slack

Who is this template for?
This workflow template is designed for Sales and Customer Success professionals seeking alerts when potential high-value users, prospects, or existing customers register for a Discourse community. Leveraging Clearbit, it retrieves enriched data for the new member to assess their value.
Example result in Slack

How it works
Each time a new member is created in Discourse, the workflow runs (powered by Discourse's native Webhooks feature). After filtering out popular private email accounts, we run the member's email through Clearbit to fetch available information on the member as well as their organization (a minimal sketch of this filter-and-enrich step follows after this description). If the enriched data meets certain criteria, we send a Slack message to a channel. This message has a few quick actions: Open LinkedIn profile and Email member.

Setup instructions
An overview is below. Watch this 🎥 quick set-up video for detailed instructions on how to get the template running, as well as how to customize it.
1. Complete the Set up credentials step when you first open the workflow. You'll need a Discourse (admin user), Clearbit, and Slack account.
2. Set up the Webhook in Discourse, linking the On new Discourse user Trigger with your Discourse community.
3. Set the correct channel to send to in the Post message in channel step.
4. After testing your workflow, swap the Test URL to the Production URL in Discourse and activate your workflow.

Template was created in n8n v1.29.1.
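A minimal sketch of the filter-and-enrich step, assuming Clearbit's combined person/company enrichment endpoint. The free-mailbox domain list, the employee-count threshold, and the environment variable name are illustrative assumptions; the template's actual qualification criteria may differ.

```typescript
// Skip free/private mailbox domains, then enrich the remaining emails via Clearbit.
const FREE_DOMAINS = ["gmail.com", "yahoo.com", "hotmail.com", "outlook.com", "icloud.com"];

function isBusinessEmail(email: string): boolean {
  const domain = email.split("@")[1]?.toLowerCase() ?? "";
  return domain.length > 0 && !FREE_DOMAINS.includes(domain);
}

async function enrichMember(email: string) {
  if (!isBusinessEmail(email)) return null; // mirrors the "filter private emails" step

  const res = await fetch(
    `https://person.clearbit.com/v2/combined/find?email=${encodeURIComponent(email)}`,
    { headers: { Authorization: `Bearer ${process.env.CLEARBIT_API_KEY}` } }
  );
  if (!res.ok) return null; // no match found, nothing to notify

  const { person, company } = await res.json();
  // Example criterion (assumption): only notify Slack for companies with 50+ employees
  return (company?.metrics?.employees ?? 0) >= 50 ? { person, company } : null;
}
```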

By Max Tkacz
1405

Generate educational social media carousels with GPT-4.1, Templated.io & Google Drive

🎯 Description
Automatically generates, designs, stores, and logs complete Instagram carousel posts. It transforms a simple text prompt into a full post with copy, visuals, rendered images, Google Drive storage, and a record in Google Sheets.

⚙️ Use case / What it does
This workflow enables creators, educators, or community managers to instantly produce polished, on-brand carousel assets for social media. It integrates OpenAI GPT-4.1, Pixabay, Templated.io, Google Drive, and Google Sheets into one continuous content-production chain.

💡 How it works
1️⃣ Form Trigger – Collects the user prompt via a simple web form.
2️⃣ OpenAI GPT-4.1 – Generates structured carousel JSON: titles, subtitles, topic, description, and visual keywords.
3️⃣ Code (Format content) – Parses the JSON output for downstream use.
4️⃣ Google Drive (Create Folder) – Creates a subfolder for the new carousel inside "RRSS".
5️⃣ HTTP Request (Pixabay) – Searches for a relevant image using GPT's visual suggestion (see the sketch after this description).
6️⃣ Code (Get first result) – Extracts the top Pixabay result and image URL.
7️⃣ Templated.io – Fills the design template layers (titles/subtitles/topic/image).
8️⃣ HTTP Request (Download renders) – Downloads the rendered PNGs from Templated.io.
9️⃣ Google Drive (Upload) – Uploads the rendered images into the created folder.
🔟 Google Sheets (Save in DB) – Logs metadata (title, topic, folder link, description, timestamp, status).

🔗 Connectors used
- OpenAI GPT-4.1 (via n8n LangChain node)
- Templated.io API (design rendering)
- Pixabay API (stock image search)
- Google Drive (storage + folder management)
- Google Sheets (database / logging)
- Form Trigger (input collection)

🧱 Input / Output
Input: User-submitted "Prompt" (text) via form
Output: Generated carousel images stored in Google Drive; a spreadsheet row in Google Sheets containing title, topic, description, Drive URL, and status

⚠️ Requirements / Setup
Valid credentials for:
- OpenAI API (GPT-4.1 access)
- Templated.io API key
- Pixabay API key
- Google Drive + Google Sheets OAuth connections
Also required:
- An existing Google Drive folder ID for RRSS storage
- A spreadsheet with matching column headers (Created At, Title, Topic, Folder URL, Description, Status)
- A published form URL for user prompts

🌍 Example applications / extensions
- Educational themes (mental health, fitness, sustainability).
- Extend to auto-publish to Instagram Business via Meta API.
- Add Notion logging or automated email notifications.
- Integrate scheduling (Cron node) to batch-generate weekly carousels.
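A minimal sketch of the Pixabay lookup (steps 5️⃣–6️⃣), assuming the standard Pixabay REST API. Picking largeImageURL from the first hit is an assumption; the template may select a different image size or apply extra filters.

```typescript
// Search Pixabay for GPT's suggested visual keyword and return the top image URL.
async function findCarouselImage(visualKeyword: string): Promise<string | null> {
  const params = new URLSearchParams({
    key: process.env.PIXABAY_API_KEY ?? "",
    q: visualKeyword,            // GPT's visual suggestion
    image_type: "photo",
    orientation: "horizontal",
    per_page: "3",
  });

  const res = await fetch(`https://pixabay.com/api/?${params}`);
  const data = await res.json();

  // "Get first result": take the top hit, if any
  return data.hits?.[0]?.largeImageURL ?? null;
}
```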

By Bastian Diaz
408

AI-powered Candidate Screening & Interview Scheduling with OpenAI GPT & Google Suite

Streamline your hiring process with intelligent AI-powered candidate screening and automated interview scheduling. This workflow receives applications via webhook, evaluates candidates using OpenAI's GPT model, scores them against job requirements, stores data in Google Sheets, and automatically schedules interviews for high-scoring candidates, all while sending personalized email notifications and updating statuses in real time. Reduce manual screening time and ensure only top candidates move forward. 🤖📧

What This Template Does
Step 1: Triggers on new application submission via Webhook (e.g., from a job portal or form).
Step 2: Stores applicant data (resume, contact, role) in Google Sheets for centralized tracking.
Step 3: Uses OpenAI GPT to evaluate candidate fit based on resume, skills, and job requirements.
Step 4: Applies scoring logic (see the sketch after this description):
  • Score ≥ 70 → Qualified for interview
  • Score < 70 → Not a fit
Step 5: Branches based on score:
  → High Score Path:
   • Sends Interview Invitation Email
   • Creates Google Calendar Event
   • Updates Sheet: Status → "Interview Scheduled"
  → Low Score Path:
   • Sends Polite Rejection Email
   • Updates Sheet: Status → "Rejected"
Step 6: Final metrics are logged and a webhook response confirms completion.

Key Benefits
✅ Eliminates manual resume screening
✅ AI evaluates candidates consistently and objectively
✅ Automates interview scheduling with calendar integration
✅ Real-time status updates in Google Sheets
✅ Personalized email communication at every stage
✅ Full audit trail of decisions and actions

Features
- Webhook-triggered application intake
- Google Sheets as an applicant tracking system (ATS)
- OpenAI GPT-powered candidate evaluation
- Dynamic scoring threshold (customizable)
- Conditional branching (High/Low Score)
- Gmail integration for email notifications
- Google Calendar auto-event creation
- Real-time status updates via sheet write-back
- Final webhook response for system confirmation

Requirements
GOOGLE_SHEET_ID: Your Google Sheet ID
Credentials needed:
- Google Sheets OAuth2
- Gmail API Key
- OpenAI API Key
- Google Calendar OAuth2
Customize:
• Job requirements & AI prompt
• Score threshold (currently 70)
• Email templates
• Interview scheduling time slots

Target Audience
- HR teams managing high-volume applications 👥
- Recruiters seeking faster shortlisting ⏱️
- Startups automating early-stage hiring 🚀
- Tech companies with technical screening needs 💻
- Remote-first organizations using digital workflows 🌍

Step-by-Step Setup Instructions
1. Set up Google Sheet → Create a sheet with columns: Name, Email, Resume Link, Role, Status, Score, Timestamp. Replace YOUR_SHEET_ID in the workflow with your actual Sheet ID.
2. Configure Webhook → Connect your job application form (e.g., Typeform, LinkedIn, custom portal) to trigger this workflow.
3. Add OpenAI API Key → Insert your OpenAI key and customize the evaluation prompt in the "AI Evaluation" node.
4. Set Scoring Threshold → Adjust the "IF – Check Score Threshold" node (default: ≥ 70 = pass).
5. Connect Gmail & Calendar → Enable Gmail OAuth2 and Google Calendar OAuth2. Define the interviewer email and default interview duration.
6. Customize Emails → Edit the "Interview Invitation" and "Rejection Notice" templates with your branding.
7. Test the Flow → Submit a test application via webhook. Verify: Sheet update → AI score → Email → Calendar event → Status change.
8. Go Live → Enable the automation and monitor the first few runs in Google Sheets.

Workflow Complete!
Now sit back as AI screens, scores, schedules, and communicates, all without lifting a finger.
Metrics to track: applications received, average AI score, interview rate, time to process.
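A minimal sketch of the scoring and branching logic (Steps 3–5), assuming an OpenAI chat completion that returns a bare numeric score. The model name, prompt wording, and response format are assumptions; the template's "AI Evaluation" node and IF node hold the real versions.

```typescript
// Score a candidate against job requirements, then route on the threshold.
const SCORE_THRESHOLD = 70; // the customizable value in the IF node

async function scoreCandidate(resumeText: string, jobRequirements: string): Promise<number> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model
      messages: [
        {
          role: "system",
          content: "Score this candidate 0-100 against the job requirements. Reply with a number only.",
        },
        { role: "user", content: `Requirements:\n${jobRequirements}\n\nResume:\n${resumeText}` },
      ],
    }),
  });
  const data = await res.json();
  return Number.parseInt(data.choices[0].message.content, 10);
}

function routeCandidate(score: number): "interview" | "reject" {
  // mirrors the High Score / Low Score branch
  return score >= SCORE_THRESHOLD ? "interview" : "reject";
}
```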

By Oneclick AI Squad
223

Build a knowledge-based WhatsApp assistant with RAG, Gemini, Supabase & Google Docs

Workflow Execution Link: Watch Execution Video

Workflow Pre-requisites

Step 1: Supabase Setup
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Step 2: Create Knowledge Base
Create a new Google Doc with the complete knowledge base about your business and replace the document ID in the "Content for the Training" node.

Step 3: Get Together AI API Key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Step 4: Set Up a Meta App for WhatsApp Business Cloud
Go to https://business.facebook.com/latest/settings/apps, create an app, and select the use case "Connect with customers through WhatsApp". Copy the Client ID and Client Secret and add them to the first node. Go to that newly created Meta app in the app dashboard, click on the use case, and then click on "Customise...". Go to the API setup, add your number, and generate an access token on that page. Paste the access token and the WhatsApp Business Account ID into the "Send message" node.

Part A: Document Preparation (One-Time Setup)
- When clicking 'Execute workflow' (manualTrigger): Manually starts the workflow for preparing training content.
- Content for the Training (googleDocs): Fetches the document content that will be used for training.
- Splitting into Chunks (code): Breaks the document text into smaller pieces for processing.
- Embedding Uploaded document (httpRequest): Converts each chunk into embeddings via an external API.
- Save the embedding in DB (supabase): Stores both the chunks and embeddings in the database for future use.

Part B: Chat Interaction (Realtime Flow)
- WhatsApp Trigger (whatsAppTrigger): Starts the workflow whenever a user sends a WhatsApp message.
- If (if): Checks whether the incoming WhatsApp message contains text.
- Embed User Message (httpRequest): Converts the user's message into an embedding.
- Search Embeddings (httpRequest): Finds the top matching document chunks from the database using embeddings (a minimal sketch of this pair of calls follows below).
- Aggregate (aggregate): Merges retrieved chunks into one context block.
- AI Agent (langchain agent): Builds the prompt combining the user's message and context.
- Google Gemini Chat Model (lmChatGoogleGemini): Generates the AI response based on the prepared prompt.
- Send message (whatsApp): Sends the AI's reply back to the user on WhatsApp.
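A minimal sketch of the "Embed User Message" → "Search Embeddings" pair, assuming Together AI's OpenAI-compatible embeddings endpoint with a 1024-dimension model (BAAI/bge-large-en-v1.5 is an assumption) and a Supabase PostgREST RPC call to the matchembeddings1 function created above. The actual nodes use n8n HTTP Request configuration rather than code.

```typescript
// Embed the incoming WhatsApp message, then fetch the closest knowledge-base chunks.
async function searchKnowledgeBase(userMessage: string, matchCount = 3) {
  // 1. Embed the user's message via Together AI
  const embRes = await fetch("https://api.together.xyz/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TOGETHER_API_KEY}`,
    },
    body: JSON.stringify({
      model: "BAAI/bge-large-en-v1.5", // assumed 1024-dim model to match vector(1024)
      input: userMessage,
    }),
  });
  const queryEmbedding: number[] = (await embRes.json()).data[0].embedding;

  // 2. Call the matchembeddings1 function through Supabase's REST RPC endpoint
  const rpcRes = await fetch(
    `${process.env.SUPABASE_URL}/rest/v1/rpc/matchembeddings1`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        apikey: process.env.SUPABASE_KEY ?? "",
        Authorization: `Bearer ${process.env.SUPABASE_KEY}`,
      },
      body: JSON.stringify({
        match_count: matchCount,
        // pgvector accepts the "[1,2,3]" text form for the vector parameter
        query_embedding: JSON.stringify(queryEmbedding),
      }),
    }
  );
  return rpcRes.json(); // [{ chunk, similarity }, ...] to aggregate into the agent prompt
}
```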

By iamvaar
189

🛠️ Matrix tool MCP server 💪 all 11 operations

Need help? Want access to this workflow plus many more paid workflows and live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing all Matrix Tool operations to AI agents. Zero configuration needed: all 11 operations are pre-built.

⚡ Quick Setup
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL (a minimal client sketch follows this description)

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Matrix Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Matrix tool with full error handling

📋 Available Operations (11 total)
Every possible Matrix Tool operation is included:
👤 Account (1 operation)
• Get the current user's account information
📅 Event (1 operation)
• Get an event by ID
🔧 Media (1 operation)
• Upload media to a chatroom
💬 Message (2 operations)
• Create a message
• Get many messages
🔧 Room (5 operations)
• Create a room
• Invite a user to a room
• Join a room
• Kick a user from a room
• Leave a room
🔧 Room Member (1 operation)
• Get many room members

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Matrix Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Matrix Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
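A minimal client sketch, assuming the MCP TypeScript SDK and an SSE-style endpoint exposed by the MCP trigger node. The webhook URL and the tool name ("create_message") are hypothetical placeholders; check the names returned by listTools() for your own instance, and note that your trigger may expose a different transport.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function main() {
  // URL copied from the MCP trigger node (placeholder)
  const transport = new SSEClientTransport(
    new URL("https://YOUR_N8N_INSTANCE/mcp/matrix-tool")
  );
  const client = new Client({ name: "matrix-mcp-demo", version: "1.0.0" });
  await client.connect(transport);

  // Discover the 11 pre-built Matrix operations
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Example call: send a message to a room (parameters normally filled by $fromAI())
  const result = await client.callTool({
    name: "create_message", // hypothetical tool name
    arguments: { roomId: "!example:matrix.org", text: "Hello from MCP" },
  });
  console.log(result);
}

main().catch(console.error);
```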

By David Ashby
128