AI orchestrator: dynamically selects models based on input type
This workflow is designed to intelligently route user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to optimize both performance and cost-efficiency in AI-driven conversations. It dynamically routes requests to specialized AI models based on content type, optimizing response quality and efficiency. --- Benefits Smart Model Routing: Reduces costs by using lighter models for general tasks and reserving heavier models for complex needs. Scalability: Easily expandable by adding more request types or LLMs. Maintainability: Clear logic separation between classification, model routing, and execution. Personalization: Can be integrated with session IDs for per-user memory, enabling personalized conversations. Speed Optimization: Fast models like GPT-4.1 mini or Gemini Flash are chosen for tasks where speed is a priority. --- How It Works Input Handling: The workflow starts with the "When chat message received" node, which triggers the process when a chat message is received. The input includes the chat message (chatInput) and a session ID (sessionId). Request Classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories: general: For general queries. reasoning: For reasoning-based questions. coding: For code-related requests. search: For queries requiring search tools. The classification is structured using the "Structured Output Parser" node, which enforces a consistent output format. Model Selection: The "Model Selector" node routes the request to one of four AI models based on the classification: Opus 4 (Claude 4 Sonnet): Used for coding requests. Gemini Thinking Pro: Used for reasoning requests. GPT 4.1 mini: Used for general requests. Perplexity: Used for search (Google-related) requests. AI Processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations. Output: The final response is generated by the chosen model and returned to the user. --- Set Up Steps Configure Trigger: Ensure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs. Define Classification Logic: Adjust the prompt in the "Request Type" node to refine classification accuracy. Verify the output schema in the "Structured Output Parser" node matches expected categories (general, reasoning, coding, search). Connect AI Models: Link each model node (Opus 4, Gemini Thinking Pro, GPT 4.1 mini, Perplexity) to the "Model Selector" node. Ensure credentials (API keys) for each model are correctly configured in their respective nodes. Set Up Memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention. Test Workflow: Send test inputs to verify classification and model routing. Check intermediate outputs (e.g., request_type) to ensure correct model selection. Activate Workflow: Toggle the workflow to "Active" in n8n after testing. --- Need help customizing? Contact me for consulting and support or add me on Linkedin.
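As an illustration of the classification contract, here is a minimal JSON Schema you could give the "Structured Output Parser" node so the "Request Type" step always returns one of the four categories. The property name request_type matches the intermediate output mentioned in the testing step; treat the exact field names as assumptions to adapt to your own workflow.

```json
{
  "type": "object",
  "properties": {
    "request_type": {
      "type": "string",
      "enum": ["general", "reasoning", "coding", "search"],
      "description": "Category used by the Model Selector to pick the LLM"
    },
    "confidence": {
      "type": "number",
      "description": "Optional 0-1 score, useful when tuning the classification prompt"
    }
  },
  "required": ["request_type"]
}
```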
Multi-agent healthcare assistant with WhatsApp, GPT-4 & Google Sheets
Multi-Agent AI Healthcare Assistant Demo ⚠️ EDUCATIONAL DEMONSTRATION ONLY - NOT FOR PRODUCTION MEDICAL USE ⚠️ A comprehensive demonstration of n8n's advanced multi-agent AI orchestration capabilities, showcasing how to build sophisticated conversational AI systems with specialized agent coordination. 🎯 What This Demo Shows Advanced Multi-Agent Architecture: Main Orchestrator Agent - Traffic controller and decision maker Patient Registration Agent - Specialized data collection and validation Appointment Scheduler Agent - Complex multi-step booking workflows Medical Report Analyzer - Document processing and analysis Prescription Medicine Analyzer - Medicine verification and safety checks Technical Learning Objectives: Multi-agent coordination patterns Conditional agent routing and tool selection Memory management across conversations Multi-modal input processing (text, audio, images, documents) Complex state management in AI workflows External system integration (Google Sheets, WhatsApp, OpenAI) 🏗️ Architecture Highlights Multi-Modal Processing Pipeline: Text Messages → Direct agent processing Audio Messages → Transcription → Text processing → Audio response Images → Vision analysis → Context integration Documents → PDF extraction → Content analysis Agent Specialization: Each agent has focused responsibilities and constraints Intelligent document classification and routing Context-aware tool selection Error handling and recovery mechanisms Memory & State Management: Session-based conversation persistence Context sharing between specialized agents Multi-step workflow state tracking 🔧 Technical Implementation Key n8n Features Demonstrated: @n8n/n8n-nodes-langchain.agent - Main orchestrator @n8n/n8n-nodes-langchain.agentTool - Specialized sub-agents @n8n/n8n-nodes-langchain.memoryPostgresChat - Conversation memory n8n-nodes-base.googleSheetsTool - External data integration Complex conditional logic and routing Integration Patterns: WhatsApp Business API integration OpenAI GPT-4 model orchestration Google Sheets as data backend PostgreSQL for conversation memory Multi-step document processing 📚 Learning Value For n8n Developers: Enterprise-grade workflow architecture patterns AI agent orchestration best practices Complex conditional logic implementation Memory management in conversational AI Multi-modal data processing techniques Error handling and recovery strategies For AI Engineers: Agent specialization and coordination Tool calling and function integration Context management across conversations Multi-step workflow design Production workflow considerations ⚙️ Setup Requirements Required Credentials: OpenAI API Key (GPT-4 access recommended) WhatsApp Business API credentials Google Sheets OAuth2 API PostgreSQL database connection External Dependencies: Google Sheets database (template structure provided) WhatsApp Business Account PostgreSQL database for conversation memory 🚨 Important Disclaimers Educational Use Only: This is a DEMONSTRATION of n8n capabilities NOT suitable for actual medical use NOT HIPAA compliant Use only with fictional/test data Production Considerations: Requires proper security implementation Needs compliance review for medical use Consider HIPAA-compliant alternatives for healthcare Implement proper data encryption and access controls 🎓 Educational Applications Perfect for Learning: Advanced n8n workflow patterns Multi-agent AI system design Complex automation architecture Integration pattern best practices Conversational AI development Workshop & Training Use: AI 
automation workshops n8n advanced training sessions Multi-agent system demonstrations Integration pattern tutorials 🔄 Workflow Components Main Flow: WhatsApp message reception and media processing Input classification and routing Main agent orchestration and tool selection Specialized agent execution Response formatting and delivery Sub-Agents: Registration Tool - Patient data collection Scheduler Tool - Appointment booking logic Report Analyzer - Medical document analysis Medicine Analyzer - Prescription verification 💡 Customization Ideas Extend the Demo: Add more specialized agents Implement different communication channels Integrate with other healthcare APIs Add more sophisticated document processing Implement advanced analytics and reporting Adapt for Other Industries: Customer service automation Educational assistance systems E-commerce support workflows Technical support orchestration --- 🎯 Perfect for: Learning advanced n8n patterns, AI system architecture, multi-agent coordination ⏱️ Setup Time: 30-45 minutes (with credentials) 📈 Skill Level: Intermediate to Advanced 🏷️ Tags: AI Agents, Multi-Agent Systems, Healthcare Demo, Educational, Advanced Workflows
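To make the multi-modal routing concrete, below is a minimal sketch of how a Code (or Switch) node could label each incoming WhatsApp message by modality before the orchestrator takes over. It assumes the standard WhatsApp Business Cloud API webhook shape, where messages[0].type is one of text, audio, image, or document; the route names are placeholders for your own branches, not part of the template.

```javascript
// Minimal routing sketch for an n8n Code node (field paths are assumptions).
const routes = {
  text: 'direct-agent',               // plain text goes straight to the orchestrator
  audio: 'transcribe-then-agent',     // voice notes are transcribed first
  image: 'vision-analysis',           // images go through vision analysis
  document: 'pdf-extraction',         // PDFs get text extraction before analysis
};

return $input.all().map(item => {
  const msg = item.json.messages?.[0] ?? {};   // adjust to where the webhook payload lands
  return { json: { ...item.json, route: routes[msg.type] ?? 'unsupported' } };
});
```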
Full-cycle invoice automation: Airtable, QuickBooks & Stripe
This n8n template from Intuz provides a complete and automated solution for full-cycle invoicing, orchestrating a seamless flow between Airtable, QuickBooks, and Stripe. This is the ultimate sales-to-cash automation. When a deal in Airtable is marked "Approved for Invoicing," this workflow intelligently syncs customer data across QuickBooks and Stripe (creating them if they don't exist), generates an official QuickBooks invoice, creates a Stripe payment link, and then updates the original Airtable record with all the new IDs and links. Eliminate manual data entry and keep your systems perfectly in sync. Who's this workflow for? Finance, Accounting, and Operations Teams SalesOps and RevOps Teams Small Business Owners and Founders Agencies and Freelancers How It Works: Airtable Trigger & Approval Gate: The workflow starts when a record in your Airtable base is updated. An If node immediately checks if the Status field is set to "Approved for Invoicing." If not, the workflow for that item stops. Customer Sync (QuickBooks & Stripe): The workflow searches for the customer in both QuickBooks and Stripe using the details from Airtable. Using If nodes, it intelligently checks if the customer exists. If a customer is not found in either platform, it creates a new one. This "find-or-create" logic prevents duplicate records. Update Airtable with IDs: Once the customer IDs from both QuickBooks and Stripe are secured (either found or newly created), the workflow updates the original Airtable record with these new IDs for future reference. Generate Financials: Stripe Payment Link: It sends an HTTP request to Stripe to create a unique, ready-to-use payment link for the specified amount (a sketch of this request appears after the setup steps below). QuickBooks Invoice: It fetches your product list from QuickBooks, finds the matching item from the Airtable record, and generates a formal, detailed invoice. Close the Loop: In the final step, the workflow updates the Airtable record one last time to: Add the QuickBooks Invoice. Add the Stripe Payment Link. Change the Status to "Invoiced." Step-by-Step Setup Instructions This is an advanced workflow. Follow these setup steps carefully. Connect Your Credentials Airtable: Create and connect a Personal Access Token with data.records:read and data.records:write scopes. QuickBooks: Connect your QuickBooks Online account using OAuth2 credentials. Stripe: Connect your Stripe account using your Secret Key. Airtable Base Setup (Crucial) Your Airtable base must have a table with the following columns. The names must match exactly: Deal Name (Text) Client Name (Text) Client Email (Email) Status (Single-select with options: Draft, Approved for Invoicing, Invoiced) QuickBooks Customer ID (Text) Stripe Customer ID (Text) Stripe Payment Link (URL) QuickBooks Invoice (Text) Stripe Price Id (Text - The API ID of your price in Stripe, e.g., price_123...) Quantity (Number) Quickbooks Product Name (Text) Created (Created Time) - This is used by the trigger. Configure the n8n Nodes All Airtable Nodes: In each Airtable node, select your Base and Table from the dropdown lists. Get all Quickbook products (HTTP Request Node): You must replace {YOUR_QUICKBOOKS_COMPANY_ID} in the URL with your actual QuickBooks Company ID (also known as a Realm ID). Activate the Workflow Save the workflow and toggle the Active switch to "on". The workflow will now trigger whenever the Created field is updated for a record in your Airtable base.
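For the "Generate Financials" step, here is a rough sketch of what the Stripe payment link request could look like, expressed as simplified HTTP Request node settings rather than the exact node JSON. Stripe's POST /v1/payment_links endpoint takes form-encoded line_items referencing a Price ID, which is why the Airtable table above needs the Stripe Price Id and Quantity columns; the expressions are illustrative and assume those exact column names.

```json
{
  "method": "POST",
  "url": "https://api.stripe.com/v1/payment_links",
  "authentication": "Stripe credential (Secret Key)",
  "bodyContentType": "form-urlencoded",
  "bodyParameters": {
    "line_items[0][price]": "={{ $json['Stripe Price Id'] }}",
    "line_items[0][quantity]": "={{ $json['Quantity'] }}"
  }
}
```

The url field of Stripe's response is what gets written back to the Stripe Payment Link column in Airtable.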
Customization Guidance Changing the Trigger Status: If you use a different status than "Approved for Invoicing," simply update the value in the "IF - Status Check" node. Modifying Invoice Details: You can customize the Description or other line item details in the "Create an invoice" (QuickBooks) node by pulling more fields from your Airtable record. Adding Email Notifications: To notify a customer when their invoice is ready, add a Gmail or SendGrid node after the last Airtable Update node. You can include the Stripe Payment Link and a PDF of the QuickBooks invoice directly in the email. Advanced Error Handling: For a production environment, consider connecting the false output of the various IF nodes or using the .onError() workflow setting to send a Slack or email alert if a customer can't be found or an API call fails. Support For further support, or to develop a custom workflow, reach out to: Website: https://www.intuz.com/services Email: getstarted@intuz.com LinkedIn: https://www.linkedin.com/company/intuz Get Started: https://n8n.partnerlinks.io/intuz For Custom Workflow Automation, click here: Get Started
Automate Google Classroom with Gemini AI: Topics, assignments & student tracking
Automate Google Classroom: Topics, Assignments & Student Tracking Automate Google Classroom via the Google Classroom API to efficiently manage courses, topics, teachers, students, announcements, and coursework. Use Cases Educational Institution Management Sync rosters, post weekly announcements, and generate submission reports automatically. Remote Learning Coordination Batch-create assignments, track engagement, and auto-notify teachers on new submissions. Training Program Automation Automate training modules, manage enrollments, and generate completion/compliance reports. Prerequisites n8n (cloud or self-hosted) Google Cloud Console access for OAuth setup Google Classroom API enabled Google Gemini API key (free) for the agent brain — or swap in any other LLM if preferred Setup Instructions Step 1: Google Cloud Project Create a new project in Google Cloud Console. Enable Google Classroom API. Create OAuth 2.0 Client ID credentials. Add your n8n OAuth callback URL as a redirect URI. Note down the Client ID and Client Secret. Step 2: OAuth Setup in n8n In n8n, open HTTP Request Node → Authentication → Predefined Credential Type. Select Google OAuth2 API. Enter your Client ID and Client Secret. Click Connect my account to complete authorization. Test the connection. Step 3: Import & Configure Workflow Import this workflow template into n8n. Link all Google Classroom nodes to your OAuth credential. Configure the webhook if using external triggers. Test each agent for API connectivity. Step 4: Customization You can customize each agent’s prompt to your liking for optimal results, or copy and modify node code to expand functionality. All operations use HTTP Request nodes, so you can integrate more tools via the Google Classroom API documentation. This workflow provides a strong starting point for deeper automation and integration. Features Course Topics List, create, update, or delete topics within a course. Teacher & Student Management List, retrieve, and manage teachers and students programmatically. Course Posts List posts, retrieve details and attachments, and access submission data. Announcements List, create, update, or delete announcements across courses. Courses List all courses, get detailed information, and view grading periods. Coursework List, retrieve, or analyze coursework within any course. Notes Once OAuth and the LLM connection are configured, this workflow automates all Google Classroom operations. Its modular structure lets you activate only what you need—saving API quota and improving performance.
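Since every operation in this template goes through HTTP Request nodes, a few representative Google Classroom API calls are sketched below. The endpoints and bodies follow the public Classroom REST API; {courseId} is a placeholder you would pull from a previous node, and each request must carry the OAuth2 credential (with the relevant Classroom scopes) configured in Step 2.

```
GET  https://classroom.googleapis.com/v1/courses
GET  https://classroom.googleapis.com/v1/courses/{courseId}/students
POST https://classroom.googleapis.com/v1/courses/{courseId}/topics
     { "name": "Week 3: Photosynthesis" }
POST https://classroom.googleapis.com/v1/courses/{courseId}/announcements
     { "text": "Reminder: quiz on Friday", "state": "PUBLISHED" }
```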
Process voice, images & documents with GPT-4o, MongoDB & Gmail tools
What it does This n8n workflow creates a cutting-edge, multi-modal AI Memory Assistant designed to capture, understand, and intelligently recall your personal or business information from diverse sources. It automatically processes voice notes, images, documents (like PDFs), and text messages sent via Telegram. Leveraging GPT-4o for advanced AI processing (including visual analysis, document parsing, transcription, and semantic understanding) and MongoDB Atlas Vector Search for persistent and lightning-fast recall, this assistant acts as an external brain. Furthermore, it integrates with Gmail, allowing the AI to send and search emails as part of its memory and response capabilities. This end-to-end solution blueprint provides a powerful starting point for personal knowledge management and intelligent automation. How it works Multi-Modal Input Ingestion 🗣️📸📄💬 Your memories begin when you send a voice note, an image, a document (e.g., PDF), or a text message to your Telegram bot. The workflow immediately identifies the input type. Advanced AI Content Processing 🧠✨ Each input type undergoes specialized AI processing by GPT-4o: Voice notes are transcribed into text using OpenAI Whisper. Images are visually analyzed by GPT-4o Vision, generating detailed textual descriptions. Documents (PDFs) are processed for text extraction, leveraging GPT-4o for robust parsing and understanding of content and structure. Unsupported document types are gracefully handled with a user notification. Text messages are directly forwarded for further processing. This phase transforms all disparate input formats into a unified, rich textual representation. Intelligent Memory Chunking & Vectorization ✂️🏷️➡️🔢 The processed content (transcriptions, image descriptions, extracted document text, or direct text) is then fed back into GPT-4o. The AI intelligently chunks the information into smaller, semantically coherent pieces, extracts relevant keywords and tags, and generates concise summaries. Each of these enhanced memory chunks is then converted into a high-dimensional vector embedding using OpenAI Embeddings. Persistent Storage & Recall (MongoDB Atlas Vector Search) 💾🔍 These vector embeddings, along with their original content, metadata, and tags, are stored in your MongoDB Atlas cluster, which is configured with Atlas Vector Search. This allows for highly efficient and semantically relevant retrieval of memories based on user queries, forming the core of your "smart recall" system. AI Agent & External Tools (Gmail Integration) 🤖🛠️ When you ask a question, the AI Agent (powered by GPT-4o) acts as the central intelligence. It uses the MongoDB Chat Memory to maintain conversational context and, crucially, queries the MongoDB Atlas Vector Search store to retrieve relevant past memories. The agent also has access to Gmail tools, enabling it to send emails on your behalf or search your past emails to find information or context that might not be in your personal memory store. Smart Response Generation & Delivery 💬➡️📱 Finally, using the retrieved context from MongoDB and the conversational history, GPT-4o synthesizes a concise, accurate, and contextually aware answer. This response is then delivered back to you via your Telegram bot. How to set it up (~20 Minutes) Getting this powerful workflow running requires a few key configurations and external service dependencies. Telegram Bot Setup: Use BotFather in Telegram to create a new bot and obtain its API Token. In your n8n instance, add a new Telegram API credential.
Give it a clear name (e.g., "My AI Memory Bot") and paste your API Token. OpenAI API Key Setup: Log in to your OpenAI account and generate a new API key. Within n8n, create a new OpenAI API credential. Name it appropriately (e.g., "My OpenAI Key for GPT-4o") and paste your API key. This credential will be used by the OpenAI Chat Model (GPT-4o for processing, chunking, and RAG), Analyze Image, and Transcribe Audio nodes. MongoDB Atlas Setup: If you don't have one, create a free-tier or paid cluster on MongoDB Atlas. Create a database and a collection within your cluster to store your memory chunks and their vector embeddings. Crucially, configure an Atlas Vector Search index on your chosen collection. This index will be on the field containing your embeddings (e.g., embedding field, type knnVector). Refer to MongoDB Atlas documentation for detailed instructions on creating vector search indexes. In n8n, add a new MongoDB credential. Provide your MongoDB Atlas connection string (ensure it includes your username, password, and database name), and give it a clear name (e.g., "My Atlas DB"). This credential will be used by the MongoDB Chat Memory node and for any custom HTTP requests you might use for Atlas Vector Search insertion/querying. Gmail Account Setup: Go to Google Cloud Console, enable the Gmail API for your project, and configure your OAuth consent screen. Create an OAuth 2.0 Client ID for a Desktop app (or Web application, depending on your n8n setup and redirect URI). Download the JSON credentials. In n8n, add a new Gmail OAuth2 API credential. Follow the n8n instructions to configure it using your Google Client ID and Client Secret, and authenticate with your Gmail account, ensuring it has sufficient permissions to send and search emails. External API Services: If your Extract from File node relies on an external service for robust PDF/DocX text extraction, ensure you have an API key and the service is operational. The current flow uses ConvertAPI. Add the necessary credential (e.g., ConvertAPI) in n8n. How you could enhance it ✨ This workflow offers numerous avenues for advanced customization and expansion: Expanded Document Type Support: Enhance the "Document Processing" section to handle a wider range of document types beyond just PDFs (e.g., .docx, .xlsx, .pptx, markdown, CSV) by integrating additional conversion APIs or specialized parsing libraries (e.g., using a custom code node or dedicated third-party services like Apache Tika, Unstructured.io). Fine-tuned Memory Chunks & Metadata: Implement more sophisticated chunking strategies for very long documents, perhaps based on semantic breaks or document structure (headings, sections), to improve recall accuracy. Add more metadata fields (e.g., original author, document date, custom categories) to your MongoDB entries for richer filtering and context. Advanced AI Prompting: Allow users to dynamically set parameters for their memory inputs (e.g., "This is a high-priority meeting note," "This image contains sensitive information") which can influence how GPT-4o processes, tags, and stores the memory, or how it's retrieved later. 
n8n Tool Expansion for Proactive Actions: Significantly expand the AI Agent's capabilities by providing it with access to a wider range of n8n tools, moving beyond just information retrieval and email. External Data Source Integration (APIs): Expand the AI Agent's tools to query other external APIs (e.g., weather, stock prices, news, CRM systems) so it can provide real-time information relevant to your memories. Getting Assistance & More Resources Need assistance setting this up, adapting it to a unique use case, or exploring more advanced customizations? Don't hesitate to reach out! You can contact me directly at nanabrownsnr@gmail.com. Also, feel free to check out my YouTube channel where I discuss other n8n templates, as well as innovation and automation solutions.
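Returning to the MongoDB Atlas setup step above: the vector index definition is the piece people most often get wrong, so here is a minimal example of the knnVector search index that step describes. The field name embedding and the 1536 dimensions are assumptions (1536 matches OpenAI's text-embedding-ada-002 / text-embedding-3-small); match them to the field and embedding model you actually use.

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      }
    }
  }
}
```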
AI-powered contact management in KlickTipp with MCP server
Community Node Disclaimer: This workflow uses KlickTipp community nodes. How It Works This workflow connects an MCP Server with the KlickTipp contact management platform and integrates it with an LLM (e.g. Claude etc.) to enable intelligent querying and segmentation of contact data. It covers all major KlickTipp API endpoints, providing a comprehensive toolkit for automated contact handling and campaign targeting. Key Features MCP Server Trigger: Initiates the workflow via the MCP server, listening for incoming requests related to contact queries or segmentation actions. LLM Interaction Setup: Interacts with an OpenAI or Claude model to handle natural language queries such as contact lookups, tagging, and segmentation tasks. KlickTipp Integration: Complete set of KlickTipp API endpoints included: Contact Management: Add, update, get, list, delete, and unsubscribe contacts. Contact Tagging: Tag, untag, list tagged contacts. Tag Operations: Create, get, update, delete, list tags. Opt-In Processes: List and retrieve opt-in process details. Data Fields: List and get custom data fields. Redirects: Retrieve redirect URLs. Use Cases Supported: Query contact information via email or name. Identify and segment contacts by city, region, or behavior. Create or update contacts from the provided data. Dynamically apply or remove tags to initiate campaigns. Automate targeted outreach based on contact attributes. Setup Instructions Install and Configure Nodes: Set up MCP Server. Configure the LLM connection (e.g., Claude Desktop configuration). Add and authenticate all KlickTipp nodes using valid API credentials. Define Tagging and Field Mapping: Identify which fields and tags are relevant to your use cases. Ensure necessary tags and custom fields are already created in KlickTipp. Workflow Logic: Trigger via MCP Server: A prompt or webhook call activates the server listener. Query Handling via LLM Agent: AI interprets the natural language input and determines the action. Contact Search & Segmentation: Searches contacts using identifiers (email, address) or criteria. Data Operations: Retrieves, updates, or manages contact and tag data based on interpreted command. Campaign Preparation: Applies tags or sends campaign triggers depending on query results. Benefits: AI-Powered Automation: Reduces manual contact search and tagging efforts through intelligent processing. Scalable Integration: Built-in support for full range of KlickTipp operations allows diverse use-case handling. Data Consistency: Ensures structured data flows between MCP, AI, and KlickTipp, minimizing errors. Testing and Deployment: Use defined prompts such as: “Tell me something about the contact with email address X” “Tag all contacts from region Y” “Send campaign Z to customers in area A” Validate expected actions in KlickTipp after prompt execution. Notes: Customization: Adjust tag logic, AI prompts, and contact field mappings based on project needs. Extensibility: The template can be expanded with further logic for Google Sheets input or campaign feedback loops Resources: Use KlickTipp Community Node in n8n Automate Workflows: KlickTipp Integration in n8n
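As one possible way to wire up the "Claude Desktop configuration" mentioned in the setup, the sketch below shows a claude_desktop_config.json entry that bridges Claude Desktop to a remote MCP endpoint via the mcp-remote package. The server URL is a placeholder: use whatever URL your MCP Server Trigger node exposes, and note that this bridging approach is an assumption for illustration, not part of the template itself.

```json
{
  "mcpServers": {
    "klicktipp-n8n": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-n8n-instance.example.com/mcp/your-server-path"]
    }
  }
}
```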
Search & enrich: Smart keyword analysis with Decodo + OpenAI GPT-4.1-mini
Disclaimer Please note - This workflow is only available on n8n self-hosted as it's making use of the community node for Decodo Web Scraping. This workflow automates intelligent keyword and topic extraction from Google Search results, combining Decodo’s advanced scraping engine with OpenAI GPT-4.1-mini’s semantic analysis capabilities. The result is a fully automated keyword enrichment pipeline that gathers, analyzes, and stores SEO-relevant insights. Who this is for This workflow is ideal for: SEO professionals who want to extract high-value keywords from competitors. Digital marketers aiming to automate topic discovery and keyword clustering. Content strategists building data-driven content calendars. AI automation engineers designing scalable web intelligence and enrichment pipelines. Growth teams performing market and search intent research with minimal effort. What problem this workflow solves Manual keyword research is time-consuming and often incomplete. Traditional keyword tools only provide surface-level data and fail to uncover contextual topics or semantic relationships hidden in search results. This workflow solves that by: Automatically scraping live Google Search results for any keyword. Extracting meaningful topics, related terms, and entities using AI. Enriching your keyword list with semantic intelligence to improve SEO and content planning. Storing structured results directly in n8n Data Tables for trend tracking or export. What this workflow does Here’s a breakdown of the flow: Set the Input Fields – Define your search query and target geo (e.g., “Pizza” in “India”). Decodo Google Search – Fetches organic search results using Decodo’s web scraping API. Return Organic Results – Extracts the list of organic results and passes them downstream. Loop Over Each Result – Iterates through every search result description. Extract Keywords and Topics – Uses OpenAI GPT-4.1-mini to identify relevant keywords, entities, and thematic topics from each snippet. Data Enrichment Logic – Checks whether each result already exists in the n8n Data Table (based on URL). Insert or Skip – If a record doesn’t exist, inserts the extracted data into the table. Store Results – Saves both enriched search data and Decodo’s original response to disk. End Result: A structured and deduplicated dataset containing URLs, keywords, and key topics — ready for SEO tracking or further analytics. Setup Pre-requisite If you are new to Decodo, please sign up at visit.decodo.com. Please make sure to install the n8n custom node for Decodo. Import and Configure the Workflow Open n8n and import the JSON template. Add your credentials: Decodo API Key under the Decodo Credentials account. OpenAI API Key under the OpenAI account. Define Input Parameters Modify the Set node to define: search_query: your keyword or topic (e.g., “AI tools for marketing”) geo: the target region (e.g., “United States”) Configure Output The workflow writes two outputs: Enriched keyword data → Stored in n8n Data Table (DecodoGoogleSearchResults). Raw Decodo response → Saved locally in JSON format. Execute Click Execute Workflow or schedule it for recurring keyword enrichment (e.g., weekly trend tracking). How to customize this workflow Change AI Model — Replace gpt-4.1-mini with gemini-1.5-pro or claude-3-opus for testing different reasoning strengths. Expand the Schema — Add extra fields like keyword difficulty, page type, or author info. Add Sentiment Analysis — Chain a second AI node to assess tone (positive, neutral, or promotional).
Export to Sheets or DB — Replace the Data Table node with Google Sheets, Notion, Airtable, or MySQL connectors. Multi-Language Research — Pass a locale parameter in the Decodo node to gather insights in specific languages. Automate Alerts — Add a Slack or Email node to notify your team when high-value topics appear. Summary Search & Enrich is a low-code AI-powered keyword intelligence engine that automates research and enrichment for SEO, content, and digital marketing. By combining Decodo’s real-time SERP scraping with OpenAI’s contextual understanding, the workflow transforms raw search results into structured, actionable keyword insights. It eliminates repetitive research work, enhances content strategy, and keeps your keyword database continuously enriched — all within n8n.
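To picture what the enrichment produces, here is an illustrative shape for one record written to the DecodoGoogleSearchResults Data Table after the "Extract Keywords and Topics" step. The field names and values are examples only; align them with the columns you define in your Data Table. The deduplication check then only inserts a new row when no existing record shares the same url.

```json
{
  "url": "https://example.com/best-ai-marketing-tools",
  "search_query": "AI tools for marketing",
  "geo": "United States",
  "keywords": ["ai marketing tools", "marketing automation", "campaign optimization"],
  "topics": ["AI in marketing", "tool comparison"],
  "entities": ["HubSpot", "Jasper"]
}
```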
Monitor Instagram competitor trends with Claude 3.5 & multi-channel alerts
This enterprise-grade n8n workflow automates competitor monitoring on Instagram — from post fetching to AI-driven strategy alerts — using Claude AI, Instagram API, and multi-channel notifications. It tracks trends, analyzes performance, and delivers actionable insights via WhatsApp and email, keeping your team ahead with zero manual effort. Key Features Daily competitor scanning from Google Sheets Post performance metrics (engagement rate, trends) calculated automatically AI-powered insights using Claude 3.5 Sonnet for content and engagement strategies Dual-channel alerts: WhatsApp (Twilio) and email for instant delivery Audit logs in Google Sheets for historical trends Scalable triggers: Daily schedule or webhook for ad-hoc checks

Workflow Process

| Step | Node | Description |
| ---- | ---- | ----------- |
| 1 | Schedule Trigger | Runs daily at 10 AM or via webhook (/competitor-alert) |
| 2 | Get Competitor List | Loads competitors from Competitors sheet |
| 3 | Loop Over Competitors | Processes each competitor to avoid API limits |
| 4 | Get Competitor Posts | Fetches last 10 posts via Instagram Graph API |
| 5 | Calculate Performance Metrics | Computes avg engagement and trend using Code node |
| 6 | Generate AI Insights (Claude AI) | Analyzes data for 3 strategic bullet-point insights |
| 7 | Send Email Alert | Emails detailed report to team |
| 8 | Send WhatsApp Alert (Twilio) | Sends concise alert via WhatsApp |
| 9 | Log Alert | Records metrics and insights in AlertsLog sheet |
| 10 | End Workflow | Terminates execution |

Setup Instructions Import Workflow Open n8n → Workflows → Import from Clipboard Paste the JSON workflow Configure Credentials

| Integration | Details |
| ----------- | ------- |
| Google Sheets | Service account with spreadsheet access |
| Instagram API | Business access token for media fetching |
| Claude AI | Anthropic API key for claude-3-5-sonnet-20241022 |
| Twilio | Credentials for WhatsApp messaging |
| SMTP/Email | SMTP or Gmail for email alerts |

Update Spreadsheet IDs Ensure your Google Sheets include: Competitors AlertsLog Set Triggers Webhook: /webhook/competitor-alert (for on-demand runs) Schedule: Daily at 10:00 AM Run a Test Use manual execution to confirm: Post fetching and metrics calculation AI insights generation WhatsApp/email delivery and sheet logging

Google Sheets Structure

Competitors

| competitorName | competitorUserId | industryFocus |
| -------------- | ---------------- | ------------- |
| BrandX | 1234567890 | Fashion |

AlertsLog

| competitor | avgEngagement | trend | insights | timestamp |
| ---------- | ------------- | ----- | -------- | --------- |
| BrandX | 75.5 | Rising | - Bullet 1... | 2023-10-01T12:00:00Z |

System Requirements

| Requirement | Version/Access |
| ----------- | -------------- |
| n8n | v1.50+ (AI and messaging integrations supported) |
| Claude AI API | claude-3-5-sonnet-20241022 |
| Instagram Graph API | Business account access token |
| Twilio API | WhatsApp-enabled phone number |
| Google Sheets API | https://www.googleapis.com/auth/spreadsheets |
| SMTP | For email (e.g., Gmail OAuth) |

Optional Enhancements Add visual charts (e.g., engagement trends via Google Charts) Integrate Slack for team-wide alerts Use advanced metrics like reach/impressions via Instagram Insights API Connect CRM (HubSpot) to tag competitors Enable multi-platform monitoring (e.g., TikTok) Add threshold-based alerts (e.g., only if engagement >20% increase) Export insights to Notion or Airtable for strategy docs Result: A single automated system that monitors competitors, uncovers trends, and arms your team with AI strategies — delivered via WhatsApp and email with zero manual work. Get in touch with us for custom n8n automation!
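For step 5 of the workflow table above, the sketch below shows one way the Calculate Performance Metrics Code node could compute the avgEngagement and trend values logged to AlertsLog. It assumes each item carries the competitor's posts with like_count and comments_count fields from the Instagram Graph API; adjust the field names to your actual response.

```javascript
// Illustrative n8n Code node: average engagement + a naive trend label per competitor.
const items = $input.all();

return items.map(item => {
  const posts = item.json.posts || [];                     // assumed field holding the fetched posts
  const scores = posts.map(p => (p.like_count || 0) + (p.comments_count || 0));
  const avg = scores.length ? scores.reduce((a, b) => a + b, 0) / scores.length : 0;

  // Compare the newer half of posts with the older half to label the trend.
  const mid = Math.floor(scores.length / 2);
  const recent = scores.slice(0, mid).reduce((a, b) => a + b, 0);
  const older = scores.slice(mid).reduce((a, b) => a + b, 0);
  const trend = recent > older ? 'Rising' : recent < older ? 'Falling' : 'Stable';

  return {
    json: {
      competitor: item.json.competitorName,
      avgEngagement: Math.round(avg * 10) / 10,
      trend,
    },
  };
});
```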
Analyze food ingredients from Telegram photos using Gemini and Airtable
Analyze food ingredients from Telegram photos using Gemini and Airtable 🛡️ Personal Ingredient Bodyguard Turn your Telegram bot into an intelligent food safety scanner. This workflow analyzes photos of ingredient labels sent via Telegram, extracts the text using AI, and cross-references it against your personal database of "Good" and "Bad" ingredients in Airtable. It solves the problem of manually reading tiny, complex labels for allergies or dietary restrictions. Whether you are Vegan, Halal, allergic to nuts, or just avoiding specific additives, this workflow acts as a strict, personalized bodyguard for your diet. It even features a customizable "Persona" (like a Sarcastic Bodyguard) to make safety checks fun. 🎯 Who is it for? People with specific dietary restrictions (Vegan, Gluten-free, Keto). Individuals with food allergies (Nuts, Dairy, Shellfish). Special dietary observers (Halal, Kosher). Health-conscious shoppers avoiding specific additives (e.g., E120, Aspartame). 🚀 How it works Trigger: You send a photo of a product label to your Telegram Bot. Fetch Rules: The workflow retrieves your active "Watchlist" (Ingredients to avoid/prefer) and "Persona" settings from Airtable. Vision & Logic: It uses an AI Vision model to extract text from the image (OCR) and Google Gemini to analyze the text against your strict veto rules (e.g., "Safe" only if ZERO bad items are found). Response: The bot replies instantly on Telegram with a Safe/Unsafe verdict, highlighting detected ingredients using HTML formatting. Log: The result is saved back to Airtable for your records. ⚙️ How to set up This workflow relies on a specific Airtable structure to function as the "Brain." Set up Airtable Sign up for Airtable: Click here Copy the required Base: Click here to copy the "Ingredients Brain" base Connect Airtable to n8n (5-min guide): Watch Tutorial Set up Telegram Message @BotFather on Telegram to create a new bot and get your API Token. Add your Telegram credentials in n8n. Configure AI Add your Google Gemini API credentials. Note on OCR: This template is configured to use a local LLM for OCR to save costs (via the OpenAI-compatible node). If you do not have a local model running, simply swap the "OpenAI Chat Model" node for a standard GPT-4o or Gemini Vision node. 📋 Requirements n8n (Cloud or Self-hosted) Airtable account (Free tier works) Telegram account Google Gemini API Key Local LLM (Optional, for free OCR) OR OpenAI/Gemini Key (for standard Cloud Vision) 🎨 How to customize Change the Persona: Go to the "Preferences" table in Airtable to change the bot's personality (e.g., "Helpful Nutritionist") and output language. Update Ingredients: Add or remove items in the "Watchlist" table. Mark them as "Good Stuff" or "Bad Stuff" and set Status to "Active". Adjust Sensitivity: The AI prompt in the "AI Agent" node is set to strict "Veto" mode (Bad overrides Good). You can modify the system prompt to change this logic. ⚠️ Disclaimer This tool is for informational purposes only. Not Medical Advice: Do not rely on this for life-threatening allergies. AI Limitations: OCR can misread text, and AI can hallucinate. Verify: Always double-check the physical product label. Use at your own risk.
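To show what the HTML-formatted verdict can look like on the wire, here is an illustrative Telegram sendMessage payload the final reply step might produce. parse_mode: "HTML" and the <b>/<i> tags are standard Telegram Bot API features; the wording, detected ingredients, and persona shown are placeholders, not output from the actual template.

```json
{
  "chat_id": "<chat id from the incoming message>",
  "parse_mode": "HTML",
  "text": "🛡️ <b>VERDICT: NOT SAFE</b>\n\n❌ Found on your Bad Stuff list: <b>gelatin</b>, <b>E120 (carmine)</b>\n✅ Good Stuff detected: none\n\n<i>I'm a bot, not a doctor. Check the physical label.</i>"
}
```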
PartnerStack/Impact → WooCommerce product creation with GPT-4 & AI images
🚀 PartnerStack/Impact → WooCommerce (AI-Powered Product Automation) Turn affiliate programs into fully published WooCommerce products—on autopilot. This n8n template pulls offers from PartnerStack/Impact (or your own links), generates SEO copy and images with AI, and publishes External products to WordPress/WooCommerce—hands-free. --- 🎯 What This Automation Does ⏰ Runs on a schedule (e.g., daily at 10:00 or every 3 days) 📄 Reads rows from Google Sheets (your affiliate product registry) 🔗 Inserts your affiliate link (PartnerStack/Impact/CJ or manual) 🌐 Fetches product/landing page and parses key details 🤖 Uses AI to create product title, short & long HTML description 🖼️ Creates a product image from an AI image prompt 🗂️ Uploads media to WordPress, sets alt/title/caption 🛒 Creates a WooCommerce External product via REST API 🏷️ Applies category by ID and sets the featured image ✅ Marks the row as published to avoid duplicates 🧯 Graceful error handling (failed fetches are flagged & skipped next run) 🛒 Output of WordPress website product: https://brenttechnologies.com/ --- 🧑🏫 Step-by-Step Video Tutorial 🎥 Watch the implementation tutorial: https://youtu.be/Pifwn32vlQk 📌 Live demo: schedule, sheet → product, AI copy/image, REST publish. --- 🌐 Useful Links 🔗 Start with n8n (Cloud or Self-hosted): 👉 https://n8n.io | Guide: https://syncbricks.com/self-hosting-n8n-on-ubuntu-24-04-a-step-by-step-guide/ 🧠 OpenAI API (text + image): 👉 https://platform.openai.com/docs 🤝 PartnerStack: 👉 https://partnerstack.com 📄 Google Sheets API: 👉 https://developers.google.com/sheets/api --- 🛠 Prerequisites ✅ n8n (self-hosted or Cloud) ✅ WordPress + WooCommerce with REST API enabled ✅ WordPress Application Password / API credential with media & products scope ✅ OpenAI (or your preferred AI) API key ✅ Google Sheet with minimum columns: AdvertiserIdandCompaignID (unique key) AdvertiserUrl (merchant/product URL) TrackingLink (your affiliate URL) Brand partner_status (e.g., “Active”) product_published (Yes/blank) error (Yes/blank) --- 📋 Step-by-Step Implementation 1️⃣ Scheduling & Intake Add a Schedule Trigger (daily, every 3 days, or your cadence). Google Sheets → Read: pull rows from your “impact/partnerstack” tab. Filter rows: partner_status = Active AND product_published != Yes AND error != Yes. Limit to 1 per run (safe scaling). Increase later if needed. 2️⃣ Product Discovery & Parsing HTTP Request the AdvertiserUrl to fetch the landing/product page. Parse title/meta/summary; continue on error and flag failures (so runs don’t break). 3️⃣ AI Content Generation Agent 1 – Basics: Product Name, Short Description, Category ID mapping (match your Woo categories). Agent 2 – Long Form: SEO-ready HTML description (H2/H3, bullets, features, benefits, target users, technical highlights). Agent 3 – Image Prompt: Generate a clean product-hero prompt; then AI Image generation. 4️⃣ Media Handling Upload media to WordPress (/wp/v2/media) with title/alt/caption and proper content-type. Capture the media ID for the next step. 5️⃣ WooCommerce Product Creation Create product via POST /wc/v3/products with: type: "external" name, short_description, description external_url: TrackingLink button_text: "Sign Up" (or “Buy Now” / “Get It Now”) status: "publish" (or “draft” if you want manual review) Attach featured image with the uploaded media ID. Set categories by ID (ensure your mapping is correct). See the example payload sketch just below.
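Here is the example payload referenced in step 5: a minimal sketch of the JSON body for POST /wp-json/wc/v3/products when creating the External product. The field names are standard WooCommerce REST API v3 product fields; every value shown (the copy, URL, category ID 42, media ID 1234) is a placeholder to replace with data from the previous nodes.

```json
{
  "type": "external",
  "name": "Example SaaS Partner Plan",
  "status": "publish",
  "short_description": "<p>AI-generated summary of the offer.</p>",
  "description": "<h2>Overview</h2><p>Long-form, SEO-ready copy from the AI agent.</p>",
  "external_url": "https://partner.example.com/?ref=your-tracking-link",
  "button_text": "Sign Up",
  "categories": [{ "id": 42 }],
  "images": [{ "id": 1234 }]
}
```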
6️⃣ Post-Publish Updates Update the Google Sheet row using the unique key (AdvertiserIdandCompaignID): product_published = Yes Clear/reset any transient error flags. 7️⃣ Error Handling & Idempotency If fetch fails (e.g., Cloudflare/Turnstile), mark error = Yes and skip that row next run. Filter excludes error = Yes and already-published rows—no accidental duplicates. --- 💰 Optional Monetization & Distribution Blog & Social: Add branches to generate a blog article, LinkedIn/Twitter posts from the same product data. Video: Create a short promo video (Sora/Gen-AI) and auto-schedule to TikTok/YouTube/Instagram. Email: Trigger a campaign (e.g., Brevo/Mailchimp) for new products added this week. --- 💡 Advanced Customizations Draft workflow: publish products as draft for human QA. Category Mapper: expand the category ID table to fit your Woo taxonomy. Image sizing: add an optional resize/optimize node before upload. Batching: remove the Limit node to publish multiple products per run (respect rate limits). Per-brand theming: vary prompts (tone/structure) based on Brand column. --- 🧰 Troubleshooting | Issue | Fix | | ----------------------------- | ---------------------------------------------------------------------------------------- | | WordPress 401/403 | Re-create Application Password; ensure correct base URL & permalinks | | Image upload fails | Check content-type & binary upload settings; verify max upload size on server | | Product lacks image/category | Confirm media ID capture & category ID mapping | | Duplicate items | Ensure product_published is set to Yes after success; keep Limit node during testing | | Fetch errors on merchant site | Keep “continue on error”; route to set error = Yes, then review manually | --- 🙌 Why Use This Template ⏱️ Saves hours of manual listing work 📈 SEO-consistent product pages every time 🧠 AI-quality copy & images with your tone 🔗 Affiliate link everywhere, fully tracked 🛠️ Extensible to blog, social, video, and email --- 🚀 Get Started Now Import the template → Connect credentials → Point to your Sheet → Run once → Enable schedule. Need help or a DFY build? SyncBricks can implement and customize this for your stack. 👉 Amjid Ali — https://linkedin.com/in/amjidali 🌐 Website — https://amjidali.com | https://syncbricks.com.au ▶️ YouTube — https://youtube.com/@syncbricks --- Knowledge Base: woocommerce-rest, partnerstack, impact, affiliate-automation, openai, n8n, google-sheets, content-generation, image-generation, wordpress-api, creator-hub
Automate YP.com directory scraping to Google Sheets using BrowserAct
Automate Directory Scraping to Google Sheets using BrowserAct This n8n template helps you generate local business leads by automatically scraping online directories and saving the results directly to a spreadsheet. This workflow is perfect for sales teams, marketing agencies, or anyone looking to build a list of local business leads by scraping online directories like YP.com. --- Self-Hosted Only This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only. --- How it works The workflow is triggered manually. You can set the business_category and city_location inputs in the "Run a workflow task" node. A BrowserAct node initiates the web scraping task on your BrowserAct account using the template specified. A second BrowserAct node ("Get details of a workflow task") patiently waits for the scraping job to finish before allowing the workflow to proceed. A Code node takes the raw output from the scraper (which is a single JSON string) and correctly parses it, splitting the data into individual items for each business (a sketch of this parsing step appears at the end of this description). Finally, a Google Sheets node appends or updates each business as a new row in your spreadsheet, matching on "Company Name" to prevent duplicates. --- Requirements BrowserAct API account for web scraping BrowserAct "Online Directory Lead Scraper (YP.com)" Template BrowserAct n8n Community Node -> (n8n Nodes BrowserAct) Google Sheets credentials for saving the leads --- Need Help? How to Find Your BrowserAct API Key & Workflow ID How to Connect n8n to BrowserAct How to Use & Customize BrowserAct Templates How to Use the BrowserAct N8N Community Node --- Workflow Guidance and Showcase STOP Manual Leads! Automate Lead Gen with BrowserAct & n8n
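As promised above, here is a minimal sketch of the Code node that parses the scraper's single JSON string into one n8n item per business. The name of the field holding the raw string and the business property names are assumptions; inspect the BrowserAct node's actual output and adjust accordingly.

```javascript
// Illustrative n8n Code node: split the raw BrowserAct result into one item per business.
const raw = $input.first().json.output;                  // assumed field containing the JSON string
const businesses = typeof raw === 'string' ? JSON.parse(raw) : raw;

return (businesses || []).map(b => ({
  json: {
    'Company Name': b.name,                              // Google Sheets matches rows on this column
    'Phone': b.phone,
    'Address': b.address,
    'Website': b.website,
  },
}));
```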