🔐🦙🤖 Private & local Ollama self-hosted AI assistant
Transform your local n8n instance into a powerful chat interface using any local & private Ollama model, with zero cloud dependencies ☁️. This workflow creates a structured chat experience that processes messages locally through a language model chain and returns formatted responses 💬.

How it works 🔄
- 💭 Chat messages trigger the workflow
- 🧠 Messages are processed through Llama 3.2 via Ollama (or any other Ollama-compatible model)
- 📊 Responses are formatted as structured JSON
- ⚡ Error handling ensures robust operation

Set up steps 🛠️
- 📥 Install n8n and Ollama
- ⚙️ Download the Llama 3.2 model (or another model)
- 🔑 Configure Ollama API credentials
- ✨ Import and activate the workflow

This template provides a foundation for building AI-powered chat applications while maintaining full control over your data and infrastructure 🚀.
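The "structured JSON" and "error handling" steps can be sketched as an n8n Code-node snippet. The `formatResponse` helper and its field names are illustrative assumptions, not the template's actual code:

```javascript
// Wrap the raw model reply in a structured JSON envelope; on any failure,
// return a structured error instead of crashing the workflow run.
function formatResponse(item) {
  try {
    const text = item && typeof item.text === "string" ? item.text : null;
    if (text === null) throw new Error("empty model output");
    return {
      success: true,
      response: text.trim(),
      timestamp: new Date().toISOString(),
    };
  } catch (err) {
    // Error branch keeps downstream nodes working with a predictable shape
    return { success: false, error: err.message };
  }
}
```

In an n8n Code node, you would apply this over `$input.all()` and return the resulting items.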
Vector database as a big data analysis tool for AI agents [3/3 - anomaly]
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any image dataset, and hence to many production use cases.

1. Uploading (image) datasets to Qdrant
2. Setting up meta-variables for anomaly detection in Qdrant
3. Anomaly detection tool
4. KNN classifier tool

For anomaly detection:
- The first pipeline uploads an image dataset to Qdrant.
- The second pipeline sets up cluster (class) centres & cluster (class) threshold scores needed for anomaly detection.
- This is the third pipeline --- the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly to the uploaded dataset.

For KNN (k-nearest-neighbours) classification:
- The first pipeline uploads an image dataset to Qdrant.
- The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API & Google Cloud Storage.

[This workflow] Anomaly Detection Tool
This is the tool that can be used directly to detect anomalous images (crops). It takes any image URL as input and returns a text message telling whether whatever this image depicts is anomalous to the crop dataset stored in Qdrant. An image URL is received via the Execute Workflow Trigger, and is used to generate embedding vectors with the Voyage AI Embeddings API. The returned vectors are used to query the Qdrant collection to determine if the given crop is known by comparing it to the threshold scores of each image class (crop type).
If the image scores lower than all thresholds, the image is considered an anomaly for the dataset.
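The final threshold comparison can be sketched as a small helper. The score/threshold object shapes here are assumptions for illustration; the template performs this logic inside its Qdrant query handling:

```javascript
// bestScoreByClass: the image's highest similarity score against each class
// centre; thresholdByClass: per-class cutoffs computed by the second pipeline.
// The image is an anomaly when it clears no class threshold.
function isAnomaly(bestScoreByClass, thresholdByClass) {
  return Object.entries(bestScoreByClass).every(
    ([cls, score]) => score < (thresholdByClass[cls] ?? Infinity)
  );
}
```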
Automate restaurant sales & inventory forecasting with Gemini AI & Google Sheets
This automated n8n workflow performs weekly forecasting of restaurant sales and raw material requirements using historical data from Google Sheets and AI predictions powered by Google Gemini. The forecast is then emailed to stakeholders for efficient planning and waste reduction.

What is Google Gemini AI?
Google Gemini is an advanced AI model that analyzes historical sales data, seasonal patterns, and market trends to generate accurate forecasts for restaurant sales and inventory requirements, helping optimize purchasing decisions and reduce waste.

Good to Know
- Google Gemini AI forecasting accuracy improves over time with more historical data
- Weekly forecasting provides better strategic planning compared to daily predictions
- Google Sheets access must be properly authorized to avoid data sync issues
- Email notifications ensure timely review of weekly forecasts by stakeholders
- The system analyzes trends and predicts upcoming needs for efficient planning and waste reduction

How It Works
1. Trigger Weekly Forecast - Automatically starts the workflow every week at a scheduled time
2. Load Historical Sales Data - Pulls weekly sales and material usage data from Google Sheets
3. Format Input for AI Agent - Transforms raw data into a structured format suitable for the AI Agent
4. Generate Forecast with AI - Uses Gemini AI to analyze trends and predict upcoming needs
5. Interpret AI Forecast Output - Parses the AI's response into readable, usable JSON format
6. Log Forecast to Google Sheets - Stores the new forecast data back into a Google Sheet
7. Email Forecast Summary - Sends a summary of the forecast via Gmail for stakeholder review

Data Sources
The workflow utilizes Google Sheets as the primary data source:

Historical Sales Data Sheet - Contains weekly sales and inventory data with columns:
- Week/Date (date)
- Menu Item (text)
- Sales Quantity (number)
- Revenue (currency)
- Raw Material Used (number)
- Inventory Level (number)
- Category (text)

Forecast Output Sheet - Contains AI-generated predictions with columns:
- Forecast Week (date)
- Menu Item (text)
- Predicted Sales (number)
- Recommended Inventory (number)
- Material Requirements (number)
- Confidence Level (percentage)
- Notes (text)

How to Use
1. Import the workflow into n8n
2. Configure Google Sheets API access and authorize the application
3. Set up Gmail credentials for forecast report delivery
4. Create the required Google Sheets with the specified column structures
5. Configure Google Gemini AI API credentials
6. Test with sample historical sales data to verify predictions and email delivery
7. Adjust forecasting parameters based on your restaurant's specific needs
8. Monitor and refine the system based on actual vs. predicted results

Requirements
- Google Sheets API access
- Gmail API credentials
- Google Gemini AI API credentials
- Historical sales and inventory data for initial training

Customizing This Workflow
Modify the Generate Forecast with AI node to focus on specific menu categories, seasonal adjustments, or local market conditions. Adjust the email summary format to match your restaurant's reporting preferences and add additional data sources like supplier information, weather data, or a special events calendar for more accurate predictions.
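The "Format Input for AI Agent" step can be sketched as a Code-node transform over the Historical Sales Data rows. The grouping shape is an assumption for illustration, not the template's actual code; the column names match the sheet described above:

```javascript
// Group weekly sheet rows by menu item so the Gemini prompt can present each
// item's week-by-week sales and material-usage trend.
function formatForAgent(rows) {
  const byItem = {};
  for (const row of rows) {
    const item = row["Menu Item"];
    if (!byItem[item]) byItem[item] = [];
    byItem[item].push({
      week: row["Week/Date"],
      sales: Number(row["Sales Quantity"]),
      materialUsed: Number(row["Raw Material Used"]),
    });
  }
  return byItem;
}
```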
Comprehensive customer support with OpenAI O3 + GPT-4.1-mini multi-agent team
Support Director Agent with Customer Support Team

Description
Complete AI-powered customer support department with a Support Director agent orchestrating specialized support team members for comprehensive customer service operations.

Overview
This n8n workflow creates a comprehensive customer support department using AI agents. The Support Director agent analyzes support requests and delegates tasks to specialized agents for tier 1 support, technical assistance, customer success, knowledge management, escalation handling, and quality assurance.

Features
- Strategic Support Director agent using OpenAI O3 for complex support decision-making
- Six specialized support agents powered by GPT-4.1-mini for efficient execution
- Complete customer support lifecycle coverage from first contact to resolution
- Automated technical troubleshooting and documentation creation
- Customer success and retention strategies
- Escalation management for priority issues
- Quality assurance and performance monitoring

Team Structure
- Support Director Agent: Strategic support oversight and task delegation (O3 model)
- Tier 1 Support Agent: First-line support, basic troubleshooting, account assistance
- Technical Support Specialist: Complex technical issues, API debugging, integrations
- Customer Success Advocate: Onboarding, feature adoption, retention strategies
- Knowledge Base Manager: Help articles, FAQs, documentation creation
- Escalation Handler: Priority issues, VIP customers, crisis management
- Quality Assurance Specialist: Support quality monitoring, performance analysis

How to Use
1. Import the workflow into your n8n instance
2. Configure OpenAI API credentials for all chat models
3. Deploy the webhook for chat interactions
4. Send support requests via chat (e.g., "Customer can't connect to our API endpoint")
5. The Support Director will analyze and delegate to appropriate specialists
6. Receive comprehensive support solutions and documentation

Use Cases
- Complete Support Cycle: Inquiry triage → Resolution → Follow-up → Quality review
- Technical Documentation: API troubleshooting guides, integration manuals
- Customer Onboarding: Welcome sequences, feature tutorials, training materials
- Escalation Management: VIP support protocols, complaint resolution procedures
- Quality Monitoring: Response evaluation, team performance analytics
- Knowledge Base: Self-service content creation, FAQ optimization

Requirements
- n8n instance with LangChain nodes
- OpenAI API access (O3 for Support Director, GPT-4.1-mini for specialists)
- Webhook capability for chat interactions
- Optional: Integration with CRM, helpdesk, or ticketing systems

Cost Optimization
- O3 model used only for strategic Support Director decisions
- GPT-4.1-mini provides 90% cost reduction for specialist tasks
- Parallel processing enables simultaneous agent execution
- Solution template library reduces redundant response generation

Integration Options
- Connect to helpdesk systems (Zendesk, Freshdesk, Intercom, etc.)
- Integrate with CRM platforms (Salesforce, HubSpot, etc.)
- Link to knowledge base systems (Confluence, Notion, etc.)
- Connect to monitoring tools for proactive support

Building Blocks Disclaimer
Important Note: This workflow is designed as a foundational building block for your customer support automation. While it provides a comprehensive multi-agent framework, you may need to customize prompts, add specific integrations, or modify agent behaviors to match your exact business requirements and support processes. Consider this a starting point that can be extended and tailored to your unique customer support needs.

Contact & Resources
Website: nofluff.online
YouTube: @YaronBeen
LinkedIn: Yaron Been

Tags
CustomerSupport HelpDesk TechnicalSupport CustomerSuccess SupportAutomation QualityAssurance KnowledgeManagement EscalationManagement ServiceExcellence CustomerExperience n8n OpenAI MultiAgentSystem SupportTech CX Troubleshooting CustomerCare SupportOps
Create stunning AI images directly from WhatsApp with Gemini
📱🤖 Create Stunning AI Images Directly from WhatsApp with Gemini

This workflow transforms your WhatsApp into a personal AI image generation studio. Simply send a text message with your idea, and this bot will use the advanced prompt engineering capabilities of Gemini 2.5 Pro to craft a detailed, artistic prompt. It then uses Gemini 2.0 Flash to generate a high-quality image from that prompt and sends it right back to your chat. It's a powerful yet simple way to bring your creative ideas to life, all from the convenience of your favorite messaging app!

What this workflow does
1. Listens for WhatsApp Messages: The workflow starts automatically when you send a message to your connected WhatsApp number.
2. Enhances Your Idea with AI: It takes your basic text (e.g., "a knight on a horse") and uses Gemini 2.5 Pro to expand it into a rich, detailed prompt perfect for image generation (e.g., "A cinematic, full-body shot of a stoic knight in gleaming, ornate silver armor, riding a powerful black warhorse through a misty, ancient forest. The scene is lit by ethereal morning sunbeams piercing the canopy, creating dramatic volumetric lighting and long shadows. Photorealistic, 8K, ultra-detailed, award-winning fantasy concept art.").
3. Generates a Unique Image: It sends this enhanced prompt to the Google Gemini 2.0 Flash image generation API.
4. Prepares the Image: The API returns the image in Base64 format, and the workflow instantly converts it into a binary file.
5. Sends it Back to You: The final, high-quality image is sent directly back to you in the same WhatsApp chat.

Nodes Used
- 🟢 WhatsApp Trigger: The entry point that listens for incoming messages.
- 🧠 LangChain Chain (LLM): Uses Gemini 2.5 Pro for advanced prompt engineering.
- ➡️ HTTP Request: Calls the Google Gemini 2.0 Flash API to generate the image.
- 🔄 Convert to File: Converts the Base64 image data into a sendable file format.
- 💬 WhatsApp: Sends the final image back to the user.

Prerequisites
To use this workflow, you will need:
- An n8n instance.
- A WhatsApp Business Account connected to n8n. You can find instructions on how to set this up in the n8n docs.
- A Google Gemini API Key. You can get one for free from Google AI Studio.

How to use this workflow
1. Get your Google Gemini API Key: Visit Google AI Studio and create a new API key.
2. Configure the Gemini 2.5 Pro Node: In the n8n workflow, select the Gemini 2.5 Pro node. Under 'Connect your account', click 'Create New' to add a new credential. Paste your Gemini API key from the previous step and save.
3. Configure the Generate Image (HTTP Request) Node: Select the Generate Image node. In the properties panel on the right, find the Query Parameters section. In the 'Value' field for the key parameter, replace "Your API Key" with your actual Google Gemini API Key.
4. Connect WhatsApp: Select the WhatsApp Trigger node. Follow the instructions to connect your WhatsApp Business Account credential. If you haven't created one, the node will guide you through the process.
5. Activate and Test: Save the workflow using the button at the top right. Activate the workflow using the toggle switch. Send a message to your connected WhatsApp number (e.g., "A futuristic city in the style of Van Gogh"). The bot will process your request and send a stunning AI-generated image right back to you!
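The "Prepares the Image" step (Base64 to binary) reduces to a one-line conversion in a Code node. Where the Base64 string sits in the Gemini response is an assumption to verify against your actual API output:

```javascript
// Convert the Base64 image string returned by the image generation API into
// a binary Buffer that n8n can attach as a file and send over WhatsApp.
function base64ToBuffer(base64Data) {
  return Buffer.from(base64Data, "base64");
}
```

In n8n you would typically pass the result to the binary data helpers so the WhatsApp node can send it as media.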
Predict restaurant food waste with Gemini AI and Google Sheets reporting
This automated n8n workflow performs daily forecasting of sales and raw material needs for a restaurant. By analyzing historical data and predicting future usage with AI, businesses can minimize food waste, optimize inventory, and improve operational efficiency. The forecast is stored in Google Sheets and sent via email for easy review by staff and management.

What is the AI Forecast Generator?
The AI Forecast Generator is a machine learning component that analyzes historical sales data, weather patterns, and seasonal trends to predict future food demand and recommend optimal inventory levels to minimize waste.

Good to Know
- AI forecasting accuracy improves over time with more historical data
- Weather and seasonal factors significantly impact food demand predictions
- Google Sheets access must be properly authorized to avoid data sync issues
- Email notifications help ensure timely review of daily forecasts
- The system works with two main data sources: historical food wastage data and predicted low-waste food requirements

How It Works
1. Daily Trigger - Initiates the workflow every day to perform food waste prediction
2. Fetch Historical Sales Data - Reads past food usage & sales data from Google Sheets to understand trends
3. Format Data for AI Forecasting - Cleans and organizes raw data into a structured format for AI processing
4. AI Forecast Generator - Uses Gemini AI to forecast food demand and recommend waste reduction strategies
5. Clean & Structure AI Output - Parses the AI response into a structured, actionable format for reporting
6. Log Forecast to Google Sheets - Stores the AI-generated forecast back into Google Sheets for historical tracking
7. Create Email Summary - Creates a concise, human-friendly summary of the forecast findings
8. Send Email Forecast Report - Delivers the forecast report via email to decision makers and management

Data Sources
The workflow utilizes two Google Sheets:

Food Wastage Data Sheet - Contains historical data with columns:
- Date (date)
- Food Item (text)
- Quantity Wasted (number)
- Cost Impact (currency)
- Category (text)
- Reason for Waste (text)

Predicted Food Data Sheet - Contains AI predictions with columns:
- Date (date)
- Food Item (text)
- Predicted Demand (number)
- Recommended Order Quantity (number)
- Waste Risk Level (text)
- Optimization Notes (text)

How to Use
1. Import the workflow into n8n
2. Configure Google Sheets API access and authorize the application
3. Set up email credentials for forecast report delivery
4. Create the two required Google Sheets with the specified column structures
5. Configure the AI model credentials (Gemini API key)
6. Test with sample historical data to verify predictions and email delivery
7. Adjust forecasting parameters based on your restaurant's specific needs
8. Monitor and refine the system based on actual vs. predicted results

Requirements
- Google Sheets API access
- Email service credentials (Gmail, SMTP, etc.)
- AI model API credentials (Gemini AI)
- Historical food wastage data for initial training

Customizing This Workflow
Modify the AI Forecast Generator prompts to focus on specific food categories, seasonal adjustments, or local market conditions. Adjust the email summary format to match your restaurant's reporting preferences and add additional data sources like supplier information or menu planning data.
Auto-generate blog posts with GPT, Leonardo AI, and Wordpress publishing
Trigger & Topic Extraction
The workflow starts manually or from a chat/Telegram/webhook input. A "topic extractor" node scans the incoming text and cleans it (handles /topic … commands). If no topic is detected, it defaults to a sample news headline.

Style & Structure Setup
A style guide node defines the blog's tone: practical, medium–low formality, clear sections, clean HTML only. It also enforces do's (citations, links, actionable steps) and don'ts (no clickbait, no low-quality sources).

Research & Drafting
A GPT node generates a 1,700–1,800 word article following the style guide. Sections include: What happened, Why it matters, Opportunities/risks, Action plan, FAQ. The draft is then polished for clarity and flow.

Quality Control
A word count guard checks that the article is at least 1,600 words. If too short, a GPT "expand draft" node deepens the Why it matters, Risks, and Action plan sections.

Image Creation
The final article's title and content are used to generate an editorial-style image prompt. Leonardo AI creates a cinematic, text-free featured image suitable for Google News/Discover. The image is uploaded to WordPress, with proper ALT text generated by GPT.

Publishing to WordPress
The final post (title, content, featured image) is automatically published. Sources are extracted from the article and compiled into a "Sources" section with clickable links. Posts are categorized and published immediately.
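The word-count guard can be sketched as a Code-node check on the drafted HTML. The function name and the tag-stripping approach are illustrative assumptions:

```javascript
// Strip HTML tags, count the remaining words, and flag drafts under the
// minimum so the workflow can route them to the "expand draft" GPT node.
function needsExpansion(articleHtml, minWords = 1600) {
  const text = articleHtml.replace(/<[^>]+>/g, " ");
  const wordCount = text.trim().split(/\s+/).filter(Boolean).length;
  return wordCount < minWords;
}
```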
Financial analysis report chatbot agent with Gemini 2.5 Flash and Discord
This n8n template turns chat questions into structured financial reports using Gemini and posts them to a Discord channel via webhook. Ask about tickers, sectors, or theses (e.g., "NVDA long‑term outlook?" or "Gold ETF short‑term drivers?") and receive a concise, shareable report.

Good to know
- Not financial advice: use for insights only; verify independently.
- Model availability can vary by region. If you see "model not found," it may be geo‑restricted.
- Costs depend on model and tokens. Check current Gemini pricing for updates.
- Discord messages are limited to ~2000 characters per post; long reports may need splitting.
- Rate limits: Discord webhooks are rate‑limited; add short waits for bursts.

How it works
1. Chat Trigger collects the user's question (public chat supported when the workflow is activated).
2. Conversation Memory keeps a short window of recent messages to maintain context.
3. Connect Gemini provides the LLM (e.g., gemini‑2.5‑flash‑lite) and parameters (temperature, tokens).
4. Agent (agent1) applies a financial analysis System Message to produce structured insights.
5. Structured Output Parser enforces a simple JSON schema: idea (one‑line thesis) + analysis (Markdown sections).
6. Code formats a Discord‑ready Markdown report (title, question, executive summary, sections, disclaimer).
7. Edit Fields maps the formatted report to a clean content field.
8. Discord Webhook posts the final report to your channel.

How to use
- Start with the built‑in Chat Trigger: click Open chat, ask a question, and verify the Discord post.
- Replace or augment with a Cron or Webhook trigger for scheduled or programmatic runs.
- For richer context, add HTTP Request nodes (prices, news, filings) and pass summaries to the agent.

Requirements
- n8n instance with internet access
- Google AI (Gemini) API key
- Discord server with a webhook URL

Customising this workflow
- System Message: adjust tone, depth, risk profile, and required sections (Summary, Drivers, Risks, Metrics, Next Steps, Takeaway).
- Model settings: switch models or tune temperature/tokens in Connect Gemini.
- Schema: extend the parser and formatter with fields like drivers[], risks[], or metrics{}.
- Formatting: edit the Code node to change headings, emojis, disclaimers, or add timestamps.
- Operations: add retries, message splitting for long outputs, and rate‑limit handling for Discord.
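The message splitting suggested under Operations can be sketched as a helper that breaks a report at line boundaries so each chunk stays under Discord's ~2000-character limit. The function name is illustrative; note that a single line longer than the limit would still need further splitting:

```javascript
// Split a Markdown report into chunks at newline boundaries, each at most
// `limit` characters, so every chunk fits in one Discord webhook post.
function splitForDiscord(report, limit = 2000) {
  const chunks = [];
  let current = "";
  for (const line of report.split("\n")) {
    if (current && (current + "\n" + line).length > limit) {
      chunks.push(current);
      current = line;
    } else {
      current = current ? current + "\n" + line : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk would then be posted in its own webhook request, ideally with a short wait between posts to respect rate limits.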
Public holiday lookup with Nager.Date API via webhook
This n8n template empowers you to instantly fetch a list of public holidays for any given year and country using the Nager.Date API. This is incredibly useful for scheduling, planning, or integrating holiday data into various business and personal automation workflows.

---

🔧 How it works
1. Receive Holiday Request Webhook: This node acts as the entry point, listening for incoming POST requests. It expects a JSON body containing the year (e.g., 2025) and countryCode (e.g., US for United States, PH for Philippines, DE for Germany) for which you want to retrieve public holidays.
2. Get Public Holidays: This node makes an HTTP GET request to the Nager.Date API (date.nager.at). It dynamically uses the year and countryCode from your webhook request to query the API. The API responds with a JSON array, where each object represents a public holiday with details like its date, name, and type.
3. Respond with Holiday Data: This node sends the full list of public holidays received from Nager.Date back to the service that initiated the webhook.

---

👤 Who is it for?
This workflow is ideal for:
- Businesses with International Operations: Automatically check holidays for different country branches to adjust production schedules, customer service hours, or delivery estimates.
- HR & Payroll Departments: Accurately calculate workdays, plan leave schedules, or process payroll taking public holidays into account.
- Event Planners: Avoid scheduling events on public holidays, which could impact attendance or venue availability.
- Travel Agencies: Inform clients about holidays in their destination country that might affect local business hours or attractions.
- Content & Social Media Schedulers: Plan content around national holidays to maximize engagement or avoid insensitive postings.
- Personal Productivity & Travel Planning: Integrate holiday data into your calendar or task management tools to plan trips or personal time off more effectively.
- Developers: Easily integrate a reliable source of public holiday data into custom applications, dashboards, or internal tools without managing complex datasets.

---

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

{
  "year": 2025,
  "countryCode": "PH"
}

(Example country codes: "US", "DE", "GB", etc. You can find a comprehensive list of supported country codes in the Nager.Date API documentation: https://www.nager.at/Country)

The workflow will return a JSON array, where each element is a holiday object, like this example for a single holiday:

[
  {
    "date": "2025-01-01",
    "localName": "New Year's Day",
    "name": "New Year's Day",
    "countryCode": "PH",
    "fixed": true,
    "global": true,
    "counties": null,
    "launchYear": null,
    "types": ["Public"]
  }
]

---

⚙️ Setup Instructions
1. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Receive Holiday Request Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /public-holidays).
3. Activate Workflow: Save and activate the workflow.

---

📝 Tips
This workflow is a foundation for many powerful automations:
- Conditional Branching for Specific Holidays: Add an IF node after "Get Public Holidays" to check for a specific holiday (e.g., "Christmas Day"). You can then trigger different actions (e.g., send a reminder, adjust a schedule) only for that particular holiday.
- Filtering and Aggregating Data: Use a Filter node to keep only holidays of a certain type (e.g., "Public"). Use a Code or Function node to count the number of public holidays, or extract just the names and dates into a simpler list.
- Storing Holiday Data: Google Sheets/Airtable: automatically append new holidays to a spreadsheet for easy reference or further analysis. Database: store holiday data in a database (like PostgreSQL or MySQL) to build a custom holiday calendar application.
- Scheduling and Reminders: Connect this workflow to a Cron or Schedule node to run periodically (e.g., once a year at the start of the year). Use the retrieved holiday dates to set up reminders in your calendar (Google Calendar node) or send notifications (Slack, Email, SMS) a few days before an upcoming holiday.
- Integrate with Business Logic: Employee Leave Management: cross-reference employee leave requests with public holidays to ensure accuracy. Automated Messages: schedule automated "Happy Holiday" messages to customers or employees. E-commerce Shipping: adjust estimated shipping times based on upcoming non-working days.
- API Key (not needed for the Nager.Date free tier): The Nager.Date API used here does not require an API key for basic public holiday lookups, which makes this template very easy to use out of the box.
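The filtering-and-aggregating tip can be sketched as a Code-node helper over the holiday objects returned by the API (the function name is illustrative):

```javascript
// Keep only holidays whose types include "Public", then reduce each one to a
// simple { date, name } pair for downstream nodes (sheets, reminders, etc.).
function publicHolidays(holidays) {
  return holidays
    .filter((h) => Array.isArray(h.types) && h.types.includes("Public"))
    .map((h) => ({ date: h.date, name: h.name }));
}
```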
Generate & publish professional video ads with Veo 3, Gemini & Creatomate
Professional AI-Powered Ad Generator

Creating high-quality video ads has traditionally been one of the biggest challenges for small businesses and marketers. It usually requires a production team, editing software, and days of effort before anything is ready to publish. That makes consistency almost impossible, especially if you want to publish new ads every day.

This workflow changes the process entirely. Built in n8n, the Professional AI-Powered Ad Generator combines AI and automation to handle everything from script creation to video rendering and social media publishing. The goal is simple: allow you to create and distribute short, engaging video ads in minutes, without relying on a studio, editors, or complicated tools.

---

Why This Workflow Matters
Most businesses know the importance of consistent advertising, but producing daily video content has always felt out of reach. This workflow makes it practical by:
- Automating creative tasks that normally require a writer or video editor.
- Integrating multiple AI tools into one connected system.
- Publishing directly to social media so your ads reach people without extra steps.

Instead of spending hours editing or outsourcing to agencies, you can now go from idea to published ad in the same amount of time it takes to write a short post.

---

How It Works
The process starts with a simple form where you enter your business details: name, service, offer, and call-to-action. From there, the workflow runs automatically:
1. Script Planning: AI generates a short, two-part script that feels natural and relatable. Part one introduces a problem; part two presents your business as the solution, ending with a strong CTA.
2. Video Generation with Google Veo 3: Each part of the script is turned into an 8-second cinematic video clip, visually optimized for vertical platforms.
3. Character Consistency with Google Gemini: Characters are analyzed and matched across both clips, creating a smooth narrative with visual continuity.
4. Assembly with Creatomate: Both clips are merged into a 16-second vertical ad, automatically formatted for TikTok, Instagram, and YouTube Shorts.
5. Automated Publishing with Postiz: The final video is scheduled and posted to your connected social accounts. Captions and titles are generated automatically. No manual uploads or extra steps.

---

What You Gain
- ✅ Daily publishing at scale: generate and release multiple ads per day.
- ⏱️ Time savings: go from idea to ad in just minutes.
- 📈 Consistency: maintain a professional, frequent social presence.
- 💰 Cost efficiency: no production crew, no software, no outsourcing.
- 🧩 Flexibility: ideal for local businesses, service providers, and agencies.

---

Honest Advantage
This workflow doesn't replace creativity; it makes it practical. You still define the message: the service, the offer, the story. But the heavy lifting (scripting, video production, visual continuity, and publishing) is automated. You focus on ideas, while the system handles execution.

---

Conclusion
The Professional AI-Powered Ad Generator is not a promise of magic. It's a dependable system that helps small teams and solo entrepreneurs consistently create and publish professional ads, without the usual bottlenecks.

> From a simple brief to a polished, published ad — all in one automated flow. Advertising, simplified. Automation, scaled.
Automated structured data extract & summary via Decodo + Gemini & Google Sheets
Who this is for This workflow is designed for: Automation engineers building AI-powered data pipelines Product managers & analysts needing structured insights from web pages Researchers & content teams extracting summaries from documentation or articles HR, compliance, and knowledge teams converting unstructured web content into structured records n8n self-hosted users leveraging advanced scraping and LLM enrichment It is ideal for anyone who wants to transform any public URL into structured data + clean summaries automatically. What problem this workflow solves Web content is often unstructured, verbose, and inconsistent, making it difficult to: Extract structured fields reliably Generate consistent summaries Reuse data across spreadsheets, dashboards, or databases Eliminate manual copy-paste and interpretation This workflow solves the problem of turning arbitrary web pages into machine-readable JSON and human-readable summaries, without custom scrapers or manual parsing logic. What this workflow does The workflow integrates Decodo, Google Gemini, and Google Sheets to perform automated extraction of structured data. Here’s how it works step-by-step: Input Setup The workflow begins when the user executes it manually or passes a valid URL. The input includes url. Profile Extraction with Decodo Accepts any valid URL as input Scrapes the page content using Decodo Uses Google Gemini to: Extract structured data in JSON format Generate a concise, factual summary Cleans and parses AI-generated JSON safely Merges structured data and summary output Stores the final result in Google Sheets for reporting or downstream automation JSON Parsing & Merging The Code Node cleans and parses the JSON output from the AI for reliable downstream use. The Merge Node combines both structured data and the AI-generated summary. Data Storage in Google Sheets The Google Sheets Node appends or updates the record, storing the structured JSON and summary into a connected spreadsheet. 
End Output A unified, machine-readable data in JSON + an executive-level summary suitable data analysis or downstream automation. Setup Instructions Prerequisites If you are new to Decode, please signup on this link visit.decodo.com n8n account with workflow editor access Decodo API credentials - You need to register, login and obtain the Basic Authentication Token via Decodo Dashboard Google Gemini (PaLM) API access Google Sheets OAuth credentials Setup Steps Import the workflow into your n8n instance. Configure Credentials Add your Decodo API credentials in the Decodo node. Connect your Google Gemini (PaLM) credentials for both AI nodes. Authenticate your Google Sheets account. Edit Input Node In the Set the Input Fields node, replace the default URL with your desired profile or dynamic data source. Run the Workflow Trigger manually or via webhook integration for automation. Verify that structured profile data and summary are written to the linked Google Sheet. How to customize this workflow to your needs You can easily extend or adapt this workflow: Modify Structured Output Change the Gemini extraction prompt to match your own JSON schema Add required fields such as authors, dates, entities, or metadata Improve Summarization Adjust summary length or tone (technical, executive, simplified) Add multi-language summarization using Gemini Change Output Destination Replace Google Sheets with: Databases (Postgres, MySQL) Notion Slack / Email File storage (JSON, CSV) Add Validation or Filtering Insert IF nodes to: Reject incomplete data Detect errors or hallucinated output Trigger alerts for malformed JSON Scale the Workflow Replace manual trigger with: Webhook Scheduled trigger Batch URL processing Summary This workflow provides a powerful, generic solution for converting unstructured web pages into structured, AI-enriched datasets. 
By combining Decodo for scraping, Google Gemini for intelligence, and Google Sheets for persistence, it enables repeatable, scalable, and production-ready data extraction without custom scrapers or brittle parsing logic.
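The validation-and-filtering customization described above can be sketched as a Code-node-style check that runs before the Google Sheets node. The field names (`url`, `structuredData`, `summary`) are assumptions for illustration, not the template's actual schema:

```javascript
// Hypothetical record validator: rejects incomplete or malformed records
// so only clean rows reach the spreadsheet. An IF node can branch on `valid`.
function validateRecord(record) {
  const required = ['url', 'structuredData', 'summary'];
  const missing = required.filter((key) => record[key] == null || record[key] === '');
  if (missing.length > 0) {
    return { valid: false, reason: 'missing fields: ' + missing.join(', ') };
  }
  if (typeof record.structuredData !== 'object') {
    return { valid: false, reason: 'structuredData is not parsed JSON' };
  }
  return { valid: true };
}

const row = { url: 'https://example.com', structuredData: { title: 'Example' }, summary: 'ok' };
console.log(validateRecord(row).valid); // prints: true
```

The `reason` string can feed an alerting branch (Slack, email) when malformed JSON is detected.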
Synthesize and compare multiple LLM responses with OpenRouter council
This template adapts Andrej Karpathy's LLM Council concept for use in n8n, creating a workflow that collects, evaluates, and synthesizes multiple large language model (LLM) responses to reduce individual model bias and improve answer quality.

🎯 The gist
This LLM Council workflow acts as a moderation board for multiple LLM "opinions":
- The same question is answered independently by several models.
- All answers are anonymized.
- Each model then evaluates and ranks all responses.
- A designated Council Chairman model synthesizes a final verdict based on these evaluations.

The final output includes:
- The original query
- The Chairman's verdict
- The ranking of each response by each model
- The original responses from all models

The goal is to reduce single-model bias and arrive at more balanced, objective answers.

🧰 Use cases
This workflow enables several practical applications:
- Receiving more balanced answers by combining multiple model perspectives
- Benchmarking and comparing LLM responses
- Exploring diverse viewpoints on complex or controversial questions

⚙️ How it works
The workflow leverages OpenRouter, allowing access to many LLMs through a single API credential. In the Initialization node, you define:
- Council member models: models that answer the query and later evaluate all responses
- Chairman model: the model responsible for synthesizing the final verdict

Any OpenRouter-supported model can be used: https://openrouter.ai/models

For simplicity, input is provided via a Chat Input trigger, and output is sent via an email node with a structured summary of the council's results.

👷 How to use
Select the LLMs to include in your council:
- Council member models: models that independently answer and evaluate the query. The default template uses openai/gpt-4o, google/gemini-2.5-flash, anthropic/claude-sonnet-4.5, and perplexity/sonar-pro-search.
- Chairman model: choose a model with a sufficiently large context window to process all evaluations and rankings.

Start the Chat Input trigger.
Observe the workflow execution and review the synthesized result in your chosen output channel.

⚠️ Avoid using too many models simultaneously. The total context size grows quickly (n responses + n² evaluations), which may exceed the Chairman model's context window.

🚦 Requirements
- OpenRouter API access configured in n8n credentials
- SMTP credentials for sending the final council output by email (or replace with another output method)

🤡 Customizing this workflow
- Replace the Chat Input trigger with alternatives such as Telegram, email, or WhatsApp.
- Redirect output to other channels instead of email.
- Modify council member and chairman models directly in the Initialization node by updating their OpenRouter model names.
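The "n responses + n² evaluations" warning can be made concrete with a rough back-of-envelope estimate. The token counts below are illustrative assumptions, not measurements from the template:

```javascript
// Rough sketch of how the Chairman's context grows with council size n:
// each member writes one answer, then each member evaluates all n answers,
// so the Chairman sees roughly n answers plus n * n evaluations.
function estimateCouncilTokens(n, answerTokens, evalTokens) {
  return n * answerTokens + n * n * evalTokens;
}

// Illustrative sizes: 800-token answers, 300-token evaluations
console.log(estimateCouncilTokens(4, 800, 300)); // prints: 8000
console.log(estimateCouncilTokens(8, 800, 300)); // prints: 25600
```

Doubling the council from 4 to 8 members more than triples the Chairman's input here, which is why the quadratic evaluation term dominates as the council grows.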