Multi-AI agent chatbot for Postgres/Supabase DB and QuickChart + tool router
## Multi-AI Agent Chatbot for Postgres/Supabase Databases and QuickChart Generation

### Who is this for?
This workflow is ideal for data analysts, developers, and business intelligence teams who need an AI-powered chatbot to query Postgres/Supabase databases and generate dynamic charts for data visualization.

### What problem does this solve?
It simplifies data exploration by combining conversational AI with database querying and chart generation. Users can interact with their database using natural language, retrieve insights, and visualize data without manual SQL queries or chart configuration.

### What this workflow does
- **AI-powered chat interface**: Accepts natural language prompts to query databases or generate charts, and routes user requests through a tool agent system to determine the appropriate action (query or chart).
- **Database querying**: Executes SQL queries on Postgres/Supabase databases based on user input, and retrieves schema information, table definitions, and specific data records.
- **Dynamic chart generation**: Uses QuickChart to create bar charts, line charts, or other visualizations from database records, and outputs a shareable chart URL or a JSON configuration for further customization (see the sketch after this section).
- **Memory integration**: Maintains chat history using Postgres memory nodes, enabling context-aware interactions.

*Workflow diagram showcasing AI agents, database querying, and chart generation paths.*

### Setup
Prerequisites:
- A Postgres-compatible database (e.g., Supabase).
- API credentials for OpenAI.

Configuration steps:
1. Add your database connection credentials in the Postgres nodes.
2. Set up OpenAI credentials for GPT-4o-mini in the language model nodes.
3. Adjust the QuickChart schema in the "QuickChart Object Schema" node to fit your use case.

Testing:
- Trigger the chat workflow via the "When chat message received" node.
- Test with prompts like "Generate a bar chart of sales data" or "Show me all users in the database."

### How to customize this workflow
- Modify AI prompts
- Add chart types
- Integrate other tools
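As a rough illustration of the chart-generation step, here is a minimal sketch of how a Chart.js config can be turned into a shareable QuickChart URL. The labels and values are hypothetical stand-ins for query results; the actual schema lives in the "QuickChart Object Schema" node.

```javascript
// A minimal sketch, assuming QuickChart's GET endpoint, which renders the
// Chart.js config passed in the `c` query parameter. Labels and data here
// are hypothetical stand-ins for values returned by the SQL query.
const config = {
  type: "bar", // or "line", "pie", ...
  data: {
    labels: ["Jan", "Feb", "Mar"],
    datasets: [{ label: "Sales", data: [1200, 1900, 800] }],
  },
};

const chartUrl =
  "https://quickchart.io/chart?c=" + encodeURIComponent(JSON.stringify(config));
// chartUrl is a shareable image link the agent can return to the user; the
// raw `config` object can be returned instead for further customization.
```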
Extract and store YouTube video comments in Google Sheets
This n8n template demonstrates how to crawl the comments of YouTube videos and collect all the results in a linked Google Sheet.

Use cases are many: whether you're a YouTube creator trying to understand your audience, a marketer running sample analysis, a data analyst compiling engagement metrics, or part of a growth team tracking YouTube or social media campaign performance, this workflow helps you extract real, actionable insights from YouTube video comments at scale.

## How It Works
1. The workflow starts when you manually click Test Workflow or Execute Workflow in n8n.
2. It reads the list of YouTube video URLs from the Video URLs tab of the connected "YouTube - Get Video Comments" Google Sheet. Only the URLs marked with the Ready status are processed.
3. The workflow loops through each video and sends an HTTP request to the YouTube API to fetch comment data, then checks whether the request succeeded before continuing (see the sketch after this section).
4. If comments are found, they are split and processed. Each comment is inserted into the Results tab of the connected Google Sheet.
5. Once a URL has been processed, its status in the Video URLs tab is updated to Finished.

## How To Use
1. Download the workflow package and import it into your n8n interface.
2. Duplicate the "YouTube - Get Video Comments" Google Sheet template into your Google Sheets account.
3. Set up Google Cloud Console credentials in the following nodes in n8n, ensuring access and suitable rights to the Google Sheets and YouTube services:
   - For Google Sheets access, ensure each node is properly connected to the correct tab of your copied sheet:
     - Google Sheets - Get Video URLs → the Video URLs tab
     - Google Sheets - Insert/Update Comment → the Results tab
     - Google Sheets - Update Status → the Video URLs tab
   - For YouTube access: set up a GET method in the HTTP Request - Get Comments node.
4. Open the template in your Google Sheets account. In the Video URLs tab, fill in the video URLs you want to crawl in Column B and set the status for each row in Column A to Ready.
5. Return to the n8n interface and click Execute Workflow.
6. Check the Results tab of the template - the collected comments will appear there.

## Requirements
Basic setup in Google Cloud Console (OAuth or API key method enabled) with access to the YouTube and Google Sheets services.

## How To Customize
By default, the workflow is triggered manually in n8n. However, you can automate the process by adding a Google Sheets trigger that monitors new entries in your connected "YouTube - Get Video Comments" template and starts the workflow automatically.

## Need Help?
Join our community on different platforms for support, inspiration and tips from others.
- Website: https://www.agentcircle.ai/
- Etsy: https://www.etsy.com/shop/AgentCircle
- Gumroad: http://agentcircle.gumroad.com/
- Discord Global: https://discord.gg/d8SkCzKwnP
- FB Page Global: https://www.facebook.com/agentcircle/
- FB Group Global: https://www.facebook.com/groups/aiagentcircle/
- X: https://x.com/agent_circle
- YouTube: https://www.youtube.com/@agentcircle
- LinkedIn: https://www.linkedin.com/company/agentcircle
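For orientation, here is a hedged sketch of what the HTTP Request - Get Comments call can look like, assuming the public YouTube Data API v3 commentThreads endpoint; the video ID, key variable, and row mapping are illustrative, not the template's exact configuration.

```javascript
// Sketch only: assumes YouTube Data API v3 with an API key (Node.js 18+).
// The video ID is a placeholder for the value parsed from the sheet's Column B.
const videoId = "VIDEO_ID";
const url =
  "https://www.googleapis.com/youtube/v3/commentThreads" +
  `?part=snippet&maxResults=100&videoId=${videoId}&key=${process.env.YT_API_KEY}`;

const response = await fetch(url);
if (!response.ok) throw new Error(`YouTube API error: ${response.status}`);

const data = await response.json();
// Top-level comments sit under snippet.topLevelComment; one sheet row per comment
const rows = data.items.map((item) => {
  const c = item.snippet.topLevelComment.snippet;
  return {
    author: c.authorDisplayName,
    text: c.textDisplay,
    likes: c.likeCount,
    publishedAt: c.publishedAt,
  };
});
```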
Analyze images from forms using GPT-4o-mini Vision and deliver to Telegram
This workflow analyzes images submitted via a form using OpenAI Vision, then delivers the analysis result directly to your Telegram chat.

Use case examples:
- Users submit screenshots for instant AI interpretation
- Automated document or receipt analysis with Telegram delivery
- Quick OCR or image classification workflows

## Setup Guide
1. **Form submission trigger**: Connect your form app (e.g. Typeform, Tally, or n8n's own webhook form) to the On form submission trigger node, and ensure it sends the image file or URL as input.
2. **OpenAI Vision analysis**: In the OpenAI node, select the Analyze Image operation. Provide your OpenAI API key and configure the prompt to instruct the model on what to analyze (e.g. "Describe this receipt in detail"). A sketch of the underlying API call follows this section.
3. **Set Telegram chat ID**: Use this manual node to input your Telegram chat ID for delivery. Alternatively, automate this with a database lookup or user session if building for multiple users.
4. **Telegram delivery node**: Connect your Telegram bot to n8n using your bot token, then set up the sendMessage operation, using the analysis result from the previous node as the message text.
5. **Testing**: Click Execute workflow, submit an image via your form, and confirm it delivers to your Telegram as expected.
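For reference, this is a minimal sketch of the kind of request the OpenAI node issues when analyzing an image, assuming the Chat Completions API with image input; the model name, prompt, and image URL are illustrative.

```javascript
// Hedged sketch of an image-analysis call (Node.js 18+); the n8n OpenAI node
// performs an equivalent request internally when you pick Analyze Image.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Describe this receipt in detail" },
        // Placeholder URL: in the workflow this comes from the form submission
        { type: "image_url", image_url: { url: "https://example.com/receipt.jpg" } },
      ],
    }],
  }),
});
const { choices } = await response.json();
const analysis = choices[0].message.content; // passed on to the Telegram node
```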
Create a Telegram bot with Mistral Nemotron AI and conversation memory
## Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers such as OpenAI, Anthropic, Google AI, or any other API-based AI model.

### How it works
The workflow creates an intelligent Telegram bot that:
- Maintains conversation history for each user
- Provides contextual AI responses using any AI API service
- Handles different message types and commands
- Manages chat sessions with a clear-history function
- Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

### Set up steps

Prerequisites:
- Telegram bot token (from @BotFather)
- AI API key (from any AI service provider)
- n8n instance with webhook capability

Configuration steps:
1. **Create the Telegram bot**: Message @BotFather on Telegram, create a new bot with the /newbot command, and save the bot token for credentials setup.
2. **Choose your AI provider**: OpenAI (API key from the OpenAI platform), Anthropic (sign up for Claude API access), Google AI (Gemini API key), NVIDIA (access to LLaMA models), Hugging Face (inference API), or any other AI API service.
3. **Set up credentials in n8n**: Add Telegram API credentials with your bot token, add Bearer Auth/API key credentials for your chosen AI service, and test both connections.
4. **Deploy the workflow**: Import the workflow JSON, customize the AI API call (see the customization section), activate the workflow, and set the webhook URL in your Telegram bot settings.

### Features

Core functionality:
- **Smart message routing**: Automatically categorizes incoming messages (commands, text, non-text)
- **Conversation memory**: Maintains chat history for each user (last 10 messages)
- **AI-powered responses**: Integrates with any AI API service for intelligent replies
- **Command support**: Built-in /start and /clear commands

Message types handled:
- **Text messages**: Processed through the AI model with context
- **Commands**: Special handling for bot commands
- **Non-text messages**: Polite error message for unsupported content

Memory management:
- User-specific chat history storage
- Automatic history trimming (keeps the last 10 messages)
- Global state management across workflow executions

### Bot commands
- /start - welcome message with bot introduction
- /clear - clears conversation history for a fresh start
- Regular text - processed by the AI with conversation context

### Technical details

Workflow structure:
1. Telegram Trigger - receives all incoming messages
2. Message Filtering - routes messages based on type/content
3. History Management - maintains conversation context
4. AI Processing - generates intelligent responses
5. Response Delivery - sends formatted replies back to the user

AI API integration (customizable). Current example (NVIDIA):
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words

Easy to replace with any AI service.

OpenAI example:

```json
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Anthropic Claude example:

```json
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
```

Google Gemini example:

```json
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
```

Error handling:
- Non-text message detection and appropriate responses
- API failure handling
- Invalid command processing

### Customization options

AI provider switching - to use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
1. Change the URL in the HTTP Request node
2. Update the request body format in the "Prepare API Request" node
3. Update the authentication method if needed
4. Adjust response parsing in the "Save AI Response to History" node

AI behavior:
- Modify the system prompt in the "Prepare API Request" node
- Adjust temperature and response parameters
- Change response length limits
- Customize model-specific parameters

Memory settings (see the sketch after this section):
- Adjust the history length (currently 10 messages)
- Modify the user identification logic
- Customize the data persistence approach

Bot personality:
- Update the welcome message content
- Customize error messages and responses
- Add new command handlers

### Use cases
- **Customer support**: Automated first-line support with context awareness
- **Educational assistant**: Homework help and learning support
- **Personal AI companion**: General conversation and assistance
- **Business assistant**: FAQ handling and information retrieval
- **AI API testing**: Perfect template for testing different AI services
- **Prototype development**: Quick AI chatbot prototyping

### Notes
- Requires an active n8n instance for webhook handling
- AI API usage may have rate limits and costs (varies by provider)
- Bot memory persists across workflow restarts
- Supports multiple concurrent users with separate histories
- The template is provider-agnostic - easily switch between AI services
- A perfect starting point for any AI-powered Telegram bot project

### Popular AI services you can use

| Provider | Model Examples | API Endpoint Style |
|----------|----------------|--------------------|
| OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
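To picture the memory settings concretely, here is a hedged sketch of history-trimming logic in an n8n Code node, assuming per-user histories live in workflow static data; the field names are illustrative, not necessarily the template's exact implementation.

```javascript
// Sketch only: per-user history in workflow static data, trimmed to 10 entries.
// Note: static data persists only for active (production) workflow executions.
const staticData = $getWorkflowStaticData("global");
const chatId = $json.message.chat.id;   // user identification (Telegram chat)
const text = $json.message.text;

staticData.histories = staticData.histories || {};
const history = staticData.histories[chatId] || [];

// Append the new user message, then keep only the last 10 messages
history.push({ role: "user", content: text });
staticData.histories[chatId] = history.slice(-10);

return [{ json: { chatId, messages: staticData.histories[chatId] } }];
```

Raising the `slice(-10)` window or keying on a different identifier is how you would adjust history length and user identification.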
Export Cloudflare domains with DNS records and settings to Google Sheets
## How it works
This workflow exports all your Cloudflare domains to a Google Sheet, giving you a high-level overview of all of your settings. This helps with easy debugging, searching, or similar needs.

The flow uses simple paging nodes to iterate over all your domains, because this list can be large. For each host it merges the DNS records and zone settings and transforms them into columns for all your domains (see the sketch after this section).

## Requirements
For storing and processing the data in this flow you will need:
- A Cloudflare API key/token with full access, for retrieving your data (https://dash.cloudflare.com/:account/api-tokens)
- Google Sheets credentials connected in your n8n Credentials
- The Google Sheets template - you can copy my sheet as a starting point; copy it to your account, then match the Sheet ID in the 'Export' node to your newly created copy

## Official Cloudflare API documentation
For full details and specifications, please use the API documentation at https://developers.cloudflare.com/api/

## Potential API timeouts
If you encounter Cloudflare API timeouts, put a simple sleep/wait node somewhere in the loop - a couple of seconds should resolve them.

## Google Sheet
I've used the simple Google Sheets conditional formatting feature to visually distinguish the on/off toggles I was interested in, making it easy to get a high-level overview when debugging settings on my hosts - but please use your own logic or change it completely.
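As a hedged illustration of the paging step, this sketch walks the zone list with the public Cloudflare v4 API; the token variable and per-page size are assumptions, and the DNS/settings endpoints mentioned in the comments are where the merge step pulls its data.

```javascript
// Sketch of the paging loop over zones (Node.js 18+), assuming a Cloudflare
// API token with read access; endpoints follow the public v4 API.
const headers = {
  Authorization: `Bearer ${process.env.CF_API_TOKEN}`,
  "Content-Type": "application/json",
};

let page = 1;
const zones = [];
while (true) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones?page=${page}&per_page=50`,
    { headers },
  );
  const body = await res.json();
  zones.push(...body.result);
  // result_info reports pagination state for list endpoints
  if (page >= body.result_info.total_pages) break;
  page++;
}

// For each zone, DNS records live under /zones/{zone_id}/dns_records
// and zone settings under /zones/{zone_id}/settings.
```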
Parse CVs from emails with OCR & GPT for Notion database
This workflow allows you to automate candidate retrieval and onboarding in your HR processes.

## How it works
1. It monitors a Gmail address for new emails with a PDF attachment.
2. It expects the PDF to be a candidate's CV, extracts the text using OCR, and then structures the data using ChatGPT.
3. Once the data is processed, it connects to Notion and adds (or updates) an entry in the specified database.

## How to use
1. Configure your Gmail account and provide your ChatGPT API key.
2. Provide an API key for the OCR service in a variable named OCRSPACEAPI_KEY.
3. Connect your Notion account.
4. Once everything is configured, the workflow will monitor your inbox for new emails. Just send an email with a PDF attachment to the configured address.

## Requirements
In addition to Gmail, ChatGPT, and Notion, the system uses a third-party OCR API (OCR.space). You'll need to create an account and obtain an API key. A sketch of the OCR call follows this section.

You must map the fields returned by ChatGPT to the Notion database, or use the same field names we are using.

## Customising
It should be easy to replace Notion with PostgreSQL or another database if needed.
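For orientation, here is a hedged sketch of the OCR step, assuming OCR.space's parse endpoint and the OCRSPACEAPI_KEY variable mentioned above; the CV URL is a hypothetical placeholder (in the workflow the attachment is sent as binary data instead).

```javascript
// Sketch only (Node.js 18+): OCR.space parse call; field names follow their
// public docs. The URL below is a hypothetical stand-in for the attachment.
const form = new FormData();
form.append("apikey", process.env.OCRSPACEAPI_KEY);
form.append("url", "https://example.com/candidate-cv.pdf");
form.append("filetype", "PDF");

const res = await fetch("https://api.ocr.space/parse/image", {
  method: "POST",
  body: form,
});
const result = await res.json();

// The extracted text is then handed to ChatGPT for structuring
const cvText = result.ParsedResults.map((r) => r.ParsedText).join("\n");
```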
SOL trading recommendations w/ multi-timeframe analysis using Gemini & Telegram
## Try It Out!
This workflow builds a Telegram-based Solana (SOL/USDT) multi-timeframe AI market analyzer that automatically pulls live candlestick data for SOL/USDT, runs structured multi-timeframe technical analysis (1-minute, 5-minute, 1-hour) through an AI agent, and posts a professional, JSON-structured analysis and trading recommendation straight to your Telegram chat. It combines market data aggregation, LLM-driven interpretation, and instant Telegram reporting, giving you concise, actionable market intelligence every hour.

## How It Works
1. **Hourly trigger** - the workflow runs once per hour to pull fresh market data.
2. **Market data fetch** - three HTTP requests gather candlesticks from CryptoCompare: 1-minute (last 60 candles), 5-minute aggregate (last 60 aggregated candles), and 1-hour (last 60 candles).
3. **Merge & transcribe** - the three feeds are merged and a lightweight code node extracts the symbol, current price, and arrays for data1m, data5m, and data_1h (see the sketch after this section).
4. **AI agent analysis** - the LLM (configured via your model node) receives the merged payload and runs a structured multi-timeframe technical analysis, returning a strict JSON report containing: per-timeframe analysis (momentum, volume, S/R, MA, volatility), market structure / confluence findings, a trading recommendation (action, entry, stop, TPs, position sizing), and a final disclaimer.
5. **Parse AI output** - extracts the JSON block from the agent's reply and validates/parses it for downstream formatting.
6. **Telegram reporting** - sends two nicely formatted Telegram messages: the multi-timeframe breakdown (1m / 5m / 1h), and the market structure plus trading recommendation (TPs, SL, position size, disclaimer).

## How to Use
1. Import the workflow into your n8n workspace (or replicate the nodes shown in the JSON).
2. Add credentials:
   - CryptoCompare API key - for reliable candlestick data.
   - LLM model credentials - e.g., Google Gemini / OpenAI, configured in the LangChain/LM node.
   - Telegram bot token & chat ID - to send messages.
   - (Optional) AFK Crypto API key if you want to enrich data with wallet info later.
3. Node mapping & endpoints:
   - Fetch_1m → GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60
   - Fetch_5m → GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60&aggregate=5
   - Fetch_1h → GET https://min-api.cryptocompare.com/data/v2/histohour?fsym=SOL&tsym=USDT&limit=60
   - Merge → combine the three responses into a single payload.
   - Transcribe (code) → extract the last close as the current price and attach the arrays.
   - AI Agent → pass the structured prompt (the system message instructs the exact JSON structure).
   - Parse AI Output → extract the fenced JSON block and JSON.parse it.
   - Telegram nodes → format and send two messages (timeframes and recommendation).
4. Adjust the analysis frequency: the default is hourly; change the Schedule Trigger node as desired.
5. Deploy and activate: the workflow will post an AI-driven SOL/USDT market analysis to your Telegram chat every hour.

## (Optional) Extend This Workflow
- Add price / orderbook enrichment (e.g., AFK price endpoints or exchange orderbooks) to improve context.
- Add wallet exposure checks (AFK wallet balances) to tailor position sizing suggestions.
- Store AI reports in Notion / Google Sheets for historical auditing and backtesting.
- Add alert filtering to only post when the LLM flags high-confidence signals or confluence across timeframes.
- Expose Telegram commands to request on-demand analysis (e.g., /analyze now 5m).
- Add risk management logic to convert the LLM recommendation into automated orders (careful - this requires manual review and stronger safety controls).

## Safety Mechanisms
- **Explicit system prompt** - forces the AI to output only the exact JSON structure, avoiding free-form text parsing errors.
- **JSON parser node** - validates the agent response and throws if it is malformed, before any downstream action.
- **Read-only market analysis** - the workflow only reports by default (no auto-trading), reducing operational risk.
- **Credentials gated** - ensure LLM and Telegram credentials are stored securely in n8n.
- **Disclaimer** - every report includes a legal/financial disclaimer from the agent.

## Requirements
- CryptoCompare API key (for minute/hour candlesticks)
- LLM model credentials (Google Gemini / OpenAI / another model supported in your LangChain node)
- Telegram bot token + chat ID (where analysis messages are posted)
- Optional: AFK Crypto API key if you plan to add wallet/position context
- n8n instance with the HTTP Request, Code, Merge, LangChain/Agent, and Telegram nodes enabled

## AFK / External APIs Used
- CryptoCompare candles:
  - GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60 (1m)
  - GET https://min-api.cryptocompare.com/data/v2/histominute?fsym=SOL&tsym=USDT&limit=60&aggregate=5 (5m)
  - GET https://min-api.cryptocompare.com/data/v2/histohour?fsym=SOL&tsym=USDT&limit=60 (1h)
- Telegram Bot API - via the n8n Telegram node.
- LLM / LangChain - your chosen LLM provider (configured in the workflow).

## Summary
The Solana (SOL/USDT) Multi-Timeframe AI Market Analyzer (Telegram) gives you hourly, professional multi-timeframe technical analysis generated by an LLM agent using real candlestick data from CryptoCompare. It combines the speed of automated data collection with the structure and reasoning of an AI analyst, delivering clear trading recommendations and a timestamped analysis to your Telegram chat - ideal for traders who want reliable, concise market intelligence without manual charting.

---
Our Website: https://afkcrypto.com/
Check our blogs: https://www.afkcrypto.com/blog
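As a hedged sketch of the Transcribe code node, this is one way the merged CryptoCompare payloads can be reduced to the fields the AI agent expects, assuming the three responses arrive as items in 1m / 5m / 1h order; names mirror the payload described above.

```javascript
// Sketch only: n8n Code node (run once for all items). CryptoCompare v2
// responses nest the candle array under Data.Data.
const [oneMin, fiveMin, oneHour] = $input.all().map((i) => i.json);

const data1m = oneMin.Data.Data;
const data5m = fiveMin.Data.Data;
const data_1h = oneHour.Data.Data;

return [{
  json: {
    symbol: "SOL/USDT",
    current_price: data1m[data1m.length - 1].close, // last 1m close
    data1m,
    data5m,
    data_1h,
  },
}];
```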
Generate news cards from Spotify emotions with LLM, Google News and APITemplate.io
## Workflow Overview
Title: Spotify Emotion-to-News Card Generator (APITemplate.io + Slack)

What it does: this workflow analyzes the emotion of your recently played Spotify track using OpenRouter (LLM), fetches a related trending Google News article, generates a visual news card with APITemplate.io, and posts it to Slack.

## Who's it for
Music lovers, marketers, and developers who want to automatically turn their listening mood into a visual daily digest or Slack update.

## How it works
1. **Spotify trigger** - fetch your recently played tracks.
2. **LLM (emotion analyzer)** - infer the main emotion from the track title and artist.
3. **Google News query** - build an RSS URL based on the emotion keyword (see the sketch after this section).
4. **RSS reader** - retrieve trending news headlines.
5. **APITemplate.io** - render the top article into an image card.
6. **Slack** - post the title, link, and card image to your channel.

## Requirements
- Spotify API credentials
- OpenRouter API key
- APITemplate.io account (with a template ID)
- Slack OAuth2 connection

## How to customize
- Replace the APITemplate.io template ID with your own.
- Adjust the RSS URL language (hl=en-US → hl=ja-JP for Japanese news).
- Modify the Slack message text for your preferred channel tone.

## Disclaimer
If you use community nodes (LangChain), this template is for self-hosted n8n only.
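For illustration, here is a minimal sketch of the Google News query step, assuming the standard news.google.com RSS search format; the emotion keyword is a hypothetical LLM output.

```javascript
// Sketch only: build a Google News RSS search URL from the emotion keyword.
const emotion = "nostalgic"; // hypothetical output of the LLM emotion analyzer

const rssUrl =
  "https://news.google.com/rss/search" +
  `?q=${encodeURIComponent(emotion)}` +
  "&hl=en-US&gl=US&ceid=US:en"; // swap hl/gl/ceid for other locales, e.g. ja-JP

// The RSS Reader node fetches rssUrl; the top item's title and link are then
// rendered into the APITemplate.io news card and posted to Slack.
```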
Automated LinkedIn lead generation & AI personalized outreach with Apollo & Instantly
## LinkedIn Scraper + Apollo + Apify + AI Outreach Workflow

A fully automated, end-to-end B2B lead generation and AI-powered outreach system built using n8n, Apollo, Apify, OpenAI, Tavily, Google Sheets, and Instantly.ai. This workflow transforms a simple form submission (job titles, company size, keywords, and location) into a complete sales-ready outreach pipeline.

## What This Workflow Does
This automation:
1. Collects targeting criteria via an n8n form (job title, keywords, location, company size, campaign ID).
2. Generates a fully structured Apollo search URL using an LLM.
3. Triggers an Apify Apollo scraper actor and retrieves rich lead data.
4. Cleans, normalizes, and structures each lead using OpenAI.
5. Uploads validated lead data to Google Sheets, ensuring no duplicates (see the sketch after this section).
6. Runs deep company research using Tavily to retrieve: a company overview, website product/service descriptions, recent website news or blog posts, and third-party sentiment from G2/Reddit.
7. Synthesizes all company information into a comprehensive summary.
8. Generates a personalized cold-email body using OpenAI, tailored to each lead.
9. Uploads each lead plus its personalized message into an Instantly.ai campaign.
10. Loops through all leads automatically until the campaign is fully populated.

## Why This Workflow Is Powerful
- Removes 100% of manual scraping.
- Creates high-quality, personalized outreach at scale.
- Ensures every lead has a verified email, company insights, and personalized messaging.
- Produces higher reply rates through contextual relevance.
- Fully modular - replace models, adjust prompts, or add CRM integrations.

## Ideal Use Cases
- Agency founders running outbound campaigns for clients.
- SaaS founders targeting specific industries.
- B2B marketers wanting automated lead feeds.
- SDR teams scaling multistep personalized outreach.

## Final Result
A continuous, automated pipeline that scrapes leads, enriches them, researches their companies, generates personalized messages, and adds them to Instantly campaigns - all triggered by a single form submission.
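To illustrate the duplicate guard before the Google Sheets upload, here is a hedged sketch of a dedup step in an n8n Code node, keyed on email; the referenced node name "Google Sheets - Read Leads" is hypothetical and would need to match your own workflow.

```javascript
// Sketch only: drop leads whose email already exists in the sheet.
// "Google Sheets - Read Leads" is a hypothetical node name for the read step.
const existingEmails = new Set(
  $("Google Sheets - Read Leads")
    .all()
    .map((i) => (i.json.email || "").toLowerCase()),
);

// Keep only incoming leads with a non-empty, previously unseen email
const newLeads = $input.all().filter(
  (item) => item.json.email && !existingEmails.has(item.json.email.toLowerCase()),
);

return newLeads;
```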