Build a customer support RAG agent with GPT-5, Telegram & Pinecone
🧠 RAG-Based Customer Support Agent (GPT-5 + Telegram)

Description:
This workflow builds a powerful Retrieval-Augmented Generation (RAG) Customer Support Agent that interacts with users directly through Telegram using the GPT-5 model. It combines real-time conversational capabilities with context-aware responses by leveraging vector search via Pinecone, making it ideal for automated, intelligent support systems.
Watch the video tutorial and build on workflows like these: https://www.youtube.com/@Automatewithmarc
💬 Key Features:
- Telegram Integration: Listens to customer queries via the Telegram Trigger node and sends back intelligent responses in the same chat.
- GPT-5 Agent (LangChain): A powerful AI agent node orchestrates the conversation using OpenAI's GPT-5 model.
- Contextual Memory: A Memory Buffer stores the last 15 interactions per user to provide more personalized and coherent multi-turn conversations.
- RAG with Pinecone: Integrates with Pinecone to fetch relevant answers from your "Customer FAQ" vector namespace, enabling grounded and accurate responses (see the retrieval sketch below).
- Embeddings Generation: Uses OpenAI's Embeddings node to process and vectorize documents for retrieval.
- End-to-End AI Pipeline: Connects all components from input to output, providing seamless and intelligent customer support.
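To make the retrieval step concrete, here is a minimal Python sketch of what the Embeddings and Pinecone nodes do together, assuming the official `openai` and `pinecone` Python clients. The index name, metadata key, and `top_k` value are illustrative placeholders, not the template's exact configuration.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("customer-support")            # hypothetical index name

def retrieve_faq_context(question: str, top_k: int = 3) -> list[str]:
    # 1. Embed the user's question with the same model used when indexing the FAQ.
    embedding = openai_client.embeddings.create(
        model="text-embedding-ada-002",
        input=question,
    ).data[0].embedding

    # 2. Query the "Customer FAQ" namespace for the closest FAQ chunks.
    results = index.query(
        vector=embedding,
        top_k=top_k,
        namespace="Customer FAQ",
        include_metadata=True,
    )
    # Assumes each vector was stored with its source text under a "text" metadata key.
    return [match.metadata["text"] for match in results.matches]
```

The key constraint, repeated in the prerequisites further down, is that the query-time embedding model must match the one used when the FAQ documents were indexed.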
🔧 Tech Stack:
- GPT-5 via OpenAI API
- Pinecone vector store (namespace: Customer FAQ)
- Telegram Bot API
- LangChain agent, memory, and embedding tools
- n8n self-hosted or cloud instance
📌 Ideal Use Cases:
- Automated customer support for e-commerce, SaaS, or community support
- FAQ bots with up-to-date product or policy documents
- Multilingual support agents (customizable via GPT-5)
🛠️ Setup Instructions:
1. Set up your Telegram bot and insert its credentials.
2. Add your OpenAI and Pinecone API keys.
3. Upload or index your support documents into the Customer FAQ namespace on Pinecone (see the indexing sketch below).
4. Deploy and test your Telegram bot.
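Step 3 (indexing your documents) can be done inside n8n or with a short script. The sketch below shows one hedged way to do it outside n8n with the `openai` and `pinecone` Python clients; the document contents, IDs, index name, and the `text` metadata key are illustrative assumptions, and real documents should be chunked rather than embedded whole.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("customer-support")  # hypothetical index

# Illustrative FAQ snippets; in practice you would chunk real support documents.
faq_documents = {
    "returns-policy": "Customers may return items within 30 days of delivery...",
    "shipping-times": "Standard shipping takes 3-5 business days...",
}

vectors = []
for doc_id, text in faq_documents.items():
    embedding = openai_client.embeddings.create(
        model="text-embedding-ada-002",          # must match the model used at query time
        input=text,
    ).data[0].embedding
    vectors.append({"id": doc_id, "values": embedding, "metadata": {"text": text}})

# Upsert everything into the namespace the workflow queries at runtime.
index.upsert(vectors=vectors, namespace="Customer FAQ")
```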
Customer Support RAG Agent with GPT and Pinecone (Telegram Integration)
This n8n workflow automates a customer support RAG (Retrieval Augmented Generation) agent, allowing it to answer user queries from Telegram by leveraging a Pinecone vector store and an OpenAI GPT model. The agent is designed to provide intelligent and context-aware responses based on a knowledge base stored in Pinecone.
What it does
This workflow streamlines customer support interactions by:
- Listening for Telegram Messages: It acts as a Telegram bot, waiting for incoming messages from users.
- Processing User Queries with an AI Agent: When a message is received, it triggers an AI Agent (powered by LangChain) to process the user's query.
- Retrieving Context from Pinecone: The AI Agent utilizes a Pinecone Vector Store to retrieve relevant information from a predefined knowledge base based on the user's query. This is augmented by OpenAI Embeddings to understand the semantic meaning of the query and the stored data.
- Generating Responses with OpenAI Chat Model: The retrieved context, along with the user's query, is then fed into an OpenAI Chat Model (GPT) to generate a comprehensive and accurate response.
- Maintaining Conversation History: A simple memory buffer is used to keep track of the conversation, allowing the AI agent to provide more coherent and contextually relevant responses over time.
- Sending Responses via Telegram: The generated response from the AI agent is sent back to the user in the Telegram chat.
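As a rough illustration of the last two bullets (generation and reply), the sketch below grounds a chat completion on retrieved FAQ text and posts the answer back with the Telegram Bot API's `sendMessage` method. It assumes the `openai` and `requests` packages, reuses the hypothetical `retrieve_faq_context` helper from the earlier sketch, and substitutes a generic model name for the template's GPT-5 chat model node.

```python
import requests
from openai import OpenAI

client = OpenAI()
TELEGRAM_TOKEN = "YOUR_TELEGRAM_BOT_TOKEN"

def answer_and_reply(question: str, chat_id: int) -> None:
    # Ground the model on whatever the vector store returned for this question.
    context = "\n\n".join(retrieve_faq_context(question))

    completion = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name; the template wires this through its chat model node
        messages=[
            {
                "role": "system",
                "content": "You are a customer support agent. Answer using only this FAQ context:\n"
                + context,
            },
            {"role": "user", "content": question},
        ],
    )
    answer = completion.choices[0].message.content

    # Telegram Bot API: post the answer back to the chat the question came from.
    requests.post(
        f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage",
        json={"chat_id": chat_id, "text": answer},
        timeout=30,
    )
```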
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running instance of n8n.
- Telegram Bot Token: A Telegram bot token obtained from BotFather.
- OpenAI API Key: An API key for OpenAI to access their embedding and chat models (the template is built around GPT-5, though GPT-3.5 or GPT-4 also work).
- Pinecone Account and API Key: An account with Pinecone and its corresponding API key and environment details.
- Pinecone Index: A pre-populated Pinecone index containing your customer support knowledge base, with embeddings generated using the same OpenAI embeddings model configured in the workflow.
Setup/Usage
- Import the Workflow: Download the provided JSON and import it into your n8n instance.
- Configure Telegram Trigger:
  - Select your Telegram Bot credential.
  - Ensure the "Updates" are set to listen for messages.
- Configure Telegram Node:
  - Select the same Telegram Bot credential.
  - The "Chat ID" should be set dynamically to `{{ $json.chat.id }}` to reply to the originating chat.
  - The "Text" field should be set dynamically to `{{ $('AI Agent').item.json.response }}` to send the AI agent's response.
- Configure OpenAI Embeddings:
  - Select your OpenAI API credential.
  - Choose the appropriate embedding model (e.g., `text-embedding-ada-002`).
- Configure Pinecone Vector Store:
  - Select your Pinecone API credential.
  - Provide your Pinecone environment and the name of your pre-existing Pinecone index.
- Configure OpenAI Chat Model:
  - Select your OpenAI API credential.
  - Choose your desired chat model (the template is built around GPT-5; `gpt-4` or `gpt-3.5-turbo` also work).
- Configure Simple Memory:
  - This node manages the conversation history. Adjust the `k` parameter to control how many previous messages are remembered (see the memory sketch after these steps).
- Configure AI Agent:
  - Ensure the "Tools" are connected to the "Pinecone Vector Store" node.
  - Ensure the "Language Model" is connected to the "OpenAI Chat Model" node.
  - Ensure the "Memory" is connected to the "Simple Memory" node.
  - The "Input" should be set dynamically to `{{ $json.text }}` to feed the Telegram message into the agent.
- Activate the Workflow: Once all credentials and configurations are set, activate the workflow.
Your Telegram bot will now be ready to answer customer support queries using your Pinecone knowledge base and OpenAI's intelligence!
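For intuition on what the Simple Memory node's `k` window (15 interactions in this template) is doing, here is a plain-Python stand-in; it is not the n8n node itself, just a sketch of a bounded per-chat history.

```python
from collections import defaultdict, deque

K = 15  # this template keeps the last 15 interactions per user

# One bounded history per Telegram chat; each entry is a (user_message, agent_reply) pair.
memory: dict[int, deque] = defaultdict(lambda: deque(maxlen=K))

def remember(chat_id: int, user_message: str, agent_reply: str) -> None:
    memory[chat_id].append((user_message, agent_reply))

def history_as_messages(chat_id: int) -> list[dict]:
    # Flatten the window into chat-completion style messages for the next turn;
    # older turns fall off automatically once the deque is full.
    messages = []
    for user_message, agent_reply in memory[chat_id]:
        messages.append({"role": "user", "content": user_message})
        messages.append({"role": "assistant", "content": agent_reply})
    return messages
```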
Related Templates
AI multi-agent executive team for entrepreneurs with Gemini, Perplexity and WhatsApp
This workflow is an AI-powered multi-agent system built for startup founders and small business owners who want to automate decision-making, accountability, research, and communication, all through WhatsApp. The "virtual executive team" is designed to help small teams work smarter: it sends you market analysis and sales tips, monitors what your competitors are doing via Perplexity (the Research agent) so you can stay ahead and make better decisions, and when you're feeling stuck with your start-up, the Accountability Director is creative enough to break the barrier.

🎯 Core Features
- 🧑‍💼 President (Super Agent): Acts as the main controller that coordinates all sub-agents. Routes messages, assigns tasks, and ensures workflow synchronization between the AI Directors.
- 📊 Sales & Marketing Director: Uses SerpAPI to search for market opportunities, leads, and trends. Suggests marketing campaigns, keywords, or outreach ideas. Can analyze current engagement metrics to adjust content strategy.
- 🕵️‍♀️ Business Research Director: Powered by Perplexity AI for competitive and market analysis. Monitors competitor moves, social media engagement, and product changes. Provides concise insights to help the founder adapt and stay ahead.
- ⏰ Accountability Director: Keeps the founder and executive team on track. Sends motivational nudges, task reminders, and progress reports. Promotes consistency and discipline, key traits for early-stage success.
- 🗓️ Executive Secretary: Handles scheduling, email drafting, and reminders. Connects with Google Calendar, Gmail, and Sheets through OAuth. Automates follow-ups, meeting summaries, and notifications directly via WhatsApp.

💬 WhatsApp as the Main Interface
Interact naturally with your AI team through the WhatsApp Business API. All responses, updates, and summaries are delivered to your chat. Ideal for founders who want to manage operations on the go.

⚙️ How It Works
1. Trigger: The workflow starts from a WhatsApp Trigger node (via a Meta Developer Account).
2. Routing: The President agent analyzes the incoming message and determines which Director should handle it (a minimal routing sketch follows this summary).
3. Processing: Marketing or sales queries go to the Sales & Marketing Director; research questions are handled by the Business Research Director; accountability tasks are assigned to the Accountability Director; scheduling or communication requests are managed by the Secretary.
4. Collaboration: Each sub-agent returns results to the President, who summarizes and sends the reply back via WhatsApp.
5. Memory: Context is maintained between sessions, ensuring personalized and coherent communication.

🧩 Integrations Required
- Gemini API for general intelligence and task reasoning
- Supabase for RAG and Postgres persistent memory
- Perplexity API for business and competitor analysis
- SerpAPI for market research and opportunity scouting
- Google OAuth to connect Sheets, Calendar, and Gmail
- WhatsApp Business API for message triggers and responses

🚀 Benefits
Acts like a team of tireless employees available 24/7. Saves time by automating research, reminders, and communication. Enhances accountability and strategy consistency for founders. Keeps operations centralized in a simple WhatsApp interface.

🧰 Setup Steps
1. Create API credentials for WhatsApp (via a Meta Developer Account), Gemini, Perplexity, and SerpAPI, plus Google OAuth (Sheets, Calendar, Gmail).
2. Create a Supabase account.
3. Add the credentials in the corresponding n8n nodes.
4. Customize the system prompts for each Director based on your startup's needs.
5. Activate and start interacting with your virtual executive team on WhatsApp.

Use Case
If you are a small organisation or start-up that cannot afford to hire a marketing department, a research department, and a secretarial office, this workflow is for you.

💡 Need Customization?
Want to tailor it for your startup or integrate with CRM tools like Notion or HubSpot? You can easily extend the workflow or contact the creator for personalized support. Consider adjusting the system prompts to suit your business.
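As a loose illustration of the President's routing step, the sketch below classifies an incoming WhatsApp message and dispatches it to a Director handler. In the actual template an LLM agent makes this decision; the keyword heuristic and handler functions here are purely hypothetical stand-ins.

```python
# Hypothetical stand-ins for the four Director sub-agents.
def sales_marketing_director(message: str) -> str:
    return f"[Sales & Marketing] Ideas for: {message}"

def business_research_director(message: str) -> str:
    return f"[Research] Competitive analysis of: {message}"

def accountability_director(message: str) -> str:
    return f"[Accountability] Progress check on: {message}"

def executive_secretary(message: str) -> str:
    return f"[Secretary] Scheduling and drafting for: {message}"

def president_route(message: str) -> str:
    # The real workflow lets an LLM pick the Director; a keyword heuristic stands in here.
    lowered = message.lower()
    if any(word in lowered for word in ("competitor", "market research", "analysis")):
        director = business_research_director
    elif any(word in lowered for word in ("campaign", "leads", "sales", "marketing")):
        director = sales_marketing_director
    elif any(word in lowered for word in ("schedule", "meeting", "email", "remind")):
        director = executive_secretary
    else:
        director = accountability_director
    # In the template the President summarizes and replies via WhatsApp;
    # here we simply return the chosen Director's output.
    return director(message)

print(president_route("What are my competitors doing on social media this week?"))
```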
Automate RSS to social media pipeline with AI, Airtable & GetLate for multiple platforms
Overview
Automates your complete social media content pipeline: sources articles from Wallabag RSS, generates platform-specific posts with AI, creates contextual images, and publishes via the GetLate API. Built with 63 nodes across two workflows to handle LinkedIn, Instagram, and Bluesky, with easy expansion to more platforms.

Ideal for: Content marketers, solo creators, agencies, and community managers maintaining a consistent multi-platform presence with minimal manual effort.

How It Works (Two-Workflow Architecture)
1. Content Aggregation Workflow
   - Monitors Wallabag RSS feeds for tagged articles (to-share-linkedin, to-share-instagram, etc.)
   - Extracts and converts content from HTML to Markdown
   - Stores structured data in Airtable with platform assignment
2. AI Generation & Publishing Workflow
   - Scheduled trigger queries Airtable for unpublished content
   - Routes to platform-specific sub-workflows (LinkedIn, Instagram, Bluesky)
   - LLM generates optimized post text and image prompts based on custom brand parameters
   - Optionally generates AI images and hosts them on the Imgbb CDN
   - Publishes via the GetLate API (immediate or draft mode)
   - Updates Airtable with publication status and metadata

Key Features
- Tag-based content routing using Wallabag's native tag system (see the routing sketch after this summary)
- Swappable AI providers (Groq, OpenAI, Anthropic)
- Platform-specific optimization (tone, length, hashtags, CTAs)
- Modular design: duplicate sub-workflows to add new platforms in ~30 minutes
- Centralized Airtable tracking with 17 data points per post

Set Up Steps (setup time: ~45-60 minutes for initial configuration)
1. Create accounts and get API keys (~15 min): Wallabag (with RSS feeds enabled), GetLate (social media publishing), Airtable (create a base with the provided schema; see sticky notes), an LLM provider (Groq, OpenAI, or Anthropic), an image service (Hugging Face, Fal.ai, or Stability AI), and Imgbb (image hosting).
2. Configure n8n credentials (~10 min): Add all API keys in n8n's credential manager; detailed credential setup instructions are in the workflow sticky notes.
3. Set up the Airtable database (~10 min): Create the "RSS Feed - Content Store" base, add the 19 required fields (schema provided in the workflow sticky notes), and get the Airtable base ID and API key.
4. Customize brand prompts (~15 min): Edit the "Set Custom SMCG Prompt" node for each platform; define brand voice, tone, goals, audience, and image preferences. Platform-specific examples are provided in the sticky notes.
5. Configure platform settings (~10 min): Set GetLate account IDs for each platform, enable/disable image generation per platform, choose immediate publish vs. draft mode, and adjust the schedule trigger frequency.
6. Test and deploy: Tag test articles in Wallabag, monitor the first few executions in draft mode, and activate the workflows when satisfied with the output.

Important: This is a proof-of-concept template. Test thoroughly in draft mode before production use. Detailed setup instructions, troubleshooting tips, and customization guidance are in the workflow's sticky notes.
Technical Details
- 63 nodes: 9 Airtable operations, 8 HTTP requests, 7 code nodes, 3 LangChain LLM chains, 3 RSS triggers, 3 GetLate publishers
- Supports multiple LLM providers, multiple image generation services, and unlimited platforms via the modular architecture
- Tracking: 17 metadata fields per post, including publish status, applied parameters, character counts, hashtags, and image URLs

Prerequisites
- n8n instance (self-hosted or cloud)
- Accounts: Wallabag, GetLate, Airtable, an LLM provider, an image generation service, Imgbb
- Basic understanding of n8n workflows and credential configuration
- Time to customize prompts for your brand voice

Detailed documentation, the Airtable schema, prompt examples, and troubleshooting guides are in the workflow's sticky notes.

Category Tags: social-media-automation, ai-content-generation, rss-to-social, multi-platform-posting, getlate-api, airtable-database, langchain, workflow-automation, content-marketing
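To illustrate the tag-based routing described above, here is a small sketch that maps Wallabag share tags to platform sub-workflows. Only the two tags quoted in the description are confirmed; the Bluesky tag name and the article structure are assumptions following the same pattern.

```python
# Map Wallabag tags to the platform sub-workflows they should trigger.
# "to-share-bluesky" is an assumed tag name mirroring the documented ones.
PLATFORM_TAGS = {
    "to-share-linkedin": "linkedin",
    "to-share-instagram": "instagram",
    "to-share-bluesky": "bluesky",
}

def route_article(article: dict) -> list[str]:
    # One article can carry several share tags and so fan out to several platforms.
    return [PLATFORM_TAGS[tag] for tag in article.get("tags", []) if tag in PLATFORM_TAGS]

# Example: queued for LinkedIn and Bluesky, ignored for Instagram.
print(route_article({
    "title": "Release notes",
    "tags": ["to-share-linkedin", "to-share-bluesky", "tech"],
}))
```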
Automated YouTube video uploads with 12h interval scheduling in JST
This workflow automates a batch upload of multiple videos to YouTube, spacing each upload 12 hours apart in Japan Standard Time (UTC+9) and automatically adding them to a playlist.

⚙️ Workflow Logic
1. Manual Trigger: Starts the workflow manually.
2. List Video Files: Uses a shell command to find all .mp4 files under the specified directory (/opt/downloads/单词卡/A1-A2).
3. Sort and Generate Items: Sorts videos by the day number (dayXX) extracted from filenames and assigns a sequential order value.
4. Calculate Publish Schedule (+12h Interval): Computes the next rounded JST hour plus a configurable buffer (default 30 min), staggers each video's scheduled time by order × 12 hours, and converts JST back to UTC for YouTube's publishAt field (see the sketch below).
5. Split in Batches (1 per video): Iterates over each video item.
6. Read Video File: Loads the corresponding video from disk.
7. Upload to YouTube (Scheduled): Uploads the video privately with the computed publishAtUtc.
8. Add to Playlist: Adds the newly uploaded video to the target playlist.

🕒 Highlights
- Timezone-safe: Pure UTC ↔ JST conversion avoids double-offset errors.
- Sequential scheduling: Ensures each upload is 12 hours apart to prevent clustering.
- Customizable: Change SPANHOURS, BUFFERMIN, or directory paths easily.
- Retry-ready: Each upload and playlist step has retry logic to handle transient errors.

💡 Typical Use Cases
- Multi-part educational video series (e.g., A1-A2 English learning).
- Regular content release cadence without manual scheduling.
- Automated YouTube publishing pipelines for pre-produced content.

---
Author: Zane
Category: Automation / YouTube / Scheduler
Timezone: JST (UTC+09:00)
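The scheduling step is the interesting bit, so here is a hedged Python sketch of the arithmetic described above: round up to the next JST hour, add the buffer, stagger by order × 12 hours, then convert to UTC for the publishAt field. `SPAN_HOURS` and `BUFFER_MIN` are stand-ins for the workflow's SPANHOURS and BUFFERMIN settings; the actual node runs inside n8n rather than as this script.

```python
from datetime import datetime, timedelta, timezone

JST = timezone(timedelta(hours=9))
SPAN_HOURS = 12   # gap between consecutive uploads
BUFFER_MIN = 30   # safety margin before the first slot

def publish_at_utc(order: int, now: datetime | None = None) -> str:
    now_jst = (now or datetime.now(timezone.utc)).astimezone(JST)
    # Next rounded JST hour, plus the configurable buffer.
    next_hour = (now_jst + timedelta(hours=1)).replace(minute=0, second=0, microsecond=0)
    first_slot = next_hour + timedelta(minutes=BUFFER_MIN)
    # Stagger each subsequent video by order x 12 hours.
    slot_jst = first_slot + timedelta(hours=order * SPAN_HOURS)
    # YouTube expects an RFC 3339 UTC timestamp for publishAt.
    return slot_jst.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")

# Example: the third video (order=2) lands 24 hours after the first slot.
print(publish_at_utc(order=2))
```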