
Create a company policy chatbot with RAG, Pinecone vector database, and OpenAI

Pramod Rathoure
2/3/2026

A RAG Chatbot with n8n and Pinecone Vector Database

Retrieval-Augmented Generation (RAG) allows Large Language Models (LLMs) to provide context-aware answers by retrieving information from an external vector database. In this post, we’ll walk through a complete n8n workflow that builds a chatbot capable of answering company policy questions using Pinecone Vector Database and OpenAI models.

Our setup has two main parts:

  1. Data Loading to RAG – documents (company policies) are ingested from Google Drive, processed, embedded, and stored in Pinecone.
  2. Data Retrieval using RAG – user queries are routed through an AI Agent that uses Pinecone to retrieve relevant information and generate precise answers.

1. Data Loading to RAG

This workflow section handles document ingestion. Whenever a new policy file is uploaded to Google Drive, it is automatically processed and indexed in Pinecone.

Nodes involved:

  • Google Drive Trigger
    Watches a specific folder in Google Drive. Any new or updated file triggers the workflow.

  • Google Drive (Download)
    Fetches the file (e.g., a PDF policy document) from Google Drive for processing.

  • Recursive Character Text Splitter
    Splits long documents into smaller chunks (with a defined overlap). This ensures embeddings remain context-rich and retrieval works effectively.

  • Default Data Loader
    Reads the binary document (a PDF in this setup) and extracts its text for splitting and embedding.

  • OpenAI Embeddings
    Generates high-dimensional vector representations of each text chunk using OpenAI’s embedding models.

  • Pinecone Vector Store (Insert Mode)
    Stores the embeddings into a Pinecone index (n8ntest), under a chosen namespace. This step makes the policy data searchable by semantic similarity.

👉 Example flow: When HR uploads a new Work From Home Policy PDF to Google Drive, it is automatically split, embedded, and indexed in Pinecone.
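Conceptually, the Recursive Character Text Splitter's chunking can be approximated as a sliding window with overlap. The sketch below is a simplification (the actual n8n/LangChain splitter also recurses over separators such as paragraphs and sentences); the `chunk_size` and `overlap` values are illustrative, not the workflow's actual settings:

```python
def split_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks (simplified sliding-window version)."""
    chunks = []
    step = chunk_size - overlap  # each window starts `overlap` chars before the previous one ended
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last window already covers the end of the text
    return chunks
```

The overlap matters for retrieval quality: a sentence cut at a chunk boundary still appears intact in the neighbouring chunk, so its embedding stays meaningful.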


2. Data Retrieval using RAG

Once documents are loaded into Pinecone, the chatbot is ready to handle user queries. This section of the workflow connects the chat interface, AI Agent, and retrieval pipeline.

Nodes involved:

  • When Chat Message Received
    Acts as the webhook entry point when a user sends a question to the chatbot.

  • AI Agent
    The core reasoning engine. It is configured with a system message instructing it to only use Pinecone-backed knowledge when answering.

  • Simple Memory
    Keeps track of the conversation context, so the bot can handle multi-turn queries.

  • Vector Store QnA Tool
    Queries Pinecone for the most relevant chunks related to the user’s question. In this workflow, it is configured to fetch company policy documents.

  • Pinecone Vector Store (Query Mode)
    Acts as the connection to Pinecone, fetching embeddings that best match the query.

  • OpenAI Chat Model
    Refines the retrieved chunks into a natural and concise answer. The model ensures answers remain grounded in the source material.

  • Calculator Tool
    Optional helper if the query involves numerical reasoning (e.g., leave calculations or benefit amounts).

👉 Example flow: A user asks, “How many work-from-home days are allowed per month?” The AI Agent queries Pinecone through the Vector Store QnA Tool, retrieves the relevant section of the HR policy, and returns a concise answer grounded in the actual document.
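Under the hood, a vector-store query like Pinecone's is a nearest-neighbour search: the question is embedded, then compared against stored chunk embeddings by similarity. This brute-force sketch with toy 2-D vectors only illustrates the idea (Pinecone uses approximate indexes at scale, and real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]
```

The chunks behind the returned ids are what the AI Agent passes to the chat model as grounding context.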


Wrapping Up

By combining n8n automation, Pinecone for vector storage, and OpenAI for embeddings and LLM reasoning, we’ve created a self-updating RAG chatbot.

  • The Data Loading pipeline ensures that every new company policy document uploaded to Google Drive is immediately available for semantic search.
  • The Data Retrieval pipeline lets employees ask natural-language questions and get document-backed answers.

This setup can easily be adapted for other domains — compliance manuals, tax regulations, legal contracts, or even product documentation.


Create a Company Policy Chatbot with RAG, Pinecone, and OpenAI

This n8n workflow demonstrates how to build a Retrieval-Augmented Generation (RAG) chatbot that answers questions based on company policies stored in Google Drive, leveraging Pinecone as a vector database and OpenAI for embeddings and chat capabilities.

What it does

This workflow simplifies the process of creating a knowledge base from your documents and then using that knowledge base to power an AI chatbot.

  1. Ingests Documents from Google Drive: It listens for new or updated files in a specified Google Drive folder.
  2. Loads Document Content: When a new or updated document is detected, its content is loaded.
  3. Splits Text into Chunks: The document text is then broken down into smaller, manageable chunks to optimize for vector embedding and retrieval.
  4. Generates Embeddings: OpenAI's embedding model is used to convert these text chunks into numerical vector representations.
  5. Stores Vectors in Pinecone: The generated embeddings are stored in a Pinecone vector database, making them searchable.
  6. Listens for Chat Messages: The workflow acts as a chatbot, waiting for incoming chat messages (e.g., from a user interface or another n8n workflow).
  7. Retrieves Relevant Information: When a chat message is received, the AI agent uses a "Vector Store Question Answer Tool" to query the Pinecone database and retrieve the most relevant policy documents based on the user's question.
  8. Generates Response with AI: An OpenAI Chat Model, combined with a "Simple Memory" for conversational context and a "Calculator" tool for general computations, uses the retrieved policy information to generate a comprehensive and accurate answer to the user's query.
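Step 8 ultimately comes down to prompt assembly: the retrieved chunks are placed in the chat model's context together with an instruction to stay grounded in them. A minimal sketch of that assembly (the `build_rag_prompt` helper and its wording are illustrative, not the exact prompt n8n's agent uses):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a grounded prompt from retrieved policy chunks (illustrative)."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer using ONLY the policy excerpts below. "
        "If the answer is not in the excerpts, say you don't know.\n\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Keeping the "only use the excerpts" instruction explicit is what makes the chatbot's answers verifiable against the source documents rather than the model's general knowledge.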

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Account: A running instance of n8n.
  • Google Drive Account: With access to the folder containing your company policy documents.
  • OpenAI API Key: For generating text embeddings and powering the chat model.
  • Pinecone Account: A configured Pinecone index to store your vector embeddings.

Setup/Usage

  1. Import the Workflow: Download the JSON provided and import it into your n8n instance.
  2. Configure Credentials:
    • Google Drive Trigger: Set up your Google Drive credential to monitor the desired folder.
    • OpenAI Embeddings: Configure your OpenAI API key credential for the Embeddings OpenAI node.
    • OpenAI Chat Model: Configure your OpenAI API key credential for the OpenAI Chat Model node.
    • Pinecone Vector Store: Set up your Pinecone credential with your API key and environment.
  3. Activate the Workflow: Once all credentials are set and configurations are in place, activate the workflow.

The workflow will now automatically process new/updated documents in your Google Drive and be ready to answer questions via the When chat message received trigger. You can connect this trigger to a chat interface (e.g., Slack, Telegram, custom web app) using other n8n nodes.
