IT support chatbot with Google Drive, Pinecone & Gemini | AI doc processing
This n8n template empowers IT support teams by automating document ingestion and instant query resolution through conversational AI. It integrates Google Drive, Pinecone, and a chat AI agent (using Google Gemini/OpenRouter) to transform static support documents into an interactive, searchable knowledge base. With two interlinked workflows (one for processing support documents, one for handling chat queries), employees receive fast, context-aware answers directly from your support documentation.

## Overview

### Document Ingestion Workflow
- **Google Drive Trigger**: Monitors a specified folder for new file uploads (e.g., updated support documents).
- **File Download & Extraction**: Automatically downloads new files and extracts their text content.
- **Data Cleaning & Text Splitting**: A Code node removes line breaks, trims extra spaces, and strips special characters, while a text splitter segments the content into manageable chunks.
- **Embedding & Storage**: Generates text embeddings using Google Gemini and stores them in a Pinecone vector store for rapid similarity search.

### Chat Query Workflow
- **Chat Trigger**: Initiates when an employee sends a support query.
- **Vector Search & Context Retrieval**: Retrieves the top relevant document segments from Pinecone based on similarity scores.
- **Prompt Construction**: A Code node combines the retrieved document snippets with the user's query into a detailed prompt.
- **AI Agent Response**: The constructed prompt is sent to an AI agent (using the OpenRouter Chat Model) to generate a clear, step-by-step solution.

## Key Benefits & Use Case
Imagine a large organization where every IT support document, from troubleshooting guides to system configurations, is stored in a single Google Drive folder. When an employee encounters an issue (e.g., "How do I reset my VPN credentials?"), they simply type the query into a chat interface. The workflow instantly retrieves the most relevant context from the ingested documents and provides a detailed, actionable answer. This reduces resolution times, improves support consistency, and significantly lightens the load on IT staff.

## Prerequisites
- A valid Google Drive account with access to the designated folder.
- A Pinecone account for storing and retrieving text embeddings.
- Google Gemini (or OpenRouter) credentials to power the chat AI agent.
- An operational n8n instance configured with the necessary nodes and credentials.

## Workflow Details

### 1. Document Ingestion Workflow
- **Google Drive Trigger Node**: Listens for file creation events in the specified folder.
- **Google Drive Download Node**: Downloads the newly added file.
- **Extract from File Node**: Extracts text content from the downloaded file.
- **Code Node (Data Cleaning)**: Cleans the extracted text by removing line breaks, trimming spaces, and eliminating special characters (see the sketch just after this list).
- **Recursive Text Splitter Node**: Segments the cleaned text into manageable chunks.
- **Pinecone Vector Store Node**: Generates embeddings (via Google Gemini) and uploads the chunks to Pinecone.

### 2. Chat Query Workflow
- **Chat Trigger Node**: Receives incoming user queries.
- **Pinecone Vector Store Node (Query)**: Searches for relevant document chunks based on the query.
- **Code Node (Context Builder)**: Sorts the retrieved documents by relevance and constructs a prompt merging the context with the query (a sketch appears at the end of this description).
- **AI Agent Node**: Sends the prompt to the chat AI agent, which returns a detailed answer.
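As a rough illustration of the data-cleaning step, a Code node along these lines would do the job. The input field name (`text`) is an assumption here; adjust it to whatever your Extract from File node outputs:

```javascript
// n8n Code node (Run Once for All Items) — minimal cleaning sketch.
// Assumes the extracted text arrives on item.json.text.
return $input.all().map((item) => {
  const raw = item.json.text ?? '';
  const cleaned = raw
    .replace(/[\r\n]+/g, ' ')        // remove line breaks
    .replace(/[^\w\s.,:;!?'-]/g, '') // strip special characters
    .replace(/\s{2,}/g, ' ')         // collapse repeated spaces
    .trim();                         // trim leading/trailing spaces
  return { json: { ...item.json, text: cleaned } };
});
```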
## How to Use
1. **Import the Template**: Import the template into your n8n instance.
2. **Configure the Google Drive Trigger**: Set the folder ID (e.g., 1RQvAHIw8cQbtwI9ZvdVV0k0x6TM6H12P) and connect your Google Drive credentials.
3. **Set Up Pinecone Nodes**: Enter your Pinecone index details and credentials.
4. **Configure the Chat AI Agent**: Provide your Google Gemini (or OpenRouter) API credentials.
5. **Test the Workflows**: Validate the document ingestion workflow by uploading a sample support document, then validate the chat query workflow by sending a test query and verifying the returned support information.

## Additional Notes
- Ensure all credentials (Google Drive, Pinecone, and chat AI) are correctly set up and tested before deploying the workflows in production.
- The template is fully customizable: adjust the text cleaning, the splitting parameters, or the number of document chunks retrieved to match the size and structure of your support documentation.
- This template not only improves IT support efficiency but also scales as your volume of support content grows.
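For the Context Builder step, a sketch like this shows the general idea. The node name `Chat Trigger`, the `chatInput` field, and the shape of the retrieved items (a `pageContent` string plus a similarity `score`) mirror common n8n setups but are assumptions to adapt:

```javascript
// n8n Code node — hypothetical prompt construction from retrieved chunks.
const query = $('Chat Trigger').first().json.chatInput; // assumed chat input field

const matches = $input.all()
  .map((item) => item.json)
  .sort((a, b) => (b.score ?? 0) - (a.score ?? 0)); // highest similarity first

const context = matches
  .map((m, i) => `Source ${i + 1}:\n${m.pageContent ?? ''}`)
  .join('\n\n');

const prompt = [
  'You are an IT support assistant. Answer using ONLY the context below.',
  `Context:\n${context}`,
  `Question: ${query}`,
  'Provide a clear, step-by-step solution.',
].join('\n\n');

return [{ json: { prompt } }];
```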
Download recently liked songs automatically with Spotify
## Purpose
This workflow lets you listen to your recent favorites offline in very high quality without sacrificing all of your device storage.

## How it works
The workflow automatically creates a playlist in Spotify named "Downloads" and updates it periodically so that it always contains only a defined number of the most recently liked songs. You can then enable automatic downloads on just the Downloads playlist, which frees up space on the device.

## Setup
The workflow is ready to go. Just select your Spotify credentials and activate the workflow. In Spotify, enable automatic downloads on the automatically created Downloads playlist after the first workflow run.

## Current limitations
This setup currently supports a maximum of 50 songs in the Downloads playlist, due to the payload limits Spotify enforces in the Get liked songs node. Implementing batching would solve the issue (see the sketch below).
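As a hedged sketch of what that batching could look like: the Spotify Web API's saved-tracks endpoint pages with `limit` (max 50) and `offset`, so a Code node can walk the pages and collect as many liked songs as needed. The endpoint and response fields follow Spotify's public API; the token plumbing is a placeholder (in practice you would route requests through the HTTP Request node with your Spotify credential):

```javascript
// n8n Code node — illustrative batching over Spotify's saved-tracks endpoint.
const accessToken = $json.accessToken; // placeholder: supply a valid OAuth token
const wanted = 120; // desired number of latest liked songs (beyond the 50 cap)
const uris = [];

for (let offset = 0; uris.length < wanted; offset += 50) {
  // Spotify caps 'limit' at 50 per request, hence the loop
  const page = await this.helpers.httpRequest({
    url: `https://api.spotify.com/v1/me/tracks?limit=50&offset=${offset}`,
    headers: { Authorization: `Bearer ${accessToken}` },
    json: true,
  });
  uris.push(...page.items.map((i) => i.track.uri));
  if (!page.next) break; // ran out of liked songs
}

return [{ json: { uris: uris.slice(0, wanted) } }];
```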
E-commerce assistant for Shopify & WooCommerce with GPT-4o, Gemini & RAG
# 🤖 Universal E-Commerce AI Assistant (Shopify, WooCommerce & RAG)

This powerful n8n workflow deploys a sophisticated, multi-talented AI chatbot designed to streamline your e-commerce and customer support operations. The assistant intelligently understands user queries and routes them to the correct specialized agent, whether that's for Shopify, WooCommerce, or general knowledge questions answered by a Retrieval-Augmented Generation (RAG) system.

This template automates responses to a wide range of inquiries, from checking Shopify order statuses with GraphQL to fetching product lists from WooCommerce, and even answers general questions by looking up information in a Pinecone vector database.

## How It Works ⚙️
The workflow operates in a series of logical steps, starting from the moment a user sends a message.

- 💬 **Chat Trigger**: The workflow activates when a user sends a message in the n8n chat interface. It captures the user's input and a unique session ID to track the conversation.
- 🧠 **Intelligent Routing**: The user's query is first sent to a Router Agent powered by GPT-4o-mini. This agent's sole purpose is to classify the intent of the message and output one of three keywords: SHOPIFY, WOOCOMMERCE, or None of them.
- 🔀 **Conditional Branching**: Based on the Router's output, a series of IF nodes direct the conversation down one of three paths: the General Queries path, the Shopify path, or the WooCommerce path.
- 📚 **General Queries (RAG)**: If the query is not about e-commerce, it's handled by a RAG agent.
  - **Embedding**: The user's question is converted into a vector embedding using AWS Bedrock.
  - **Retrieval**: The workflow searches a Pinecone Vector Store to find the most relevant information from your knowledge base.
  - **Generation**: A GPT-4o-mini agent receives the context from Pinecone and generates a comprehensive, helpful answer.
- 🛍️ **E-Commerce Specialists**: If the query is about Shopify or WooCommerce, it's passed to a dedicated agent.
  - **Shopify Agent**: Uses Google Gemini and a suite of tools to manage Shopify tasks. It can Get Order info, Fetch All Products, or run complex queries using the powerful GraphQL tool.
  - **WooCommerce Agent**: Also uses Google Gemini and is equipped with tools to Fetch Order Details and Fetch All Products from a WooCommerce store.
- 🗣️ **Conversation Memory**: Each agent (Router, General, Shopify, WooCommerce) is connected to its own Memory node, so the chatbot remembers previous parts of the conversation for more natural, context-aware interaction.
- 🏁 **Merge & Respond**: All three paths converge at a final Merge node. No matter which agent handled the request, the final answer is streamlined into a single output and sent back to the user in the chat.

## Nodes Used 🔗
- **Triggers**
  - Chat Trigger: Starts the workflow when a chat message is received.
- **AI & Agents**
  - AI Agent: Four separate agents for Routing, Shopify, WooCommerce, and General Queries.
  - OpenAI Chat Model: Uses GPT-4o-mini for the Router and General Queries agents.
  - Google Gemini Chat Model: Uses Google Gemini for the Shopify and WooCommerce agents.
- **Tools & Data**
  - Shopify Tool: Gets products and order information from Shopify.
  - WooCommerce Tool: Gets products and order information from WooCommerce.
  - GraphQL Tool: For advanced, custom queries to the Shopify API.
  - Pinecone Vector Store: Retrieves context for the RAG agent.
  - AWS Bedrock Embeddings: Creates vector embeddings for Pinecone.
- **Logic & Memory**
  - IF Node: Conditionally routes the workflow.
  - Merge Node: Consolidates the different branches before ending.
  - Window Buffer Memory: Four nodes that provide conversational memory to each agent.

## Setup Guide 🛠️
To use this workflow, you'll need to configure several nodes with your own credentials and settings.

### 1. AI Model Credentials
- **OpenAI**: Create an API key in your OpenAI Platform dashboard. Add this credential to the Router Model and GPT-4o-mini nodes.
- **Google Gemini**: Create an API key in your Google AI Studio dashboard. Add this credential to the Shopify Chat Model and WooCommerce Chat Model nodes.

### 2. E-Commerce Platform Credentials
- **Shopify**: You will need a Shopify Access Token; follow the n8n documentation to generate one. Add the credential to the Fetch All Products and Get Order info nodes.
- **WooCommerce**: Create API credentials from your WordPress dashboard. Add the credential to the Fetch All Products2 and Fetch Order Details nodes.

### 3. RAG System Credentials (Pinecone & AWS)
- **Pinecone**: Sign up for a Pinecone account and create an API key, then add your Pinecone credentials in n8n. In the Pinecone Vector Store node, set the pineconeIndex to the name of your index. You must have a pre-existing index with data for the RAG to work.
- **AWS**: Create an AWS account and an IAM user with programmatic access to Amazon Bedrock. Add your AWS credentials in n8n and select them in the AWS Bedrock Embeddings node.

### 4. GraphQL Node Configuration
In the GraphQL node, you must update the endpoint URL. Replace the placeholder https://{subdomain}.myshopify.com/admin/api/2025-04/graphql.json with your own Shopify store's GraphQL API endpoint. An example of the kind of query the tool can run is shown below.
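To illustrate what the GraphQL tool enables, here is a hedged example against the Shopify Admin GraphQL API. The field names follow Shopify's public schema; the specific selection (looking up one order by name) is just an illustration, not the template's built-in query:

```graphql
# Look up a recent order by its name (e.g., "#1001") — illustrative only.
query {
  orders(first: 1, query: "name:#1001") {
    edges {
      node {
        id
        name
        createdAt
        displayFulfillmentStatus
        totalPriceSet {
          shopMoney {
            amount
            currencyCode
          }
        }
      }
    }
  }
}
```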
Creators hub: Generate dynamic SVG stats with daily updates
# n8n Creators Template: Creator Profile Stats Updater

This n8n workflow template automates updating a creator's profile statistics, including total workflows, complex workflows, approved workflows, pending workflows, total nodes, and total views. It uses various nodes to fetch data, process it, and update an SVG file hosted on GitHub so that it always reflects the latest stats.

## Workflow Overview
- **Schedule Trigger**: Triggers the workflow execution at specified intervals.
- **Config**: Sets up configuration details like the creator username and the colors for text, icons, border, and card.
- **Get Workflows**: Fetches workflows associated with the creator from the n8n API.
- **Workflows Data**: Processes the fetched data to calculate the various statistics.
- **Get User**: Fetches user details from the n8n API.
- **Download Image**: Downloads the creator's profile image.
- **Extract From File**: Extracts binary data from the downloaded image file.
- **SVG**: Generates an SVG file with the updated stats and their visual representation (a rough sketch appears at the end of this description).
- **GitHub**: Commits the updated SVG file to the specified GitHub repository.
- **Final**: Prepares the final data set for further processing or output.
- **Sticky Note**: Provides a visual note or reminder within the workflow editor.

## Embed & Live Preview
Since the output is an .svg file, you can host it anywhere and treat it like a normal image, so you can embed it on any site, forum, or page that supports posting images. Here's example code for markdown (replace the placeholder image URL with the raw URL of your committed SVG file):

```markdown
[![Creator Stats](https://raw.githubusercontent.com/<user>/<repo>/main/stats.svg)](https://n8n.io/creators/n8n-team)
```

The same image can also be served through a CDN with caching.

## Setup Instructions
1. **GitHub Credentials**: Ensure you have GitHub credentials set up in your n8n instance to allow the workflow to commit changes to your repository.
2. **Configure Trigger**: Adjust the Schedule Trigger node to set the desired execution interval for the workflow.
3. **Set Configuration**: Customize the Config node with your GitHub username and preferred aesthetic options for the SVG.
4. **Deploy Workflow**: Import the workflow into your n8n instance and deploy it.

## Customization Options
- **Text and Icon Colors**: Customize the colors used in the SVG by modifying the respective fields in the Config node.
- **Profile Image Size**: Adjust the image size in the Download Image node URL if needed.
- **Commit Messages**: Modify the commit messages in the GitHub nodes to suit your version-control conventions. (The template uses the $now function to include the current time in the message, which always gives a different commit value.)

## Requirements
- n8n (self-hosted or Cloud version compatible with 2024 releases and up)
- A GitHub account and repository
- Basic understanding of n8n workflow configuration

## Support and Contributions
For support, please refer to the n8n community forum or the official n8n documentation. You're welcome to reuse this workflow and reshare it with edits (like a new design or colors) under your name.
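As a loose illustration of what the SVG node's generation logic might look like in a Code node. All stat names, Config fields, and styling here are assumptions, not the template's exact markup:

```javascript
// n8n Code node — hypothetical SVG card builder from computed stats.
const { username, textColor, borderColor } = $('Config').first().json; // assumed Config fields
const stats = $input.first().json; // e.g., { totalWorkflows: 12, totalViews: 3400, ... }

const svg = `
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="120">
  <rect x="1" y="1" width="398" height="118" rx="8"
        fill="none" stroke="${borderColor}" />
  <text x="16" y="32" fill="${textColor}" font-family="sans-serif" font-size="16">
    ${username} | n8n Creator Stats
  </text>
  <text x="16" y="64" fill="${textColor}" font-family="sans-serif" font-size="12">
    Workflows: ${stats.totalWorkflows}  Views: ${stats.totalViews}
  </text>
</svg>`.trim();

// GitHub's contents API expects base64-encoded file content
return [{ json: { fileName: 'stats.svg', content: Buffer.from(svg).toString('base64') } }];
```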
🛠️ Re-Access Binary Data from Any Previous Node
## How it works
Ever had binary data (like images, PDFs, or files) disappear in your n8n workflow after an intermediate node processed it? This workflow provides a powerful solution by demonstrating how to re-access and re-attach binary data from any previous node, even if it was dropped along the way. Think of it as a reliable backup copy of your file that stays available no matter what happens to the original as it moves through your workflow.

Here's how this template works, step by step:

1. **Initial Binary Fetch**: The workflow starts by fetching a binary image (the n8n logo) from a URL using an HTTP Request node. This is our original binary data.
2. **Simulated Data Loss**: A Set node then processes this data. Crucially, by default, Set nodes (and many others) do not pass binary data to subsequent nodes. This step intentionally simulates a common scenario where your binary data might seem to "disappear" from the workflow's output.
3. **Re-Access and Re-Attach**: The core of the solution is a Code node. It uses a specific n8n expression ($(nodeName).item) to reach back to the node that originally produced the binary data (Get n8n Logo (Binary)). It then retrieves that binary data and uses this.helpers.prepareBinaryData() to correctly re-attach it to the current item, making it available to all subsequent nodes (see the sketch below).

## Set up steps
Setup time: 0 minutes! This is a self-contained tutorial workflow, so no external accounts or credentials are required.

- Simply click the "Execute Workflow" button to run it.
- Observe the output of the Re-Access Binary Data from Previous Node node to see the binary data successfully re-attached.

**Important for customization**: If you adapt this technique to your own workflows, remember to update the previousNodeName variable within the Re-Access Binary Data from Previous Node (Code node) to match the exact name of the node that originally produced the binary data you wish to retrieve.
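A minimal sketch of that Code node logic, assuming n8n's default in-memory binary storage and the default binary property name `data` (the template's actual node may differ in detail):

```javascript
// n8n Code node, mode "Run Once for Each Item" — minimal sketch.
const previousNodeName = 'Get n8n Logo (Binary)'; // update to your source node's exact name
const sourceBinary = $(previousNodeName).item.binary.data; // 'data' is the default binary property

// Rebuild the buffer and re-register it on the current item so
// downstream nodes see the file again.
const buffer = Buffer.from(sourceBinary.data, 'base64'); // assumes in-memory binary mode
const reattached = await this.helpers.prepareBinaryData(
  buffer,
  sourceBinary.fileName,
  sourceBinary.mimeType,
);

return { json: $json, binary: { data: reattached } };
```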
Analyze Telegram messages with OpenAI and send notifications via Gmail & Telegram
# AI-powered Telegram message analysis with multi-tool notifications (Gmail, Telegram)

This workflow triggers on Telegram updates, analyzes messages with an AI Agent using MCP tools, and sends notifications via Gmail and Telegram.

## Detailed Description

### Who is this for?
This template is for teams, businesses, or individuals using Telegram for communication who need automated, AI-driven insights and notifications. It's ideal for customer support teams, project managers, or tech enthusiasts who want to process Telegram messages intelligently and receive alerts via Gmail and Telegram.

### What problem is this workflow solving? Use case
This workflow solves the challenge of manually monitoring Telegram messages by automating message analysis and notifications. For example, a support team can use it to analyze customer queries on Telegram with AI tools (OpenAI, Airbnb, Brave, FireCrawl) and get notified via Gmail and Telegram for quick responses.

### What this workflow does
1. Triggers on a Telegram update (e.g., a new message) using the Listen for Telegram Updates node.
2. Processes the message with the Analyze Message with AI node, an AI Agent using MCP tools such as OpenAI Chat, Airbnb search, Brave search, and FireCrawl.
3. Sends notifications via the Send Gmail Notification and Send Telegram Alert nodes, including the AI-generated insights.

### Setup
**Prerequisites:**
- Telegram bot token for the trigger and notification nodes.
- Gmail API credentials for sending emails.
- API keys for OpenAI, Airbnb, Brave, and FireCrawl (used in the AI Agent).

**Steps:**
1. Configure the Listen for Telegram Updates node with your Telegram bot token.
2. Set up the Analyze Message with AI node with your OpenAI API key and the other tool credentials.
3. Configure the Send Gmail Notification node with your Gmail credentials.
4. Set up the Send Telegram Alert node with your Telegram bot token.
5. Test by sending a Telegram message to trigger the workflow.

Setup takes ~15-30 minutes. Detailed instructions are in sticky notes within the workflow.

### How to customize this workflow to your needs
- Add more AI tools (e.g., sentiment analysis) in the Analyze Message with AI node.
- Modify the notification message in the Send Gmail Notification and Send Telegram Alert nodes to include specific AI outputs.
- Add nodes for other channels like Slack or SMS after the AI Agent.

### Disclaimer
This workflow uses Community nodes (e.g., Airbnb, Brave, FireCrawl), which are available only in self-hosted n8n instances. Ensure your n8n setup supports Community nodes before using this template.
Track regional sentiment from social media with Bright Data & OpenAI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks regional sentiment across social media and news outlets, giving you a real-time pulse on how people in a specific area feel about your brand or topic.

## Overview
The automation queries Twitter, Reddit, and major news APIs filtered by geolocation. Bright Data handles location-specific scraping where the APIs are limited. OpenAI performs sentiment and keyword extraction, and the workflow aggregates the scores into a daily report stored in Google Sheets and visualized in Data Studio (see the aggregation sketch at the end of this description).

## Tools Used
- **n8n** – Coordinates all steps
- **Bright Data** – Collects geo-targeted data beyond API limits
- **OpenAI** – Runs sentiment analysis and topic modeling
- **Google Sheets** – Houses the cleaned sentiment metrics
- **Data Studio / Looker** – Optional dashboard for visualization

## How to Install
1. **Import the Workflow** into n8n with the provided .json file.
2. **Configure Bright Data** credentials.
3. **Set Up your OpenAI** API key.
4. **Connect Google Sheets** and create a destination spreadsheet.
5. **Customize Regions & Keywords** in the Start node.

## Use Cases
- **Brand Monitoring**: Measure public opinion in target markets.
- **Political Campaigns**: Gauge voter sentiment by district.
- **Market Entry**: Understand regional attitudes before launching.
- **Crisis Management**: Detect negative spikes early.

## Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

Tags: n8n, automation, sentimentanalysis, geolocation, brightdata, openai, sociallistening, n8nworkflow, nocode, brandmonitoring
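Returning to the Overview's aggregation step: as a hedged illustration, a Code node could roll per-item sentiment scores into one daily row per region before the Google Sheets write. The `region` and `sentiment` fields are assumed outputs of the OpenAI step, not the template's guaranteed schema:

```javascript
// n8n Code node — illustrative daily sentiment aggregation.
// Assumes each incoming item has { region, sentiment } with sentiment in -1..1.
const byRegion = {};

for (const { json } of $input.all()) {
  const key = json.region ?? 'unknown';
  (byRegion[key] ??= []).push(Number(json.sentiment) || 0);
}

const today = new Date().toISOString().slice(0, 10);
return Object.entries(byRegion).map(([region, scores]) => ({
  json: {
    date: today,
    region,
    mentions: scores.length,
    avgSentiment: +(scores.reduce((a, b) => a + b, 0) / scores.length).toFixed(3),
  },
}));
```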
Categorize Revolut transactions automatically with GPT-4 and Supabase
# Revolut Extracts Analyzer

This n8n template processes Revolut statements, normalizes transactions, and uses AI to categorize expenses automatically. Use cases include detecting subscriptions, separating internal transfers, and building dashboards to track spending.

---

## How it works
1. **Get Categories from Supabase**
2. **Download & Transform** (see the sketch at the end of this description)
3. **Loop Over Items**
4. **LLM Categorizer**
5. **Insert into Supabase**

---

## How to use
- Start with the manual trigger node or replace it with a schedule/webhook.
- Connect Google Drive to provide the Revolut CSV files.
- Ensure Supabase has tables for transactions and categories.
- Extend with notifications, reports, or BI tools.

---

## Requirements
- Google Drive for CSV files
- Supabase tables for categories & transactions
- An LLM provider (OpenAI/Gemini)
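As a rough sketch of the Download & Transform step, a Code node could normalize rows from a Revolut CSV export. The column names below match Revolut's standard statement export, but verify them against your own files; the internal-transfer heuristic is purely illustrative:

```javascript
// n8n Code node — illustrative normalization of Revolut CSV rows.
// Assumes an Extract from File node already parsed the CSV into items.
return $input.all().map(({ json }) => ({
  json: {
    type: json['Type'],                       // e.g., CARD_PAYMENT, TRANSFER
    description: (json['Description'] ?? '').trim(),
    amount: parseFloat(json['Amount']),       // negative = expense
    currency: json['Currency'],
    completedAt: json['Completed Date'] || null,
    // Flag internal transfers so they can be excluded from spending stats
    isInternalTransfer: json['Type'] === 'TRANSFER' &&
      /revolut|own account/i.test(json['Description'] ?? ''),
  },
}));
```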
Track Upwork jobs from Vollna RSS with Google Sheets logging and Slack alerts
## Who's it for
Freelancers and agencies who track new Upwork leads via Vollna RSS and want clean logging to Google Sheets with instant Slack alerts.

## What it does
Polls a Vollna RSS feed every few minutes, parses each job (title, budget, link, skills, categories), dedupes against your sheet, appends only new jobs, and notifies Slack with a compact alert.

## How it works
1. **Schedule Trigger** fires on an interval.
2. **RSS Read** pulls new items from Vollna.
3. **Filter** (optional) skips non-ASCII titles.
4. **Code node** normalizes fields: title/budget split, clean Upwork link, "Posted x mins ago", etc. (see the sketch after this list).
5. **Sheets Lookup + Compare** prevents duplicates by job URL.
6. **Sheets Append** writes new rows; **Slack** posts a job alert.

## Set up
1. In ⚙️ Config, set: `VOLLNA_RSS_URL`, `GOOGLE_SHEETS_DOC_ID`, `GOOGLE_SHEET_NAME`, `SLACK_CHANNEL_ID`, `EMAIL_TO` (optional).
2. Add OAuth credentials for Google Sheets, Slack, and Gmail (optional).
3. Create sheet columns: TITLE, BUDGET, UPWORK JOB LINK, CATEGORIES, SKILLS, DATE, JOB DESCRIPTION, POSTED.
4. (Optional) Adjust the polling interval on the Schedule Trigger.

## Requirements
- Vollna RSS feed URL (your tokenized link)
- n8n (cloud or self-hosted) with Google Sheets + Slack credentials

## Customize
- Remove the ASCII filter for broader coverage.
- Swap Gmail/Slack with your preferred notifier.
- Add keyword filters before appending to Sheets.
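A hedged sketch of that normalization Code node follows. The exact shape of Vollna's RSS items (title suffixes, budget markers, posted-ago text) varies, so the regexes here are assumptions to adapt to your feed:

```javascript
// n8n Code node — illustrative field normalization for Vollna RSS items.
return $input.all().map(({ json }) => {
  const rawTitle = json.title ?? '';
  const content = json.content ?? json.contentSnippet ?? '';

  // Assumed pattern: budget appended to the title, e.g. "Build a dashboard - $500"
  const budgetMatch = rawTitle.match(/-\s*\$?([\d,]+(?:\.\d+)?)\s*$/);
  const title = budgetMatch
    ? rawTitle.slice(0, budgetMatch.index).trim()
    : rawTitle.trim();

  // Strip tracking query params so the sheet stores a clean, dedupable job URL
  const link = (json.link ?? '').split('?')[0];

  // Assumed "Posted x mins ago" text somewhere in the item body
  const posted = (content.match(/Posted\s+[^.<]+ago/i) || [''])[0];

  return {
    json: {
      TITLE: title,
      BUDGET: budgetMatch ? `$${budgetMatch[1]}` : '',
      'UPWORK JOB LINK': link,
      POSTED: posted,
      DATE: json.isoDate ?? new Date().toISOString(),
      'JOB DESCRIPTION': content,
    },
  };
});
```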