# MCP Supabase server for AI agent with RAG & multi-tenant CRUD

**Version:** 1.0.0 · **n8n Version:** 1.88.0+ · **Author:** Koresolucoes · **License:** MIT

---

## Description

A stateful AI agent workflow powered by Supabase and Retrieval-Augmented Generation (RAG). It enables persistent memory, dynamic CRUD operations, and multi-tenant data isolation for AI-driven applications such as customer support, task orchestration, and knowledge management.

### Key Features

- 🧠 **RAG Integration:** Leverages OpenAI embeddings and Supabase vector search for context-aware responses.
- 🗃️ **Full CRUD:** Manage `agentmessages`, `agenttasks`, `agentstatus`, and `agentknowledge` in real time.
- 📤 **Multi-Tenant Ready:** Supports per-user/organization data isolation via dynamic table names and webhooks.
- 🔒 **Secure:** Role-based access control via Supabase Row Level Security (RLS).

---

## Use Cases

- **Customer Support Chatbots:** Persist conversation history and resolve queries using institutional knowledge.
- **Automated Task Management:** Track and update task statuses dynamically.
- **Knowledge Repositories:** Store and retrieve domain-specific information for AI agents.

---

## Instructions

1. **Import Template:** Go to *n8n > Templates > Import from File* and upload this workflow.
2. **Configure Credentials:** Add your Supabase and OpenAI API keys under *Settings > Credentials*.
3. **Set Up Multi-Tenancy (Optional):**
   - *Dynamic webhook path:* Replace the default webhook path with `/mcp/tool/supabase/:userId` to enable per-user routing.
   - *Table names:* Use a Set node to dynamically generate table names (e.g., `agentmessages{{userId}}`).
4. **Activate & Test:** Enable the workflow and send test requests to the webhook URL.

---

## Tags

AI Agent · RAG · Supabase · CRUD · Multi-Tenant · OpenAI · Automation

---

## License

This template is licensed under the MIT License.
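The multi-tenancy step above boils down to deriving a per-tenant table name from the `:userId` route parameter. A minimal sketch of that logic, assuming the table base names from the feature list; the validation rule is an assumption added here because tenant IDs that reach SQL identifiers should be sanitized:

```javascript
// Sketch of the per-tenant table-name logic a Set node would compute.
// Base names come from the template description; the userId value and the
// identifier check are illustrative assumptions.
function tenantTable(base, userId) {
  // Allow only safe identifier characters so a crafted path segment
  // cannot inject SQL via the table name.
  if (!/^[A-Za-z0-9_]+$/.test(userId)) {
    throw new Error(`Invalid tenant id: ${userId}`);
  }
  return `${base}${userId}`;
}

const tables = ["agentmessages", "agenttasks", "agentstatus", "agentknowledge"]
  .map((base) => tenantTable(base, "acme42"));

console.log(tables); // four per-tenant table names
```

Pairing this with RLS gives two layers of isolation: even if a request reaches the wrong table name, row policies still scope reads and writes to the authenticated tenant.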
# Upsert huge documents in a vector store with Supabase and Notion

## Purpose

This workflow adds the capability to build a RAG pipeline on living data. Here, Notion is used as the knowledge base: whenever a page is updated, its embeddings are upserted in a Supabase vector store. The workflow can also be adapted fairly easily to PGVector, Pinecone, or Qdrant; for the latter two, use a custom HTTP Request node.

## Demo

[Watch the demo](https://youtu.be/ELAxebGmspY)

## How it works

1. A trigger checks every minute for changes in the Notion database. The manual polling approach improves accuracy and prevents changes from being lost between cached polling intervals.
2. Every updated page is then processed sequentially.
3. The vector database is searched using the Notion page ID stored in the metadata of each embedding. If old entries exist, they are deleted.
4. All blocks of the Notion database page are retrieved and combined into a single string.
5. The content is embedded and split into chunks if necessary. Metadata, including the Notion page ID, is added during storage for future reference.
6. A simple question-and-answer chain enables users to ask questions about the embedded content through the integrated chat function.

## Prerequisites

- To set up a new vector store in Supabase, follow this guide.
- Prepare a simple database in Notion, with each database page containing at least a title and some content in the blocks section. You can of course also connect this to an existing database of your choice.

## Setup

1. Select your credentials in the nodes that require them.
2. If you are on an n8n cloud plan, switch to the native Notion Trigger by activating it and deactivating the Schedule Trigger along with its subsequent Notion node.
3. Choose your Notion database in the first Notion-related node.
4. Adjust the chunk size and overlap in the Token Splitter to your preference.
5. Activate the workflow.

## How to use

Populate your Notion database with useful information and use the chat mode of this workflow to ask questions about it. Updates to a Notion page should quickly be reflected in future conversations.
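Steps 3 to 5 above implement "upsert" as delete-then-insert, keyed on the Notion page ID stored in each embedding's metadata. A minimal in-memory sketch of that pattern; the class and method names are illustrative, not the Supabase client API:

```javascript
// Toy vector store demonstrating the delete-then-insert upsert the
// workflow performs. Real embeddings and the Supabase API are omitted;
// only the metadata-keyed replacement logic is shown.
class VectorStore {
  constructor() { this.rows = []; }
  deleteByPageId(pageId) {
    this.rows = this.rows.filter((r) => r.metadata.notionPageId !== pageId);
  }
  insert(chunks, pageId) {
    for (const text of chunks) {
      this.rows.push({ text, metadata: { notionPageId: pageId } });
    }
  }
  upsertPage(pageId, chunks) {
    this.deleteByPageId(pageId); // drop outdated embeddings first
    this.insert(chunks, pageId);
  }
}

const store = new VectorStore();
store.upsertPage("page-1", ["chunk A", "chunk B"]);
store.upsertPage("page-1", ["chunk A v2"]); // page edited: old chunks replaced
console.log(store.rows.length); // 1
```

Keying on the page ID rather than the chunk text is what makes the store safe for "living" data: an edit that changes how a page splits into chunks still fully replaces the stale entries.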
# AI personal assistant for Google Tasks

## Who’s it for

This template is designed for anyone who wants to use Telegram as a personal AI assistant hub. If you often juggle tasks, emails, calendars, and expenses across multiple tools, this workflow consolidates everything into one seamless AI-powered agent.

## What it does

Jarvis listens to your Telegram messages (text or audio) and processes them with OpenAI. Based on your request, it can act on your tasks, emails, calendar, and expenses.
# Scrape Google Maps by area & generate outreach messages for lead generation

This n8n workflow automates lead extraction from Google Maps, enriches the data with AI, and stores the results for cold outreach. It uses the Bright Data community node and Bright Data MCP for scraping and AI message generation.

## How it works

1. **Form Submission:** The user provides a Google Maps starting location, a keyword, and a country.
2. **Bright Data Scraping:** The Bright Data community node triggers a Maps scraping job, monitors its progress, and downloads the results.
3. **AI Message Generation:** Uses Bright Data MCP with LLMs to create a personalized cold-call script and talking points for each lead.
4. **Database Storage:** Enriched leads and scripts are upserted to Supabase.

## How to use

Set up all the credentials, create your Postgres table, and submit the form. The rest happens automatically.

## Requirements

- An LLM account (OpenAI, Gemini, …) for API usage.
- A Bright Data account for API and MCP usage.
- A Supabase account (or another Postgres database) to store information.
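The "upserted to Supabase" step maps naturally onto a Postgres `INSERT ... ON CONFLICT`. A hedged sketch of what that statement could look like; the table and column names below are hypothetical and should be adapted to the table you create for this workflow:

```javascript
// Builds an illustrative upsert statement for enriched leads. place_id is
// assumed here as the natural key because a Google Maps listing has a
// stable identifier; re-running the scrape then updates rows in place
// instead of duplicating them.
function buildLeadUpsert(table) {
  return (
    `INSERT INTO ${table} (place_id, name, phone, cold_call_script) ` +
    `VALUES ($1, $2, $3, $4) ` +
    `ON CONFLICT (place_id) DO UPDATE SET ` +
    `name = EXCLUDED.name, phone = EXCLUDED.phone, ` +
    `cold_call_script = EXCLUDED.cold_call_script;`
  );
}

console.log(buildLeadUpsert("leads"));
```

Creating a unique constraint on the chosen key column is required for `ON CONFLICT` to work, so add it when you create the table.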
# Analyze images from forms using GPT-4o-mini Vision and deliver to Telegram

This workflow analyzes images submitted via a form using OpenAI Vision, then delivers the analysis result directly to your Telegram chat.

✅ **Use case examples:**

- Users submit screenshots for instant AI interpretation
- Automated document or receipt analysis with Telegram delivery
- Quick OCR or image classification workflows

## ⚙️ Setup Guide

1. **Form Submission Trigger**
   - Connect your form app (e.g. Typeform, Tally, or n8n’s own webhook form) to the *On form submission* trigger node.
   - Ensure it sends the image file or URL as input.
2. **OpenAI Vision Analysis**
   - In the OpenAI node, select the *Analyze Image* operation.
   - Provide your OpenAI API key and configure the prompt to instruct the model on what to analyze (e.g. “Describe this receipt in detail”).
3. **Set Telegram Chat ID**
   - Use this manual node to input your Telegram chat ID for delivery.
   - Alternatively, automate this with a database lookup or user session if building for multiple users.
4. **Telegram Delivery Node**
   - Connect your Telegram bot to n8n using your bot token.
   - Set up the *sendMessage* operation, using the analysis result from the previous node as the message text.
5. **Testing**
   - Click *Execute workflow*.
   - Submit an image via your form and confirm it is delivered to your Telegram chat as expected.
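Under the hood, the delivery step is a call to the Telegram Bot API's `sendMessage` method. A small sketch of the request the node assembles; the token and chat ID are placeholders:

```javascript
// Builds the sendMessage request the Telegram node ultimately issues.
// Bot token and chat ID here are placeholder values.
function buildSendMessage(botToken, chatId, analysisText) {
  return {
    url: `https://api.telegram.org/bot${botToken}/sendMessage`,
    body: { chat_id: chatId, text: analysisText },
  };
}

const req = buildSendMessage("123:ABC", "987654321", "Receipt total: $42.17");
console.log(req.url);
```

This is also the shape to reproduce if you swap the Telegram node for a plain HTTP Request node, for example to set extra options such as `parse_mode`.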
# Track real-time stock prices with Yahoo Finance, ScrapegraphAI, and Google Sheets

> This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This automated workflow monitors stock prices by scraping real-time data from Yahoo Finance. It uses a scheduled trigger to run at specified intervals, extracts key stock metrics using AI-powered extraction, formats the data through a custom Code node, and automatically saves the structured information to Google Sheets for tracking and analysis.

### Key Steps

1. **Scheduled Trigger:** Runs automatically at specified intervals to collect fresh stock data.
2. **AI-Powered Scraping:** Uses ScrapeGraphAI to intelligently extract stock information (symbol, current price, price change, change percentage, volume, and market cap) from Yahoo Finance.
3. **Data Processing:** Formats the extracted data through a custom Code node for optimal spreadsheet compatibility and handles both single- and multiple-stock formats.
4. **Automated Storage:** Saves all stock data to Google Sheets with proper column mapping for easy filtering, analysis, and historical tracking.

## Set up steps

**Setup time:** 5–10 minutes

1. **Configure Credentials:** Set up your ScrapeGraphAI API key and Google Sheets OAuth2 credentials.
2. **Customize Target:** Update the website URL in the ScrapeGraphAI node to your desired stock symbol (currently set to AAPL).
3. **Configure Schedule:** Set your preferred trigger frequency (daily, hourly, etc.) for stock price monitoring.
4. **Map Spreadsheet:** Connect to your Google Sheets document and configure column mapping for the stock data fields.

**Pro tips:**

- Keep detailed configuration notes in the sticky notes within the workflow.
- Test with a single stock first before scaling to multiple stocks.
- Consider modifying the Code node to handle different stock symbols or add additional data fields.
- Perfect for building a historical database of stock performance over time.
- Can be extended to track multiple stocks by modifying the ScrapeGraphAI prompt.
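The "handles both single and multiple stock formats" part of the Code node can be sketched as a small normalizer: the AI extraction may return one stock object or an array of them, so everything is coerced into an array of flat spreadsheet rows. The field names below are assumptions based on the metrics listed above, not the template's exact keys:

```javascript
// Normalizes ScrapeGraphAI output (single object or array) into flat rows
// ready for Google Sheets column mapping. Input field names are assumed.
function toRows(extracted) {
  const stocks = Array.isArray(extracted) ? extracted : [extracted];
  return stocks.map((s) => ({
    symbol: s.symbol,
    price: Number(s.current_price),
    change: Number(s.price_change),
    changePct: Number(s.change_percentage),
    scrapedAt: new Date().toISOString(), // timestamp enables historical tracking
  }));
}

console.log(toRows({ symbol: "AAPL", current_price: "189.30", price_change: "1.2", change_percentage: "0.64" }));
```

Converting the scraped strings to numbers up front keeps the sheet sortable and chartable instead of accumulating text cells.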
# Generate AI images & videos with KIE.AI Midjourney API

## Overview

Generate high-quality AI images and videos using KIE.AI's Midjourney API through an intuitive form interface. This n8n workflow supports three distinct content creation modes, delivering professional results with automated processing and real-time progress monitoring.

**Supported modes:**

- **Text-to-Image (`mj_txt2img`):** Generate original images from written descriptions
- **Image-to-Image (`mj_img2img`):** Transform or enhance existing images with AI
- **Image-to-Video (`mj_video`):** Animate still images into short video clips

Users interact only through a simple form interface, requiring no coding skills. After submitting a request, the system automatically calls the KIE.AI API, monitors progress in real time, and retrieves the final output once ready.

## Perfect for

Content creators, designers, marketers, and developers who need to quickly generate diverse AI visual content with automated processing and professional-quality results.

## Prerequisites

- **KIE.AI account:** Sign up at KIE.AI to obtain your free or paid API key.
- **n8n instance:** An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities.
- **AI prompt knowledge:** A basic understanding of AI prompts for optimal generation results.
- **Reference images (optional):** Publicly accessible image URLs for image-to-image or image-to-video generation.

## Quick Setup

### 1. Get API Key

Register at KIE.AI and generate your API key. Store it securely and never share it publicly.
### 2. Configure Form Fields

Set up these fields in your "On Form Submission" node:

- **tasktype (required):** Select the generation mode:
  - `mj_txt2img` for text-to-image generation
  - `mj_img2img` for image-to-image generation
  - `mj_video` for image-to-video generation
- **prompt (required):** Text description for your content generation
- **imgurl (optional):** Image URL array for image-to-image or image-to-video generation. *Important:* leave this empty for text-to-image generation (`mj_txt2img`).
- **api_key (required):** Your KIE.AI API key for authentication

### 3. Test & Use

1. Click "Execute Workflow" in n8n.
2. Access the generated form URL.
3. Fill in your details and submit.
4. Wait for processing (the workflow polls every 10 seconds).
5. Results are displayed automatically.

## Customization Tips

**Write detailed prompts.** Include specific details for better results:

- **Style:** realistic, anime, cinematic, watercolor, oil painting
- **Composition:** close-up, wide shot, portrait, landscape
- **Lighting:** dramatic, soft, neon, natural, studio
- **Subject details:** actions, scenes, movements, visual elements

Example enhanced prompt: "Cinematic portrait of a cyberpunk character with neon blue lighting, close-up composition, dramatic shadows, futuristic mood"

**Leverage multiple modes.**

- Start with text-to-image for initial concepts.
- Use image-to-image to refine and enhance results.
- Apply image-to-video to animate your best images.
- Combine modes for complex creative workflows.

**Common use cases:**

- Social media content creation
- Marketing material development
- Product visualization
- Storyboarding and prototyping
- Creative asset generation

## Troubleshooting

- **API key invalid:** Verify your key is correct and active.
- **Generation fails:** Check prompt length and content appropriateness.
- **Slow processing:** Video generation can take 2–5 minutes; this is normal.
- **Image URL issues:** Ensure URLs are publicly accessible and properly formatted.

**Keywords:** KIE.AI API, AI image generation, AI video generation, text-to-image, image-to-video, automated workflows, n8n template, AI content creation
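The "polls every 10 seconds" behavior is a generic wait-and-recheck loop. A minimal sketch, where `fetchStatus` stands in for the KIE.AI status request; the function name, the `state` field, and its values are assumptions, not the actual API schema:

```javascript
// Generic polling loop: re-check a task's status every intervalMs until it
// succeeds, fails, or the retry budget runs out. The status shape
// ({ state, result }) is an illustrative assumption.
async function pollUntilDone(fetchStatus, { intervalMs = 10_000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const status = await fetchStatus();
    if (status.state === "success") return status.result;
    if (status.state === "failed") throw new Error("generation failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before re-polling
  }
  throw new Error("timed out waiting for generation");
}

// Demo with a fake status endpoint that succeeds on the third poll.
let calls = 0;
const fake = async () =>
  ++calls < 3 ? { state: "processing" } : { state: "success", result: "https://example.com/out.png" };

pollUntilDone(fake, { intervalMs: 10 }).then((url) => console.log(url));
```

Capping `maxTries` matters in n8n: without it a stuck task would keep an execution alive indefinitely.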
# YouTube trends finding agent

## Who this is for

This workflow is for content creators, digital marketers, and YouTube strategists who want to automatically discover trending videos in their niche, analyze engagement metrics, and get data-driven insights for their content strategy — all from one simple form submission.

## What this workflow does

This workflow starts every time someone submits the YouTube Trends Finder form. It then:

1. Searches YouTube videos based on your topic and specified time range using the YouTube Data API.
2. Fetches detailed analytics (views, likes, comments, engagement rates) for each video found.
3. Calculates engagement rates and filters out low-performing content (below 2% engagement).
4. Applies smart filters to exclude videos with fewer than 1,000 views, content outside your timeframe, and hashtag-heavy titles.
5. Removes duplicate videos to ensure clean data.
6. Creates a Google spreadsheet with all trending-video data organized by performance metrics.
7. Delivers the results via a completion form with a direct link to your analytics report.

## Setup

To set this workflow up:

1. **Form Trigger:** Customize the "YouTube Trends Finder" form fields if needed (Topic Name, Last How Many Days).
2. **YouTube Data API:** Add your YouTube OAuth2 credentials and API key in the respective nodes.
3. **Google Sheets:** Connect your Google Sheets account for automatic report generation.
4. **Engagement Filters:** Adjust the 2% engagement-rate threshold based on your quality standards.
5. **View Filters:** Modify the minimum view count (currently 1,000+) in the filter conditions.
6. **Regional Settings:** Update the region code (currently "US") to target specific geographic markets.

## How to customize this workflow to your needs

- Change the engagement-rate threshold to be more or less strict based on your niche requirements.
- Add additional filters like video duration, subscriber count, or specific keywords to refine results.
- Modify the Google Sheets structure to include extra metrics like "Channel Name", "Video Duration", or "Trending Score".
- Switch to different output formats like CSV export or direct email reports instead of Google Sheets.
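The filtering steps above can be sketched as a single predicate. The template doesn't show its exact engagement formula; a common definition, assumed here, is (likes + comments) / views, and the hashtag cutoff is likewise an illustrative assumption:

```javascript
// One-pass trend filter mirroring the description: minimum views, minimum
// engagement rate, and a cap on hashtag-heavy titles. Thresholds match the
// template defaults (1,000 views, 2% engagement); maxHashtags is assumed.
function isTrending(video, { minViews = 1000, minEngagement = 0.02, maxHashtags = 3 } = {}) {
  if (video.views < minViews) return false;
  const engagement = (video.likes + video.comments) / video.views;
  if (engagement < minEngagement) return false;
  // Drop hashtag-heavy titles.
  const hashtags = (video.title.match(/#/g) || []).length;
  return hashtags <= maxHashtags;
}

console.log(isTrending({ title: "My video", views: 5000, likes: 200, comments: 30 })); // true
```

Tuning the options object is the programmatic equivalent of steps 4 and 5 in the setup list: tighten `minEngagement` for strict niches, lower `minViews` for small ones.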
# Create social media videos with Sora 2 AI for marketing & content creation

## Overview

This workflow uses the Defapi API with the Sora 2 AI model to generate stunning viral videos with creative AI-generated motion, effects, and storytelling. Simply provide a creative prompt describing your desired video scene, and optionally upload an image as a reference. The AI generates professional-quality video content, perfect for TikTok, YouTube, marketing campaigns, and creative projects.

**Input:** Creative prompt (required) + optional image
**Output:** AI-generated viral video ready for social media and content marketing

Users interact through a simple form, providing a text prompt describing the desired video scene and optionally uploading an image for context. The system automatically submits the request to the Defapi Sora 2 API, monitors the generation status in real time, and retrieves the final video output.

This solution is ideal for content creators, social media marketers, video producers, and businesses that want to quickly generate engaging video content with minimal setup.

## Prerequisites

- **A Defapi account and API key:** Sign up at Defapi.org to obtain your API key for Sora 2 access.
- **An active n8n instance** (cloud or self-hosted) with HTTP Request and form submission capabilities.
- **Basic knowledge of AI prompts** for video generation to achieve optimal results.
  - Example prompt: "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants."
  - For 15-second HD videos, prefix your prompt with `(15s,hd)`.
- **(Optional) An image** to use as a reference or starting point for video generation.
  - *Image restrictions:* Avoid uploading images with real people or highly realistic human faces, as they will be rejected during content review.

**Important notes:**

- The API requires proper authentication via Bearer token for all requests.
- Content undergoes multi-stage moderation.
Avoid violence, adult content, copyrighted material, and living celebrities in both prompts and images.

## Setup Instructions

1. **Obtain API key:** Register at Defapi.org and generate your API key with Sora 2 access. Store it securely; do not share it publicly.
2. **Configure credentials:** In n8n, create HTTP Bearer Auth credentials named "Defapi account" with your API key.
3. **Configure the form:** In the "Upload Image" form trigger node, ensure the following fields are set up:
   - **Prompt** (text field, required): describe the video scene you want to generate.
   - **Image** (file upload, optional): optionally upload a .jpg, .png, or .webp image file as a reference.
4. **Test the workflow:**
   - Click "Execute Workflow" in n8n to activate the form trigger.
   - Access the generated form URL and enter your creative video prompt. Optionally upload an image for additional context.
   - The workflow processes any uploaded image through the "Convert to JSON" node, converting it to base64 format.
   - The request is sent to the Sora 2 API endpoint at Defapi.org.
   - The system waits 10 seconds and then polls the API status until video generation is complete.
5. **Handle outputs:** The final "Format and Display Results" node formats and displays the generated video URL for download or embedding.
## Workflow Structure

The workflow consists of the following nodes:

1. **Upload Image** (Form Trigger): collects user input, a creative prompt (required) and an optional image file.
2. **Convert to JSON** (Code node): converts any uploaded image to a base64 data URI and formats the prompt.
3. **Send Sora 2 Generation Request to Defapi.org API** (HTTP Request): submits the video generation request to the Sora 2 API.
4. **Wait for Processing Completion** (Wait node): waits 10 seconds before checking the status.
5. **Obtain the generated status** (HTTP Request): polls the API task-query endpoint for completion status.
6. **Check if Image Generation is Complete** (IF node): checks whether the status equals `success`.
7. **Format and Display Results** (Set node): extracts and formats the final video URL output.

## Technical Details

- **API endpoint:** `https://api.defapi.org/api/sora2/gen` (POST request)
- **Model used:** Sora 2 AI video generation model
- **Video capabilities:** supports 15-second videos and high-definition (HD) output
- **Status check endpoint:** `https://api.defapi.org/api/task/query` (GET request)
- **Wait time:** 10 seconds between status checks
- **Image processing:** if an image is uploaded, it is converted to base64 data URI format (`data:image/[type];base64,[data]`) for API submission
- **Authentication:** Bearer token authentication using the configured "Defapi account" credentials

**Request body format:**

```json
{
  "prompt": "Your video description here",
  "images": ["data:image/jpeg;base64,..."]
}
```

Note: the `images` array can contain an image or be empty if no image is provided.

**Response format:** the API returns a `task_id`, which is used to poll for completion status. The final result contains `data.result.video` with the video URL.
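The "Convert to JSON" step and the request body format combine into a small transformation. A sketch, using a placeholder buffer where the real workflow receives the uploaded file's bytes:

```javascript
// Turns uploaded image bytes into the data-URI form the request body
// expects, then assembles the body. The placeholder buffer stands in for
// the form upload.
function toDataUri(buffer, mimeType) {
  return `data:${mimeType};base64,${buffer.toString("base64")}`;
}

function buildRequestBody(prompt, imageBuffer, mimeType) {
  return {
    prompt,
    // The images array may be empty when no reference image was uploaded.
    images: imageBuffer ? [toDataUri(imageBuffer, mimeType)] : [],
  };
}

const body = buildRequestBody("(15s,hd) Floating islands at sunset", Buffer.from("fakeimg"), "image/png");
console.log(body.images[0].slice(0, 22)); // "data:image/png;base64,"
```

Embedding the image as a data URI keeps the request self-contained, so the API never needs to fetch your image from a public URL.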
- **Accepted image formats:** .jpg, .png, .webp
- **Specialized for:** viral video content, social media videos, creative video marketing

## Customization Tips

**Enhance prompts.** Include specifics like:

- Scene description and action sequences
- Character behaviors and emotions
- Camera movements and angles (e.g., slow motion, dramatic zoom)
- Audio/music style (e.g., dramatic, upbeat, cinematic)
- Visual effects and atmosphere
- Timing and pacing details

**Enable 15s and HD output.** To generate 15-second high-definition videos, start your prompt with `(15s,hd)`. For example: `(15s,hd) A pack of dogs driving tiny cars in a high-speed chase through a city...`

## Content Moderation

The API implements a three-stage content review process:

1. **Image review:** rejects images with real people or highly realistic human faces.
2. **Prompt filtering:** checks for violence, adult content, copyrighted material, and living celebrities.
3. **Output review:** a final check after generation (often the cause of failures at 90%+ completion).

**Best practices:**

- Avoid real human photos; use illustrations or cartoons instead.
- Keep prompts generic; avoid brand names and celebrity names.
- You can reference verified Sora accounts (e.g., "let @sama dance").
- If generation fails at 90%+, simplify your prompt and try again.

## Example Prompts

- "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants."
- "(15s,hd) Animated fantasy landscape with floating islands, waterfalls cascading into clouds, magical creatures flying, golden sunset lighting, epic orchestral music."
- "(15s,hd) Product showcase with 360-degree rotation, dramatic lighting changes, particle effects, modern electronic background music."
## Use Cases

- **Social media content:** generate eye-catching videos for Instagram Reels, TikTok, and YouTube Shorts.
- **Marketing campaigns:** create unique promotional videos from product images.
- **Creative projects:** transform static images into dynamic storytelling videos.
- **Content marketing:** produce engaging video content without expensive production costs.
- **Viral content creation:** generate shareable, attention-grabbing videos for maximum engagement.
# Create AI-ready vector datasets for LLMs with Bright Data, Gemini & Pinecone

## Who this is for

This workflow enables automated, scalable collection of high-quality, AI-ready data from websites using Bright Data’s Web Unlocker, with a focus on preparing that data for LLM training. Leveraging LLM chains and AI agents, the system formats and extracts key information, then stores the structured embeddings in a Pinecone vector database.

This workflow is tailored for:

- **ML engineers & researchers** building or fine-tuning domain-specific LLMs.
- **AI startups** needing clean, structured content for product training.
- **Data teams** preparing knowledge bases for enterprise-grade AI apps.
- **LLM-as-a-Service providers** sourcing dynamic web content across niches.

## What problem is this workflow solving?

Training a large language model (LLM) requires vast amounts of clean, relevant, and structured data. Manual collection is slow, error-prone, and lacks scalability. This workflow:

- Automatically extracts web data from specified URLs.
- Bypasses anti-bot measures using Bright Data’s Web Unlocker.
- Formats, cleans, and transforms raw content using LLM agents.
- Stores semantically searchable vectors in Pinecone.
- Makes datasets AI-ready for fine-tuning, RAG, or domain-specific training.

## What this workflow does

This workflow automates the process of collecting, cleaning, and vectorizing web content to create structured, high-quality datasets that are ready to be used for LLM (large language model) training or retrieval-augmented generation (RAG):

1. Web crawling with the Bright Data Web Unlocker.
2. AI information extraction and data formatting.
3. AI data formatting to produce JSON-structured data.
4. Persistence in the Pinecone vector DB.
5. Webhook notification of the structured data.

## Setup

- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to `Bearer XXXXXXXXXXXXXX`.
The `XXXXXXXXXXXXXX` placeholder should be replaced with the Web Unlocker token.

- Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
- Update the LinkedIn URL by navigating to the "Set LinkedIn URL" node.
- Update the "Set Fields - URL and Webhook URL" node with the URL for web data extraction and the webhook notification URL.

## How to customize this workflow to your needs

- **Set your target URLs.** Target sites that are high-quality, domain-specific, and relevant to your LLM's purpose.
- **Adjust Bright Data Web Unlocker settings:** geo-location, headers/User-Agent strings, retry rules, and proxies.
- **Modify the information extraction logic.** Change prompts to extract specific attributes; use structured templates or few-shot examples in prompts.
- **Swap the embedding model.** Use OpenAI, Hugging Face, or your own hosted embedding-model API.
- **Customize Pinecone metadata fields.** Store extra fields in Pinecone for better filtering and semantic querying.
- **Add data validation or deduplication.** Skip duplicates or low-quality content.
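The "customize Pinecone metadata fields" tip refers to the record shape the workflow persists: an ID, an embedding vector, and a metadata object used later for filtering and semantic queries. A sketch of that shape; the metadata field names are assumptions, and the vector here is a stub rather than a Gemini embedding:

```javascript
// Builds an illustrative Pinecone-style record: id + embedding values +
// metadata. Extra metadata fields (sourceUrl, text) are what enable
// filtered semantic queries and later deduplication.
function toPineconeRecord(id, values, sourceUrl, text) {
  return {
    id,
    values,                        // embedding vector (stubbed here)
    metadata: { sourceUrl, text }, // add domain, crawl date, etc. as needed
  };
}

const record = toPineconeRecord("doc-1#chunk-0", [0.12, -0.03, 0.5], "https://example.com/about", "Example Corp builds...");
console.log(record.metadata.sourceUrl);
```

Including the source URL in metadata also gives downstream RAG answers a citation trail back to the crawled page.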
# Automated company research & lead enrichment with GPT-4o and Google Sheets

Supercharge your sales and marketing with this AI-powered workflow! 🚀 Stop wasting hours on manual company research. This template deploys an autonomous AI agent that takes a list of company names from a Google Sheet, scours the web to find critical information, and automatically updates your sheet with the enriched data.

## What it does

1. Reads a list of companies to research from a Google Sheet.
2. Uses an AI agent equipped with Google Search and web scraping tools.
3. Extracts key data points like LinkedIn URLs, pricing details, integrations, market focus (B2B/B2C), and more.
4. Structures the output into a clean JSON object.
5. Updates the original Google Sheet with the new information.

## Key Features & Customization

This workflow is built to be easily customized. You can modify the AI's prompt in the "AI company researcher" node and adjust the "Structured Output Parser" to gather any public data point you need, such as recent news, key executives, or their technology stack.

## Required Credentials

- OpenAI
- Google Sheets
- SerpApi or ScrapingBee for search capabilities
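The "clean JSON object" the Structured Output Parser enforces might look like the sketch below. The field names follow the data points listed above but are assumptions, not the template's exact schema, and the validator is an illustrative stand-in for the parser's schema check:

```javascript
// Example of the enriched-row shape, plus a minimal validator in the
// spirit of the Structured Output Parser: reject objects missing required
// keys before writing back to the sheet. All field names are assumed.
const exampleOutput = {
  company: "Acme Inc.",
  linkedinUrl: "https://www.linkedin.com/company/acme",
  pricing: "Freemium, paid plans from $29/mo",
  integrations: ["Slack", "Salesforce"],
  marketFocus: "B2B",
};

function isValid(row) {
  return ["company", "linkedinUrl", "marketFocus"].every((k) => typeof row[k] === "string");
}

console.log(isValid(exampleOutput)); // true
```

Validating before the sheet update is the cheap way to keep a hallucinated or partial agent response from overwriting a good row.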
# Handle GDPR data deletion requests with Slack

This workflow automatically deletes user data from different apps/services when a specific slash command is issued in Slack. Watch this talk and demo to learn more about this use case. The demo uses Slack, but Mattermost is Slack-compatible, so you can also connect Mattermost in this workflow.

## Prerequisites

- Accounts and credentials for the apps/services you want to use.
- Some basic knowledge of JavaScript.

## Nodes

- A **Webhook** node triggers the workflow when a Slack slash command is issued.
- **IF** nodes confirm Slack's verification token and verify that the data has the expected format.
- A **Set** node simplifies the payload.
- A **Switch** node chooses the correct path for the operation to perform.
- **Respond to Webhook** nodes send responses back to Slack.
- **Execute Workflow** nodes call sub-workflows tailored to deleting data from each individual service.
- A **Function** node, **Crypto** node, and **Airtable** node generate and store a log entry containing a hash value.
- An **HTTP Request** node sends the final response back to Slack.