Interactive knowledge base chat with Supabase RAG using AI 📚💬
Google Drive File Ingestion to Supabase for Knowledge Base 📂💾

## Overview 🌟

This n8n workflow automates the process of ingesting files from Google Drive into a Supabase database, preparing them for a knowledge base system. It supports text-based files (PDF, DOCX, TXT, etc.) and tabular data (XLSX, CSV, Google Sheets), extracting content, generating embeddings, and storing data in structured tables. This is a foundational workflow for building a company knowledge base that can be queried via a chat interface (e.g., using a RAG workflow). 🚀

## Problem Solved 🎯

Manually managing a knowledge base with files from Google Drive is time-consuming and error-prone. This workflow solves that by:

- Automatically ingesting files from Google Drive as they are created or updated.
- Extracting content from various file types (text and tabular).
- Generating embeddings for text-based files to enable vector search.
- Storing data in Supabase for efficient retrieval.
- Handling duplicates and errors to ensure data consistency.

Target Audience:

- Knowledge Managers: Build a centralized knowledge base from company files.
- Data Teams: Automate the ingestion of spreadsheets and documents.
- Developers: Integrate with other workflows (e.g., RAG for querying the knowledge base).

## Workflow Description 🔍

This workflow listens for new or updated files in Google Drive, processes them based on their type, and stores the extracted data in Supabase tables for later retrieval. Here’s how it works:

1. File Detection: Triggers when a file is created or updated in Google Drive.
2. File Processing: Loops through each file, extracts metadata, and validates the file type.
3. Duplicate Check: Ensures the file hasn’t been processed before.
4. Content Extraction:
   - Text-based files: Downloads the file, extracts text, splits it into chunks, generates embeddings, and stores the chunks in Supabase.
   - Tabular files: Extracts data from spreadsheets and stores it as rows in Supabase.
5. Metadata Storage: Stores file metadata and basic info in Supabase tables.
6. Error Handling: Logs errors for unsupported formats or duplicates.

## Nodes Breakdown 🛠️

### Detect New File 🔔
- Type: Google Drive Trigger
- Purpose: Triggers the workflow when a new file is created in Google Drive.
- Configuration: Credential: Google Drive OAuth2; Event: File Created
- Customization: Specify a folder to monitor specific directories.

### Detect Updated File 🔔
- Type: Google Drive Trigger
- Purpose: Triggers the workflow when a file is updated in Google Drive.
- Configuration: Credential: Google Drive OAuth2; Event: File Updated
- Customization: Currently disconnected; reconnect it if updates need to be processed.

### Process Each File 🔄
- Type: Loop Over Items
- Purpose: Processes each file individually from the Google Drive trigger.
- Configuration: Input: {{ $json.files }}
- Customization: Adjust the batch size if processing multiple files at once.

### Extract File Metadata 🆔
- Type: Set
- Purpose: Extracts metadata such as file_id, file_name, mime_type, and webview_link.
- Configuration: Fields:
  - file_id: {{ $json.id }}
  - file_name: {{ $json.name }}
  - mime_type: {{ $json.mimeType }}
  - webview_link: {{ $json.webViewLink }}
- Customization: Add more metadata fields if needed (e.g., size, createdTime).

### Check File Type ✅
- Type: IF
- Purpose: Validates the file type by checking the MIME type.
- Configuration: Condition: mime_type contains supported types (e.g., application/pdf, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet).
- Customization: Add more supported MIME types as needed.
### Find Duplicates 🔍
- Type: Supabase
- Purpose: Checks whether the file has already been processed by querying knowledge_base.
- Configuration: Operation: Select; Table: knowledge_base; Filter: file_id = {{ $node['Extract File Metadata'].json.file_id }}
- Customization: Add additional duplicate checks (e.g., by file name).

### Handle Duplicates 🔄
- Type: IF
- Purpose: Routes the workflow based on whether a duplicate is found.
- Configuration: Condition: {{ $node['Find Duplicates'].json.length > 0 }}
- Customization: Add notifications for duplicates if desired.

### Remove Old Text Data 🗑️
- Type: Supabase
- Purpose: Deletes old text data from documents if the file is a duplicate.
- Configuration: Operation: Delete; Table: documents; Filter: metadata->>'file_id' = {{ $node['Extract File Metadata'].json.file_id }}
- Customization: Add logging before deletion.

### Remove Old Data 🗑️
- Type: Supabase
- Purpose: Deletes old tabular data from document_rows if the file is a duplicate.
- Configuration: Operation: Delete; Table: document_rows; Filter: dataset_id = {{ $node['Extract File Metadata'].json.file_id }}
- Customization: Add logging before deletion.

### Route by File Type 🔀
- Type: Switch
- Purpose: Routes the workflow based on the file’s MIME type (text-based or tabular).
- Configuration: Rules based on mime_type (e.g., application/pdf for text, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet for tabular).
- Customization: Add more routes for additional file types.

### Download File Content 📥
- Type: Google Drive
- Purpose: Downloads the file content for text-based files.
- Configuration: Credential: Google Drive OAuth2; File ID: {{ $node['Extract File Metadata'].json.file_id }}
- Customization: Add error handling for download failures.

### Extract PDF Text 📜
- Type: Extract from File (PDF)
- Purpose: Extracts text from PDF files.
- Configuration: File Content: {{ $node['Download File Content'].binary.data }}
- Customization: Adjust extraction settings for better accuracy.

### Extract DOCX Text 📜
- Type: Extract from File (DOCX)
- Purpose: Extracts text from DOCX files.
- Configuration: File Content: {{ $node['Download File Content'].binary.data }}
- Customization: Add support for other text formats (e.g., TXT, RTF).

### Extract XLSX Data 📊
- Type: Extract from File (XLSX)
- Purpose: Extracts tabular data from XLSX files.
- Configuration: File ID: {{ $node['Extract File Metadata'].json.file_id }}
- Customization: Add support for CSV or Google Sheets.

### Split Text into Chunks ✂️
- Type: Text Splitter
- Purpose: Splits extracted text into manageable chunks for embedding (a plain-JavaScript equivalent is sketched below).
- Configuration: Chunk Size: 1000; Chunk Overlap: 200
- Customization: Adjust chunk size and overlap based on document length.
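If you'd rather do the splitting in a plain Code node, the splitter's behavior can be approximated with fixed-size chunking. A minimal sketch, assuming the extracted text arrives in a `text` field (the 1000/200 values mirror the configuration above):

```javascript
// Minimal sketch: split extracted text into overlapping chunks for embedding.
// Mirrors the Chunk Size = 1000 / Chunk Overlap = 200 settings above.
function splitIntoChunks(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  const step = chunkSize - overlap; // advance 800 chars per chunk
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached
  }
  return chunks;
}

// n8n Code node usage: emit one output item per chunk.
const text = $input.first().json.text ?? '';
return splitIntoChunks(text).map((chunk) => ({ json: { content_text: chunk } }));
```

The overlap preserves context that would otherwise be cut at chunk boundaries, which generally improves retrieval quality at the cost of some storage.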
### Generate Text Embeddings 🌐
- Type: OpenAI
- Purpose: Generates embeddings for text chunks using OpenAI.
- Configuration: Credential: OpenAI API key; Operation: Embedding; Model: text-embedding-ada-002
- Customization: Switch to a different embedding model if needed.

### Store Text in Supabase 💾
- Type: Supabase Vector Store
- Purpose: Stores text chunks and embeddings in the documents table.
- Configuration: Credential: Supabase credentials; Operation: Insert Documents; Table Name: documents
- Customization: Add metadata fields to store additional context.

### Store Tabular Data 💾
- Type: Supabase
- Purpose: Stores tabular data in the document_rows table.
- Configuration: Operation: Insert; Table: document_rows; Columns: dataset_id, row_data
- Customization: Add validation for tabular data structure.

### Store File Metadata 📋
- Type: Supabase
- Purpose: Stores file metadata in the document_metadata table.
- Configuration: Operation: Insert; Table: document_metadata; Columns: file_id, file_name, file_type, file_url
- Customization: Add more metadata fields as needed.

### Record in Knowledge Base 📚
- Type: Supabase
- Purpose: Stores basic file info in the knowledge_base table.
- Configuration: Operation: Insert; Table: knowledge_base; Columns: file_id, file_name, file_type, file_url, upload_date
- Customization: Add indexes for faster lookups.

### Log File Errors ⚠️
- Type: Supabase
- Purpose: Logs errors for unsupported file types.
- Configuration: Operation: Insert; Table: error_log; Columns: error_type, error_message
- Customization: Add notifications for errors.

### Log Duplicate Errors ⚠️
- Type: Supabase
- Purpose: Logs errors for duplicate files.
- Configuration: Operation: Insert; Table: error_log; Columns: error_type, error_message
- Customization: Add notifications for duplicates.

Interactive Knowledge Base Chat with Supabase RAG using GPT-4o-mini 📚💬

## Introduction 🌟

This n8n workflow creates an interactive chat interface that allows users to query a company knowledge base using Retrieval-Augmented Generation (RAG). It retrieves relevant information from text documents and tabular data stored in Supabase, then generates natural language responses using OpenAI’s GPT-4o-mini model. Designed for teams managing internal knowledge, this workflow enables users to ask questions like “What’s the remote work policy?” or “Show me the latest budget data” and receive accurate, context-aware responses in a conversational format. 🚀

## Problem Statement 🎯

Managing a company knowledge base can be a daunting task: employees often struggle to find specific information buried in documents or spreadsheets, leading to wasted time and inefficiencies. Traditional search methods may not understand natural language queries or provide contextually relevant results. This workflow solves these issues by:

- Offering a chat-based interface for natural language queries, making it easy for users to ask questions in their own words.
- Leveraging RAG to retrieve relevant text and tabular data from Supabase, ensuring responses are accurate and context-aware.
- Supporting diverse file types, including text-based files (e.g., PDFs, DOCX) and tabular data (e.g., XLSX, CSV), for comprehensive knowledge access.
- Maintaining conversation history to provide context during interactions, improving the user experience.

## Target Audience 👥

This workflow is ideal for:

- HR Teams: Quickly access company policies, employee handbooks, or benefits documents.
- Finance Teams: Retrieve budget data, expense reports, or financial summaries from spreadsheets.
- Knowledge Managers: Build a centralized assistant for internal documentation, streamlining information access.
- Developers: Extend the workflow with additional tools or integrations for custom use cases.

## Workflow Description 🔍

This workflow consists of a chat interface powered by n8n’s Chat Trigger node, an AI Agent node for RAG, and several tools to retrieve data from Supabase. Here’s how it works step by step:

1. User Initiates a Chat: The user interacts with a chat interface, sending queries like “Summarize our remote work policy” or “Show budget data for Q1 2025.”
2. Query Processing with RAG: The AI Agent processes the query using RAG, retrieving relevant data from Supabase tables and generating a response with OpenAI’s GPT-4o-mini model.
3. Data Retrieval and Response Generation: The workflow uses multiple tools to fetch data:
   - Retrieves text chunks from the documents table using vector search.
   - Fetches tabular data from the document_rows table based on file IDs.
   - Extracts full document text or lists available files as needed.
   - Generates a natural language response combining the retrieved data.
4. Conversation History Management: Stores the conversation history in Supabase to maintain context for follow-up questions.
5. Response Delivery: Formats and sends the response back to the chat interface for the user to view.

## Nodes Breakdown 🛠️

### Start Chat Interface 💬
- Type: Chat Trigger
- Purpose: Provides the interactive chat interface for users to input queries and receive responses.
- Configuration:
  - Chat Title: Company Knowledge Base Assistant
  - Chat Subtitle: Ask me anything about company documents!
  - Welcome Message: Hello! I’m your Company Knowledge Base Assistant. How can I help you today?
  - Suggestions: “What is the company policy on remote work?”, “Show me the latest budget data.”, “List all policy documents.”
  - Output Chat Session ID: true
  - Output User Message: true
- Customization: Update the title and welcome message to align with your company branding (e.g., HR Knowledge Assistant). Add more suggestions relevant to your use case (e.g., “What are the company benefits?”).

### Process Query with RAG 🧠
- Type: AI Agent
- Purpose: Orchestrates the RAG process by retrieving relevant data using tools and generating responses with OpenAI’s GPT-4o-mini.
- Configuration:
  - Credential: OpenAI API key
  - Model: gpt-4o-mini
  - System Prompt: You are a helpful assistant for a company knowledge base. Use the provided tools to retrieve relevant information from documents and tabular data. If the query involves tabular data, format it clearly in your response. If no relevant data is found, respond with "I couldn’t find any relevant information. Can you provide more details?"
  - Input Field: {{ $node['Start Chat Interface'].json.message }}
- Customization: Switch to a different model (e.g., gpt-3.5-turbo) to adjust cost or performance. Modify the system prompt to change the tone (e.g., more formal for HR use cases).

### Retrieve Text Chunks 📄
- Type: Supabase Vector Store (Tool)
- Purpose: Retrieves relevant text chunks from the documents table using vector search.
- Configuration:
  - Credential: Supabase credentials
  - Operation Mode: Retrieve Documents (As Tool for AI Agent)
  - Table Name: documents
  - Embedding Field: embedding
  - Content Field: content_text
  - Metadata Field: metadata
  - Embedding Model: OpenAI text-embedding-ada-002
  - Top K: 10
- Customization: Adjust Top K to retrieve more or fewer results (e.g., 5 for faster responses). Ensure the match_documents function (see prerequisites) is defined in Supabase.

### Fetch Tabular Data 📊
- Type: Supabase (Tool, Execute Query)
- Purpose: Retrieves tabular data from the document_rows table based on a file ID.
- Configuration:
  - Credential: Supabase credentials
  - Operation: Execute Query
  - Query: SELECT row_data FROM document_rows WHERE dataset_id = $1 LIMIT 10
  - Tool Description: Run a SQL query; use this to query the document_rows table once you know the file ID you are querying. dataset_id is the file_id, and filtering always uses row_data, a jsonb field that holds all the keys from the file schema given in the document_metadata table.
- Customization: Modify the query to filter specific columns or add conditions (e.g., WHERE dataset_id = $1 AND row_data->>'year' = '2025'). Increase the LIMIT for larger datasets.

### Extract Full Document Text 📜
- Type: Supabase (Tool, Execute Query)
- Purpose: Fetches the full text of a document by concatenating all text chunks for a given file_id.
- Configuration:
  - Credential: Supabase credentials
  - Operation: Execute Query
  - Query: SELECT string_agg(content_text, ' ') AS document_text FROM documents WHERE metadata->>'file_id' = $1 GROUP BY metadata->>'file_id'
  - Tool Description: Given a file ID, fetch the text from the documents table.
- Customization: Add filters to the query if needed (e.g., limit to specific metadata fields).

### List Available Files 📋
- Type: Supabase (Tool, Select)
- Purpose: Lists all files in the knowledge base from the document_metadata table.
- Configuration:
  - Credential: Supabase credentials
  - Operation: Select
  - Schema: public
  - Table: document_metadata
  - Tool Description: Use this tool to fetch all documents, including the table schema if the file is CSV, Excel, or XLSX.
- Customization: Add filters to list specific file types (e.g., WHERE file_type = 'application/pdf'). Modify the columns selected to include additional metadata (e.g., file_size).

### Manage Chat History 💾
- Type: Postgres Chat Memory (Tool)
- Purpose: Stores and retrieves conversation history to maintain context.
- Configuration:
  - Credential: Supabase credentials (Postgres-compatible)
  - Table Name: n8n_chat_history
  - Session ID Field: session_id
  - Session ID Value: {{ $node['Start Chat Interface'].json.sessionId }}
  - Message Field: message
  - Sender Field: sender
  - Timestamp Field: timestamp
  - Context Window Length: 5
- Customization: Increase the context window length for longer conversations (e.g., 10 messages). Add indexes on session_id and timestamp in Supabase for better performance.

### Format and Send Response 📤
- Type: Set
- Purpose: Formats the AI Agent’s response and sends it back to the chat interface.
- Configuration: Fields: response: {{ $node['Process Query with RAG'].json.output }}
- Customization: Add additional formatting to the response if needed (e.g., prepend a timestamp or apply Markdown formatting).

## Setup Instructions 🛠️

### Prerequisites 📋

1. n8n Setup: Ensure you’re using n8n version 1.0 or higher, and enable the AI features in n8n settings.
2. Supabase: Create a Supabase project and set up the following tables:
   - documents: id (uuid), content_text (text), embedding (vector(1536)), metadata (jsonb)
   - document_rows: id (uuid), dataset_id (varchar), row_data (jsonb)
   - document_metadata: file_id (varchar), file_name (varchar), file_type (varchar), file_url (text)
   - knowledge_base: id (serial), file_id (varchar), file_name (varchar), file_type (varchar), file_url (text), upload_date (timestamp)
   - n8n_chat_history: id (serial), session_id (varchar), message (text), sender (varchar), timestamp (timestamp)
3. Add the match_documents function to Supabase to enable vector search:

```sql
CREATE OR REPLACE FUNCTION match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT 5,
  filter jsonb DEFAULT '{}'
) RETURNS TABLE (
  id uuid,
  content_text text,
  metadata jsonb,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    documents.id,
    documents.content_text,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) AS similarity
  FROM documents
  WHERE documents.metadata @> filter
  ORDER BY similarity DESC
  LIMIT match_count;
END;
$$;
```
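If you want to verify the vector search before wiring it into the agent tools, you can call the function directly with supabase-js. A minimal sketch, assuming the Supabase URL/service key and OpenAI key are supplied via environment variables:

```javascript
// Minimal sketch: call match_documents directly to verify vector search
// outside n8n. Assumes SUPABASE_URL, SUPABASE_SERVICE_KEY, and
// OPENAI_API_KEY environment variables are set.
import { createClient } from '@supabase/supabase-js';
import OpenAI from 'openai';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_KEY);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function searchKnowledgeBase(question, topK = 5) {
  // Embed the question with the same model used at ingestion time.
  const res = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: question,
  });
  const queryEmbedding = res.data[0].embedding;

  // RPC into the Postgres function defined above.
  const { data, error } = await supabase.rpc('match_documents', {
    query_embedding: queryEmbedding,
    match_count: topK,
    filter: {},
  });
  if (error) throw error;
  return data; // [{ id, content_text, metadata, similarity }, ...]
}
```

Because ingestion and retrieval must use the same embedding model, keep the model name here in sync with the Generate Text Embeddings node.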
Create social media videos with Sora 2 AI for marketing & content creation
## Overview

This workflow uses the Defapi API with the Sora 2 AI model to generate viral videos with creative AI-generated motion, effects, and storytelling. Simply provide a creative prompt describing your desired video scene, and optionally upload an image as a reference. The AI generates professional-quality video content suited to TikTok, YouTube, marketing campaigns, and creative projects.

Input: Creative prompt (required) + optional image
Output: AI-generated viral video ready for social media and content marketing

Users interact through a simple form, providing a text prompt describing the desired video scene and optionally uploading an image for context. The system automatically submits the request to the Defapi Sora 2 API, monitors the generation status in real time, and retrieves the final video output. This solution is ideal for content creators, social media marketers, video producers, and businesses who want to quickly generate engaging video content with minimal setup.

## Prerequisites

- A Defapi account and API key: Sign up at Defapi.org to obtain your API key for Sora 2 access.
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities.
- Basic knowledge of AI prompts for video generation to achieve optimal results. Example prompt: "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants." For 15-second HD videos, prefix your prompt with (15s,hd).
- (Optional) An image to use as a reference or starting point for video generation. Image restrictions: avoid uploading images with real people or highly realistic human faces, as they will be rejected during content review.

Important notes: The API requires proper authentication via Bearer token for all requests. Content undergoes multi-stage moderation; avoid violence, adult content, copyrighted material, and living celebrities in both prompts and images.

## Setup Instructions

1. Obtain API Key: Register at Defapi.org and generate your API key with Sora 2 access. Store it securely; do not share it publicly.
2. Configure Credentials: In n8n, create HTTP Bearer Auth credentials named "Defapi account" with your API key.
3. Configure the Form: In the "Upload Image" form trigger node, ensure the following fields are set up:
   - Prompt (text field, required): describe the video scene you want to generate.
   - Image (file upload, optional): optionally upload a .jpg, .png, or .webp image file as a reference.
4. Test the Workflow:
   - Click "Execute Workflow" in n8n to activate the form trigger.
   - Access the generated form URL and enter your creative video prompt; optionally upload an image for additional context.
   - The workflow processes any uploaded image through the "Convert to JSON" node, converting it to base64 format.
   - The request is sent to the Sora 2 API endpoint at Defapi.org.
   - The system waits 10 seconds and then polls the API status until video generation is complete.
5. Handle Outputs: The final "Format and Display Results" node formats and displays the generated video URL for download or embedding.
## Workflow Structure

The workflow consists of the following nodes:

1. Upload Image (Form Trigger): Collects user input: a creative prompt (required) and an optional image file.
2. Convert to JSON (Code Node): Converts any uploaded image to a base64 data URI and formats the prompt.
3. Send Sora 2 Generation Request to Defapi.org API (HTTP Request): Submits the video generation request to the Sora 2 API.
4. Wait for Processing Completion (Wait Node): Waits 10 seconds before checking status.
5. Obtain the generated status (HTTP Request): Polls the API task query endpoint for completion status.
6. Check if Image Generation is Complete (IF Node): Checks whether the status equals 'success'.
7. Format and Display Results (Set Node): Extracts and formats the final video URL output.

## Technical Details

- API Endpoint: https://api.defapi.org/api/sora2/gen (POST request)
- Model Used: Sora 2 AI video generation model
- Video Capabilities: Supports 15-second videos and high-definition (HD) output
- Status Check Endpoint: https://api.defapi.org/api/task/query (GET request)
- Wait Time: 10 seconds between status checks
- Image Processing: If an image is uploaded, it is converted to base64 data URI format (data:image/[type];base64,[data]) for API submission
- Authentication: Bearer token authentication using the configured Defapi account credentials
- Request Body Format:

```json
{
  "prompt": "Your video description here",
  "images": ["data:image/jpeg;base64,..."]
}
```

  Note: The images array can contain one image or be empty if no image is provided.
- Response Format: The API returns a task_id which is used to poll for completion status. The final result contains data.result.video with the video URL.
- Accepted Image Formats: .jpg, .png, .webp
- Specialized For: Viral video content, social media videos, creative video marketing
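For testing outside n8n, the submit-and-poll cycle can be reproduced in a few lines of Node.js. A minimal sketch, assuming a DEFAPI_API_KEY environment variable; the task_id query parameter name and response field paths are assumptions based on the workflow description, so adjust them to the actual API responses:

```javascript
// Minimal sketch: submit a Sora 2 generation request and poll until done.
// Assumes DEFAPI_API_KEY is set. The task_id parameter name and response
// field paths are assumptions; adjust them to the actual API responses.
const BASE = 'https://api.defapi.org';
const headers = {
  Authorization: `Bearer ${process.env.DEFAPI_API_KEY}`,
  'Content-Type': 'application/json',
};

async function generateVideo(prompt, imageDataUri) {
  // The images array may hold one base64 data URI or stay empty.
  const body = { prompt, images: imageDataUri ? [imageDataUri] : [] };
  const submit = await fetch(`${BASE}/api/sora2/gen`, {
    method: 'POST',
    headers,
    body: JSON.stringify(body),
  });
  const { task_id } = (await submit.json()).data ?? {};

  // Poll every 10 seconds, mirroring the workflow's Wait node.
  for (;;) {
    await new Promise((r) => setTimeout(r, 10_000));
    const poll = await fetch(`${BASE}/api/task/query?task_id=${task_id}`, { headers });
    const task = (await poll.json()).data ?? {};
    if (task.status === 'success') return task.result.video; // final video URL
    if (task.status === 'failed') throw new Error('Video generation failed');
  }
}
```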
## Customization Tips

Enhance prompts by including specifics like:

- Scene description and action sequences
- Character behaviors and emotions
- Camera movements and angles (e.g., slow-motion, dramatic zoom)
- Audio/music style (e.g., dramatic, upbeat, cinematic)
- Visual effects and atmosphere
- Timing and pacing details

Enable 15s and HD output: To generate 15-second high-definition videos, start your prompt with (15s,hd). For example: (15s,hd) A pack of dogs driving tiny cars in a high-speed chase through a city...

## Content Moderation

The API implements a three-stage content review process:

1. Image Review: Rejects images with real people or highly realistic human faces.
2. Prompt Filtering: Checks for violence, adult content, copyrighted material, and living celebrities.
3. Output Review: Final check after generation (often the cause of failures at 90%+ completion).

Best practices:

- Avoid real human photos; use illustrations or cartoons instead.
- Keep prompts generic; avoid brand names and celebrity names.
- You can reference verified Sora accounts (e.g., "let @sama dance").
- If generation fails at 90%+, simplify your prompt and try again.

## Example Prompts

- "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants."
- "(15s,hd) Animated fantasy landscape with floating islands, waterfalls cascading into clouds, magical creatures flying, golden sunset lighting, epic orchestral music."
- "(15s,hd) Product showcase with 360-degree rotation, dramatic lighting changes, particle effects, modern electronic background music."

## Use Cases

- Social Media Content: Generate eye-catching videos for Instagram Reels, TikTok, and YouTube Shorts
- Marketing Campaigns: Create unique promotional videos from product images
- Creative Projects: Transform static images into dynamic storytelling videos
- Content Marketing: Produce engaging video content without expensive production costs
- Viral Content Creation: Generate shareable, attention-grabbing videos for maximum engagement
Automate customer support with WhatsApp AI assistant for WhatsApp groups
## How it works

Your WhatsApp AI Assistant automatically handles customer inquiries by linking your Google Docs knowledge base to incoming WhatsApp messages. The system instantly processes customer questions, references your business documentation, and delivers AI-powered responses through OpenAI or Gemini, all without you lifting a finger. It works seamlessly in individual chats and WhatsApp groups, where the assistant can respond on your behalf.

## Set up steps

Time to complete: 15–30 minutes

1. Create your WhapAround account and connect your WhatsApp number (5 minutes)
2. Prepare your Google Doc with business information and add the document ID to the system (5 minutes)
3. Configure the WhatsApp webhook and map message fields (10 minutes)
4. Connect your OpenAI or Gemini API key (3 minutes)
5. Send a test message to verify everything works (2 minutes)

Optional: Set up a PostgreSQL database for conversation memory and configure custom branding/escalation rules (additional 15–20 minutes).

Detailed technical configurations, webhook URLs, and API parameter settings are provided within each workflow step to guide you through the exact setup process.
Better OAuth 2.0 workflow for Pipedrive CRM with Supabase
This workflow provides an OAuth 2.0 auth token refresh process for better control. Developers can use it as an alternative to n8n's built-in OAuth flow to achieve improved control and visibility. This template uses the Pipedrive API, but it can be applied to any app that requires the authorization_code grant for token access. It resolves the issue of manually refreshing the OAuth 2.0 token when it expires, or when n8n's native OAuth stops working.

## What you need to replicate this

1. Your own database with a pre-existing table for storing authentication tokens and associated information. This example uses Supabase, but you can also employ self-hosted MySQL. Here's a quick video on setting up the Supabase table.
2. A client app for the application you want to access via the API.
3. After duplicating the template:
   - Add credentials to your database and connect the DB nodes in all 3 workflows.
   - Enable/publish the first workflow, "1. Generate and Save Pipedrive tokens to Database."
   - Open your client app and follow the Pipedrive instructions to authenticate.
   - Click on Install and test. This saves your initial refresh token and access token to the database.

Please watch the YouTube video for a detailed demonstration of the workflow.

## How it operates

- Workflow 1: Captures the authorization_code, generates the access_token, refreshes the token, and saves the tokens to the database.
- Workflow 2: Your primary workflow that fetches or posts data to/from your application. Note the logic that adds an IF condition for when an invalid-token error occurs; this triggers the third workflow to refresh the token.
- Workflow 3: Handles the token refresh. Remember to send the unique ID to the webhook so it can fetch the necessary tokens from your table.

Detailed demonstration of the workflow: https://youtu.be/6nXi_yverss
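For reference, the refresh that Workflow 3 performs boils down to a single HTTP call, a standard OAuth 2.0 refresh_token grant. A minimal sketch for Pipedrive; swap the token endpoint and credential handling for whichever app you're using:

```javascript
// Minimal sketch: refresh an OAuth 2.0 access token (Pipedrive shown;
// any authorization_code-based app works the same way).
// Assumes the client ID/secret and the stored refresh token are available.
async function refreshAccessToken(clientId, clientSecret, refreshToken) {
  const basic = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
  const res = await fetch('https://oauth.pipedrive.com/oauth/token', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${basic}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
    }),
  });
  if (!res.ok) throw new Error(`Token refresh failed: ${res.status}`);
  // Persist both tokens: some providers rotate the refresh token on
  // every refresh, so always save what comes back.
  const { access_token, refresh_token } = await res.json();
  return { access_token, refresh_token };
}
```

This is exactly why the database table matters: the workflow must write the returned tokens back on every refresh, not just the first time.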
Export new deals from HubSpot to Slack and Airtable
This workflow is triggered when a new deal is created in HubSpot. It then processes the deal based on its value and stage.

The first branching follows three cases:

- If the deal is closed and won, a message is sent to a Slack channel, so that the whole team can celebrate the success.
- If a presentation has been scheduled for the deal, a Google Slides presentation template is created.
- If the deal is closed and lost, the deal’s details are added to an Airtable table. From here, you can analyze the data to get insights into what and why certain deals don’t get closed.

The second branching follows two cases (sketched below):

- If the deal is for a new business and has a value above 500, a high-priority ticket assigned to an experienced team member is created in HubSpot.
- If the deal is for an existing business and has a value below 500, a low-priority ticket is created.
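Condensed to code, the combined routing logic is a handful of comparisons. A minimal sketch; the property names (dealstage, amount, dealtype) and stage values are assumptions to match against your own HubSpot pipeline:

```javascript
// Minimal sketch of the two routing decisions. Property names
// (dealstage, amount, dealtype) are assumptions; adjust to your pipeline.
function routeDeal(deal) {
  const routes = [];

  // First branching: stage-based destinations.
  if (deal.dealstage === 'closedwon') routes.push('slack-celebration');
  else if (deal.dealstage === 'presentationscheduled') routes.push('google-slides-template');
  else if (deal.dealstage === 'closedlost') routes.push('airtable-analysis');

  // Second branching: ticket priority by deal type and value.
  if (deal.dealtype === 'newbusiness' && deal.amount > 500) routes.push('hubspot-ticket-high');
  else if (deal.dealtype === 'existingbusiness' && deal.amount < 500) routes.push('hubspot-ticket-low');

  return routes;
}
```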
Build comprehensive literature reviews with GPT-4 and multi-database search
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Comprehensive Literature Review Automation

Automate your literature review process by searching across multiple academic databases, parsing papers, and organizing findings into a structured review document.

Features:

- Search multiple academic databases simultaneously (PubMed, ArXiv, Google Scholar, etc.)
- Parse and analyze top papers automatically
- Generate citation-ready summaries
- Export to various formats (Markdown, Word, PDF)

Workflow Steps:

1. Input: Research topic and parameters
2. PDF Vector Search: Query multiple academic databases
3. Filter & Rank: Select top relevant papers
4. Parse Papers: Extract content from PDFs
5. Synthesize: Create literature review sections
6. Export: Generate final document

Use Cases:

- PhD students conducting systematic reviews
- Researchers exploring new fields
- Grant writers needing background sections
Automate daily YouTrack task summaries to Discord by assignee
Daily YouTrack In-Progress Tasks Summary to Discord by Assignee

Keep your team in sync with a daily summary of tasks currently In Progress in YouTrack, automatically posted to your Discord channel. This workflow queries issues, filters them by status, groups them by assignee and priority, and sends a formatted message to Discord. It's perfect for teams that need a lightweight, automated stand-up report.

> 📝 This workflow uses Discord as an example. You can easily replace the messaging integration with Slack, Mattermost, MS Teams, or any other platform that supports incoming webhooks.

## Use Case

- Remote development teams using YouTrack + Discord
- Replacing daily stand-up meetings with async updates
- Project managers needing quick visibility into active tasks

## Features

- Scheduled daily execution (default: weekdays at 09:00)
- Status filter: only issues marked as In Progress
- Grouping by assignee and priority (see the sketch below)
- Custom mapping for user mentions (YouTrack → Discord)
- Clean Markdown output for Discord, with direct task links
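As a reference for the grouping and formatting step, here is a minimal Code-node sketch. It assumes the upstream node has already flattened each issue to { id, summary, assignee, priority }; the priority names and the user map are placeholders to adapt to your own YouTrack schema:

```javascript
// Minimal sketch: group YouTrack issues by assignee, order by priority,
// and build a Discord-flavored Markdown summary. Field paths, priority
// names, and the user map are assumptions; adapt them to your setup.
const priorityMap = { 'Show-stopper': 0, Critical: 1, Major: 2, Normal: 3, Minor: 4 };
const userMap = { jdoe: '<@123456789012345678>' }; // YouTrack login -> Discord mention

const issues = $input.all().map((i) => i.json);
const byAssignee = {};
for (const issue of issues) {
  const assignee = issue.assignee ?? 'Unassigned';
  (byAssignee[assignee] ??= []).push(issue);
}

let message = '**Daily In-Progress Summary**\n';
for (const [assignee, tasks] of Object.entries(byAssignee)) {
  tasks.sort((a, b) => (priorityMap[a.priority] ?? 99) - (priorityMap[b.priority] ?? 99));
  message += `\n${userMap[assignee] ?? assignee}\n`;
  for (const t of tasks) {
    message += `- [${t.priority ?? 'Normal'}] [${t.summary}](https://yourdomain.youtrack.cloud/issue/${t.id})\n`;
  }
}
return [{ json: { content: message } }];
```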
## Setup Instructions

### YouTrack Configuration

1. Get a permanent token:
   - Go to your YouTrack profile → Account Security → Authentication
   - Create a new permanent token with "Read Issue" permissions
   - Copy the token value
2. Set the base API URL:
   - Format: https://yourdomain.youtrack.cloud/api/issues
   - Replace yourdomain with your actual YouTrack instance
3. Identify custom field IDs:
   - Method 1: Go to YouTrack → Administration → Custom Fields → find your "Status" field and note its ID
   - Method 2: Use the API call GET /api/admin/customFieldSettings/customFields to list all field IDs
   - Method 3: Inspect a task's API response and look for field IDs in the customFields array
   - Example Status field ID: 105-0 or 142-1

### Discord Configuration

Create a webhook URL in your Discord server:

1. Server Settings → Integrations → Webhooks → New Webhook
2. Choose the target channel and copy the webhook URL
3. Extract the webhook ID from the URL (the numbers after /webhooks/)

### Environment Variables & Placeholders

| Placeholder | Description |
|-------------|-------------|
| {{API_URL}} | Your YouTrack API base URL |
| {{TOKEN}} | YouTrack permanent token |
| {{FIELD_ID}} | ID of the "Status" custom field |
| {{QUERY_FIELDS}} | Fields to fetch (e.g., summary, id) |
| {{PROJECT_LINK}} | Link to your YouTrack project |
| {{USER_X}} | YouTrack usernames |
| {{DISCORD_ID_X}} | Discord mentions or usernames |
| {{NAME_X}} | Display names |
| {{WEBHOOK_ID}} | Discord webhook ID |
| {{DISCORD_CHANNEL}} | Discord channel name |
| {{CREDENTIAL_ID}} | Your credential ID in n8n |

### Testing the Workflow

1. Test the YouTrack connection: Execute the "HTTP Request YT" node individually, verify that issues are returned from your YouTrack instance, and check whether the Status field ID is correctly filtering tasks.
2. Verify filtering: Run the "Filter fields" node and confirm only "In Progress" tasks pass through.
3. Check message formatting: Execute the "Discord message" node and review the generated message content and formatting.
4. Test Discord delivery: Run the complete workflow manually and verify the message appears in your Discord channel.
5. Schedule verification: Enable the workflow; test the weekend-skip functionality by temporarily changing dates.

## Customization Tips

- Language: All labels/messages are in English; customize if needed
- User mapping: Adjust the assignee → Discord mention logic in the message builder
- Priorities: Update the priorityMap to reflect your own naming structure
- Schedule: Modify the trigger time in the Schedule Trigger node
- Alternative platforms: Swap out the Discord webhook for another messaging service if preferred
Automate social media headlines with Bright Data & n8n
## Overview

This workflow automatically scrapes trending headlines and content from various sources and posts them to your social media accounts, helping you maintain an active social media presence without the daily manual effort of content curation. It uses Bright Data to access content and n8n to schedule and post to platforms like Twitter, LinkedIn, or Facebook.

## Tools Used

- n8n: The automation platform that orchestrates the workflow.
- Bright Data: Scrapes trending content from news sites, blogs, or other sources without getting blocked.
- Social Media APIs: Post content to your accounts.

## How to Install

1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
3. Connect Social Media: Authenticate your social media accounts.
4. Customize: Set your content preferences, posting schedule, and hashtag strategy.

## Use Cases

- Social Media Managers: Automate content curation and posting.
- Content Creators: Share trending topics in your niche.
- Businesses: Maintain an active social media presence with minimal effort.

---

## Connect with Me

- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)
Generate AI descriptions for new Google Sheets entries with GPT-4.1-mini
This n8n workflow template automatically monitors your Google Sheets for new entries and uses AI to generate detailed descriptions for each topic. Perfect for content creators, researchers, project managers, or anyone who needs automatic content generation based on simple topic inputs.

## What This Workflow Does

This automated workflow:

1. Monitors a Google Sheet for new rows added to the "data" tab
2. Takes the topic from each new row
3. Uses OpenAI GPT to generate a detailed description of that topic
4. Updates the same row with the AI-generated description
5. Logs all activity in a separate "actions" tab for tracking

The workflow runs every minute, checking for new entries and processing them automatically.

## Tools & Services Used

- n8n: Workflow automation platform
- OpenAI API: AI-powered description generation (GPT-4.1-mini)
- Google Sheets: Data input, storage, and activity logging
- Google Sheets Trigger: Real-time monitoring for new rows

## Prerequisites

Before implementing this workflow, you'll need:

- An n8n instance (self-hosted or cloud)
- An OpenAI API account for AI description generation
- A Google account for Google Sheets integration
- Google Sheets API access for both reading and writing to sheets

## Step-by-Step Setup Instructions

### Step 1: Set Up OpenAI API Access

1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

### Step 2: Set Up Your Google Sheets

Option 1: Use Our Pre-Made Template (Recommended)

1. Copy our template: AI Description Generator Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired (e.g., "My AI Content Generator")
4. Note your new sheet's URL; you'll need this for the workflow

Option 2: Create From Scratch

1. Go to Google Sheets and create a new spreadsheet
2. Set up the main "data" tab:
   - Rename "Sheet1" to "data"
   - Set up column headers in row 1: A1: topic, B1: description
3. Create an "actions" tab:
   - Add a new sheet and name it "actions"
   - Set up the column header: A1: Update
4. Copy your sheet's URL

### Step 3: Configure Google API Access

Enable the Google Sheets API:

1. Go to the Google Cloud Console
2. Create a new project or select an existing one
3. Enable the "Google Sheets API"
4. Enable the "Google Drive API"

Create a Service Account (for n8n):

1. In the Google Cloud Console, go to "IAM & Admin" → "Service Accounts"
2. Create a new service account
3. Download the JSON credentials file
4. Share your Google Sheet with the service account email address

### Step 4: Import and Configure the n8n Workflow

1. Import the workflow: Copy the workflow JSON from the template; in your n8n instance, go to Workflows → Import from JSON, paste the JSON, and import.
2. Configure OpenAI credentials: Click on the "OpenAI Chat Model" node, set up credentials using your OpenAI API key, and test the connection to ensure it works.
3. Configure the Google Sheets integration:
   - For the trigger node: Click on the "Row added - Google Sheet" node, set up Google Sheets Trigger OAuth2 credentials, select your spreadsheet from the dropdown, choose the "data" sheet, and set polling to "Every Minute" (already configured).
   - For the update node: Click on the "Update row in sheet" node, use the same Google Sheets credentials, select your spreadsheet and the "data" sheet, and verify the column mapping (topic → topic, description → AI output).
   - For the actions log node: Click on the "Append row in sheet" node, use the same Google Sheets credentials, and select your spreadsheet and the "actions" sheet.

### Step 5: Customize the AI Description Generator

The workflow uses a simple prompt that can be customized. Click on the "Description Writer" node and modify the system message to change the AI behavior:

> write a description of the topic. output like this. { "description": "description" }
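Because the prompt asks for raw JSON, it helps to parse the model's reply defensively before updating the sheet. A minimal sketch for an optional Code node, assuming the agent's reply arrives in an `output` field:

```javascript
// Minimal sketch: defensively parse the model's JSON reply before the
// sheet update. Assumes the agent's reply arrives in the `output` field.
const raw = $input.first().json.output ?? '';
let description;
try {
  // Strip Markdown code fences the model sometimes wraps around JSON.
  const cleaned = raw.replace(/```(?:json)?/g, '').trim();
  description = JSON.parse(cleaned).description;
} catch (e) {
  // Fall back to the raw text if the model ignored the JSON format.
  description = raw.trim();
}
return [{ json: { description } }];
```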
## Need Help with Implementation?

For professional setup, customization, or troubleshooting of this workflow, contact:

Robert - Ynteractive Solutions

- Email: robert@ynteractive.com
- Website: www.ynteractive.com
- LinkedIn: linkedin.com/in/robert-breen-29429625/

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
Automate property maintenance requests with GPT-4o-mini, Jotform and Gmail
Tired of juggling maintenance calls, lost requests, and slow vendor responses? This workflow streamlines the entire property maintenance process, from tenant request to vendor dispatch, powered by AI categorization and automated communication. Cut resolution time from 5–7 days to under 24 hours and boost tenant satisfaction by 85% with zero manual follow-up.

## What This Workflow Does

Transforms chaotic maintenance management into seamless automation:

- 📝 Captures Requests: Tenants submit issues via Jotform with unit number, issue description, urgency, and photos.
- 🤖 AI Categorization: OpenAI (GPT-4o-mini) analyzes and classifies issues (plumbing, HVAC, electrical, etc.).
- ⚙️ Smart Prioritization: Flags emergencies (leak, electrical failure) and assigns priority.
- 📬 Vendor Routing: Routes the issue to the correct contractor or vendor based on the AI category (a minimal routing sketch follows below).
- 📧 Automated Communication: Sends an acknowledgment to the tenant and a work order to the vendor via Gmail.
- 📊 Audit Trail Logging: Optionally logs requests in Google Sheets for performance tracking and reporting.

## Key Features

- 🧠 AI-Powered Categorization: Intelligent issue type and priority detection.
- 🚨 Emergency Routing: Automatically escalates critical issues.
- 📤 Automated Work Orders: Sends detailed emails with property and tenant info.
- 📈 Google Sheets Logging: Transparent audit trail for compliance and analytics.
- 🔄 End-to-End Automation: From form submission to vendor dispatch in seconds.
- 💬 Sticky Notes Included: Every section annotated for easy understanding.

## Perfect For

- Property management companies
- Real estate agencies and facility teams
- Smart building operators
- Co-living and rental startups
- Maintenance coordinators managing 50–200+ requests monthly

## What You’ll Need

Required integrations:

- Jotform: Maintenance request form (create your form for free on Jotform using this link)
- OpenAI (GPT-4o-mini): Categorization and prioritization
- Gmail: Automated email notifications
- (Optional) Google Sheets: Logging and performance tracking

## Quick Start

1. Import Template: Copy the JSON into n8n and import.
2. Create Jotform: Include fields for tenant name, email, unit number, issue description, urgency, and photo upload.
3. Add Credentials: Configure Jotform, Gmail, and OpenAI credentials.
4. Set Vendor Emails: Update the “Send to Contractor” Gmail node with vendor email IDs.
5. Test Workflow: Submit sample maintenance requests to check AI categorization and routing.
6. Activate Workflow: Go live and let your tenants submit maintenance issues.

## Expected Results

- ⏱️ 24-hour average resolution time (vs. 5–7 days).
- 😀 85% higher tenant satisfaction with instant communication.
- 📉 Zero lost requests: every issue is logged automatically.
- 🧠 AI-driven prioritization ensures critical issues are handled first.
- 🕒 10+ hours saved weekly for property managers.

## Pro Tips

- 🧾 Add Google Sheets logging for a complete audit trail.
- 🔔 Include keywords like “leak,” “no power,” or “urgent” in AI prompts for faster emergency detection.
- 🧰 Expand the vendor list dynamically using a Google Sheet lookup.
- 🧑‍🔧 Add follow-up automation to verify task completion from vendors.
- 📊 Create dashboards for monthly maintenance insights.

## Learning Resources

This workflow demonstrates:

- AI categorization using OpenAI’s Chat Model (GPT-4o-mini)
- Multi-path routing logic (emergency vs. normal)
- Automated communication via Gmail
- Optional data logging in Google Sheets
- An annotated workflow with Sticky Notes for learning clarity
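To make the vendor routing concrete: the category-to-contractor dispatch reduces to a simple lookup. A minimal sketch for a Code node; the category names and addresses are placeholders for your own vendor list, which lives in the "Send to Contractor" Gmail node:

```javascript
// Minimal sketch: route a categorized request to the right vendor.
// Categories and addresses are placeholders; set your real vendor list
// in the "Send to Contractor" Gmail node.
const vendorMap = {
  plumbing: 'plumber@example.com',
  hvac: 'hvac@example.com',
  electrical: 'electrician@example.com',
};
const { category, priority } = $input.first().json;
return [{
  json: {
    to: vendorMap[category] ?? 'manager@example.com', // fall back to the manager
    escalate: priority === 'emergency',               // flag critical issues
  },
}];
```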
SERP competitor research with Scrape.do API & Google Sheets
🔍 Extract Competitor SERP Rankings from Google Search to Sheets with Scrape.do

This template requires a self-hosted n8n instance to run.

A complete n8n automation that extracts competitor data from Google search results for specific keywords and target countries using the Scrape.do SERP API, and saves structured results into Google Sheets for SEO, competitive analysis, and market research.

---

📋 Overview

This workflow provides a lightweight competitor analysis solution that identifies ranking websites for chosen keywords across different countries. Ideal for SEO specialists, content strategists, and digital marketers who need structured SERP insights without manual effort.

Who is this for?

- SEO professionals tracking keyword competitors
- Digital marketers conducting market analysis
- Content strategists planning based on SERP insights
- Business analysts researching competitor positioning
- Agencies automating SEO reporting

What problem does this workflow solve?

- Eliminates manual SERP scraping
- Processes multiple keywords across countries
- Extracts structured data (position, title, URL, description)
- Automates saving results into Google Sheets
- Ensures a repeatable & consistent methodology

---

⚙️ What this workflow does

1. Manual Trigger → Starts the workflow manually
2. Get Keywords from Sheet → Reads keywords + target countries from a Google Sheet
3. URL Encode Keywords → Converts keywords into URL-safe format
4. Process Keywords in Batches → Handles multiple keywords sequentially to avoid rate limits
5. Fetch Google Search Results → Calls the Scrape.do SERP API to retrieve the raw HTML of Google SERPs
6. Extract Competitor Data from HTML → Parses HTML into structured competitor data (top 10 results; see the sketch below)
7. Append Results to Sheet → Writes structured SERP results into a Google Sheet
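To illustrate step 6, here is a minimal parsing sketch using cheerio. The CSS selectors are assumptions; Google's markup changes frequently, so verify them against the HTML Scrape.do actually returns:

```javascript
// Minimal sketch: parse organic results out of a Google SERP HTML page.
// Selectors are assumptions; Google's markup changes often, so verify
// them against the HTML that Scrape.do actually returns.
import * as cheerio from 'cheerio';

function extractSerpResults(html, keyword, targetCountry) {
  const $ = cheerio.load(html);
  const results = [];
  $('div.g').each((_, el) => {
    if (results.length >= 10) return; // keep the top 10 only
    const title = $(el).find('h3').first().text().trim();
    const url = $(el).find('a').first().attr('href');
    const description = $(el).find('.VwiC3b').first().text().trim();
    if (title && url) {
      results.push({
        Keyword: keyword,
        'Target Country': targetCountry,
        position: results.length + 1,
        websiteTitle: title,
        websiteUrl: url,
        websiteDescription: description,
      });
    }
  });
  return results;
}
```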
---

📊 Output Data Points

| Field | Description | Example |
|--------------------|------------------------------------------|-------------------------------------------|
| Keyword | Original search term | digital marketing services |
| Target Country | 2-letter ISO code of target region | US |
| position | Ranking position in search results | 1 |
| websiteTitle | Page title from SERP result | Digital Marketing Software & Tools |
| websiteUrl | Extracted website URL | https://www.hubspot.com/marketing |
| websiteDescription | Snippet/description from search results | Grow your business with HubSpot’s tools… |

---

⚙️ Setup

Prerequisites:

- n8n instance (self-hosted)
- Google account with Sheets access
- Scrape.do account with SERP API token

Google Sheet Structure

This workflow uses one Google Sheet with two tabs:

Input Tab: "Keywords"

| Column | Type | Description | Example |
|----------|------|-------------|---------|
| Keyword | Text | Search query | digital marketing |
| Target Country | Text | 2-letter ISO code | US |

Output Tab: "Results"

| Column | Type | Description | Example |
|--------------------|-------|-------------|---------|
| Keyword | Text | Original search term | digital marketing |
| position | Number| SERP ranking | 1 |
| websiteTitle | Text | Title of the page | Digital Marketing Software & Tools |
| websiteUrl | URL | Website/page URL | https://www.hubspot.com/marketing |
| websiteDescription | Text | Snippet text | Grow your business with HubSpot’s tools |

---

🛠 Step-by-Step Setup

1. Import Workflow: Copy the JSON → n8n → Workflows → + Add → Import from JSON
2. Configure the Scrape.do API:
   - Endpoint: https://api.scrape.do/
   - Parameter: token=YOUR_SCRAPEDO_TOKEN
   - Add render=true for full HTML rendering
3. Configure Google Sheets:
   - Create a sheet with two tabs: Keywords (input), Results (output)
   - Set up Google Sheets OAuth2 credentials in n8n
   - Replace the placeholders: YOUR_GOOGLE_SHEET_ID and YOUR_GOOGLE_SHEETS_CREDENTIAL_ID
4. Run & Test: Add test data to the Keywords tab, execute the workflow, and check the results in the Results tab

---

🧰 How to Customize

- Add more fields: Extend the HTML parsing logic in the “Extract Competitor Data” node to capture extra data (e.g., domain, sitelinks).
- Filtering: Exclude domains or results with custom rules.
- Batch Size: Adjust “Process Keywords in Batches” to trade speed against rate limits.
- Rate Limiting: Insert a Wait node (e.g., 10–30 seconds) if API rate limits apply.
- Multi-Sheet Output: Save per-country or per-keyword results into separate tabs.

---

📊 Use Cases

- SEO Competitor Analysis: Identify top-ranking sites for target keywords
- Market Research: See how SERPs differ by region
- Content Strategy: Analyze titles & descriptions of competitor pages
- Agency Reporting: Automate competitor SERP snapshots for clients

---

📈 Performance & Limits

- Single Keyword: ~10–20 seconds (depends on Scrape.do response)
- Batch of 10: 3–5 minutes typical
- Large Sets (50+): 20–40 minutes depending on API credits & batching
- API Calls: 1 Scrape.do request per keyword
- Reliability: 95%+ extraction success, 98%+ data accuracy

---

🧩 Troubleshooting

- API error → Check YOUR_SCRAPEDO_TOKEN and your API credits
- No keywords loaded → Verify the Google Sheet ID & that the tab name is Keywords
- Permission denied → Re-authenticate Google Sheets OAuth2 in n8n
- Empty results → Check the parsing logic and verify the search term’s validity
- Workflow stops early → Ensure the batching loop (SplitInBatches) is properly connected

---

🤝 Support & Community

- n8n Forum: https://community.n8n.io
- n8n Docs: https://docs.n8n.io
- Scrape.do Dashboard: https://dashboard.scrape.do

---

🎯 Final Notes

This workflow provides a repeatable foundation for extracting competitor SERP rankings with Scrape.do and saving them to Google Sheets. You can extend it with filtering, richer parsing, or integration with reporting dashboards to create a fully automated SEO intelligence pipeline.
Batch resume analysis with Google Gemini AI and Google Sheets
## How it works

Do you have several resumes to review manually? This workflow lets you upload up to 20 PDFs in one batch. AI does the heavy lifting, saving time, reducing repetitive tasks, and achieving high accuracy. The job description and qualifications go in the agent's System Message.

## Setup steps

It will take you roughly 20 minutes to finish setting up this workflow.

1. n8n Form: Allow multiple file submissions.
2. JavaScript Code: Allows mapping of each file individually (a minimal sketch follows below).
3. System message: Adjust the system message to fit the job description and qualifications.
4. Google Sheet: Make a copy.
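The JavaScript Code step fans the multi-file form submission out into one item per resume so each PDF is analyzed individually. A minimal sketch for an n8n Code node; the binary property names depend on your form field, so treat them as assumptions:

```javascript
// Minimal sketch: fan a multi-file form submission out into one item per
// PDF so each resume is analyzed individually. Binary property names may
// differ depending on your form field's name.
const item = $input.first();
const out = [];
for (const key of Object.keys(item.binary ?? {})) {
  out.push({
    json: { fileName: item.binary[key].fileName },
    binary: { data: item.binary[key] }, // each downstream item carries one file
  });
}
return out;
```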