iamvaar
I’m a Full Stack Developer, skilled at planning system architecture and currently building automation systems to save time. I clearly know what I’m doing and exactly which problem I’m solving. Wanna work with me? https://cal.com/abhi.vaar/15min
Templates by iamvaar
Automated law firm lead management & scheduling with AI, Jotform & Calendar
YouTube explanation: https://youtu.be/KgmNiV7SwkU

This n8n workflow automates the initial intake and scheduling for a law firm. It's split into two main parts:

1. **New inquiry handling:** kicks off when a potential client fills out a JotForm, saves their data, and sends them an initial welcome message on WhatsApp.
2. **Appointment scheduling:** activates when the client replies on WhatsApp, allowing an AI agent to chat with them to schedule a consultation.

Here's a detailed breakdown of the prerequisites and each node.

**Prerequisites**

Before building this workflow, you'll need accounts and some setup for each of the following services:

JotForm
- JotForm account: you need an active JotForm account.
- A published form: create a form with the exact fields used in the workflow: Full Name, Email Address, Phone Number, I am a..., Legal Service of Interest, Brief Message, and How Did You Hear About Us?.
- API credentials: generate API keys from your JotForm account settings to connect it with n8n.

Google
- Google account: to use Google Sheets and Google Calendar.
- Google Sheet: create a new sheet named "Law Client Enquiries". The first row must have these exact headers: Full Name, Email Address, Phone Number, client type, Legal Service of Interest, Brief Message, How Did You Hear About Us?.
- Google Calendar: an active calendar to manage appointments.
- Google Cloud project:
  - Service account credentials (for Sheets): in the Google Cloud Console, create a service account, generate JSON key credentials, and enable the Google Sheets API. You must then share your Google Sheet with the service account's email address (e.g., automation-bot@your-project.iam.gserviceaccount.com).
  - OAuth credentials (for Calendar): create OAuth 2.0 Client ID credentials to allow n8n to access your calendar on your behalf. You'll need to enable the Google Calendar API.
  - Gemini API key: enable the Vertex AI API in your Google Cloud project and generate an API key to use the Google Gemini models.

WhatsApp
- Meta Business account: required to use the WhatsApp Business Platform.
- WhatsApp Business Platform account: set up a business account and connect a phone number to it. This is different from the regular WhatsApp or WhatsApp Business app.
- API credentials: get the necessary access tokens and IDs from your Meta for Developers dashboard to connect your business number to n8n.

PostgreSQL database
- A running PostgreSQL instance: it can be hosted anywhere (e.g., AWS, DigitalOcean, Supabase). The AI agent needs it to store and retrieve conversation history.
- Database credentials: you'll need the host, port, user, password, and database name to connect n8n to it.

**Node-by-node explanation**

The workflow is divided into two distinct logical flows.

Flow 1: New client intake from JotForm. This part triggers when a new client submits your form.

- **JotForm Trigger:** the starting point. It automatically runs the workflow whenever a new submission is received for the specified JotForm (Form ID: 252801824783057). (Prerequisites: a JotForm account and a created form.)
- **Append or update row in sheet (Google Sheets):** takes the data from the JotForm submission and adds it to your "Law Client Enquiries" Google Sheet. It uses the appendOrUpdate operation: it tries to find a row where the "Email Address" column matches the email from the form; if it finds a match, it updates that row, otherwise it appends a new row at the bottom. (Prerequisites: a Google Sheet with the correct headers, shared with your service account.)
- **AI Agent:** crafts the initial welcome message to be sent to the client. It uses a detailed prompt that defines a persona ("Alex", a legal intake assistant) and instructs the AI to generate a professional WhatsApp message, dynamically inserting the client's name and service of interest from the Google Sheet data. It's powered by the Google Gemini Chat Model.
- **Send message (WhatsApp):** sends the message generated by the AI Agent to the client. It takes the client's phone number from the data (Phone Number column) and the AI-generated text (output from the AI Agent node) to send the message via the WhatsApp Business API. (Prerequisites: a configured WhatsApp Business Platform account.)

Flow 2: AI-powered scheduling via WhatsApp. This part triggers when the client replies to the initial message.

- **WhatsApp Trigger:** listens for incoming messages on your business's WhatsApp number. When a client replies, it starts this part of the workflow. (Prerequisites: a configured WhatsApp Business Platform account.)
- **If node:** a simple filter. It checks whether the incoming message text is empty. If it is (e.g., a status update), the workflow stops; if it contains text, it proceeds to the AI agent.
- **AI Agent1:** the main conversational brain for scheduling; it handles the back-and-forth chat with the client. Its highly detailed prompt instructs it to act as "Alex" and follow a strict procedure for scheduling, and it has access to several "tools" to perform actions. Connected nodes: Google Gemini Chat Model1 (the language model that does the thinking), Postgres Chat Memory (remembers the conversation history with a specific user, keyed by their WhatsApp ID, so the user doesn't have to repeat themselves), and the tools below.
- **AI Agent tools (what the AI can do):**
  - Know about the user enquiry (Google Sheets tool): when the AI needs to know who it's talking to, it takes the user's phone number and looks up their original enquiry details in the "Law Client Enquiries" sheet.
  - GET MANY EVENTS... (Google Calendar tool): when a client suggests a date, the AI checks your Google Calendar for any existing events on that day to see if you're free.
  - Create an event (Google Calendar tool): once a time is agreed upon, the AI creates the event in your Google Calendar, adding the client as an attendee.
- **Send message1 (WhatsApp):** sends the AI's response back to the client: a confirmation that the meeting is booked, a question asking for their email, or a suggestion for a different time if the requested slot is busy. It sends the output text from AI Agent1 to the client's WhatsApp ID, continuing the conversation (a sketch of the underlying API call follows this section).
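For reference, here's a minimal sketch of what the two WhatsApp send-message nodes do under the hood, calling the WhatsApp Business Cloud API directly. The phone number ID, access token, and recipient are placeholders you'd take from your Meta for Developers dashboard; in the real workflow the n8n WhatsApp node handles this call for you.

```javascript
// Hedged sketch: sending a text message via the WhatsApp Business Cloud API.
// PHONE_NUMBER_ID and ACCESS_TOKEN are placeholders from the Meta for Developers dashboard.
const PHONE_NUMBER_ID = '123456789012345';      // placeholder
const ACCESS_TOKEN = 'YOUR_META_ACCESS_TOKEN';  // placeholder

async function sendWhatsAppText(to, body) {
  const res = await fetch(
    `https://graph.facebook.com/v19.0/${PHONE_NUMBER_ID}/messages`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        messaging_product: 'whatsapp', // required literal for the Cloud API
        to,                            // recipient number in international format
        type: 'text',
        text: { body },                // the AI-generated message text
      }),
    }
  );
  return res.json();
}

// e.g., sendWhatsAppText('15551234567', 'Hi, this is Alex from the firm...');
```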
Personalized hotel reward emails for high-spenders with Salesforce, Gemini AI & Brevo
This n8n workflow automatically detects high-spending hotel guests after checkout and emails them a personalized, one-time reward offer.

---

🔧 What it does

- Watches the Salesforce `Guest__c` custom object for checkout updates.
- Pulls guest spend data on optional paid amenities: Room Service, Minibar, Laundry, Late Checkout, Extra Bed, Airport Transfer.
- Calculates total spend to identify VIP guests (≥ $50); see the Function-node sketch at the end of this section.
- Uses AI to spot unused services, randomly pick one unused service, and generate a realistic, short promo like "Free late checkout on your next stay".
- Parses the AI output into JSON.
- Sends a polished HTML email to the guest with their personalized offer.

---

📦 Key nodes

- Salesforce Trigger → monitors new checkouts.
- Salesforce → fetches detailed spend data.
- Function → sums up total amenity spend.
- IF → filters for VIP guests.
- LangChain LLM + Google Vertex AI → drafts the offer text.
- Structured Output Parser → cleans the AI output.
- Brevo → delivers the branded email.

---

📊 Example output

> Subject: John, We Have Something Special for Your Next Stay
> Offer in email: Enjoy a complimentary minibar selection on your next stay.

---

✨ Why it matters

Rewarding guests who already spend boosts loyalty and repeat bookings without resorting to generic discounts. The offer feels personal, relevant, and exclusive.
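The Function and IF steps are simple enough to show inline. Here's a hedged sketch of both as a single n8n Code node; the Salesforce field names (e.g., `Room_Service_Spend__c`) are assumptions, since the template doesn't list them.

```javascript
// Hypothetical sketch of the "sum amenity spend + VIP check" logic.
// Field names are assumed Salesforce custom fields, not confirmed by the template.
const AMENITY_FIELDS = [
  'Room_Service_Spend__c',
  'Minibar_Spend__c',
  'Laundry_Spend__c',
  'Late_Checkout_Spend__c',
  'Extra_Bed_Spend__c',
  'Airport_Transfer_Spend__c',
];
const VIP_THRESHOLD = 50; // dollars, per the description

return $input.all().map(({ json: guest }) => {
  const totalSpend = AMENITY_FIELDS.reduce(
    (sum, field) => sum + (Number(guest[field]) || 0),
    0
  );
  return { json: { ...guest, totalSpend, isVip: totalSpend >= VIP_THRESHOLD } };
});
```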
Build a knowledge base chatbot with Jotform, RAG Supabase, Together AI & Gemini
YouTube video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

**Workflow prerequisites**

Step 1: Supabase setup
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Step 2: Create a Jotform with these fields
- Your full name
- Email address
- Upload PDF Document (the field where you upload the knowledge base as a PDF)

Step 3: Get a Together AI API key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

**Part 1: Ingesting knowledge from a PDF**

This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.

- JotForm Trigger (Trigger): starts the entire workflow. It's configured to listen for new submissions on a specific Jotform; when someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
- Grab New knowledgebase (HTTP Request): the initial trigger from Jotform only contains basic information, so this node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.
- Grab the uploaded knowledgebase file link (HTTP Request): using the file link obtained from the previous node, this step downloads the actual PDF file. It's set to receive the response as a file, not as text.
- Extract Text from PDF File (Extract From File): takes the binary PDF file and extracts all the readable text content from it. The output is a single block of plain text.
- Splitting into Chunks (Code): runs a small JavaScript snippet that chops the large block of text into smaller, more manageable "chunks", each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text. (A sketch follows at the end of this section.)
- Embedding Uploaded document (HTTP Request): a key AI step. It sends each individual text chunk to an embeddings API, where a specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.
- Save the embedding in DB (Supabase): connects to your Supabase database and, for every chunk, creates a new row in the specified table storing two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.

**Part 2: Answering questions via chat**

This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.

- When chat message received (Chat Trigger): starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
- Embend User Message (HTTP Request): takes the user's question and sends it to the *exact same* embeddings API and model used in Part 1, converting the question's meaning into the same kind of numerical vector or "fingerprint".
- Search Embeddings (HTTP Request): the "retrieval" step. It calls a custom database function in Supabase, passing the question's embedding and asking it to search the knowledge base table for a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding.
- Aggregate (Aggregate): the search returns multiple separate items; this utility node bundles them into a single, combined piece of data, which makes it easier to feed all the context into the final AI model at once.
- AI Agent & Google Gemini Chat Model (LangChain Agent & AI model): the "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt) telling the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth, then instructs the AI to formulate an answer based *only* on that provided context, to format it for a specific chat style, and to say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
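The "Splitting into Chunks" code itself isn't shown in the description; a minimal sketch in the style of an n8n Code node might look like the following, assuming a fixed chunk size (the template only says "a predefined length").

```javascript
// Hedged sketch of fixed-length chunking for the "Splitting into Chunks" node.
// CHUNK_SIZE is an assumption; the template only says "a predefined length".
const CHUNK_SIZE = 1000; // characters per chunk (assumed)
const text = $input.first().json.text; // plain text from "Extract Text from PDF File"

const chunks = [];
for (let i = 0; i < text.length; i += CHUNK_SIZE) {
  chunks.push({ json: { chunk: text.slice(i, i + CHUNK_SIZE) } });
}
return chunks;
```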
Automated inventory management with Airtable PO creation & supplier emails
In-depth description of this automation: this is a fully automated daily supply chain and procurement workflow that keeps product stock levels healthy and suppliers updated by automatically generating and emailing purchase orders (POs) and syncing PO statuses in Airtable.

---

📅 Daily triggers

Two Schedule Trigger nodes run:
- One at midnight (00:00) to manage low stock and create new purchase orders.
- Another at 1:00 AM to process existing pending POs and email suppliers.

---

🚦 Step-by-step breakdown

1️⃣ Get products with low stock
Searches the "Products Table" in Airtable for items where {stock_level} <= {reorder_threshold}, detecting products that need restocking.

2️⃣ Get supplier details
Fetches supplier data for each low-stock product using its supplier_id.

3️⃣ Calculate dynamic reorder quantity
JS code calculates an optimal reorder quantity: average_daily_sales × (lead_time × 1.5) × safety_margin (1.2). The extra buffer ensures the new order covers both immediate demand and the next cycle (see the sketch at the end of this section).

4️⃣ Search existing POs
Looks in the "Purchase Orders" table for active POs (status Pending or Sent) matching each product, preventing duplicate orders.

5️⃣ Remove duplicate product orders
A JS node compares current low-stock products with existing POs and filters out products already covered, so new POs are only created for truly uncovered products.

6️⃣ Create new purchase orders
For the filtered products, creates new PO records in Airtable with: product_name, product_id, the calculated reorder_qty, supplier info and email, and an initial status of Pending.

---

📧 Process existing pending purchase orders and email suppliers

7️⃣ Get purchase orders which are pending
Searches Airtable for all POs with status Pending.

8️⃣ Group products with suppliers
JS code groups these POs by supplier_id and builds a summary (total products, total quantity) plus an HTML email with a styled table of items.

9️⃣ Send PO emails to suppliers
Uses Brevo (SendinBlue) to send the emails. Subject and content include supplier-specific order details.

🔄 Update PO statuses to Sent
Extracts the Airtable record IDs of the sent POs and updates those POs in Airtable, changing status from Pending → Sent.

---

📌 Summary

✅ Runs every day
✅ Dynamically calculates reorder needs
✅ Avoids duplicate purchase orders
✅ Automatically creates purchase orders in Airtable
✅ Groups & emails daily PO summaries to suppliers
✅ Updates PO status after sending the email

---

⚙ Tables involved

- Products Table: stores products, stock levels, reorder thresholds, average daily sales, and supplier references.
- Suppliers Table: stores supplier emails and metadata.
- Purchase Orders Table: tracks product orders with supplier IDs, status, quantities, etc.

---

This workflow makes daily procurement fully automated: it detects the risk of stockouts, creates POs smartly, keeps suppliers in sync by email, and updates order statuses in one closed loop, perfect for any small or mid-sized business using Airtable + n8n.
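The reorder formula in step 3️⃣ is easy to mistranscribe, so here is a hedged Code-node sketch of it. The field names mirror the Airtable columns described above, and lead_time is assumed to be in days.

```javascript
// Hypothetical sketch of the dynamic reorder quantity from step 3.
// Formula per the description: average_daily_sales × (lead_time × 1.5) × 1.2
const SAFETY_MARGIN = 1.2;

return $input.all().map(({ json: product }) => {
  const demandWindowDays = product.lead_time * 1.5; // lead time plus a half-cycle buffer
  const reorderQty = Math.ceil(
    product.average_daily_sales * demandWindowDays * SAFETY_MARGIN
  );
  return { json: { ...product, reorder_qty: reorderQty } };
});
```

For example, a product selling 10 units/day with a 6-day lead time would get a PO for ceil(10 × 9 × 1.2) = 108 units.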
RAG-powered AI voice customer support agent (Supabase + Gemini + ElevenLabs)
Execution video: YouTube link

I built an AI voice-triggered RAG assistant where ElevenLabs' conversational model acts as the front end and n8n handles the brain. Here's the real breakdown of what's happening in that workflow:

- Webhook (/inf): gets hit by ElevenLabs once the user finishes talking. The payload includes user_question.
- Embed User Message (Together API, BAAI/bge-large-en-v1.5): turns the spoken question into a dense vector embedding. This embedding is the query representation for semantic search.
- Search Embeddings (Supabase RPC): calls matchembeddings1 to find the top 5 most relevant context chunks from your stored knowledge base (see the sketch at the end of this section).
- Aggregate: merges all retrieved chunk values into one block of text so the LLM gets the full context at once.
- Basic LLM Chain (LangChain node): the prompt forces the model to answer only from the retrieved context and to sound human-like without saying "based on the context". Uses Google Vertex Gemini 2.5 Flash as the actual model.
- Respond to Webhook: sends the generated answer back instantly to the webhook call, so ElevenLabs can speak it back.

You essentially have: Voice → Text → Embedding → Vector Search → Context Injection → LLM → Response → Voice
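The Supabase RPC step is just an HTTP call to PostgREST. Here's a hedged sketch, assuming the matchembeddings1 function defined in the other RAG templates on this page and placeholder project keys:

```javascript
// Hedged sketch of the "Search Embeddings" call via Supabase's REST RPC endpoint.
// SUPABASE_URL and SUPABASE_KEY are placeholders for your project's values.
const SUPABASE_URL = 'https://your-project.supabase.co';
const SUPABASE_KEY = 'YOUR_SERVICE_ROLE_OR_ANON_KEY';

async function searchEmbeddings(queryEmbedding, matchCount = 5) {
  const res = await fetch(`${SUPABASE_URL}/rest/v1/rpc/matchembeddings1`, {
    method: 'POST',
    headers: {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      match_count: matchCount,         // top-k chunks to return
      query_embedding: queryEmbedding, // 1024-dim vector from the embed step
    }),
  });
  return res.json(); // [{ chunk, similarity }, ...]
}
```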
Plan travel itineraries with Gemini AI, live Amadeus flights, and Airbnb stays
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Workflow execution video: https://youtu.be/qkZ6UaO7aCE

Here is the full node-by-node breakdown of the workflow.

---

Webhook
Purpose: accepts incoming user queries via HTTP GET with the text parameter.
Example input: 4 people from Germany to Bangkok @14th August 2025

---

AI Agent
Type: LangChain Agent
Model: Google Gemini 2.5 Flash via Vertex AI
Prompt logic:
- Extracts structured travel info (origin city, destination, date, number of people)
- Determines 3-letter IATA codes
- Uses the MCP Airbnb tool to scrape listings starting from that date
Returns: a markdown, bullet-format response with the structured trip info and a list of Airbnb listings with titles, price, rating, and link.

---

MCP Client List Tool
Purpose: fetches the list of tools registered with the MCP (Model Context Protocol) client for the AI agent to select from.
Used by: the AI Agent as part of its listTools() phase.

---

MCP Execute Tool
Purpose: executes the selected MCP tool (the Airbnb scraper).
Tool input: dynamic; passed by the AI Agent using $fromAI('Tool_Parameters').

---

Google Vertex Chat Model
Purpose: acts as the LLM behind the AI Agent.
Model: Gemini 2.5 Flash from Vertex AI.
Used for: language understanding, extraction, and decision-making.

---

Grabbing Clean Data (Code node)
Purpose: parses the AI output to extract structured trip data and Airbnb listings (with title, rating, price, link).
Handles: bullet (•) and asterisk (*) formats, new and old markdown styles, and fallbacks for backward compatibility.
Output: clean JSON:

```json
{
  "tripInformation": {...},
  "listings": [...],
  "totalListings": X,
  ...
}
```

---

Flight Search with fare (HTTP Request)
API: Amadeus Flight Offers API
Purpose: searches live flight offers using originIataCode, destinationIataCode, travelDate, and numberOfPeople (see the hedged request sketch at the end of this section).
Auth: OAuth2

---

Flight Data + Airbnb Listings (Code node)
Purpose: parses the Amadeus flight offers, formats dates, times, and durations, merges the flight results with the earlier Airbnb + trip info JSON, and sorts by cheapest total price.
Output:

```json
{
  "tripInformation": {...},
  "listings": [...],
  "allFlightOffers": [...]
}
```

---

Edit Fields (Set node)
Purpose: assigns the final response fields into clean keys: traveldetails, listings, flights.

---

Respond to Webhook
Purpose: sends the final structured JSON response back to the caller.
Output: a combined travel itinerary with flights + Airbnb.

---

Summary

This end-to-end workflow is a fully autonomous travel query-to-itinerary engine. From plain text like "4 people from Vijayawada to Bangkok @14th August 2025", it:
- Parses and understands the query using an AI agent
- Fetches Airbnb stays by scraping live listings
- Searches real-time flights via Amadeus
- Merges and formats everything into structured, digestible JSON

No manual parsing, no frontend; just AI + APIs + automation.

NOTE: I JUST USED A COMMUNITY NODE "n8n-nodes-mcp" + UNOFFICIAL AIRBNB MCP
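The Amadeus call is a standard REST request. Here's a hedged sketch against Amadeus' test environment with placeholder credentials; the parameter names follow the public Flight Offers Search API, and in the real workflow the n8n HTTP Request node manages the OAuth2 flow for you.

```javascript
// Hedged sketch of the "Flight Search with fare" request (Amadeus Flight Offers Search v2).
// CLIENT_ID / CLIENT_SECRET are placeholders for your Amadeus app credentials.
const BASE = 'https://test.api.amadeus.com';

async function getAccessToken(clientId, clientSecret) {
  const res = await fetch(`${BASE}/v1/security/oauth2/token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  return (await res.json()).access_token;
}

async function searchFlights(token, origin, destination, date, adults) {
  const params = new URLSearchParams({
    originLocationCode: origin,           // e.g., 'FRA'
    destinationLocationCode: destination, // e.g., 'BKK'
    departureDate: date,                  // e.g., '2025-08-14'
    adults: String(adults),               // e.g., '4'
  });
  const res = await fetch(`${BASE}/v2/shopping/flight-offers?${params}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```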
Build a knowledge-based WhatsApp assistant with RAG, Gemini, Supabase & Google Docs
Workflow execution link: Watch Execution Video

**Workflow prerequisites**

Step 1: Supabase setup
First, replace the keys in the "Save the embedding in DB" and "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

```sql
CREATE TABLE public."RAG" (
  id bigserial PRIMARY KEY,
  chunk text NULL,
  embeddings vector(1024) NULL
) TABLESPACE pg_default;
```

Create a function to match embeddings:

```sql
DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);

CREATE OR REPLACE FUNCTION public.matchembeddings1(
  match_count integer,
  query_embedding vector
)
RETURNS TABLE (
  chunk text,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    R.chunk,
    1 - (R.embeddings <=> query_embedding) AS similarity
  FROM public."RAG" AS R
  ORDER BY R.embeddings <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Step 2: Create the knowledge base
Create a new Google Doc with the complete knowledge base about your business and replace the document ID in the "Content for the Training" node.

Step 3: Get a Together AI API key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node (see the hedged request sketch at the end of this section).

Step 4: Set up a Meta app for WhatsApp Business Cloud
- Go to https://business.facebook.com/latest/settings/apps, create an app, and select the use case "Connect with customer through WhatsApp". Copy the Client ID and Client Secret and add them to the first node.
- Go to that newly created Meta app in the app dashboard, click on the use case, and then click on "customise...".
- Go to the API setup, add your number, and generate an access token on that page. Now paste the access token and the WhatsApp Business Account ID into the send message node.

**Part A: Document preparation (one-time setup)**

- When clicking 'Execute workflow' (manualTrigger): manually starts the workflow for preparing training content.
- Content for the Training (googleDocs): fetches the document content that will be used for training.
- Splitting into Chunks (code): breaks the document text into smaller pieces for processing.
- Embedding Uploaded document (httpRequest): converts each chunk into embeddings via an external API.
- Save the embedding in DB (supabase): stores both the chunks and the embeddings in the database for future use.

**Part B: Chat interaction (realtime flow)**

- WhatsApp Trigger (whatsAppTrigger): starts the workflow whenever a user sends a WhatsApp message.
- If (if): checks whether the incoming WhatsApp message contains text.
- Embend User Message (httpRequest): converts the user's message into an embedding.
- Search Embeddings (httpRequest): finds the top matching document chunks from the database using embeddings.
- Aggregate (aggregate): merges the retrieved chunks into one context block.
- AI Agent (langchain agent): builds the prompt combining the user's message and the context.
- Google Gemini Chat Model (lmChatGoogleGemini): generates the AI response based on the prepared prompt.
- Send message (whatsApp): sends the AI's reply back to the user on WhatsApp.
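Both embedding nodes call the same endpoint. Here's a hedged sketch of that HTTP request, using the BAAI/bge-large-en-v1.5 model named in the voice-agent template above; its 1024-dimension output matches the vector(1024) column in the table definition.

```javascript
// Hedged sketch of the Together AI embeddings call used by both embedding nodes.
// TOGETHER_API_KEY is a placeholder; paste your real key into the n8n nodes instead.
const TOGETHER_API_KEY = 'YOUR_TOGETHER_API_KEY';

async function embed(text) {
  const res = await fetch('https://api.together.xyz/v1/embeddings', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${TOGETHER_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'BAAI/bge-large-en-v1.5', // 1024-dim, matching vector(1024) in the RAG table
      input: text,
    }),
  });
  const data = await res.json();
  return data.data[0].embedding; // the numerical "fingerprint" of the text
}
```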
Automated stale user re-engagement system with Supabase, Google Sheets & Gmail
Built this workflow because most of our users signed up, then vanished after ~30 days. It runs daily, grabs those stale users from Supabase, updates a Google Sheet for tracking, and automatically sends each one a personalized HTML email through Gmail to bring them back. All fully automated — so once it’s set up, it quietly does its job in the background. Currently, it only supports Supabase, but the concept should work with any DB or API if you swap out the request node.
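The "grab stale users" step boils down to a single request against Supabase's REST API. Here's a hedged sketch, assuming a users table with a last_sign_in_at timestamp column; the actual table and column names aren't specified by the template, so adjust to your schema.

```javascript
// Hedged sketch of fetching users inactive for 30+ days via Supabase REST (PostgREST).
// Table/column names (users, last_sign_in_at) and the keys are assumptions.
const SUPABASE_URL = 'https://your-project.supabase.co';
const SUPABASE_KEY = 'YOUR_SERVICE_ROLE_KEY';

async function getStaleUsers() {
  const cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString();
  const params = new URLSearchParams({
    select: 'id,email,full_name',
    last_sign_in_at: `lt.${cutoff}`, // PostgREST filter: last sign-in before the cutoff
  });
  const res = await fetch(`${SUPABASE_URL}/rest/v1/users?${params}`, {
    headers: {
      apikey: SUPABASE_KEY,
      Authorization: `Bearer ${SUPABASE_KEY}`,
    },
  });
  return res.json();
}
```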
Automate internal complaint resolution with Jotform, Gemini AI & Google Sheets
Workflow explanation video: https://youtu.be/z1grVuNOXMk

**Prerequisites**

Before running this workflow, you need to have the following set up:
- JotForm: a form with fields for describing the issue and optionally naming the team member involved.
- Google Sheet 1 (Issue Resolver Logic): a sheet with three columns: Issue Category, Normal Resolver, and Alternate Resolver. This sheet defines who handles which type of complaint.
- Google Sheet 2 (Issue Logs): a sheet to store all submitted complaints, with columns like Issue, The person Caused by, case_awarded_to, resolver_email, email_subject, email_body_html, submitted_time, and status.
- Google Sheet 3 (Resolver Details): a simple sheet with two columns: resolver (e.g., "HR Team") and email (e.g., "hr@yourcompany.com").
- Credentials: connected accounts in n8n for JotForm, Google (a service account for Sheets and OAuth for Gmail), and a Gemini API key.

**Part 1: Initial complaint processing**

This part of the workflow triggers when a new complaint is submitted, uses AI to process it, logs it, and sends an initial notification.

- JotForm Trigger: the starting point of the workflow. It constantly listens for new submissions on your specified JotForm; when someone fills out and submits the form, this node activates and pulls in all the submitted data (such as the issue description and the person involved).
- AI Agent: the "brain" of the operation, which orchestrates several tools to make a decision. It receives the complaint details from the JotForm Trigger and follows a detailed prompt instructing it to perform a sequence of tasks:
  - Classify: analyze the complaint description to categorize it.
  - Reason: use its connected "tools" to figure out the correct resolver based on your business logic.
  - Generate: create a complete email notification and format the final output as a JSON object.
- Connected tools:
  - Google Gemini Chat Model: the actual language model that provides the intelligence. The AI Agent sends its prompt and the data to this model for processing.
  - Issue Resolver Allotment Logic (Sheets tool): lets the AI Agent read your first Google Sheet to look up the issue category and find the designated "Normal Resolver" or "Alternate Resolver".
  - Resolver Details (Sheets tool): lets the AI Agent read your third Google Sheet. Once it knows the name of the resolver (e.g., "HR Team"), it uses this tool to find the corresponding email address.
  - Structured Output Parser: ensures the AI's response is perfectly formatted into the required JSON structure (email, case_awarded_to, email_subject, etc.), making it reliable for the next steps.
- Save Complaint (Google Sheets node): the record-keeping step. It takes the structured JSON output from the AI Agent and the original data from the JotForm Trigger, then adds a new row to your second Google Sheet ("Issue Logs"), mapping each piece of data to its correct column (Issue, case_awarded_to, submitted_time, etc.).
- Send a message (Gmail node): the initial notification step. After the complaint is successfully logged, it sends a formal assignment email to the correct department or person, using the resolver_email, email_subject, and email_body_html fields generated by the AI Agent.
**Part 2: Daily follow-up**

This second, independent part of the workflow runs every day to check for unresolved issues that are older than three days and sends a reminder.

- Schedule Trigger: the starting point for the daily check. Instead of waiting for a user action, it activates automatically at a predefined time each day (e.g., 10:00 AM).
- Get Complaint Logs (Google Sheets node): the data-gathering step. When the schedule triggers, it reads all the rows from your "Issue Logs" sheet, bringing every recorded complaint into the workflow for evaluation.
- If node: the decision-making step. For each complaint, it calculates the difference in days between submitted_time and the current date. If that difference is greater than or equal to 3, the complaint is passed on to the next step; otherwise, the workflow stops for that complaint. (See the sketch after this list.)
- Send a message1 (Gmail node): the reminder email step. It only receives complaints that met the "3 days or older" condition. For each of these old complaints, it sends a follow-up email to the resolver_email; the email body is dynamic, mentioning how many days have passed and including the original issue description to remind the resolver of the pending task.
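A minimal sketch of the age check, written as an n8n Code node rather than an If node for readability; it assumes submitted_time is stored as a date string that new Date() can parse.

```javascript
// Hedged sketch of the "3 days or older" filter from Part 2.
// Assumes submitted_time parses with new Date(); adjust if the sheet uses another format.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

return $input.all().filter(({ json: complaint }) => {
  const submitted = new Date(complaint.submitted_time).getTime();
  const ageInDays = (Date.now() - submitted) / MS_PER_DAY;
  return ageInDays >= 3; // only stale complaints trigger a reminder email
});
```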
Reddit to Google Sheets: tracking freelance/job leads
🧩 n8n Workflow Overview

Goal: get Reddit posts from specific subreddits, filter those mentioning freelance/gigs and n8n, extract top-level comments, remove mod replies, and store everything in Google Sheets.

---

⚙️ Step-by-step node explanation

Start (Trigger)
- Type: Cron node
- Runs: every 2 hours
- Purpose: starts the workflow at regular intervals

HTTP Request - Get Posts from Reddit
- Type: HTTP Request
- Method: GET
- Auth: OAuth2 (Reddit app)
- Purpose: pulls the 10 latest posts from any subreddits of your choice

Filter Relevant Posts
- Type: IF node
- Purpose: filters out noise, keeping only potential job leads

HTTP Request - Get Post Comments
- Type: HTTP Request
- Auth: OAuth2
- Purpose: gets the full comment thread for each post

Extract Top-Level Comments
- Type: Function node
- Purpose: code filters only top-level comments, ignoring nested ones (see the sketch at the end of this section)

Remove Mod Comments
- Type: IF node
- Purpose: excludes mod replies, which are usually auto-messages or rule enforcement

Format Clean Data
- Type: Set node
- Fields captured: Subreddit, Post Title, Post URL, Comment Body, Reddit Username, Timestamp

Append to Google Sheets
- Type: Google Sheets node
- Operation: Append Row
- Sheet: a pre-created sheet with matching column names
- Purpose: logs everything into your spreadsheet neatly

---

💡 Bonus logic:
- If a post has no comments, it adds a blank row
- Runs smoothly with Reddit's OAuth2 (no scraping)
- All tools used are free-tier

---

📹 See it in action: I posted a quick video walkthrough on YouTube (no audio, just execution): 👉 https://youtu.be/JsUVVhYm8p4
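Here's a hedged sketch of the "Extract Top-Level Comments" logic, written in the style of n8n's Code node. Reddit's comments endpoint returns a two-element array of listings, and top-level comments are the direct children of the second one.

```javascript
// Hypothetical sketch of "Extract Top-Level Comments".
// Reddit's comments endpoint returns [post listing, comment listing]; depending on
// HTTP-node settings, n8n may deliver that as one array or as two separate items.
const response = $input.first().json;
const commentListing = Array.isArray(response) ? response[1] : $input.all()[1]?.json;

return (commentListing?.data?.children ?? [])
  .filter((child) => child.kind === 't1') // 't1' = comment; skips 'more' placeholders
  .map((child) => ({
    json: {
      author: child.data.author,
      body: child.data.body,
      created_utc: child.data.created_utc,
    },
  }));
```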
Reddit freelance job monitor with Google Sheets tracking and Telegram alerts
What It Does

This n8n workflow monitors Reddit for freelance job posts related to n8n and sends alerts via Telegram while logging relevant data in Google Sheets. It filters out duplicates and only stores unique, paid opportunities.

---

Workflow Steps

1. Schedule Trigger: runs every 5 minutes.
2. Reddit Search: sends a query to the Reddit API for freelance-related keywords.
3. Extract Post Metadata: parses the relevant data out of the Reddit response.
4. Separate Posts: splits the array into individual post items.
5. Filter Paid Jobs: matches posts with keywords like "hiring", "paid", "job", etc.
6. Check Existing Records: pulls already-logged post IDs from Google Sheets.
7. Filter Unique Posts: uses JavaScript to compare incoming posts with the existing ones and filter out duplicates (see the sketch after this section).
8. Get UTC Date: converts Reddit's created_utc timestamp into a readable format.
9. Save to Google Sheets: appends unique posts with these fields: id, title, url, flair, created_utc.
10. Send Telegram Alert: sends a formatted notification with clickable links and meta info.

---

Required Credentials

- Reddit OAuth2 (application with access to search)
- Google Sheets API (via service account)
- Telegram Bot API (bot token, chat ID)

---

Google Sheet Configuration

Make sure the sheet has these columns: id (used for deduplication), title, url, flair, created_utc.

> The first row must be the header.

---

How to Set It Up

Reddit API
- Create a Reddit app at https://www.reddit.com/prefs/apps
- Set up OAuth2 credentials in n8n with the correct scopes

Google Sheets
- Create a new Google Sheet
- Share it with your service account email
- Add the required columns in the first row

Telegram
- Create a bot via BotFather
- Copy the bot token
- Use a private chat or group and get the chat ID

---

Notes

- Interval: adjust the schedule node to change the frequency
- Search query: you can customize keywords in the Reddit API URL
- Profanity: clean output only; no slang or offensive words

---

Final Output

- Google Sheets: a log of all new freelance Reddit posts
- Telegram: instant lead alerts for new job posts
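The dedup step (7) is a small Code-node comparison. A hedged sketch under assumed input shapes: existing rows from Google Sheets arrive with an id column (as configured above), and incoming posts carry an id too.

```javascript
// Hedged sketch of "Filter Unique Posts": drop posts whose id is already in the sheet.
// Assumes both streams expose an `id` field, per the sheet configuration above.
const existingIds = new Set(
  $('Check Existing Records').all().map(({ json }) => String(json.id))
);

return $input.all().filter(({ json }) => !existingIds.has(String(json.id)));
```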
Monitor brand reputation for negative PR on Reddit with Gemini + LangChain + Sheets
🎥 Watch the Full Execution on YouTube

Workflow description (in-depth): this workflow automates the entire process of monitoring online reputation by scanning Reddit posts for negative sentiment about your company, filtering only the relevant criticism, and logging it directly into Google Sheets for easy tracking.

🔑 Key Features:

- Automated triggering: runs on a schedule so you never miss new discussions about your brand.
- Smart data fetching: uses the Reddit API to pull the latest posts matching your chosen keywords (e.g., "Notion").
- Post processing: breaks bulk Reddit responses down into individual posts for analysis.
- AI-powered filtering: a custom-built AI agent reviews the content and extracts only genuine negative PR (complaints, bad experiences, harmful mentions). Neutral or positive posts are ignored.
- Structured parsing: AI responses are enforced into a clean JSON schema (ID, Title, URL, NegativeContent), ensuring compatibility with downstream nodes (see the schema sketch after this section).
- Noise reduction: a code node ensures only posts with meaningful content are passed forward.
- Centralized logging: captures the critical information (post title, negative excerpt, URL, ID) into Google Sheets, giving teams a live dashboard of issues to address.
- Customizable & scalable: swap out "Notion" for your own brand, or expand to other platforms; the flow adapts without extra overhead.

🚀 Why This Matters: in 2025, a single Reddit thread can shape brand reputation overnight. Manual monitoring is inefficient, and traditional social listening tools generate too much noise. This workflow gives you a lean, automated, AI-powered system to stay ahead of potential PR risks.

🛠️ Tech Stack:

- n8n for automation
- Reddit API for data fetching
- LangChain AI Agent + Google Vertex model for sentiment & context analysis
- Google Sheets for reporting & tracking
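For reference, a hedged sketch of the JSON schema the Structured Output Parser might enforce. The field names come from the description above, but the exact types and descriptions are assumptions.

```json
{
  "type": "object",
  "properties": {
    "ID": { "type": "string", "description": "Reddit post ID" },
    "Title": { "type": "string", "description": "Post title" },
    "URL": { "type": "string", "description": "Permalink to the post" },
    "NegativeContent": { "type": "string", "description": "The negative excerpt, or empty if none" }
  },
  "required": ["ID", "Title", "URL", "NegativeContent"]
}
```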