Doctor appointment management system with Gemini AI, WhatsApp, Stripe & Google Sheets
## WhatsApp AI Assistant for Clinic Appointment Booking

Automate your entire appointment lifecycle with an intelligent AI assistant that lives on WhatsApp. This workflow empowers any clinic or independent practitioner to manage bookings, take payments, and send reminders without manual intervention, using Google Sheets as a simple database. This template handles everything from the initial booking conversation to sending the final reminder, allowing you to focus on your patients, not your schedule.

### Features

- 🤖 **Conversational AI Booking:** A Google Gemini-powered agent guides patients through booking, rescheduling, and canceling appointments in a natural, friendly chat.
- 🗓️ **Smart Scheduling:** The AI checks for available slots based on your working hours and existing appointments in Google Sheets, preventing double-bookings.
- 🔔 **Automated Reminders:** A daily trigger automatically sends WhatsApp reminders to patients for their appointments on that day, reducing no-shows.
- 💳 **Seamless Payments:** Integrated with Stripe to handle online payments. The workflow sends a confirmation message upon successful payment.
- 📊 **Centralized Management:** Uses a single Google Sheet with separate tabs for Patients, Appointments, and Configuration, making it easy to view and manage all your data.
- 🔄 **Easy Rescheduling & Cancellations:** Patients can manage their own bookings through the WhatsApp chat, and all changes are instantly reflected in your Google Sheet.

### Prerequisites

Before you begin, you will need the following accounts and credentials:

- **n8n Account:** A running n8n instance (cloud or self-hosted).
- **WhatsApp Business Account:** Access to the WhatsApp Cloud API.
- **Google Account:** For Google Sheets and the Google Gemini (AI) API.
- **Stripe Account:** To process online payments.
- **n8n Credentials:** You must configure credentials in your n8n instance for: WhatsApp Cloud API, Google Sheets (OAuth2), Google Gemini API, and Stripe.

### Setup Instructions

Follow these steps carefully to get your automated assistant up and running.

#### Step 1: Set Up Your Google Sheet

This workflow relies on a specific Google Sheet structure.

1. Create a new Google Sheet.
2. Rename the sheet to something memorable, like "Clinic Appointments".
3. Create three tabs at the bottom with the exact names: `Patients`, `Appointments`, and `Config`.
4. Set up the columns for each tab as follows (the header names must be an exact match):
   - **Patients** tab: `patient_id`, `whatsapp_number`, `name`, `age`, `gender`
   - **Appointments** tab: `appointment_id`, `patient_id`, `whatsapp_number`, `date`, `time`, `payment_method`, `payment_status`, `status`, `stripe_payment_intent`
   - **Config** tab: `key`, `value` (add a row with `working_hours` as the key and a value like `10:00-18:00`)

#### Step 2: Configure the Workflow Nodes

Now you'll link the workflow to your accounts and your new Google Sheet.

- **Update all Google Sheets nodes:** Go through every Google Sheets node in the workflow (e.g., "Get Appointment sheet", "Add Patient", "Google Sheets Trigger") and do the following:
  - Select your Google Sheets credential from the dropdown.
  - In the **Document ID** field, paste the ID of your Google Sheet.
  - Select the correct sheet (`Appointments`, `Patients`, etc.) from the **Sheet Name** dropdown.
- **Update all WhatsApp nodes:** Go through every WhatsApp node (e.g., "Send message", "WhatsApp Trigger", "Send Payment Confirmation"):
  - Select your WhatsApp credential.
  - Enter your Phone Number ID from your Meta for Developers account.
- **Update AI & Stripe nodes:**
  - In the Google Gemini Chat Model nodes, select your Google Gemini credential.
  - In the Stripe Trigger node, select your Stripe credential.

#### Step 3: Activate the Workflow

1. Click **Save** to apply your changes.
2. Click the **Activate** toggle in the top-right corner to turn the workflow on.

Your AI appointment assistant is now live!
Send a message to your WhatsApp number to begin testing.

### Customization

- **Change Reminder Time:** To change the daily reminder time, open the **Schedule Trigger** node and adjust the hour from the default of 8 AM.
- **Edit AI Personality:** To modify how the AI communicates, edit the system message in the **AI Agent** node (the one connected to the WhatsApp Trigger). You can change its tone, instructions, or language.
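For illustration, the "Smart Scheduling" check boils down to comparing the `working_hours` value from the Config tab against times already booked in the Appointments tab. A minimal JavaScript sketch, as you might write it in an n8n Code node, assuming hourly slots and the column values described above (the workflow's actual logic may differ):

```javascript
// Compute free appointment slots from a working-hours string like "10:00-18:00"
// and a list of already-booked times like ["11:00", "14:00"].
// Assumes hourly slots; this is a sketch, not the template's literal node code.
function availableSlots(workingHours, bookedTimes) {
  const [start, end] = workingHours.split('-').map(t => parseInt(t.split(':')[0], 10));
  const slots = [];
  for (let h = start; h < end; h++) {
    const slot = `${String(h).padStart(2, '0')}:00`;
    if (!bookedTimes.includes(slot)) slots.push(slot); // skip double-bookings
  }
  return slots;
}
```

The AI agent can then offer only the returned slots to the patient, which is what prevents double-bookings.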
Backs up n8n workflows to NextCloud
A temporary solution that backs up your n8n workflows via the undocumented n8n REST API, with file versioning provided by Nextcloud.
Extract and merge Twitter (X) threads using TwitterAPI.io
## Twitter (X) Thread Fetcher: Extract and Merge Tweets from Threads

### What it does

- **Thread Detection:** Automatically detects whether the provided Twitter link is a single tweet or a thread.
- **Tweet Extraction:** Fetches and returns the content of a single tweet, or gathers all tweets in a thread by retrieving the first tweet and all connected replies.
- **Thread Merging:** Merges all tweets in the correct order to reconstruct the complete thread, filtering out any empty results.
- **Seamless Integration:** Easily trigger this workflow from other n8n workflows to automate Twitter thread extraction from various sources.

### How it works

1. Accepts a Twitter link as input, either a single tweet or a thread.
2. If the link is for a single tweet, fetches and returns the tweet content.
3. If the link is for a thread, fetches the first tweet, then iteratively retrieves all connected replies that form the thread, ensuring only relevant tweets are included.
4. Merges the first tweet and all subsequent thread tweets in order, filters out any empty results, and returns the complete thread.
5. Uses twitterapi.io for all Twitter API requests.

### Set up steps

Setup typically takes just a few minutes. You'll need to configure your Twitter API credentials for twitterapi.io. You can trigger this workflow manually for testing, or call it from another workflow to automate thread fetching from sources like Notion, spreadsheets, or other platforms. For best results, create a separate workflow to gather Twitter links from your preferred source, then trigger this workflow to fetch and return the full thread.

> Detailed configuration instructions and node explanations are included as sticky notes within the workflow canvas.

### Benefits

- **Light speed:** Fetches a 15-tweet thread in about 3 seconds.
- **Cost effective:** Processes a 15-tweet thread for only $0.0027. (Cost may vary depending on the density of replies in the thread.)
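The merging step described above can be sketched in a few lines of JavaScript. This is a minimal illustration assuming tweet objects with `id` and `text` fields (as returned by twitterapi.io), not the workflow's literal node code:

```javascript
// Combine the first tweet with its ordered replies into one thread text,
// dropping any empty results along the way.
function mergeThread(firstTweet, replies) {
  return [firstTweet, ...replies]
    .filter(t => t && t.text && t.text.trim().length > 0) // filter out empty results
    .map(t => t.text)
    .join('\n\n'); // reconstruct the complete thread as one text block
}
```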
Monitor changes in Google Sheets every 45 mins
Based on your use case, you might want to trigger a workflow when new data gets added to your database. This workflow sends a message to Mattermost when new data gets added in Google Sheets. The Interval node triggers the workflow every 45 minutes; you can modify the timing based on your use case, or use the Cron node to trigger the workflow instead. If you wish to fetch new Tweets from Twitter, replace the Google Sheets node with the respective node and update the Function node accordingly.
Generate 3D models & textures from images with Hunyuan3D AI
## Generate 3D Models & Textures from Images with Hunyuan3D AI

This workflow connects n8n → Replicate API to generate 3D-like outputs using the ndreca/hunyuan3d-2.1-test model. It handles everything: sending the request, waiting for processing, checking status, and returning results.

---

### ⚡ Section 1: Trigger & Setup

⚙️ Nodes

1️⃣ **On Clicking "Execute"**
- What it does: Starts the workflow manually in n8n.
- Why it's useful: Great for testing or one-off runs before automation.

2️⃣ **Set API Key**
- What it does: Stores your Replicate API key.
- Why it's useful: Keeps authentication secure and reusable across HTTP nodes.

💡 **Beginner benefit:** No coding needed: just paste your API key once. Easy to test: press Execute, and you're live.

---

### 🤖 Section 2: Send Job to Replicate

⚙️ Nodes

3️⃣ **Create Prediction (HTTP Request)**
- What it does: Sends a POST request to Replicate's API with the model version (70d0d816...ae75f), the input image URL, and parameters like `steps`, `seed`, `generate_texture`, and `remove_background`.
- Why it's useful: This kicks off the AI generation job on Replicate's servers.

4️⃣ **Extract Prediction ID (Code)**
- What it does: Grabs the prediction ID from the API response and builds a status-check URL.
- Why it's useful: Every job has a unique ID; this lets us track progress later.

💡 **Beginner benefit:** You don't need to worry about JSON parsing; the workflow extracts the ID automatically. Everything is reusable if you run multiple generations.

---

### ⏳ Section 3: Poll Until Complete

⚙️ Nodes

5️⃣ **Wait (2s)**
- What it does: Pauses for 2 seconds before checking the job status.
- Why it's useful: Prevents spamming the API with too many requests.

6️⃣ **Check Prediction Status (HTTP Request)**
- What it does: Sends a GET request to see if the job is finished.

7️⃣ **Check If Complete (IF Node)**
- What it does: If status = succeeded, processes the results; if not, loops back to Wait and checks again.

💡 **Beginner benefit:** Handles the waiting logic for you, with no manual refreshing needed. Keeps looping until the AI job is really done.
---

### 📦 Section 4: Process the Result

⚙️ Nodes

8️⃣ **Process Result (Code)**
- What it does: Extracts the status, the output (final generated file/URL), metrics (performance stats), timestamps (`created_at`, `completed_at`), and model info.
- Why it's useful: Packages the response neatly for storage, email, or sending elsewhere.

💡 **Beginner benefit:** Get clean, structured data ready for saving or sending. Can be extended easily: push the output to Google Drive, Notion, or Slack.

---

### 📊 Workflow Overview

| Section | What happens | Key Nodes | Benefit |
| --- | --- | --- | --- |
| ⚡ Trigger & Setup | Start workflow + set API key | Manual Trigger, Set | Easy one-click start |
| 🤖 Send Job | Send input & get prediction ID | Create Prediction, Extract ID | Launches AI generation |
| ⏳ Poll Until Complete | Waits + checks status until ready | Wait, Check Status, IF | Automated loop, no manual refresh |
| 📦 Process Result | Collects output & metrics | Process Result | Clean result for next steps |

---

### 🎯 Overall Benefits

✅ Fully automates Replicate model runs
✅ Handles waiting, retries, and completion checks
✅ Clean final output with status + metrics
✅ Beginner-friendly: just add an API key + input image
✅ Extensible: connect results to Google Sheets, Gmail, Slack, or databases

---

✨ In short: this is a no-code AI image-to-3D content generator powered by Replicate and automated by n8n.

---
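The wait/check/loop pattern in Section 3 is the same poll-until-complete loop you would write in code. A minimal sketch, assuming Replicate's REST API shape (`GET /v1/predictions/{id}` returning a `status` field); the workflow implements this with Wait, HTTP Request, and IF nodes instead:

```javascript
// Poll a Replicate prediction until it finishes, pausing between checks.
// pollMs defaults to 2000 to mirror the workflow's Wait (2s) node.
async function waitForPrediction(predictionId, apiKey, pollMs = 2000) {
  const url = `https://api.replicate.com/v1/predictions/${predictionId}`;
  while (true) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
    const prediction = await res.json();
    if (prediction.status === 'succeeded' || prediction.status === 'failed') {
      return prediction; // contains output, metrics, and timestamps
    }
    await new Promise(r => setTimeout(r, pollMs)); // wait before re-checking
  }
}
```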
Create daily newsletter digests from Gmail using GPT-4.1-mini
## Summary

Every day at a set time, this workflow fetches yesterday's newsletters from Gmail, summarizes each email into concise topics with an LLM, merges all topics, renders a clean HTML digest, and emails it to your inbox.

## What this workflow does

- Triggers on a daily schedule (default 16:00, server time)
- Fetches Gmail messages since yesterday using a custom search query with optional sender filters
- Retrieves and decodes each email's HTML, subject, sender name, and date
- Prompts an LLM (GPT-4.1-mini) to produce a consistent JSON summary of topics per email
- Merges topics from all emails into a single list
- Renders a styled HTML email with enumerated items
- Sends the HTML digest to a specified recipient via Gmail

## Apps and credentials

- Gmail OAuth2: Gmail account (read and send)
- OpenAI: OpenAI account

## Typical use cases

- Daily/weekly newsletter rollups delivered as one email
- Curated digests from specific media or authors
- Team briefings that are easy to read and forward

## How it works (node-by-node)

- **Schedule Trigger:** Fires at the configured hour (default 16:00).
- **Get many messages** (Gmail → getAll, returnAll: true): Uses a filter like `=(from:@.com) OR (from:@.com) OR (from:@.com -"") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}` and returns a list of message IDs from the past day.
- **Loop Over Items** (Split in Batches): Iterates through each message ID.
- **Get a message** (Gmail → get): Retrieves the full message/payload for the current email.
- **Get message data** (Code): Extracts HTML from Gmail's MIME parts, normalizes the sender to just the display name, formats the date as DD.MM.YYYY, and passes html, subject, from, and date forward.
- **Clean** (Code): Converts DD.MM.YYYY → MM.DD (for prompt brevity) and passes html, subject, from, and date to the LLM.
- **Message a model** (OpenAI, model: gpt-4.1-mini, JSON output): The prompt instructs the model to produce JSON of the form `{ "topics": [ { "title", "descr", "subject", "from", "date" } ] }`, split multi-news blocks into separate topics, combine or ignore specific blocks for particular senders (placeholders), keep the subject untranslated, and write other values in the target language. It injects subject/from/date/html from the current email.
- **Loop Over Items** (continues): Processes all emails for the time window.
- **Merge** (Code): Flattens the topics arrays from all processed emails into one combined topics list.
- **Create template** (Code): Builds a complete HTML email with enumerated items (title, one-line description, original subject, and "from — date"), safely escapes HTML, preserves line breaks, and uses inline, email-friendly styles.
- **Send a message** (Gmail → send): Sends the final HTML to your recipient with a custom subject.

## Node map

| Node | Type | Purpose |
|---|---|---|
| Schedule Trigger | Trigger | Run at a specific time each day |
| Get many messages | Gmail (getAll) | Search emails since yesterday with filters |
| Loop Over Items | Split in Batches | Iterate messages one-by-one |
| Get a message | Gmail (get) | Fetch full message payload |
| Get message data | Code | Extract HTML/subject/from/date; normalize sender and date |
| Clean | Code | Reformat date and forward fields to LLM |
| Message a model | OpenAI | Summarize email into JSON topics |
| Merge | Code | Merge topics from all emails |
| Create template | Code | Render a styled HTML email digest |
| Send a message | Gmail (send) | Deliver the digest email |

## Before you start

- Connect Gmail OAuth2 in n8n (ensure it has both read and send permissions)
- Add your OpenAI API key
- Import the provided workflow JSON into n8n

## Setup instructions

### 1) Schedule

- Schedule Trigger node: set your preferred hour (server time). Default is 16:00.
### 2) Gmail

- **Get many messages:** Adjust `filters.q` to your senders/labels and time window. Example: `=(from:news@publisher.com) OR (from:briefs@media.com -"promo") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`. You can use `label:` or `category:` to narrow the scope.
- **Send a message:**
  - sendTo = your email
  - subject = your subject line
  - message = `{{ $json.htmlBody }}` (already produced by Create template)
  - The HTML body uses inline styles for broad email client support.

### 3) OpenAI

- **Message a model:**
  - Model: gpt-4.1-mini (swap to gpt-4o-mini or your preferred model)
  - Update prompt placeholders: language → your target language; sender rules → special cases (combine blocks, ignore sections)

## How to use

The workflow runs daily at the scheduled time, compiling a digest from yesterday's emails. You'll receive one HTML email with all topics neatly listed. Adjust the time window or filters to change what gets included.

## Customization ideas

- Time window control: `after: {{ $now.minus({ days: X }) }}` and/or add `before:`
- Filter by labels: `q = label:Newsletters after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`
- Language: set the language in the LLM prompt
- Template: edit "Create template" to add a header, footer, hero section, or logo/branding
- Include links parsed from HTML (add an HTML parser step in "Get message data")
- Subject line: make it dynamic, e.g., "Digest for {{ $now.toFormat('dd.MM.yyyy') }}"
- Sender: use a dedicated Gmail account or alias for deliverability and separation

## Limits and notes

- Gmail's size limit for outgoing emails is ~25 MB; large digests may need pruning
- LLM usage incurs cost and latency proportional to email size and count
- HTML rendering varies across clients; inline styles are used for compatibility
- The schedule uses the n8n server's timezone; adjust if your server runs in a different TZ

## Privacy and safety

- Emails are sent to OpenAI for summarization; ensure this aligns with your data policies
- Limit the Gmail search scope to only the newsletters you want processed
- Avoid including sensitive emails in the search window

## Sample output (email body)

1. **Title 1**
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY
2. **Title 2**
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY

## Tips and troubleshooting

- No emails found? Check `filters.q` and the time window (`after:`)
- Model returns empty JSON? Simplify the prompt or try another model
- Odd characters in output? The template escapes HTML and preserves line breaks; verify your input encoding
- Delivery issues? Use a verified sender, set a clear subject, and avoid spammy keywords

## Tags

gmail, openai, llm, newsletters, digest, summarization, email, automation

## Changelog

- v1: Initial release with scheduled time window, sender filters, LLM summarization, topic merging, and HTML email template rendering
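The "Get message data" step walks Gmail's MIME tree to find the HTML part of each message. A minimal sketch of that extraction, assuming the Gmail API's standard payload shape (`parts[]`, `body.data` encoded as base64url); the workflow's actual Code node may differ in detail:

```javascript
// Recursively search a Gmail API message payload for the text/html part
// and decode its base64url-encoded body.
function extractHtml(payload) {
  if (payload.mimeType === 'text/html' && payload.body && payload.body.data) {
    // Gmail encodes message bodies as base64url
    return Buffer.from(payload.body.data, 'base64url').toString('utf8');
  }
  for (const part of payload.parts || []) {
    const html = extractHtml(part); // recurse into multipart/* containers
    if (html) return html;
  }
  return null;
}
```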
Auto-reply to FAQs on WhatsApp using Postgres (Module "FAQ")
## Who is this for?

This workflow is designed for businesses or individuals who want to automate responses to frequently asked questions via WhatsApp, while managing their question-and-answer database using Postgres.

## What problem is this workflow solving?

This workflow addresses the challenge of efficiently managing and automating responses to customer inquiries. It helps reduce manual effort and ensures quick access to information, while providing an option for customers to request live assistance when needed.

## What this workflow does

- It allows you to store questions and answers in a Postgres database.
- A link to a bot is shared with customers, enabling them to read the available questions and answers.
- If a customer does not find an answer to their query, they can request a consultation with a manager.

## Setup

1. **Create tables in the Postgres DB:**
   - Replace "n8n" in the provided SQL script with the name of your schema in the database.
   - Run the SQL script in your database to set up the necessary tables. (The script is available in the workflow.)
2. **Add credentials:**
   - Set up the WhatsApp integration (OAuth and Account credentials).
   - Connect the Postgres database by adding the necessary credentials.

## How to customize this workflow to your needs

- Modify the Postgres schema name in the SQL script to match your database configuration.
- Update the questions and answers in the database to suit the information you want to provide via the bot.
- Customize the WhatsApp integration settings to match your account credentials and API details.
AI email auto-responder system - AI RAG agent for email inbox
## AI Email Auto-Responder – Smart Client Reply Automation with RAG

This workflow is built for individuals, teams, and businesses that receive regular inquiries via email and want to automate responses in a way that's intelligent, brand-aligned, and always up to date. Its core purpose is to generate high-quality, professional email replies using internal company data, brand voice, and semantic search, fully automated through Gmail, Pinecone, and OpenAI.

The system is divided into three steps. First, it allows you to index your internal knowledge base (Docs, Sheets, PDFs) with embeddings. Second, it injects a consistent brand brief into every interaction to ensure tone and positioning. Finally, the main flow listens for incoming emails, understands the user query, retrieves all needed data, and writes a full HTML reply, sending it directly to the original thread via Gmail.

This solution is ideal for support teams, solopreneurs, B2B service providers, or anyone looking to scale high-quality client communication without scaling manual work. It can be extended to handle multilingual queries, intent routing, or CRM logging.

### How it works

When a new email arrives in Gmail, the workflow checks whether it's a valid client inquiry. If so, it:

1. Extracts the subject and message content
2. Sends the message through OpenAI to understand the question
3. Queries a Pinecone vector database (populated via a separate embedding workflow) to find relevant internal knowledge
4. Loads a brand brief from a Google Doc or Notion block
5. Combines the retrieved data and brand context to generate a clear, structured HTML reply using OpenAI
6. Sends the reply via Gmail and logs the message

This process ensures every reply is relevant, accurate, and consistent with your brand, and takes under 10 seconds.

### Set up steps

Getting started takes about 30–60 minutes.
1. Create three workflows: one for embedding documents (Step 1), one sub-workflow for the brand brief (Step 2), and one main responder flow (Step 3)
2. Connect the following APIs: Gmail (OAuth2), OpenAI, Pinecone, Google Drive, and optionally Notion
3. Replace all placeholders: the folder ID in Google Drive, the Pinecone index and namespace, your brand brief URL or doc ID, and Gmail credentials
4. Test your embedding workflow by uploading a document and verifying its presence in Pinecone
5. Trigger the responder by sending an email and reviewing the AI's reply

Detailed setup instructions are stored in sticky notes within each workflow to guide you through configuration.
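Step 5 of "How it works" (combining retrieved knowledge with the brand brief) amounts to assembling one prompt for the model. A minimal sketch; the field names (`chunk.text`, `brief`, `question`) are illustrative assumptions, not the workflow's actual data shape:

```javascript
// Assemble the reply-generation prompt from the client's question, the brand
// brief, and the knowledge-base chunks retrieved from Pinecone.
function buildReplyPrompt(question, brief, chunks) {
  const context = chunks.map((c, i) => `[${i + 1}] ${c.text}`).join('\n');
  return [
    'You are a client-support assistant. Follow the brand brief strictly.',
    `Brand brief:\n${brief}`,
    `Relevant internal knowledge:\n${context}`,
    `Client question:\n${question}`,
    'Write a clear, professional HTML reply.',
  ].join('\n\n');
}
```

Grounding the model in numbered chunks like this also makes it easier to spot which retrieved passage a given sentence of the reply came from.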
Repurpose YouTube videos and publish via Blotato with Telegram, Sheets and GPT-4.1-mini
## 💥 Automate YouTube Video Creation and Publishing with Blotato

### Who is this for?

This workflow is designed for YouTube creators, content marketers, automation builders, and agencies who want to repurpose existing YouTube videos into new original content and automate the publishing process. It is especially useful for users already working with Telegram, Google Sheets, OpenAI, and Blotato.

---

### What problem is this workflow solving? / Use case

Creating YouTube content at scale is time-consuming: extracting ideas from existing videos, rewriting scripts, generating SEO metadata, tracking content, and publishing videos all require manual work across multiple tools. This workflow solves that by:

- Automating content analysis and rewriting
- Centralizing tracking and approvals in Google Sheets
- Automating YouTube publishing via Blotato

---

### What this workflow does

This workflow automates the full YouTube video repurposing and publishing pipeline:

1. Receives a YouTube video URL and instructions via Telegram
2. Logs the request in Google Sheets
3. Extracts the YouTube video ID
4. Retrieves the video transcript via RapidAPI
5. Cleans and normalizes the transcript
6. Generates a new original video script using OpenAI
7. Generates SEO metadata (title, description, tags) in strict JSON format
8. Updates Google Sheets with the generated content
9. Waits for approval (status = ready)
10. Uploads the final video to Blotato
11. Publishes the video on YouTube
12. Updates the status to "publish" in Google Sheets

---

### Setup

To use this workflow, you need to configure the following services:

**Google services**
- Enable the Google Sheets API in the Google Cloud Console
- Create OAuth2 credentials
- Add credentials in n8n: Google Sheets OAuth2 API (credential name: Google Sheets account)
- My Google Sheets: copy

**RapidAPI (YouTube transcript)**
- Sign up at RapidAPI
- Subscribe to "YouTube Video Summarizer GPT AI"
- Get your API key
- Update it in the Workflow Configuration node

**Blotato (video publishing)**
- Sign up at Blotato
- Get API credentials
- Add credentials in n8n: Blotato API (credential name: Blotato account)
- Connect your YouTube account via Blotato

---

### How to customize this workflow to your needs

You can easily adapt this workflow by:

- Changing the output language (`output_lang`) in the configuration node
- Modifying the OpenAI prompts to match your tone or niche
- Adjusting Google Sheets columns or approval logic
- Replacing YouTube with another platform supported by Blotato
- Extending the workflow to generate shorts, reels, or multi-platform posts

The workflow is modular and designed to be extended without breaking the core logic.

🎥 Watch This Tutorial

---

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n

---

📄 Documentation: Notion Guide
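Step 3 of the pipeline (extracting the YouTube video ID) is a small piece of string handling. A minimal sketch covering the common URL shapes, assuming standard 11-character IDs; the workflow's actual Code node may differ:

```javascript
// Pull the 11-character video ID out of watch?v=, youtu.be/, and shorts/ URLs.
function extractVideoId(url) {
  const m = url.match(/(?:youtube\.com\/(?:watch\?v=|shorts\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/);
  return m ? m[1] : null; // null signals an unrecognized link
}
```

Returning `null` for unrecognized links lets a downstream IF node route bad Telegram input to an error reply instead of calling the transcript API with garbage.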
Implement recursive algorithms with sub-workflows: Towers of Hanoi demo
## How it works

This is an example of using sub-workflow nodes, and a proof of concept showing that it's possible to solve and explain recursive problems with n8n.

### Towers of Hanoi

**Task:** Move a stack of n disks from rod A to rod C, using rod B as an auxiliary. Only one disk can be moved at a time, and no disk may be placed on a smaller disk.

**Example (n = 4):**

```
    |          |          |
    =          |          |
   ===         |          |
  =====        |          |
 =======       |          |
---------  ---------  ---------
    A          B          C
```

**Algorithm:**

```
procedure Hanoi(n, X, Y, Z):
    if n == 1:
        move disk from X to Z
    else:
        Hanoi(n-1, X, Z, Y)
        move disk from X to Z
        Hanoi(n-1, Y, X, Z)
```

### Notes

- This is a learning example. In a real scenario, you would probably use an iterative approach with only a single Code node.
- When experimenting with recursion, make sure to define a termination condition first. Also, be aware of the "Restart workspace" link in the n8n Dashboard.
- Learn more about Recursion and the Towers of Hanoi on Wikipedia.

## Set up steps

1. Optional: set "numberOfDiscs" in the node "Set number of discs"
2. Execute the workflow
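The pseudocode above translates directly to JavaScript; each recursive call here corresponds to one sub-workflow execution in the n8n version:

```javascript
// Towers of Hanoi: move n disks from `from` to `to` using `aux`.
// Returns the ordered list of moves, e.g. ["A -> C", "A -> B", ...].
function hanoi(n, from, aux, to, moves = []) {
  if (n === 1) {
    moves.push(`${from} -> ${to}`); // termination condition: a single disk
  } else {
    hanoi(n - 1, from, to, aux, moves); // park n-1 disks on the auxiliary rod
    moves.push(`${from} -> ${to}`);     // move the largest disk
    hanoi(n - 1, aux, from, to, moves); // bring the n-1 disks onto it
  }
  return moves;
}
```

As expected for this algorithm, `hanoi(n, ...)` produces 2^n - 1 moves, which is also why an iterative single-Code-node version is the practical choice for larger n.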
Auto-track Amazon prices with Google Gemini & send alerts to Telegram
## AI-Powered Amazon Price Tracker to Telegram

### Overview

Automate your deal hunting with this intelligent Amazon price tracker. This workflow uses the power of AI to monitor any Amazon product page at regular intervals. When the price drops to or below your desired target, it instantly sends a notification to your Telegram chat. Say goodbye to manual price checking and never miss a sale, a lightning deal, or a Black Friday bargain again.

Unlike traditional scrapers that break when a website's layout changes, this workflow uses a large language model (Google Gemini) to understand the page content, making it significantly more robust and reliable.

### 🚀 Key Features

- **AI-Powered Data Extraction:** Leverages Google Gemini to intelligently read the product page and extract the name and price, making it resilient to Amazon's frequent layout updates.
- **Scheduled Automation:** Set it up once with a schedule (e.g., every hour) and let it run in the background.
- **Instant Telegram Alerts:** Get real-time notifications directly in Telegram the moment a price drops to your target.
- **Centralized & Easy Configuration:** A single Set node holds all your settings (product URL, target price, and Telegram Chat ID) for quick and easy updates.
- **Built-in Error Handling:** Intelligently checks whether data was extracted correctly and sends an error alert if it fails, so you're always in the loop.
- **Cost-Efficient Processing:** Includes a pre-processing step to clean and simplify the page's HTML, reducing the amount of data sent to the AI and lowering potential token costs.

### ⚙️ How It Works

The workflow follows a clear, linear process from scheduling to notification.

**Initiation and configuration**
- A Schedule node triggers the workflow automatically at your chosen interval (e.g., every hour).
- A Set node named "Config: Product & Alert" acts as your control panel. Here, you define the Amazon product URL, your target price, and your Telegram Chat ID.
**Fetch and clean the product page**
- An HTTP Request node fetches the full HTML content of the Amazon product URL.
- A Code node then cleans this raw HTML, stripping out unnecessary scripts, styles, and tags and leaving only the core text content. This crucial step makes the data easier for the AI to analyze accurately and efficiently.

**AI-powered data extraction**
- An AI Agent node sends the cleaned text to the Google Gemini model, using a specific prompt to ask the AI to identify and extract the product's name (productName) and its current price (priceValue) as a number.
- A Structured Output Parser ensures the AI returns the data in a clean, predictable JSON format (e.g., `{"productName": "...", "priceValue": 49.99}`), which is essential for the next steps.

**Validate and compare the price**
- An IF node first validates the AI's output, checking whether a valid price was successfully extracted. If not, it routes the workflow to send an error message.
- If the data is valid, a second IF node compares the extracted priceValue with your priceTarget from the configuration node.

**Prepare and send the Telegram notification**
- If the current price is less than or equal to your target price, the workflow proceeds down the "true" path.
- A Set node constructs a formatted, user-friendly success message including the product name, the new low price, and a direct link to the product page.
- Finally, a Telegram node sends this prepared message to your specified Chat ID. If an error occurred at any stage, a corresponding error message is sent instead.

### 🛠️ Setup Steps & Credentials

To get this workflow running, you'll need to configure the following:

**Google Gemini**
- You need a Google AI (Gemini) API key.
- Create a Google Gemini (PaLM) API credential in n8n.
- Assign this credential to the Google Gemini Chat Model node.

**Telegram**
- You need a Telegram Bot Token. Get one by talking to @BotFather on Telegram.
- You also need your personal Chat ID. You can get this from a bot like @userinfobot.
- Create a Telegram credential in n8n with your Bot Token.
- Assign this credential to both the Send Success and Send Error Telegram nodes.

**Workflow configuration**
- Open the "Config: Product & Alert" node and replace the placeholder values:
  - telegramChatId: paste your Telegram Chat ID.
  - amazonUrl: paste the full URL of the Amazon product you want to track.
  - priceTarget: set your desired price (e.g., 49.99).

Once configured, save the workflow and activate it using the toggle in the top-right corner.

### 💡 Customization & Learning

This workflow is a powerful template that you can easily adapt and expand:

- **Track multiple products:** Modify the workflow to read product URLs and target prices from a Google Sheet or Airtable base to monitor many items at once.
- **Add more notification channels:** Duplicate the Telegram node and add Discord, Slack, or Email nodes to receive alerts on multiple platforms.
- **Store price history:** Connect a Google Sheets, Airtable, or database node after the AI extraction step to log the product's price over time, creating a historical price chart.
- **Switch AI models:** Easily swap the Google Gemini node for an OpenAI or Anthropic model by adjusting the prompt and credentials.
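The HTML-cleaning Code node described in "Fetch and clean the product page" can be sketched as a few regex passes. This is a simplified illustration (a regex cleaner, which is adequate for this text-only purpose); the template's actual node may be more thorough:

```javascript
// Strip scripts, styles, and tags so only readable text reaches Gemini,
// cutting the token count before AI extraction.
function cleanHtml(html) {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, ' ') // drop JS and CSS blocks wholesale
    .replace(/<[^>]+>/g, ' ')                        // drop remaining tags, keep their text
    .replace(/\s+/g, ' ')                            // collapse runs of whitespace
    .trim();
}
```

On a typical Amazon product page this reduces hundreds of kilobytes of markup to a few kilobytes of text, which is where the "Cost-Efficient Processing" saving comes from.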
Automated candidate screening & response using GPT-4, Mistral OCR and Slack notifications
## 📊 Description

Streamline your HR recruitment process with this intelligent automation that reads candidate emails and resumes, analyzes them using GPT-4, and automatically shortlists or rejects applicants based on skill and experience match. 📩🤖

The workflow updates your HR Google Sheet with detailed AI evaluations, notifies recruiters on Slack about high-scoring candidates, and sends personalized shortlist or rejection emails to applicants, all in one seamless flow.

### 🚀 What This Template Does

1️⃣ **Trigger:** Monitors the HR Gmail inbox for new job applications with attachments. 📬
2️⃣ **Extracts Resume Data:** Uploads attached resumes to Mistral OCR to extract text for analysis. 📄
3️⃣ **Combines Inputs:** Merges candidate email data and resume content for complete context. 🔗
4️⃣ **AI Evaluation:** GPT-4 analyzes the candidate's qualifications against job requirements in a connected Google Sheet. 🧠
5️⃣ **Scoring & Recommendation:** Generates a structured JSON output with a job fit summary, skill match, AI score, and recommendation (Shortlist or Reject). 📊
6️⃣ **Record Update:** Logs AI evaluation results in a Google Sheet for centralized tracking. 📋
7️⃣ **Communication:** Sends professional shortlist or rejection emails to applicants via Gmail. 💌
8️⃣ **Team Alert:** Notifies HR on Slack when a high-scoring candidate is detected.
### 🔔 Key Benefits

✅ Saves hours of manual resume screening and sorting
✅ Ensures consistent, unbiased candidate evaluation
✅ Provides detailed AI-driven insights for every applicant
✅ Automates communication and record-keeping
✅ Improves HR productivity and response speed

### Features

- Gmail trigger for new candidate emails
- Resume text extraction via the Mistral OCR API
- GPT-4-powered resume and email evaluation
- Integration with Google Sheets for HR requirement mapping
- Slack notifications for shortlisted candidates
- Automated shortlist/rejection emails with custom templates
- Structured AI output for analytics and reporting

### Requirements

- Gmail OAuth2 credentials for inbox and email automation
- Google Sheets OAuth2 credentials with edit access
- OpenAI API key (GPT-4 or GPT-4o-mini)
- Slack Bot token with chat:write permissions
- Mistral AI OCR API key for resume text extraction

### Target Audience

- HR and recruitment teams managing large applicant volumes 🧑‍💼
- Talent acquisition managers looking for AI-driven screening 🤖
- Organizations standardizing hiring communication 💬
- Agencies building automated candidate evaluation systems 📈

### Step-by-Step Setup Instructions

1️⃣ Connect your Gmail account and configure the inbox trigger.
2️⃣ Add Mistral API credentials for resume OCR extraction.
3️⃣ Set up your Google Sheet with job role requirements and access credentials.
4️⃣ Add OpenAI credentials (GPT-4 or GPT-4o-mini) for AI evaluation.
5️⃣ Configure Slack credentials and the HR channel ID for alerts.
6️⃣ Test with a sample application to ensure correct data mapping.
7️⃣ Activate the workflow to start automated recruitment processing. ✅
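Because the sheet update and the shortlist/reject branching both depend on the model's structured JSON, validating it before use is worthwhile. A minimal sketch; the field names (`ai_score`, `recommendation`) mirror the description but are illustrative, not the template's exact schema:

```javascript
// Parse and sanity-check the AI evaluation before writing it to the sheet
// or branching on it. Throws on malformed output so the workflow can catch
// and retry instead of logging garbage.
function parseEvaluation(raw) {
  const data = JSON.parse(raw);
  const score = Number(data.ai_score);
  if (!Number.isFinite(score) || score < 0 || score > 100) {
    throw new Error('Invalid ai_score: ' + data.ai_score);
  }
  if (!['Shortlist', 'Reject'].includes(data.recommendation)) {
    throw new Error('Invalid recommendation: ' + data.recommendation);
  }
  return { ...data, ai_score: score }; // score normalized to a number
}
```

A guard like this is what keeps the evaluation "consistent" in practice: a model response that drifts from the schema fails loudly rather than silently producing a candidate row with no score.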