
Templates by Trung Tran

Bulk resume screening & JD matching with GPT-4 for HR teams

TalentFlow AI – Bulk Resume Screening with JD Matching

[Watch the demo video](https://www.youtube.com/watch?v=MD1krFvVKdU)

Automatically extract, evaluate, and shortlist multiple resumes against a selected job description using GPT-4. This smart, scalable n8n workflow helps HR/TA teams streamline hiring decisions while keeping results structured, auditable, and easy to share.

---

πŸ‘€ Who's it for

This workflow is designed for:
- HR or Talent Acquisition (TA) teams handling multiple candidates per role
- Recruiters who want AI-assisted resume screening to save time and reduce bias
- Organizations that want to automatically log evaluations and keep stakeholders updated in real time via Slack or Sheets

---

βš™οΈ How it works / What it does

1. HR/TA uploads multiple candidate resumes and selects a job role.
2. Each resume is:
   - Uploaded to Google Drive
   - Parsed with GPT-4 to extract structured profile data
3. The job description for the selected role is:
   - Retrieved from Google Sheets
   - Downloaded from Drive and parsed
4. The profile + JD are sent to an AI agent to generate:
   - Fit score
   - Strengths & gaps
   - Final recommendation
5. Results are:
   - Appended to the evaluation tracking sheet
   - Optionally shared with the hiring team on Slack
   - Used to trigger emails to qualified or unqualified candidates

(See the evaluation record sketch after this description.)

---

πŸ› οΈ How to set up

1. Clone or import the workflow into your n8n instance.
2. Connect your integrations:
   - Google Sheets (positions & evaluation form)
   - Google Drive (CV & JD folders)
   - OpenAI API (GPT-4)
   - Slack (for notifications)
   - (Optional) SendGrid or SMTP for email notifications
3. Update the Google Sheets structure:
   - Positions sheet: maps Job Role → JD file link
   - Evaluation form: stores evaluation results
4. Prepare Drive folders:
   - /cv folder for uploaded resumes
   - /jd folder for job description PDFs

---

πŸ“‹ Requirements

- βœ… n8n (hosted or self-hosted)
- βœ… OpenAI GPT-4 account (used in the Profile & JD evaluator agents)
- βœ… Google Drive + Google Sheets access
- βœ… Slack workspace + bot token
- (Optional) SendGrid or email credentials for candidate follow-up

---

🎨 How to customize the workflow

- Change the fit score threshold in the Candidate qualified? node
- Edit Slack message content/formatting to match your company tone
- Add additional candidate metadata to Sheets or Slack messages
- Use a webhook trigger to integrate with your ATS or job board
- Swap GPT-4 with Claude or Gemini if you prefer other AI services
- Expand to include multi-position batch screening logic

---

Happy Hiring! πŸš€ Automated with love using n8n
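The template does not publish a fixed schema for the evaluator agent's output, but the fit score / strengths / gaps / recommendation step implies a record along these lines. The following TypeScript sketch is purely illustrative; every field name is an assumption to be adapted to your own Evaluation form columns.

```typescript
// Hypothetical shape of one evaluation record appended to the tracking sheet.
// Field names are illustrative, not the template's actual parser output.
interface CandidateEvaluation {
  candidateName: string;
  jobRole: string;
  fitScore: number;        // e.g. 0-100, compared against the "Candidate qualified?" threshold
  strengths: string[];
  gaps: string[];
  recommendation: "qualified" | "unqualified";
  evaluatedAt: string;     // ISO timestamp
}

// Example record as it might be appended to Google Sheets or posted to Slack.
const example: CandidateEvaluation = {
  candidateName: "Jane Smith",
  jobRole: "Azure DevOps Engineer",
  fitScore: 82,
  strengths: ["6 years of Azure pipelines", "IaC with Terraform"],
  gaps: ["No Kubernetes production experience"],
  recommendation: "qualified",
  evaluatedAt: new Date().toISOString(),
};

console.log(JSON.stringify(example, null, 2));
```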

By Trung Tran
2418

Track expenses from Telegram to Google Sheets with GPT-4.1 Mini

πŸ“’ Telegram Expense Tracker to Google Sheets with GPT-4.1

> A lightweight automation that lets users log daily expenses via a Telegram bot and instantly saves them into Google Sheets – perfect for anyone looking to manage spending on the go with AI-powered structure and ease.

πŸ‘€ Who's it for

This workflow is for anyone who wants to log their daily expenses by simply chatting with a Telegram bot. Ideal for:
- Individuals who want a quick way to track spending
- Freelancers who log receipts and purchases on the go
- Teams or small business owners who want lightweight expense capture

βš™οΈ How it works / What it does

1. The user sends a text message on Telegram describing an expense (e.g., "Bought coffee for 50k at Highlands").
2. The message format is validated:
   - If the message is text, it proceeds to GPT-4.1 Mini for processing.
   - If it's not text (e.g., an image or file), the bot sends a fallback message.
3. OpenAI GPT-4.1 Mini parses the message and returns:
   - relevant: true/false
   - expense_record: structured fields (date, amount, currency, category, description, source)
   - message: a friendly confirmation or fallback
4. If valid:
   - The bot replies with a fun acknowledgment
   - The data is saved to a connected Google Sheet
5. If invalid:
   - A fallback message is sent to encourage proper input

(A sketch of the expected LLM output is shown after this description.)

πŸ› οΈ How to set up

1. Telegram Bot Setup
   - Create a bot using BotFather on Telegram
   - Copy the bot token and paste it into the Telegram Trigger node
2. Google Sheet Setup
   - Create a Google Sheet with these columns: Date | Amount | Currency | Category | Description | SourceMessage
   - Share the sheet with your n8n service account email
3. OpenAI Configuration
   - Connect the OpenAI Chat Model node using your OpenAI API key
   - Use GPT-4.1 Mini as the model
   - Apply a system prompt that extracts structured JSON with: relevant, expense_record, and message
4. Add Parser
   - Use the Structured Output Parser node to safely parse the JSON response
5. Conditional Logic Nodes
   - Is text message? – checks if the message is in text format
   - Supported scenario? – checks if relevant = true in the LLM response
6. Final Actions
   - If relevant: send confirmation via Telegram and append a row to the Google Sheet
   - If not relevant: send a fallback message via Telegram

βœ… Requirements

- Telegram bot token
- OpenAI GPT-4.1 Mini API access
- n8n instance (self-hosted or cloud)
- Google Sheet with access granted to n8n
- Basic understanding of n8n node configuration

🧩 How to customize the workflow

| Feature | How to Customize |
|---------|------------------|
| Add multi-currency support | Update the system prompt to detect and extract different currencies |
| Add more categories | Modify the list of categories in the system prompt |
| Track multiple users | Add a username or chat ID column to the Google Sheet |
| Trigger alerts | Add Slack, Email, or Telegram alerts for specific expense types |
| Weekly summaries | Use a cron node + Google Sheet query + Telegram message |
| Visual dashboards | Connect the sheet to Looker Studio or Google Data Studio |

Built with πŸ’¬ Telegram + 🧠 GPT-4.1 Mini + πŸ“Š Google Sheets + ⚑ n8n
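The description names the three fields the system prompt should return (relevant, expense_record, message). Here is a minimal TypeScript sketch of that output, assuming one plausible shape for expense_record; the exact types depend on your prompt and the Structured Output Parser configuration.

```typescript
// A minimal sketch of the JSON the GPT-4.1 Mini system prompt is asked to return.
// Field names follow the description above; exact shapes are assumptions.
interface ExpenseLLMOutput {
  relevant: boolean;
  expense_record: {
    date: string;        // e.g. "2025-08-07"
    amount: number;      // e.g. 50000
    currency: string;    // e.g. "VND"
    category: string;    // e.g. "Food & Drink"
    description: string; // e.g. "Coffee at Highlands"
    source: string;      // the original Telegram message
  } | null;
  message: string;       // friendly confirmation or fallback text
}

// Example for "Bought coffee for 50k at Highlands"
const parsed: ExpenseLLMOutput = {
  relevant: true,
  expense_record: {
    date: "2025-08-07",
    amount: 50000,
    currency: "VND",
    category: "Food & Drink",
    description: "Coffee at Highlands",
    source: "Bought coffee for 50k at Highlands",
  },
  message: "Got it! Logged 50,000 VND for coffee at Highlands.",
};

console.log(JSON.stringify(parsed, null, 2));
```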

By Trung Tran
2257

Create AI-generated books with GPT-4.1-mini, DALL-E, Google Drive and AWS S3

Multi-Agent Book Creation Workflow with AI Tool Node, GPT-4 and DALL-E

[Watch the demo video](https://www.youtube.com/watch?v=o1x8Tw_7FwQ)

Who's it for

This workflow is designed for:
- Content creators who want to generate books or structured documents automatically.
- Educators and trainers who need quick course materials, eBooks, or study guides.
- Automation enthusiasts exploring multi-agent systems using the newly released AI Tool Node in n8n.
- Developers looking for a reference template to understand orchestration of multiple AI agents with structured output.

How it works / What it does

This template demonstrates a multi-agent orchestration system powered by AI Tool Nodes:
1. Trigger: the workflow starts when a chat message is received.
2. Book Brief Agent: generates the initial book concept (title, subtitle, and outline).
3. Book Writer Agent: expands the outline into full content by collaborating with two sub-agents:
   - Designer Agent → provides layout/design suggestions.
   - Content Writer Agent → drafts and refines chapters.
4. Generate Cover Image: AI generates a custom book cover image.
5. Upload to AWS S3: stores the cover image securely.
6. Configure Metadata: adds metadata for title, author, and description.
7. Build Book HTML: converts the markdown-based content into HTML format.
8. Upload to Google Drive: saves the HTML content for processing.
9. Convert to PDF: transforms the book into a professional PDF.
10. Archive to Google Drive: the final version is archived for safe storage.

This workflow showcases multi-agent coordination, structured parsing, and seamless integration with cloud storage services. (A minimal orchestration sketch follows this description.)

How to set up

1. Import the workflow into n8n.
2. Configure the following connections:
   - OpenAI (for the Book Brief, Book Writer, Designer, and Content Writer Agents).
   - AWS S3 (for image storage).
   - Google Drive (for document storage & archiving).
3. Add your API keys and credentials in the n8n credentials manager.
4. Test the workflow by sending a sample chat message (e.g., "Write a book about AI in education").
5. Verify the outputs in Google Drive (HTML + PDF) and AWS S3 (cover image).

Requirements

- n8n (latest version with AI Tool Node support).
- OpenAI API key (to power the multi-agent models).
- AWS account (with an S3 bucket for storing images).
- Google Drive integration (for document storage and archiving).
- Basic familiarity with workflow setup in n8n.

How to customize the workflow

- Switch Models: replace gpt-4.1-mini with other models (faster, cheaper, or more powerful).
- Add More Agents: introduce agents for editing, fact-checking, or translation.
- Change Output Format: export to EPUB, DOCX, or Markdown instead of PDF.
- Branding Options: modify the cover generation prompt to include company logos or a specific style.
- Extend Storage: add Dropbox, OneDrive, or Notion integration for additional archiving.
- Trigger Alternatives: replace the chat trigger with a form submission, webhook, or schedule-based runs.

βœ… This workflow acts as a free, plug-and-play template to showcase how multi-agents + the AI Tool Node can work together to automate complex content creation pipelines.
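To make the delegation pattern concrete, here is a stand-alone TypeScript sketch of the agent hierarchy described above. `callAgent` is a hypothetical stand-in for an AI Tool Node / LLM call, not an n8n API; it only illustrates how the Book Writer Agent consults its two sub-agents.

```typescript
// Illustrative sketch of the orchestration this template wires up in n8n.
// `callAgent` is hypothetical; in the real workflow each agent is an AI Tool Node
// backed by an OpenAI chat model with a role-specific system prompt.
type Agent = (input: string) => Promise<string>;

const callAgent = (role: string): Agent => async (input) => {
  return `[${role}] response for: ${input}`; // placeholder for an LLM call
};

const bookBriefAgent = callAgent("Book Brief");          // title, subtitle, outline
const designerAgent = callAgent("Designer");             // layout/design suggestions
const contentWriterAgent = callAgent("Content Writer");  // drafts and refines chapters

// Book Writer Agent: expands the outline by consulting the two sub-agents.
async function bookWriterAgent(outline: string): Promise<string> {
  const design = await designerAgent(outline);
  const chapters = await contentWriterAgent(outline);
  return `${design}\n\n${chapters}`;
}

async function main() {
  const brief = await bookBriefAgent("Write a book about AI in education");
  const manuscript = await bookWriterAgent(brief);
  console.log(manuscript); // downstream nodes convert this content to HTML, then PDF
}

main();
```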

By Trung Tran
2167

Automatically transcribe Telegram voice messages with OpenAI Whisper & Google Workspace

πŸŽ™οΈ VoiceScribe AI: Telegram Audio Message Auto Transcription with OpenAI Whisper > Automatically transcribe Telegram voice messages and store them as structured logs in Google Sheets, while backing up the audio in Google Drive. πŸ§‘β€πŸ’Ό Who’s it for Journalists, content creators, or busy professionals who often record voice memos or short interviews on the go. Anyone who wants to turn voice recordings into searchable, structured notes. βš™οΈ How it works / What it does User sends a voice message to a Telegram bot. n8n checks if the message is an audio voice note. If valid, it downloads the audio file and: Transcribes it using OpenAI Whisper (or your LLM of choice). Uploads the original audio to Google Drive for safekeeping. The transcript and audio metadata are merged. The workflow: Logs the data into a Google Sheet. Sends a formatted confirmation message to the user via Telegram. If the input is not audio, the bot politely informs the user that only voice messages are accepted. βœ… Features Accepts only Telegram voice messages. Transcribes via OpenAI Whisper. Logs DateTime, Duration, Transcript, and Audio URL to Google Sheets. Sends user feedback message via Telegram with download + transcript link. πŸš€ How to set up Prerequisites Telegram Bot connected to n8n (via Telegram Trigger) Google Drive & Google Sheets credentials configured OpenAI or Whisper API credentials (for transcription) Steps Telegram Trigger Start the flow when a new message is sent to your bot. Check Message Type Use a conditional node to confirm it's a voice message. Download Voice Message Download the .oga file from Telegram. Transcribe Audio Send the binary audio to OpenAI Whisper or your transcription service. Upload to Google Drive Backup the original audio file. Merge Outputs Combine transcription with Drive metadata. Transform to Row Format Prepare structured JSON for Google Sheets. Append to Google Sheet Store the transcript log (DateTime, Duration, Transcript, AudioURL). Send Confirmation to User Inform the user via Telegram with their transcript and download link. Unsupported Message Handler Reply to users who send non-audio messages. πŸ“„ Example Output in Google Sheet | DateTime | Duration | Transcript | AudioURL | |-----------------------|----------|--------------------------------------------|------------------------------------------------------------| | 2025-08-07T13:12:19Z | 27 | Dα»± Γ‘n Outlet Activation lΓ ... | https://drive.google.com/uc?id=xxxx&export=download | 🧠 How to customize the workflow Swap Whisper with Deepgram, AssemblyAI, or other providers. Add speaker name detection or prompt-based tagging via GPT. Route transcripts into Notion, Airtable, or CRM systems. Add multi-language support or summarization steps. πŸ“¦ Requirements | Component | Required | |---------------------|----------| | Telegram API | βœ… | | Google Drive API | βœ… | | Google Sheets API | βœ… | | OpenAI Whisper API | βœ… | | n8n Cloud or Self-hosted | βœ… | Created with ❀️ using n8n

By Trung Tran
1273

Build a retrieval-based chatbot with Telegram, OpenAI and Google Drive PDF backup

πŸ“š Telegram RAG Chatbot with PDF Document & Google Drive Backup

An upgraded Retrieval-Augmented Generation (RAG) chatbot built in n8n that lets users ask questions via Telegram and receive accurate answers from uploaded PDFs. It embeds documents using OpenAI and backs them up to Google Drive.

πŸ‘€ Who's it for

Perfect for:
- Knowledge workers who want instant access to private documents
- Support teams needing searchable SOPs and guides
- Educators enabling course material Q&A for students
- Individuals automating personal document search + cloud backup

βš™οΈ How it works / What it does

πŸ’¬ Telegram Chat Handling
1. User sends a message: triggered by the Telegram bot, the workflow checks if the message is text.
2. Text message → OpenAI RAG Agent: if the message is text, it's passed to a GPT-powered document agent. This agent:
   - Retrieves relevant info from the embedded documents using semantic search
   - Returns a context-aware answer to the user
3. Send answer back: the bot sends the generated response back to the Telegram user.
4. Non-text input fallback: if the message is not text, the bot replies with a polite unsupported message.

πŸ“„ PDF Upload and Embedding
1. User uploads PDFs manually: a manual trigger starts the embedding flow.
2. Default Data Loader: reads and chunks the PDF(s) into text segments.
3. Insert to Vector Store (Embedding): text chunks are embedded using OpenAI and saved for retrieval.
4. Backup to Google Drive: the original PDF is uploaded to Google Drive for safekeeping.

(A hedged embedding/search sketch follows this description.)

πŸ› οΈ How to set up

1. Telegram Bot
   - Create one via BotFather
   - Connect it to the Telegram Trigger node
2. OpenAI
   - Use your OpenAI API key
   - Connect the Embeddings and Chat Model nodes (GPT-3.5/4)
   - Ensure both embedding and querying use the same Embedding node
3. Google Drive
   - Set up credentials in n8n for your Google account
   - Connect the "Backup to Google Drive" node
4. PDF Ingestion
   - Use the "Upload your PDF here" trigger
   - Connect it to the loader, embedder, and backup flow

βœ… Requirements

- Telegram bot token
- OpenAI API key (GPT + Embeddings)
- n8n instance (self-hosted or cloud)
- Google Drive integration
- PDF files to upload

🧩 How to customize the workflow

| Feature | How to Customize |
|---------|------------------|
| Auto-ingest from folders | Add Google Drive/Dropbox watchers for new PDFs |
| Add file upload via Telegram | Extend the Telegram bot to receive PDFs and run the embedding flow |
| Track user questions | Log Telegram usernames and questions to a database |
| Summarize documents | Add a summarization step on upload |
| Add Markdown or HTML support | Format replies for better Telegram rendering |

Built with πŸ’¬ Telegram + πŸ“„ PDF + 🧠 OpenAI Embeddings + ☁️ Google Drive + ⚑ n8n
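The template relies on n8n's built-in vector store and embedding nodes; the stand-alone TypeScript sketch below only illustrates the underlying idea (embed chunks once, then rank them against the question by cosine similarity) using an in-memory store. Names and model choice are assumptions, not the template's configuration.

```typescript
// Conceptual sketch of "Insert to Vector Store" + semantic search with an in-memory store.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const store: { text: string; vector: number[] }[] = [];

async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed model; use the same one for ingest and query
    input: text,
  });
  return res.data[0].embedding;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function ingest(chunks: string[]) {
  for (const text of chunks) store.push({ text, vector: await embed(text) });
}

async function search(question: string, topK = 3) {
  const q = await embed(question);
  return store
    .map((d) => ({ text: d.text, score: cosine(q, d.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK); // these chunks become the context for the chat model's answer
}
```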

By Trung Tran
1229

Generate tailored interview questions with GPT-4 based on CV, JD, and round

πŸ€– Smart Interview Assistant: Tailored Questions Based on CV, JD, and Round

Watch the demo video below: [Demo video](https://youtu.be/-KHze-bRpqQ)

πŸ“Œ Who's it for

This workflow is designed for:
- Recruiters and Talent Acquisition Specialists who want to automate candidate interview prep.
- Hiring Managers conducting multiple interviews and needing personalized question sets.
- Technical Interviewers who want to save time and be well prepared with relevant questions.

βš™οΈ How it works / What it does

The Smart Interview Assistant automates the interview preparation process in a few clicks:
1. Accepts:
   - Multiple resumes (PDFs)
   - Selected job role
   - Chosen interview round
2. Extracts structured data from:
   - The candidate's CV
   - The corresponding Job Description (JD)
3. Uses GPT-4 to analyze:
   - Candidate profile
   - Role requirements
   - Interview round context
4. Generates:
   - Tailored interview questions
   - Expected answers
   - A summarized interview prep report
5. Sends the report directly to the hiring team via email (SMTP)

πŸ“ Google Drive Structure

πŸ“‚ Root Folder
β”œβ”€β”€ πŸ“ jd/                       Stores all job descriptions in PDF format
β”‚   β”œβ”€β”€ Backend_Engineer.pdf
β”‚   β”œβ”€β”€ AzureDevOpsLead.pdf
β”‚   └── ...
└── πŸ“„ Positions (Google Sheet)  Maps Job Role ↔ JD File Link

πŸ“ Sample mapping sheet (Positions) columns:
- Job Role
- Job Description File URL (pointing to the PDF in the jd/ folder)

πŸ› οΈ How to Set Up

Step 1: Configure API Integrations
- βœ… Connect your OpenAI GPT-4 API key
- βœ… Enable Google Cloud APIs:
  - Google Sheets API (to read job roles)
  - Google Drive API (to access CV and JD files)
- βœ… Set up SMTP credentials (for email delivery)

Step 2: Prepare Google Drive & Mapping Sheet
- Create a root folder on Google Drive
- Inside the root folder:
  - Create a folder named /jd/ and upload all job descriptions (PDFs)
  - Create a Google Sheet named Positions with the following format:

| Job Role | Job Description File URL |
|----------|--------------------------|
| Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

Step 3: Build the Application Form
Use any form tool (e.g., Typeform, Tally, or custom HTML) that collects:
- πŸ“Ž Resume file (PDF)
- 🧾 Job Role (dropdown)
- πŸ”„ Interview Round (dropdown)

Step 4: Resume & JD Extraction
- πŸ” Use Extract from PDF to parse the resume content
- πŸ“„ Retrieve the JD link from the Positions sheet based on the selected Job Role
- πŸ”— Use Download file to pull the PDF for processing

Step 5: Analyze with GPT-4
- Run both the resume and the JD through a Profile Analyzer Agent (GPT-4 with JSON output)
- Merge the results
- Add manual input or mapping for the Interview Round metadata

Step 6: Generate Interview Report
Use a second GPT-4 agent (e.g., an HR Expert Agent) to:
- Generate 6–8 tailored interview questions
- Include expected answers and rationale

(See the report sketch after this description.)

Step 7: Deliver Final Report
- Format the content as:
  - πŸ“„ PDF (optional)
  - πŸ“¨ Email body
- Send the report to the recruiter, hiring manager, or interviewer via SMTP

βœ… Requirements

- πŸ”‘ OpenAI GPT-4 API key
- πŸ“ Google Drive (for resume and JD storage)
- πŸ“Š Google Sheet (job role mapping)
- πŸ“¬ SMTP credentials (host, username, password)
- 🧰 n8n self-hosted or cloud instance with:
  - PDF Parser
  - Google Sheets node
  - HTTP Download node
  - Email node

✏️ How to Customize the Workflow

| Part | Customization Options |
|------|-----------------------|
| Form UI | Modify the design, dropdown options, or input validations |
| Job Description Source | Replace the Google Sheet with Notion, Airtable, or a database |
| Interview Metadata | Add job level, region, or language preference |
| AI Prompt Tuning | Adjust prompt phrasing or temperature in the GPT nodes |
| Report Format | Generate a PDF instead of an email body using a PDF node |
| Delivery Method | Add an internal HR portal webhook or generate a downloadable link |
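The HR Expert Agent's output format is configurable, but a report shaped roughly like the TypeScript sketch below is what the email/PDF step implies. All field names here are hypothetical and should be aligned with your own output parser.

```typescript
// Hypothetical shape of the interview prep report the second GPT-4 agent returns.
interface InterviewQuestion {
  question: string;
  expectedAnswer: string;
  rationale: string; // why this question matters for the selected role and round
}

interface InterviewPrepReport {
  candidateName: string;
  jobRole: string;
  interviewRound: string;         // e.g. "Technical Round 1"
  summary: string;                // condensed view of profile vs. JD fit
  questions: InterviewQuestion[]; // typically 6-8 items
}

// Example skeleton that the email/PDF formatting step could consume.
const report: InterviewPrepReport = {
  candidateName: "John Doe",
  jobRole: "Azure DevOps Engineer",
  interviewRound: "Technical Round 1",
  summary: "Strong CI/CD background; probe Kubernetes depth and incident response.",
  questions: [
    {
      question: "Walk through a zero-downtime deployment you designed on Azure.",
      expectedAnswer: "Blue/green or canary strategy, traffic shifting, rollback plan.",
      rationale: "The JD emphasizes release reliability; the CV lists Azure Pipelines experience.",
    },
  ],
};

console.log(`${report.questions.length} question(s) prepared for ${report.candidateName}`);
```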

By Trung Tran
1007

Collaborative sales planning with multi-agent AI, Google Docs, and Slack

Multi-Agent Architecture: Free Bootstrap Template for Beginners

[Watch the demo video](https://www.youtube.com/watch?v=BfMY2jFJR9k)

A free template to learn and reuse a multi-agent architecture in n8n. The company metaphor: a CEO (orchestrator) delegates to Marketing, Operations, and Finance to produce a short sales-season plan, export it to PDF, and share it.

Who's it for
- Builders who want a clear, minimal pattern for multi-agent orchestration in n8n.
- Teams demoing/teaching agent collaboration with one coordinator + three specialists.
- Anyone needing a repeatable template to generate plans from multiple "departments".

How it works / What it does
1. Trigger (Manual): click Execute workflow to start.
2. Edit Fields: provide brief inputs (company, products, dates, constraints, channels, goals).
3. CEO Agent (Orchestrator): reads the brief, calls the 3 tool agents once, merges results, resolves conflicts.
   - Marketing Agent: proposes top campaigns + channels + a content calendar.
   - Operations Agent: outlines inventory/staffing readiness, fulfillment steps, risks.
   - Finance Agent: suggests pricing/discounts, budget split, targets.
4. Compose Document: the CEO produces Markdown; a node converts it to a Google Doc → PDF.
5. Share: upload the PDF to Slack (or Drive) for review.

Outputs
- Markdown plan with sections (Summary, Timeline, Marketing, Ops, Pricing, Risks, Next Actions).
- Compact JSON for automation (campaigns, budget, dates, actions). (A hedged JSON sketch follows this description.)
- PDF file for stakeholders.

How to set up
1. Add credentials
   - OpenAI (or your LLM provider) for all agents.
   - Google (Drive/Docs) to create the document and export the PDF.
   - Slack (optional) to upload/share the PDF.
2. Map nodes (suggested)
   - When clicking 'Execute workflow' → Edit Fields (form with: company, products, audience, startdate, enddate, channels, constraints, metrics).
   - CEO Agent (AI Tool Node) → calls the Marketing Agent, Operations Agent, and Finance Agent (AI Tool Nodes).
   - Configure metadata (doc title from company + window).
   - Create document file (Google Docs API) with the CEO's Markdown.
   - Convert to PDF (export).
   - Upload a file (Slack) to share.
3. Prompts (drop-in)
   - CEO (system): orchestrate the 3 tools; request concise JSON + Markdown; merge & resolve; output sections + JSON.
   - Marketing / Operations / Finance (system): each returns a small JSON for its scope (campaigns/calendar; staffing/steps/risks; discounts/budget/targets).
4. Test: run once; verify the PDF and Slack message.

Requirements
- n8n (current version with AI Tool Node).
- LLM credentials (e.g., OpenAI).
- Google credentials for Docs/Drive (to create & export).
- Optional Slack bot token for file uploads.

How to customize the workflow
- Swap roles: replace departments (e.g., Product, Legal, Support) or add more tool agents.
- Change outputs: export to DOCX/HTML/Notion; add a cover page; attach brand styles.
- Approval step: insert a Slack "Send & Wait" before PDF generation for review/edits.
- Data grounding: add RAG (Sheets/DB/Docs) so agents cite inventory, pricing, or past campaign KPIs.
- Automation JSON: extend the schema to match your CRM/PM tool and push next_actions into Jira/Asana.
- Scheduling: replace the manual trigger with a cron (weekly/monthly planning).
- Localization: add a Translation agent or set the language via an input field.
- Guardrails: add length limits, cost caps (max tokens), and validation on agent JSON.
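The template leaves the "compact JSON for automation" schema up to you; this TypeScript sketch shows one plausible shape (all field names are assumptions) plus the kind of validation guardrail mentioned under "How to customize the workflow".

```typescript
// A hedged sketch of the compact plan JSON the CEO agent could be asked to emit.
interface SalesSeasonPlan {
  company: string;
  window: { start: string; end: string };  // planning window, ISO dates
  campaigns: { name: string; channel: string; startDate: string }[];
  budget: Record<string, number>;          // e.g. { ads: 4000, content: 1500 }
  targets: { metric: string; value: number }[];
  next_actions: { owner: string; action: string; due: string }[];
}

const plan: SalesSeasonPlan = {
  company: "Acme Retail",
  window: { start: "2025-11-01", end: "2025-12-31" },
  campaigns: [{ name: "Holiday Bundle Push", channel: "email", startDate: "2025-11-10" }],
  budget: { ads: 4000, content: 1500 },
  targets: [{ metric: "revenue", value: 120000 }],
  next_actions: [{ owner: "Marketing", action: "Draft email sequence", due: "2025-11-05" }],
};

// Example guardrail: reject plans that arrive without actionable next steps.
if (plan.next_actions.length === 0) throw new Error("Agent JSON failed validation");
console.log(JSON.stringify(plan, null, 2));
```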

By Trung Tran
874

AI resume screening & evaluation for HR with GPT-4 & Google Workspace

Try It Out: HireMind – AI-Driven Resume Intelligence Pipeline!

This n8n template demonstrates how to automate resume screening and evaluation using AI to improve candidate processing and reduce manual HR effort. It is a smart, reliable resume screening pipeline for modern HR teams: the workflow combines Google Drive (JD & CV storage), OpenAI (GPT-4-based evaluation), Google Sheets (position mapping + result log), and Slack/SendGrid integrations for real-time communication. Automatically extract, evaluate, and track candidate applications with clarity and consistency.

---

How it works

1. A candidate submits their application using a form that includes name, email, CV (PDF), and a selected job role.
2. The CV is uploaded to Google Drive for record-keeping and later reference.
3. The Profile Analyzer Agent reads the uploaded resume, extracts structured candidate information, and transforms it into a standardized JSON format using GPT-4 and a custom output parser. (A hedged sketch of that JSON follows this description.)
4. The corresponding job description PDF is automatically retrieved from a Google Sheet based on the selected job role.
5. The HR Expert Agent evaluates the candidate profile against the job description using another GPT-4 model, generating a structured assessment that includes strengths, gaps, and an overall recommendation.
6. The evaluation result is parsed and formatted for output.
7. The evaluation score is used to mark the candidate as qualified or unqualified; based on that, an email is sent to the applicant or a message is sent to the hiring team for the next step in the process.
8. The final evaluation result is stored in a Google Sheet for long-term tracking and reporting.

Google Drive structure

β”œβ”€β”€ jd                 Google Drive folder to store your JDs (PDF)
β”‚   β”œβ”€β”€ Backend_Engineer.pdf
β”‚   β”œβ”€β”€ AzureDevOpsLead.pdf
β”‚   └── ...
β”œβ”€β”€ cv                 Google Drive folder where the workflow uploads candidate resumes
β”‚   β”œβ”€β”€ JohnDoeDevOps.pdf
β”‚   β”œβ”€β”€ JaneSmithFullStack.pdf
β”‚   └── ...
β”œβ”€β”€ Positions          πŸ“‹ Mapping table: Job Role ↔ Job Description (link)
β”‚   (Sample: https://docs.google.com/spreadsheets/d/1pW0muHp1NXwh2GiRvGVwGGRYCkcMR7z8NyS9wvSPYjs/edit?usp=sharing)
β”‚   └── Columns:
β”‚       - Job Role
β”‚       - Job Description File URL (PDF in jd/)
└── Evaluation form    (Google Sheet) βœ… Final AI evaluation results

How to use

1. Set up credentials and integrations:
   - Connect your OpenAI account (GPT-4 API).
   - Enable Google Cloud APIs:
     - Google Sheets API (for reading job roles and saving evaluation results)
     - Google Drive API (for storing CVs and job descriptions)
   - Set up SendGrid (to send email responses to candidates).
   - Connect Slack (to send messages to the hiring team).
2. Prepare your Google Drive structure:
   - Create a root folder, then inside it create:
     - /jd → store all job descriptions in PDF format
     - /cv → this is where candidate CVs will be uploaded automatically
   - Create a Google Sheet named Positions with the following structure:

| Job Role | Job Description Link |
|----------|----------------------|
| Azure DevOps Engineer | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

3. Update your application form:
   - Use the built-in form, or connect your own (e.g., Typeform, Tally, Webflow, etc.).
   - Ensure the Job Role dropdown matches the roles in the Positions sheet exactly.
4. Run the AI workflow. When a candidate submits the form:
   - Their CV is uploaded to the /cv folder.
   - The job role is used to match the JD from /jd.
   - The Profile Analyzer Agent extracts candidate info from the CV.
   - The HR Expert Agent evaluates the candidate against the matched JD using GPT-4.
5. Distribute and store the results:
   - Store the evaluation results in the Evaluation form Google Sheet.
   - Optionally notify your team:
     - βœ‰οΈ Send an email to the candidate using SendGrid
     - πŸ’¬ Send a Slack message to the hiring team with a summary and next steps

Requirements

- OpenAI GPT-4 account for both the Profile Analyzer and HR Expert Agents
- Google Drive account (for storing CVs and the evaluation sheet)
- Google Sheets API credentials (for the JD source and evaluation results)

Need help? Join the n8n Discord or ask in the n8n Forum!

Happy Hiring! πŸš€
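The exact JSON the Profile Analyzer Agent produces depends on the custom output parser you configure; the TypeScript sketch below is only an assumed example of what a standardized candidate profile might look like before it is handed to the HR Expert Agent.

```typescript
// Hypothetical standardized candidate profile extracted from a parsed CV.
interface CandidateProfile {
  name: string;
  email: string;
  appliedRole: string;
  yearsOfExperience: number;
  skills: string[];
  recentRoles: { title: string; company: string; years: number }[];
  education: string[];
}

const profile: CandidateProfile = {
  name: "John Doe",
  email: "john.doe@example.com",
  appliedRole: "Azure DevOps Engineer",
  yearsOfExperience: 6,
  skills: ["Azure Pipelines", "Terraform", "Docker", "Bash"],
  recentRoles: [{ title: "DevOps Engineer", company: "Contoso", years: 3 }],
  education: ["BSc Computer Science"],
};

// The HR Expert Agent receives this profile together with the matched JD text.
console.log(JSON.stringify(profile, null, 2));
```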

By Trung Tran
847

Automate IT support with Telegram voice to JIRA tickets using Whisper & GPT-4.1 Mini

🎧 IT Voice Support Automation Bot – Telegram Voice Message to JIRA Ticket with OpenAI Whisper

> Automatically process IT support requests submitted via Telegram voice messages by transcribing, extracting structured data, creating a JIRA ticket, and notifying relevant parties.

πŸ§‘β€πŸ’Ό Who's it for

- Internal teams that handle IT support and want to streamline voice-based requests.
- Employees who prefer using mobile/voice to report incidents or ask for support.
- Organizations aiming to integrate conversational AI into existing support workflows.

βš™οΈ How it works / What it does

1. A user sends a voice message to a Telegram bot.
2. The system checks whether it's an audio message.
3. If valid, the audio is:
   - Downloaded
   - Transcribed via OpenAI Whisper
   - Backed up to Google Drive
4. The transcription and file metadata are merged.
5. The merged content is processed through an AI Agent (GPT) to extract structured request info.
6. A JIRA ticket is created using the extracted data. (A hedged API sketch follows this description.)
7. The IT team is notified via Slack (or other channels).
8. The requester receives a Telegram confirmation message with the JIRA ticket link.
9. If the input is not audio, a polite rejection message is sent.

πŸ“Œ Key Features

- Supports voice-based ticket creation
- Accurate transcription using Whisper
- Context-aware request parsing using GPT-4.1 Mini
- Fully automated ticket creation in JIRA
- Notifies both IT and the original requester
- Cloud backup of original voice messages (Google Drive)

πŸ› οΈ Setup Instructions

Prerequisites

| Component | Required |
|-----------|----------|
| Telegram Bot & API Key | βœ… |
| OpenAI Whisper / Transcription Model | βœ… |
| Google Drive Credentials (OAuth2) | βœ… |
| Google Sheets or other storage (optional) | ⬜ |
| JIRA Cloud API Access | βœ… |
| Slack Bot or Webhook | βœ… |

Workflow Steps

1. Telegram Voice Message Trigger: starts the flow when a user sends a voice message.
2. Is Audio Message?: if false, reply "only voice is supported".
3. Download Audio: download the .oga file from Telegram.
4. Transcribe Audio: use OpenAI Whisper to get the text transcript.
5. Backup to Google Drive: upload the original voice file with metadata.
6. Merge Results: combine the transcript and metadata.
7. Pre-process Output: clean the formatting before AI extraction.
8. Transcript Processing Agent: a GPT-based agent extracts:
   - Requester name and department
   - Request title & description
   - Priority & request type
9. Submit JIRA Request Ticket: create a ticket from the AI-extracted data.
10. Setup Slack / Email / Manual Steps: optional internal routing or approvals.
11. Inform Reporter via Telegram: send a confirmation message with the JIRA ticket link.

πŸ”§ How to Customize

- Replace JIRA with Zendesk, GitHub Issues, or other ticketing tools.
- Change Slack to Microsoft Teams or Email.
- Add Notion/Airtable logging.
- Enhance the agent to extract the department from the user ID or metadata.

πŸ“¦ Requirements

| Integration | Notes |
|-------------|-------|
| Telegram Bot | Used for input/output |
| Google Drive | Audio backup |
| OpenAI GPT + Whisper | Transcript & extraction |
| JIRA | Ticketing platform |
| Slack | Team notification |

Built with ❀️ using n8n
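The workflow uses n8n's Jira node for ticket creation; for orientation, here is roughly what that step does, sketched against the JIRA Cloud REST API v2 (plain-text descriptions). The base URL, project key, and field mapping are placeholders, not values from the template.

```typescript
// A minimal sketch of the "Submit JIRA Request Ticket" step via the JIRA Cloud REST API v2.
interface ExtractedRequest {
  requester: string;
  department: string;
  title: string;
  description: string;
  priority: string; // e.g. "High"
}

async function createJiraTicket(req: ExtractedRequest): Promise<string> {
  const baseUrl = "https://your-domain.atlassian.net"; // replace with your Jira site
  const auth = Buffer.from(
    `${process.env.JIRA_EMAIL}:${process.env.JIRA_API_TOKEN}`,
  ).toString("base64");

  const res = await fetch(`${baseUrl}/rest/api/2/issue`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Basic ${auth}` },
    body: JSON.stringify({
      fields: {
        project: { key: "IT" }, // example project key
        summary: `[${req.department}] ${req.title}`,
        description: `${req.description}\n\nReported by: ${req.requester}`,
        issuetype: { name: "Task" },
      },
    }),
  });

  const data = await res.json(); // response includes { id, key, self }
  return `${baseUrl}/browse/${data.key}`; // link sent back to the requester on Telegram
}
```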

By Trung Tran
655

Automated trip expense claim form with OpenAI agent & Google Drive

🧾 Automated Trip Expense Claim Form with OpenAI Agent & Google Drive

Watch the demo video below: [Demo video](https://www.youtube.com/watch?v=1BAU5G25ouk)

> This workflow is designed for employees who need to submit expense claims for business trips. It automates the process of extracting data from receipts/invoices, logging it to a Google Sheet, and notifying the finance team via email.

πŸ‘€ Who's it for

Ideal users:
- Employees submitting business trip expense claims
- HR or Admins reviewing travel-related reimbursements
- Finance teams responsible for processing claims

βš™οΈ How it works / What it does

1. An employee submits a form with trip information (name, department, purpose, dates) and uploads one or more receipts/invoices (PDF).
2. The uploaded files are saved to Google Drive for record-keeping.
3. Each PDF is passed to a DocClaim Assistant agent, which uses GPT-4o and a structured parser to extract structured invoice data.
4. The data is transformed and formatted into a standard JSON structure. (A hedged sketch of one claim record follows this description.)
5. Two parallel paths are followed:
   - Invoice records are appended to a Google Sheet for centralized tracking.
   - A detailed HTML email summarizing the trip and expenses is generated and sent to the finance department for claim processing.

πŸ›  How to set up

1. Create a form to capture:
   - Employee Name
   - Department
   - Trip Purpose
   - From Date / To Date
   - Receipt/Invoice File Upload (multiple PDFs)
2. Configure the file upload node to store files in a specific Google Drive folder.
3. Set up the DocClaim Agent using:
   - GPT-4o or any LLM with document analysis capability
   - An output parser for standardizing the extracted receipt data (e.g., vendor, total, tax, date)
4. Transform the extracted data into a structured claim record (Code node).
5. Path 1: save records to a Google Sheet (one row per expense).
6. Path 2: format the employee + claim data into a dynamic HTML email and use the Send Email node to notify the finance department (e.g., finance@yourcompany.com).

βœ… Requirements

- n8n running with access to:
  - Google Drive API (for file uploads)
  - Google Sheets API (for logging expenses)
  - Email node (SMTP or Gmail for sending)
- GPT-4o or an equivalent LLM with document parsing ability
- PDF invoices with clear formatting
- Shared Google Sheet for claim tracking
- Optional: shared inbox for the finance team

🧩 How to customize the workflow

- Add approval steps: route the email to a manager before finance.
- Attach original PDFs: include the uploaded files in the email as attachments.
- Localize for other languages: adapt form labels, email content, or parser prompts.
- Sync to an ERP or accounting system: replace the Google Sheet with QuickBooks, Xero, etc.
- Set limits/validation: enforce a maximum claim per trip or required fields before submission.
- Auto-tag expenses: add categories (e.g., travel, accommodation) for better reporting.
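The transform step maps whatever the DocClaim agent extracts into one sheet row per expense. The TypeScript sketch below illustrates that mapping under assumed field names; the real Code node will use your own form fields and parser output.

```typescript
// A hedged sketch of the "transform into a structured claim record" step.
interface ExtractedInvoice {
  vendor: string;
  date: string;
  total: number;
  tax: number;
  currency: string;
}

interface ClaimRecord extends ExtractedInvoice {
  employeeName: string;
  department: string;
  tripPurpose: string;
  fromDate: string;
  toDate: string;
}

function toClaimRecords(
  form: { employeeName: string; department: string; tripPurpose: string; fromDate: string; toDate: string },
  invoices: ExtractedInvoice[],
): ClaimRecord[] {
  // One Google Sheet row per expense, each carrying the trip context from the form.
  return invoices.map((inv) => ({ ...form, ...inv }));
}

const rows = toClaimRecords(
  { employeeName: "Jane Smith", department: "Sales", tripPurpose: "Client visit", fromDate: "2025-09-01", toDate: "2025-09-03" },
  [{ vendor: "Hilton Hanoi", date: "2025-09-02", total: 180, tax: 18, currency: "USD" }],
);
console.table(rows);
```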

By Trung Tran
624

Automate vendor contract renewals & reminders with GPT-4.1 mini, Slack & Gmail

πŸ“ Smart Vendor Contract Renewal & Reminder Workflow With GPT 4.1 mini Watch the demo video below: [](https://www.youtube.com/watch?v=zBmFGT5Xlng) Never miss a vendor renewal again! This smart workflow automatically tracks expiring contracts, reminds your finance team via Slack, and helps initiate renewal with vendors through email β€” all with built-in approval and logging. Perfect for managing both auto-renew and manual contracts. πŸ“Œ Who’s it for This workflow is designed for Finance and Procurement teams responsible for managing vendor/service contracts. It ensures timely notifications for expiring contracts and automates the initiation of renewal conversations with vendors. βš™οΈ How it works / What it does ⏰ Daily Trigger Runs every day at 6:00 AM using a scheduler. πŸ“„ Retrieve Contract List Reads vendor contract data from a Google Sheet (or any data source). Filters for contracts nearing their end date, using a Notice Period (days) field. πŸ”€ Branch Based on Renewal Type Auto-Renew Contracts: Compose a Slack message summarizing the auto-renewal. Notify the finance contact via Slack. Manual Renewal Contracts: Use an OpenAI-powered agent to generate a meaningful Slack message. Send message and wait for approval from the finance contact (e.g., within 8 hours). Upon approval, generate a formal HTML email to the vendor. Send the email to initiate the contract extension process. πŸ“Š (Optional) Logging Can be extended to log all actions (Slack messages, emails, approvals) to Google Sheets or other databases. πŸ› οΈ How to set up Prepare your Google Sheet Include the following fields: Vendor Name, Vendor Email, Service Type, Contract Start Date, Contract End Date, Notice Period (days), Renewal Type, Finance Contact, Contact Email, Slack ID, Contract Value, Notes. Sample: https://docs.google.com/spreadsheets/d/1zdDgKyL0sY54By57Yz4dNokQC_oIbVxcCKeWJ6PADBM/edit?usp=sharing Configure Integrations 🟒 Google Sheets API: To read contract data. πŸ”΅ Slack API: To notify and wait for approval. 🧠 OpenAI API (GPT-4): To generate personalized reminders. βœ‰οΈ Email (SMTP/Gmail): To send emails to vendors. Set the Daily Scheduler Use a Cron node to trigger the workflow at 6:00 AM daily. βœ… Requirements | Component | Required | |----------------------------------|----------| | Google Sheets API | βœ… | | Slack API | βœ… | | OpenAI API (GPT-4) | βœ… | | Email (SMTP/Gmail) | βœ… | | n8n (Self-hosted or Cloud) | βœ… | | Contract Sheet with proper schema| βœ… | 🧩 How to customize the workflow Adjust Reminder Period: Modify the logic in the Find Expiring Vendors node (based on Contract End Date and Notice Period). Change Message Tone or Format: Customize the OpenAI agent's prompt or switch from plain text to branded HTML email. Add Logging or Tracking: Add a node to append logs to a Google Sheet, Notion, or database. Replace Data Source: Swap out Google Sheets for Airtable, PostgreSQL, or other CRM/database systems. Adjust Wait/Approval Duration: Modify the sendAndWait Slack node timeout (e.g., from 8 hours to 2 hours). πŸ“¦ Optional Extensions 🧾 Add PDF contract preview via Drive link 🧠 Use GPT to summarize renewal terms πŸ›  Auto-create Jira task for contract review

By Trung Tran
591

Store AI-generated images in AWS S3: OpenAI image creation & cloud storage

Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below: [Demo video](https://www.youtube.com/watch?v=el0dDJ4Ah3k)

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

Who's it for

This workflow is ideal for:
- Designers & marketers who need quick, on-demand AI-generated visuals.
- Developers & automation builders exploring AI-driven workflows integrated with cloud storage.
- Educators or trainers creating tutorials or exercises on AI image generation.
- Businesses looking to automate image content pipelines with AWS S3 storage.

How it works / What it does

1. Trigger: the workflow starts manually when you click "Execute Workflow".
2. Edit Fields: you can provide input fields such as the image description, resolution, or naming convention.
3. Create AWS S3 Bucket: automatically creates a new S3 bucket if it doesn't exist.
4. Create a Folder: inside the bucket, a folder is created to organize the generated images.
5. Prompt Generation Agent: an AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. Generate an Image: the refined prompt is used to generate an image with AI.
7. Upload File to S3: the generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI + cloud storage seamlessly in an automated pipeline. (A hedged upload sketch follows this description.)

How to set up

1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region).
   - OpenAI API key (for the Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test it by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

Requirements

- n8n (latest version with AI Tool Node support).
- AWS account with S3 permissions to create buckets and upload files.
- OpenAI API key (for prompt refinement and image generation).
- Basic familiarity with the AWS S3 structure (buckets, folders, objects).

How to customize the workflow

- Custom Buckets: replace the auto-create step with an existing S3 bucket.
- Image Variations: generate multiple image variations per prompt by looping the image generation step.
- File Naming: adjust the file naming conventions (e.g., timestamp, user input).
- Metadata: add metadata such as tags, categories, or owner info when uploading to S3.
- Alternative Storage: swap AWS S3 with Google Cloud Storage, Azure Blob, or Dropbox.
- Trigger Options: replace the manual trigger with a Webhook, Form Submission, or Scheduler for automation.

βœ… This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
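The upload itself is handled by the n8n AWS S3 node; for readers who want to see the equivalent code, here is a minimal sketch using the AWS SDK for JavaScript v3. The bucket name, key prefix, and file path are examples only.

```typescript
// A minimal sketch of the "Upload File to S3" step with the AWS SDK for JavaScript v3.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" }); // credentials come from the environment

async function uploadImage(localPath: string): Promise<void> {
  const body = await readFile(localPath); // e.g. the AI-generated PNG saved to disk
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-ai-images-bucket",              // example bucket name
      Key: `generated/${Date.now()}-skyline.png`, // "folder" prefix + timestamped file name
      Body: body,
      ContentType: "image/png",
    }),
  );
}

uploadImage("./skyline.png").then(() => console.log("Uploaded to S3"));
```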

By Trung Tran
535