Automate restaurant reservations with AI on WhatsApp and Google Sheets
Streamline restaurant reservations on WhatsApp

Overview
This n8n template automates table bookings via WhatsApp, letting users request, confirm, and manage reservations without manual intervention. It leverages AI to parse messages, apply group discounts, check availability, and send natural confirmations, all within a single, reusable workflow.

Key Features
- AI-powered parsing & responses: Extracts guest name, date, time, and party size from free-form WhatsApp messages and generates friendly confirmations.
- Availability lookup: Integrates with Google Sheets, Airtable, or MySQL to verify slot availability in real time.
- Automated reminders: Optionally schedules follow-up messages 24 hours before the booking.
- Modular design: Swap triggers, storage, or messaging nodes to fit your infrastructure.

How It Works
- Trigger: Incoming WhatsApp message via the WhatsApp Business Cloud API.
- Parse & Validate: An AI Function node extracts intent and guest details.
- Calculate Discount: A custom Function node computes the group discount (see the sketch below).
- Compose Confirmation: An OpenAI text model generates a personalized response.
- Send Message: An HTTP Request node posts the reply back to WhatsApp.
- Optional Reminder: A Wait node plus an HTTP Request handles the pre-booking follow-up.

Requirements
- WhatsApp Business Cloud API access
- n8n Cloud or self-hosted instance
- Reservation datastore (Google Sheets, Airtable, MySQL)
- OpenAI API key for AI text generation

Customization Tips
- Menu Attachments: Add media nodes to send PDFs or images.
- Alternate Slot Suggestions: Use AI to propose new times if a slot is full.
- Upsell Offers: Follow up with add-on suggestions (e.g., wine pairings).
- Localization: Extend prompts for multilingual support.
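As a reference point, the group-discount step can be a short Code node like the sketch below; the `partySize` field name and the discount tiers are illustrative assumptions, not part of the template.

```javascript
// n8n Code node ("Run Once for All Items") – hypothetical group-discount rules
return $input.all().map((item) => {
  const partySize = Number(item.json.partySize) || 1;

  // Assumed policy: 10% off for 6+ guests, 15% off for 10+
  let discountPct = 0;
  if (partySize >= 10) discountPct = 15;
  else if (partySize >= 6) discountPct = 10;

  return {
    json: {
      ...item.json,
      discountPct,
      discountNote: discountPct
        ? `A ${discountPct}% group discount applies to your booking.`
        : 'Standard pricing applies.',
    },
  };
});
```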
Generate BigQuery SQL from natural language queries using GPT-4o chat
Give business users a chat box; get back valid BigQuery SQL and live query results.

The workflow:
- Captures a plain-language question from a chat widget or internal portal.
- Fetches the current table and column schema from your BigQuery dataset (via INFORMATION_SCHEMA).
- Feeds both the schema and the question to GPT-4o so it can craft a syntactically correct SQL query using only fields that truly exist.
- Executes the AI-generated SQL in BigQuery and returns the results.
- Stores a short-term memory by session, enabling natural follow-up questions.

Perfect for analysts, customer-success teams, or any stakeholder who needs data without writing SQL.

---

⚙️ Setup Instructions

1. Import the workflow: n8n → Workflows → Import from File (or Paste JSON) → Save.
2. Add credentials:

| Service | Where to create credentials | Node(s) to update |
|---------|----------------------------|-------------------|
| OpenAI | <https://platform.openai.com> → Create API key | OpenAI Chat Model |
| Google BigQuery | Google Cloud Console → IAM & Admin → Service Account JSON key | Google BigQuery (schema + query) |

3. Point the schema fetcher to your dataset. In the Google BigQuery1 node you'll see:

```sql
SELECT table_name, column_name, data_type
FROM n8nautomation-453001.emailleadsschema.INFORMATION_SCHEMA.COLUMNS
```

Replace n8nautomation-453001.emailleadsschema with YOUR_PROJECT.YOUR_DATASET. Keep the rest of the query the same; BigQuery's INFORMATION_SCHEMA always surfaces table_name, column_name, and data_type.

4. Update the execution node: open Google BigQuery (the second BigQuery node) and select your project in Project ID. The SQL Query field is already {{ $json.output.query }}, so it will run whatever the AI returns.
5. (Optional) Embed the chat interface.
6. Test end-to-end: open the embedded chat widget and ask, "How many distinct email leads were created last week?" After a few seconds the workflow will return a table of results, or an error if the schema lacks the requested fields. Ask specific questions about your data.
7. Activate: toggle Active so the chat assistant is available 24/7.

🧩 Customization Ideas
- Row-limit safeguard: automatically append LIMIT 1000 to every query (see the sketch below).
- Chart rendering: send query results to Google Sheets + Looker Studio for instant visuals.
- Slack bot: forward both the question and the SQL result to a Slack channel for team visibility.
- Schema caching: store the INFORMATION_SCHEMA result for 24 hours to cut BigQuery costs.

---

Contact
Email: rbreen@ynteractive.com
Website: https://ynteractive.com
YouTube: https://www.youtube.com/@ynteractivetraining
LinkedIn: https://www.linkedin.com/in/robertbreen
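A possible shape for the row-limit safeguard mentioned in the customization ideas: a Code node placed between the AI model and the BigQuery execution node. It assumes the generated SQL arrives as `$json.output.query`, as wired in this template.

```javascript
// n8n Code node – append LIMIT 1000 unless the AI-generated query already has one
return $input.all().map((item) => {
  let query = (item.json.output?.query || '').trim().replace(/;+$/, '');

  // Only append a LIMIT clause if the query doesn't already end with one
  if (!/\blimit\s+\d+\s*$/i.test(query)) {
    query += '\nLIMIT 1000';
  }

  return { json: { ...item.json, output: { ...item.json.output, query } } };
});
```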
Multi-modal personal AI assistant with Telegram & Google Gemini for productivity tools
Automate Your Life: The Ultimate AI Assistant in Telegram (Powered by Google Gemini) Transform your Telegram messenger into a powerful, multi-modal personal or team assistant. This n8n workflow creates an intelligent agent that can understand text, voice, images, and documents, and take action by connecting to your favorite tools like Google Calendar, Gmail, Todoist, and more. At its core, a powerful Manager Agent, driven by Google Gemini, interprets your requests, orchestrates a team of specialized sub-agents, and delivers a coherent, final response, all while maintaining a persistent memory of your conversations. Key Features 🧠 Intelligent Automation: Uses Google Gemini as a central "Manager Agent" to understand complex requests and delegate tasks to the appropriate tool. 🗣️ Multi-Modal Input: Interact naturally by sending text, voice notes, photos, or documents directly into your Telegram chat. 🔌 Integrated Toolset: Comes pre-configured with agents to manage your memory, tasks, emails, calendar, research, and project sheets. 🗂️ Persistent Memory: Leverages Airtable as a knowledge base, allowing the assistant to save and recall personal details, company information, or past conversations for context-rich interactions. ⚙️ Smart Routing: Automatically detects the type of message you send and routes it through the correct processing pipeline (e.g., voice is transcribed, images are analyzed). 🔄 Conversational Context: Utilizes a window buffer to maintain short-term memory, ensuring follow-up questions and commands are understood within the current conversation. How It Works The Telegram Trigger node acts as the entry point, receiving all incoming messages (text, voice, photo, document). A Switch node intelligently routes the message based on its type: Voice: The audio file is downloaded and transcribed into text using a voice-to-text service. Photo: The image is downloaded, converted to a base64 string, and prepared for visual analysis. Document: The file is routed to a document handler that extracts its text content for processing. Text: The message is used as-is. A Merge node gathers the processed input into a unified prompt. The Manager Agent receives this prompt. It analyzes the user's intent and orchestrates one or more specialized agents/tools: memory_base (Airtable): For saving and retrieving information from your long-term knowledge base. todoandtask_manager (Todoist): To create, assign, or check tasks. email_agent (Gmail): To compose, search, or send emails. calendar_agent (Google Calendar): To schedule events or check your agenda. research_agent (Wikipedia/Web Search): To look up information. project_management (Google Sheets): To provide updates on project trackers. After executing the required tasks, the Manager Agent formulates a final response and sends it back to you via the Telegram node. Setup Instructions Follow these steps to get your AI assistant up and running. Telegram Bot: Create a new bot using the BotFather in Telegram to get your Bot Token. In the n8n workflow, configure the Telegram Trigger node's webhook. Add your Bot Token to the credentials in all Telegram nodes. For proactive messages, replace the chatId placeholders with your personal Telegram Chat ID. Google Gemini AI: In the Google Gemini nodes, add your credentials by providing your Google Gemini API key. Airtable Knowledge Base: Set up an Airtable base to act as your assistant's long-term memory. In the memory_base nodes (Airtable nodes), configure the credentials and provide the Base ID and Table ID. 
Google Workspace APIs: Connect your Google account credentials for Gmail, Google Calendar, and Google Sheets. In the relevant nodes, specify the Document/Sheet IDs you want the assistant to manage. Connect Other Tools: Add your credentials for Todoist and any other integrated tool APIs. Configure Conversational Memory: This workflow is designed for multi-user support. Verify that the Session Key in the "Window Buffer Memory" nodes is correctly set to a unique user identifier from Telegram (e.g., {{ $json.chat.id }}). This ensures conversations from different users are kept separate. Review Schedule Triggers: Check any nodes designed to run on a schedule (e.g., "At a regular time"). Adjust their cron expressions, times, and timezone to fit your needs (e.g., for daily summaries). Test the Workflow: Activate the workflow. Send a text message to your bot (e.g., "Hello!"). Estimated Setup Time 30–60 minutes: If you already have your API keys, account credentials, and service IDs (like Sheet IDs) ready. 2–3 hours: For a complete, first-time setup, which includes creating API keys, setting up new spreadsheets or Airtable bases, and configuring detailed permissions.
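For illustration, the type detection that the Switch node performs can be expressed as a Code node. This sketch assumes the raw Telegram update is available under `$json.message` with standard Telegram Bot API fields; check it against your trigger's actual output before reusing it.

```javascript
// n8n Code node – classify the incoming Telegram update by payload type
return $input.all().map((item) => {
  const msg = item.json.message || {};

  let messageType = 'text';
  if (msg.voice) messageType = 'voice';             // -> transcription branch
  else if (msg.photo) messageType = 'photo';        // -> base64 + visual-analysis branch
  else if (msg.document) messageType = 'document';  // -> text-extraction branch

  return { json: { ...item.json, messageType } };
});
```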
Build document RAG system with Kimi-K2, Gemini embeddings and Qdrant
Generating contextual summaries is a token-intensive approach to RAG embeddings, and it can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: a flat subscription fee (starting from $10) with unlimited token usage. If you typically spend over $10-$25 a month, you may find Featherless a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volume for the majority of RAG workloads.

LLM: moonshotai/Kimi-K2-Instruct
Embeddings: models/gemini-embedding-001

How it works
- A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from File node. For this demonstration, the UK highway code is used as an example.
- Each page is processed individually and a contextual summary is generated for it. The contextual summary generation takes the current page together with the preceding and following pages and summarises the contents of the current page (see the windowing sketch below).
- This summary is then converted to embeddings using the gemini-embedding-001 model. Note: an HTTP Request node is used to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema.
- These embeddings are then stored in a Qdrant collection, which can be retrieved via an agent/MCP server or another workflow.

How to use
- Replace the large document import with your own source of documents, such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

Requirements
- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

Customising this workflow
- Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform, or host your own custom/fine-tuned models.
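A minimal sketch of the page-windowing step, assuming the upstream split produced one item per page with a `text` field (the field name is illustrative): each item gets its neighbouring pages attached so the LLM can summarise the current page in context.

```javascript
// n8n Code node ("Run Once for All Items") – attach neighbouring pages for contextual summaries
const items = $input.all();

return items.map((item, i) => {
  const prev = items[i - 1]?.json.text || '';
  const next = items[i + 1]?.json.text || '';

  return {
    json: {
      ...item.json,
      // Prompt context: previous + current + next page, with the current page flagged for the LLM
      summaryPrompt: [
        prev && `PREVIOUS PAGE:\n${prev}`,
        `CURRENT PAGE (summarise this one):\n${item.json.text}`,
        next && `NEXT PAGE:\n${next}`,
      ]
        .filter(Boolean)
        .join('\n\n'),
    },
  };
});
```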
Find & verify business emails automatically with OpenRouter, Serper & Prospeo
Who is this template for?
Growth teams, SDRs, recruiters, or anyone who routinely hunts for hard-to-find business emails and would rather spend time reaching out than guessing formats.

What problem does this workflow solve?
Manually piecing together email patterns, cross-checking them in a verifier, and updating a tracking sheet is slow and error-prone. This template automates the entire loop (research, guess, verify, and log), so you hit Start and watch rows fill up with ready-to-send addresses.

What this workflow does
1. Pull fresh leads – Grabs only the rows in your Google Sheet where Status = FALSE.
2. Find the company pattern – Queries Serper.dev for snippets and feeds them to Gemini Flash (via OpenRouter) to spot the dominant email format.
3. Build the address – Constructs a likely email for every first/last name (see the sketch below).
4. Verify in real time – Pings Prospeo by default (API) or lets you bulk-clean in Sparkle.io.
5. Write it back – Updates the sheet with pattern, email, confidence, verification status, and flips Status to TRUE.
6. Loop until done – Runs batch-by-batch so you never hit API limits.

---

🆓 Free-tier magic (up to ~2,500 contacts/month)

| Service | Free allowance | How this template uses it |
|---------|----------------|---------------------------|
| Serper.dev | 2,500 searches/mo | Scrapes three public email snippets per domain to learn the pattern |
| Sparkle.io | 10,000 bulk verifications/day | Manual upload/download option, perfect to clean your first 2.5k emails at zero cost |
| Prospeo | 75 API calls/mo | Built-in if you prefer fully automated verification |

Quick Sparkle workflow:
1. Let the template generate emails.
2. Export the "Email" column to CSV → upload to Sparkle.io.
3. Download the results and paste the verification_status back into the sheet (or add a small n8n import sub-flow).

---

Setup (5 minutes)
1. Copy the Google Sheet linked in the sticky note and paste its ID into the Get Rows and Update Rows nodes.
2. Add credentials for Google Sheets, Serper (X-API-KEY), OpenRouter, and optionally Prospeo.
3. Hit Execute Workflow, that's it.

---

How to customise
- Prefer Sparkle for volume: Skip the Prospeo node, export emails in one click, bulk-verify in Sparkle, and re-import results.
- Swap the search source: Replace the Get Email Pattern HTTP node with Bing, Brave, etc.
- Extend enrichment: Add phone look-ups or LinkedIn scrapers before the Update Rows node.
- Auto-run: Replace the Manual Trigger with a Cron node so the sheet cleans itself every morning.
---

Additional resources

| Tool | Purpose | Link |
|------|---------|------|
| Prospeo – API-ready email verification (special offer: 20% free credits for the first 3 months on any plan using this link) | Real-time, single-call mailbox validation | prospeo.io |
| Sparkle.io – high-volume bulk verifier (manual upload) | Free daily quota of 10,000 verifications | app.sparkle.io/sign-up |
| OpenRouter – API gateway for Gemini Flash & other LLMs | One key unlocks multiple frontier models | openrouter.ai |
| Serper.dev – Google Search API | 2,500 searches/month on the free tier | serper.dev |

Add the relevant keys or signup details from these links, drop them into the matching n8n credentials, and you're all set to enrich your first 2,500 contacts at zero cost. Happy building!
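For reference, the "Build the address" step (step 3 above) can be as small as the Code node below. The pattern tokens and field names are assumptions; adapt them to whatever your Gemini prompt actually returns.

```javascript
// n8n Code node – build a candidate address from the detected pattern
return $input.all().map((item) => {
  const { firstName = '', lastName = '', domain = '', pattern = '{first}.{last}' } = item.json;

  const first = firstName.trim().toLowerCase();
  const last = lastName.trim().toLowerCase();

  // The pattern covers the local part only, e.g. "{f}{last}" -> "jsmith"
  const localPart = pattern
    .replace('{first}', first)
    .replace('{last}', last)
    .replace('{f}', first.charAt(0))
    .replace('{l}', last.charAt(0));

  return { json: { ...item.json, guessedEmail: `${localPart}@${domain}` } };
});
```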
Google Maps lead generation with Apify & email extraction for Airtable
🧠 What It Does
This n8n workflow collects leads from Google Maps, scrapes their websites via direct HTTP requests, and extracts valid email addresses, all while mimicking real user behavior to improve scraping reliability. It rotates User-Agent headers, introduces randomized delays, and refines URLs by removing only query parameters and fragments to preserve valid page paths (like social media links). The workflow blends Apify actors, raw HTTP requests, HTML-to-Markdown conversion, and smart email extraction to deliver clean, actionable lead data, ready to be sent to Airtable, Google Sheets, or any CRM. Perfect for lean, scalable B2B lead generation using n8n's native logic and no external scrapers.

💡 Why this workflow
Most lead scrapers rely on heavy tools or APIs like Firecrawl. This workflow:
- Uses lightweight HTTP requests (with randomized User-Agents) to scrape websites.
- Adds natural wait times to avoid rate limits and IP bans.
- Avoids full-page crawlers, yet still pulls emails effectively.
- Works great for freelancers, marketers, or teams targeting niche B2B leads.
- Is designed for stealth and resilience.

👤 Who it's for
- Lead generation freelancers or consultants.
- B2B marketers looking to extract real contact info.
- Small businesses doing targeted outreach.
- Developers who want a fast, low-footprint scraper.
- Anyone who wants email + website leads from Google Maps.

⚙️ How It Works

📥 Form Submission (Lead Input)
A Form Trigger collects:
- Keyword
- Location
- No. of Leads (defaults to 10)
This makes the workflow dynamic and user-friendly, ready for multiple use cases and teams.

📊 Scrape Business Info (via Apify)
- Apify's Google Maps Actor searches for matching businesses.
- The Dataset node fetches all relevant business details.
- A Set node parses key fields like name, phone, website, and category.
- A Limit node ensures the workflow only processes the desired number of leads.

🔁 First Loop – Visit & Scrape Website
- Each business website is processed in a loop.
- A Code node cleans the website URL by removing only query parameters/fragments, keeping full paths like /contact.
- An HTTP Request node fetches the raw HTML of the site, using randomized User-Agent headers (5 variants) to mimic real devices and browsers. This makes requests appear more human and reduces the risk of detection or blocking.
- HTML is converted to Markdown using the Markdown node, making it easier to scan for text patterns.
- A Wait node introduces a random delay between 2 and 7 seconds, which helps avoid triggering rate limits and reduces the likelihood of being flagged as a bot.
- A Merge node combines the scraped markdown + lead info for use in the second loop.

🔁 Second Loop – Extract Emails
- In this second loop, the markdown data is processed.
- A Code node applies a regex to extract the first valid email address (see the sketch below). If no email is found, "N/A" is returned.
- A brief 1-second Wait node simulates realistic browsing time.
- Another Merge node attaches the email result to the original lead data.

✅ Filter, Clean & Store
- A Filter node removes all entries with "N/A" or invalid email results.
- A Set node ensures only the required fields (like website, email, and company name) are passed forward.
- The clean leads are saved to Airtable (or optionally, Google Sheets) using an upsert-style insert to avoid duplicates.

🛡️ Anti-Flagging Design
This workflow is optimized for stealth:
- No scraping tools or headless browsers (like Puppeteer or Firecrawl).
- Direct HTTP requests with rotating User-Agents.
- Randomized wait intervals (2-7s).
- Only non-intrusive parsing, no automation footprints.
🛠 How to Set It Up
1. Open n8n (Cloud or Self-Hosted).
2. Install the Apify community node: search for Apify and click Install. Do this before importing your file.
3. Import the provided .json file into your n8n editor.
4. Set up the required credentials:
   🔑 Apify API Key (used for Google Maps scraping)
   🔑 Airtable API Key (or connect Google Sheets instead)
5. Recommended: prepare your Airtable base or Google Sheet with fields like Email, Website, Phone, Company Name.
6. Review the Set node if you'd like to collect more fields from Apify (e.g., Ratings, Categories, etc.).

🔁 Customization Tips
- The Apify scraper returns rich business data. By default, this workflow collects name, phone, and website, but you can add more in the "Grab Desired Fields" node.
- Need safer scraping at scale? Swap the HTTP Request for Firecrawl's Single URL scraper (or any headless service like Browserless, Oxylabs, Bright Data, or ScrapingBee); they handle rendering and IP rotation.
- Want to extract from internal pages (like /contact or /about)? Use Firecrawl's async crawl mode; just note it takes longer.
- For speed and efficiency, this built-in HTTP + Markdown setup is usually the fastest way to grab emails.
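For reference, the second-loop email extraction described above comes down to one regex over the converted markdown. A sketch of that Code node (the `markdown` field name is an assumption):

```javascript
// n8n Code node – pull the first plausible email address out of the scraped markdown
return $input.all().map((item) => {
  const markdown = item.json.markdown || '';

  // Basic address pattern; the extra check skips image names like logo@2x.png
  const match = markdown.match(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/);
  const email =
    match && !/\.(png|jpg|jpeg|gif|svg|webp)$/i.test(match[0]) ? match[0] : 'N/A';

  return { json: { ...item.json, email } };
});
```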
Analyze images with OpenAI Vision while preserving binary data for reuse
Use this template to upload an image, run a first-pass OpenAI Vision analysis, then re-attach the original file (binary/base64) to the next step using a Merge node. The pattern ensures your downstream AI Agent (or any node) can access both the original file (data) and the first analysis result (content) at the same time. --- ✅ What this template does Collects an image file via Form Trigger (binary field labeled data) Analyzes the image with OpenAI Vision (GPT-4o) using base64 input Merges the original upload and the analysis result (combine by position) so the next node has both Re-analyzes/uses the image alongside the first analysis in an AI Agent step --- 🧩 How it works (Node-by-node) Form Trigger Presents a simple upload form and emits a binary/base64 field named data. Analyze image (OpenAI Vision) Reads the same data field as base64 and runs image analysis with GPT-4o. The node outputs a text content (first-pass analysis). Merge (combine by position) Combines the two branches so the next node receives both the original upload (data) and the analysis (content) on the same item. AI Agent Receives data + content together. Prompt includes the original image (=data) and the first analysis ({{$json.content}}) to compare or refine results. OpenAI Chat Model Provides the language model for the Agent (wired as ai_languageModel). --- 🛠️ Setup Instructions (from the JSON) > Keep it simple: mirror these settings and you’re good to go. 1) Form Trigger (n8n-nodes-base.formTrigger) Path: d6f874ec-6cb3-46c7-8507-bd647c2484f0 (you can change this) Form Title: Image Document Upload Form Description: Upload a image document for AI analysis Form Fields: Label: data Type: file Output: emits a binary/base64 field named data. 2) Analyze image (@n8n/n8n-nodes-langchain.openAi) Resource: image Operation: analyze Model: gpt-4o Text: =data (use the uploaded file field) Input Type: base64 Credentials: OpenAI (use your stored OpenAI API credential) 3) Merge (n8n-nodes-base.merge) Mode: combine Combine By: combineByPosition Connect Form Trigger → Merge (input 2) Connect Analyze image → Merge (input 1) This ensures the original file (data) and the analysis (content) line up on the same item. 4) AI Agent (@n8n/n8n-nodes-langchain.agent) Prompt Type: define Text: System Message: analyze the image again and see if you get the same result. Receives: merged item containing data + content. 5) OpenAI Chat Model (@n8n/n8n-nodes-langchain.lmChatOpenAi) Model: gpt-4.1-mini Wiring: connect as ai_languageModel to the AI Agent Credentials: same OpenAI credential as above > Security Note: Store API keys in Credentials (do not hardcode keys in nodes). --- 🧠 Why “Combine by Position” fixes the binary issue Some downstream nodes lose access to the original binary once a branch processes it. By merging the original branch (with data) and the analysis branch (with content) by position, you restore a single item with both fields—so the next step can use the image again while referencing earlier analysis. --- 🧪 Test Tips Upload a JPG/PNG and execute the workflow from the Form Trigger preview. Confirm Merge output contains both data (binary/base64) and content (text). In the AI Agent, log or return both fields to verify availability. --- 🔧 Customize Swap GPT-4o for another Vision-capable model if needed. Extend the AI Agent to extract structured fields (e.g., objects detected, text, brand cues). Add a Router after Merge to branch into storage (S3, GDrive) or notifications (Slack, Email). 
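As a quick sanity check for the merge (see the Test Tips above), a throwaway Code node placed right after the Merge can confirm the item still carries both fields. This is only a verification sketch:

```javascript
// n8n Code node placed right after the Merge – verify the item carries both fields
return $input.all().map((item) => {
  const hasBinary = Boolean(item.binary?.data);              // original upload ("data")
  const hasAnalysis = typeof item.json.content === 'string'; // first-pass Vision result

  return {
    json: { ...item.json, hasBinary, hasAnalysis },
    binary: item.binary, // pass the file through untouched
  };
});
```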
--- 📝 Requirements n8n (cloud or self-hosted) with web UI access OpenAI credential configured (Vision support) --- 🩹 Troubleshooting Binary missing downstream? Ensure Merge receives both branches and is set to combineByPosition. Wrong field name? The Form Trigger upload field must be labeled data to match node expressions. Model errors? Verify your OpenAI credential and that the chosen model supports image analysis. --- 💬 Sticky Note (included in the workflow) > “Use Binary Field after next step” — This workflow demonstrates how to preserve and reuse an uploaded file (binary/base64) after a downstream step by using a Merge node (combineByPosition). A user uploads an image via Form Trigger → the image is analyzed with OpenAI Vision → results are merged back with the original upload so the next AI Agent step can access both the original file (data) and the first analysis (content) at the same time. --- 📬 Contact Need help customizing this (e.g., filtering by campaign, sending reports by email, or formatting your PDF)? 📧 rbreen@ynteractive.com 🔗 https://www.linkedin.com/in/robert-breen-29429625/ 🌐 https://ynteractive.com
AI-powered news monitoring with Linkup, Airtable, and Slack notifications
This template provides a fully automated system for monitoring news on any topic you choose. It leverages Linkup's AI-powered web search to find recent, relevant articles, extracts key information like the title, date, and summary, and then neatly organizes everything in an Airtable base. Stop manually searching for updates and let this workflow deliver a curated news digest directly to your own database, complete with a Slack notification to let you know when it's done. This is the perfect solution for staying informed without the repetitive work. Who is this for? Marketing & PR professionals: Keep a close eye on industry trends, competitor mentions, and brand sentiment. Analysts & researchers: Effortlessly gather source material and data points on specific research topics. Business owners & entrepreneurs: Stay updated on market shifts, new technologies, and potential opportunities without dedicating hours to reading. Anyone with a passion project: Easily follow developments in your favorite hobby, field of study, or area of interest. What problem does this solve? Eliminates manual searching: Frees you from the daily or weekly grind of searching multiple news sites for relevant articles. Centralizes information: Consolidates all relevant news into a single, organized, and easily accessible Airtable database. Provides structured data: Instead of just a list of links, it extracts and formats key information (title, summary, URL, date) for each article, ready for review or analysis. Keeps you proactively informed: The automated Slack notification ensures you know exactly when new information is ready, closing the loop on your monitoring process. How it works Schedule: The workflow runs automatically based on a schedule you set (the default is weekly). Define topics: In the Set news parameters node, you specify the topics you want to monitor and the time frame (e.g., news from the last 7 days). AI web search: The Query Linkup for news node sends your topics to Linkup's API. Linkup's AI searches the web for relevant news articles and returns a structured list containing each article's title, URL, summary, and publication date. Store in Airtable: The workflow loops through each article found and creates a new record for it in your Airtable base. Notify on Slack: Once all the news has been stored, a final notification is sent to a Slack channel of your choice, letting you know the process is complete and how many articles were found. Setup Configure the trigger: Adjust the Schedule Trigger node to set the frequency and time you want the workflow to run. Set your topics: In the Set news parameters node, replace the example topics with your own keywords and define the news freshness that you'd like to set. Connect your accounts: Linkup: Add your Linkup API key in the Query Linkup for news node. Linkup's free plan includes €5 of credits monthly, enough for about 1,000 runs of this workflow. Airtable: In the Store one news node, select your Airtable account, then choose the Base and Table where you want to save the news. Slack: In the Notify in Slack node, select your Slack account and the channel where you want to receive notifications. Activate the workflow: Toggle the workflow to "Active", and your automated news monitoring system is live! Taking it further Change your database: Don't use Airtable? Easily swap the Airtable node for a Notion, Google Sheets, or any other database node to store your news. 
Customize notifications: Replace the Slack node with a Discord, Telegram, or Email node to get alerts on your preferred platform. Add AI analysis: Insert an AI node after the Linkup search to perform sentiment analysis on the news summaries, categorize articles, or generate a high-level overview before saving them.
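If you swap Airtable for another datastore, the only piece to re-map is the per-article record. Below is a sketch of that mapping; it assumes the Linkup node returns an array of articles with `title`, `url`, `summary`, and `date` fields, so verify the actual response shape of your Query Linkup for news node first.

```javascript
// n8n Code node – flatten the Linkup response into one item per article
const articles = $input.first().json.articles || []; // assumed response field

return articles.map((a) => ({
  json: {
    Title: a.title,          // column names below are placeholders for your own base/sheet
    URL: a.url,
    Summary: a.summary,
    'Published at': a.date,
    'Added on': new Date().toISOString(),
  },
}));
```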
Backup Clockify to Github based on monthly reports
Purpose
This workflow creates a versioned backup of an entire Clockify workspace, split up into monthly reports.

How it works
- The backup routine runs daily by default.
- The Clockify reports API endpoint is used to get all data from the workspace based on time entries.
- A report file is retrieved for every month, starting with the current one and going back 3 months in total by default (see the sketch below).
- If any report changed during the day, it is updated in GitHub.

Prerequisites
- Create a private GitHub repository
- Create credentials for both Clockify and GitHub (make sure to give permissions for read and write operations)

Setup
- Clone the workflow and select the corresponding credentials
- Follow the instructions given in the yellow sticky notes
- Activate the workflow
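The monthly windows passed to the reports endpoint can be derived in a small Code node like the sketch below. The property names are placeholders; map them onto whatever request body your Clockify reports call expects.

```javascript
// n8n Code node – build one item per month: the current month plus the 3 preceding ones
const MONTHS_BACK = 3; // default look-back used by this workflow
const now = new Date();

return Array.from({ length: MONTHS_BACK + 1 }, (_, i) => {
  const start = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth() - i, 1));
  const end = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth() - i + 1, 0, 23, 59, 59));

  return {
    json: {
      // Placeholder names – map these onto the report request your Clockify node expects
      dateRangeStart: start.toISOString(),
      dateRangeEnd: end.toISOString(),
      label: start.toISOString().slice(0, 7), // e.g. "2024-05", handy for the backup file name
    },
  };
});
```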
Google Sheets and QuickBooks expenses automation template
Automatically Upload Expenses to QuickBooks from Google Sheets

What It Does
This n8n workflow template automates the process of uploading categorized expenses from Google Sheets into QuickBooks Online. It leverages your Google Sheets data to create expense entries in QuickBooks with minimal manual effort, streamlining the accounting process.

Prerequisites
- QuickBooks Online credential: Set up your QuickBooks Online connection in n8n for expense creation.
- Google Sheets credential: Set up your Google Sheets connection in n8n to read and write data.

How It Works
1. Refresh Google Sheets Data: The workflow first refreshes the list of vendors and chart of accounts from your Google Sheets template.
2. Import Bank Transactions: Open the provided Google Sheets template and copy-paste your transactions from your online banking CSV file.
3. Categorize Transactions: Quickly categorize the transactions in Google Sheets, or assign this task to a team member.
4. Run the Workflow: Once the transactions are categorized, run the workflow again, and each expense will be created automatically in QuickBooks Online.

Example Use Cases
- Small Business Owners: Automatically track and upload monthly expenses to QuickBooks Online without manually entering data.
- Accountants: Automate the transfer of bank transactions to QuickBooks, streamlining the financial process.
- Bookkeepers: Quickly categorize and upload business expenses to QuickBooks with minimal effort.

Setup Instructions
1. Connect Your Google Sheets and QuickBooks Credentials: In n8n, connect your Google Sheets and QuickBooks accounts. Follow the credential setup instructions for both services.
2. Set up the Google Sheets Node: Link the specific Google Sheet that contains your expense data. Make sure the sheet includes the correct columns for transactions, vendors, and accounts.
3. Set up the QuickBooks Node: Configure the QuickBooks Online node to create expense entries in QuickBooks from the data in your Google Sheets.
4. Set up the HTTP Node for API Calls: Use the HTTP node to make custom API calls to QuickBooks (see the sketch below).
5. Configure the QuickBooks Realm ID: Obtain the QuickBooks Realm ID from your QuickBooks Online Developer account to use for custom API calls. This ensures the workflow targets the correct QuickBooks instance.

How to Use
1. Import Transactions: Copy and paste your bank transactions from the CSV into the provided Google Sheets template.
2. Categorize Transactions: Manually categorize the transactions in the sheet, or delegate this task to another person to ensure they're correctly tagged (e.g., Utilities, Office Supplies, Travel).
3. Run the Workflow: Execute the workflow to automatically upload the categorized expenses into QuickBooks.
4. Verify in QuickBooks: After the workflow runs, log into QuickBooks Online to confirm the expenses have been created and categorized correctly.

Free Google Sheets Template
To get started quickly, download my free Google Sheets template, which includes pre-configured sheets for bank transactions, vendors, and chart of accounts. This template will make it easier for you to import and categorize your expenses before running the n8n workflow.
Download the Free Google Sheets Template

Customization Options
- Category Mapping: Customize how categories in Google Sheets are mapped to QuickBooks expense types.
- Additional API Calls: Add custom API calls if you need extra functionality, such as creating custom reports or syncing additional data.
- Notifications: Configure email or Slack notifications to alert you when the expenses have been successfully uploaded.
Why It's Useful
- Time-saving: Automatically upload and categorize expenses in QuickBooks without needing to enter them manually.
- Error reduction: Minimize human error by automating the process of uploading and categorizing transactions.
- Efficiency: Connects Google Sheets to QuickBooks, making it easy to manage expenses in one place without having to toggle between multiple apps.
- Accuracy: Syncs data between Google Sheets and QuickBooks in a structured, automated way for consistent and reliable financial reporting.
- Flexibility: Allows external users or lower-permission employees to categorize financial transactions without giving them direct access to QBO.
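For the custom API call step, the HTTP Request body can be shaped by a Code node along these lines. It is modelled on the QuickBooks Online Purchase entity; the sheet column names are assumptions, so verify field names and account IDs against the Intuit API docs and your own company file before relying on it.

```javascript
// n8n Code node – shape one categorized sheet row into a QuickBooks Purchase body
// Posted to https://quickbooks.api.intuit.com/v3/company/<REALM_ID>/purchase (verify against Intuit docs)
return $input.all().map((item) => {
  const row = item.json;

  return {
    json: {
      PaymentType: 'Cash',                               // or "CreditCard" / "Check"
      TxnDate: row.Date,                                 // sheet column names are assumptions
      AccountRef: { value: row.BankAccountId },          // the bank/credit account paying the expense
      EntityRef: { value: row.VendorId, type: 'Vendor' },
      Line: [
        {
          DetailType: 'AccountBasedExpenseLineDetail',
          Amount: Number(row.Amount),
          Description: row.Description,
          AccountBasedExpenseLineDetail: {
            AccountRef: { value: row.ExpenseAccountId }, // the category chosen in the sheet
          },
        },
      ],
    },
  };
});
```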
Auto-categorize Gmail emails with AI and send prioritized Slack alerts
Who is this for? Teams using Gmail and Slack who want to streamline email handling. Customer support, sales, and operations teams that want emails sorted by topic and priority automatically. Anyone tired of manually triaging customer emails. What does it solve? Stops important messages from slipping through the cracks. Automatically identifies the nature and urgency of incoming emails. Routes emails to the right Slack channel with a clear, AI-generated summary. How it works The workflow watches for unread emails in your Gmail inbox. It fetches the full email content and passes it to OpenAI for classification. The AI returns structured JSON with the email’s category, priority, summary, and sender. Based on the AI result, it assigns a label and Slack channel. A message is sent to the right Slack channel with the details. How to setup? Connect credentials: Gmail (OAuth2) Slack (OAuth2) OpenAI (API Key) Adjust email polling: Open the Gmail Trigger node and set how frequently it should check for new emails. Verify routing settings: In the “Routing Map” node, update Slack channel IDs for each category if needed. Customize AI behavior (optional): Tweak the AI Agent prompt to better match your internal categorization rules. How to customize this workflow to your needs Add more categories: Update the AI prompt and the schema in the “Structured Output Parser.” Change Slack formatting: Modify the message text in the Slack node to include links, emojis, or mentions. Use different routing logic: Expand the Routing Map to assign based on keywords, domains, or even sentiment. Add escalation workflows: Trigger follow-up actions for high-priority or complaint emails.
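The Routing Map node essentially resolves the AI's category to a Gmail label and a Slack channel. Here is a sketch of that logic; the category names and channel IDs are placeholders to replace with your own.

```javascript
// n8n Code node – map the AI classification to a Gmail label and Slack channel
const routes = {
  support: { label: 'Support', slackChannel: 'C0123SUPPORT' }, // placeholder channel IDs
  sales:   { label: 'Sales',   slackChannel: 'C0123SALES' },
  billing: { label: 'Billing', slackChannel: 'C0123BILLING' },
};
const fallback = { label: 'Triage', slackChannel: 'C0123TRIAGE' };

return $input.all().map((item) => {
  const category = (item.json.category || '').toLowerCase();
  return { json: { ...item.json, ...(routes[category] || fallback) } };
});
```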
Build a support ticket analytics dashboard with ScrapeGraphAI, Google Sheets & Slack alerts
Customer Support Analysis Dashboard with AI and Automated Insights 🎯 Target Audience Customer support managers and team leads Customer success teams monitoring satisfaction Product managers analyzing user feedback Business analysts measuring support metrics Operations managers optimizing support processes Quality assurance teams monitoring support quality Customer experience (CX) professionals 🚀 Problem Statement Manual analysis of customer support tickets and feedback is time-consuming and often misses critical patterns or emerging issues. This template solves the challenge of automatically collecting, analyzing, and visualizing customer support data to identify trends, improve response times, and enhance overall customer satisfaction. 🔧 How it Works This workflow automatically monitors customer support channels using AI-powered analysis, processes tickets and feedback, and provides actionable insights for improving customer support operations. Key Components Scheduled Trigger - Runs the workflow at specified intervals to maintain real-time monitoring AI-Powered Ticket Analysis - Uses advanced NLP to categorize, prioritize, and analyze support tickets Multi-Channel Integration - Monitors email, chat, help desk systems, and social media Automated Insights - Generates reports on trends, response times, and satisfaction scores Dashboard Integration - Stores all data in Google Sheets for comprehensive analysis and reporting 📊 Google Sheets Column Specifications The template creates the following columns in your Google Sheets: | Column | Data Type | Description | Example | |--------|-----------|-------------|---------| | timestamp | DateTime | When the ticket was processed | "2024-01-15T10:30:00Z" | | ticket_id | String | Unique ticket identifier | "SUP-2024-001234" | | customer_email | String | Customer contact information | "john@example.com" | | subject | String | Ticket subject line | "Login issues with new app" | | description | String | Full ticket description | "I can't log into the mobile app..." | | category | String | AI-categorized ticket type | "Technical Issue" | | priority | String | Calculated priority level | "High" | | sentiment_score | Number | Customer sentiment (-1 to 1) | -0.3 | | urgency_indicator | String | Urgency classification | "Immediate" | | response_time | Number | Time to first response (hours) | 2.5 | | resolution_time | Number | Time to resolution (hours) | 8.0 | | satisfaction_score | Number | Customer satisfaction rating | 4.2 | | agent_assigned | String | Support agent name | "Sarah Johnson" | | status | String | Current ticket status | "Resolved" | 🛠️ Setup Instructions Estimated setup time: 20-25 minutes Prerequisites n8n instance with community nodes enabled ScrapeGraphAI API account and credentials Google Sheets account with API access Help desk system API access (Zendesk, Freshdesk, etc.) 
Email service integration (optional)

Step-by-Step Configuration

1. Install Community Nodes

```bash
# Install required community nodes
npm install n8n-nodes-scrapegraphai
npm install n8n-nodes-slack
```

2. Configure ScrapeGraphAI Credentials
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up Google Sheets Connection
- Add Google Sheets OAuth2 credentials
- Grant the necessary permissions for spreadsheet access
- Create a new spreadsheet for customer support analysis
- Configure the sheet name (default: "Support Analysis")

4. Configure Support System Integration
- Update the websiteUrl parameters in the ScrapeGraphAI nodes
- Add URLs for your help desk system or support portal
- Customize the user prompt to extract specific ticket data
- Set up categories and priority thresholds (see the classification sketch below)

5. Set up Notification Channels
- Configure Slack webhook or API credentials for alerts
- Set up email service credentials for critical issues
- Define alert thresholds for different priority levels
- Test notification delivery

6. Configure Schedule Trigger
- Set the analysis frequency (hourly, daily, etc.)
- Choose appropriate time zones for your business hours
- Consider support system rate limits

7. Test and Validate
- Run the workflow manually to verify all connections
- Check Google Sheets for proper data formatting
- Test ticket analysis with sample data

🔄 Workflow Customization Options

Modify Analysis Targets
- Add or remove support channels (email, chat, social media)
- Change ticket categories and priority criteria
- Adjust analysis frequency based on ticket volume

Extend Analysis Capabilities
- Add more sophisticated sentiment analysis
- Implement customer churn prediction models
- Include agent performance analytics
- Add automated response suggestions

Customize Alert System
- Set different thresholds for different ticket types
- Create tiered alert systems (info, warning, critical)
- Add SLA breach notifications
- Include trend analysis alerts

Output Customization
- Add data visualization and reporting features
- Implement support trend charts and graphs
- Create executive dashboards with key metrics
- Add customer satisfaction trend analysis

📈 Use Cases
- Support Ticket Management: Automatically categorize and prioritize tickets
- Response Time Optimization: Identify bottlenecks in support processes
- Customer Satisfaction Monitoring: Track and improve satisfaction scores
- Agent Performance Analysis: Monitor and improve agent productivity
- Product Issue Detection: Identify recurring problems and feature requests
- SLA Compliance: Ensure support teams meet service level agreements

🚨 Important Notes
- Respect support system API rate limits and terms of service
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update your analysis parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and GDPR compliance for customer data

🔧 Troubleshooting

Common Issues:
- ScrapeGraphAI connection errors: Verify API key and account status
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Ticket parsing errors: Review the Code node's JavaScript logic
- Rate limiting: Adjust analysis frequency and implement delays
- Alert delivery failures: Check notification service credentials

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Help desk system API documentation
Customer support analytics best practices
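For the category and priority thresholds mentioned in step 4, the classification logic inside a Code node can stay as simple as the sketch below; the thresholds and field names are assumptions to tune against your own SLA targets.

```javascript
// n8n Code node – derive priority and alert tier from sentiment, urgency, and response time
return $input.all().map((item) => {
  const { sentiment_score = 0, urgency_indicator = '', response_time = 0 } = item.json;

  // Assumed thresholds: very negative sentiment or "Immediate" urgency escalates straight to High
  let priority = 'Low';
  if (urgency_indicator === 'Immediate' || sentiment_score <= -0.5) priority = 'High';
  else if (sentiment_score <= -0.2 || response_time > 8) priority = 'Medium';

  // Alert tier drives which Slack channel / email list gets notified
  const alertLevel = priority === 'High' ? 'critical' : priority === 'Medium' ? 'warning' : 'info';

  return { json: { ...item.json, priority, alertLevel } };
});
```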