Simple expense tracker with n8n chat, AI agent and Google Sheets
Use Case

It is very convenient to add expenses via a simple chat message. This workflow does exactly that using AI-powered n8n magic! Send a message to the chat, something like "car wash; 59.3 usd; 25 jan 2024", and get a response:

Your expense saved, here is the output of save sub-workflow: {"cost":59.3,"descr":"car wash","date":"2024-01-25","msg":"car wash; 59.3 usd; 25 jan 2024"}

The LLM smartly parses your message into structured JSON and saves the expense as a new row in a Google Sheet!

Installation

1. Set up Google Sheets: clone this Sheet: https://docs.google.com/spreadsheets/d/1D0r3tun7LF7Ypb21CmbTKEtn76WE-kaHvBCM5NdgiPU/edit?gid=0 (File -> Make a copy), then select your copy in the "Save expense into Google Sheets" node.
2. Fix the sub-workflow dropdown: open the "Parse msg and save to Sheets" node (an n8n sub-workflow executor tool) and make sure the SAME workflow is chosen in the dropdown. This lets n8n locate and call the "Workflow Input Trigger" properly when needed.
3. Activate the workflow to make the chat work properly.
4. Send a message to the chat, something like "car wash; 59.3 usd; 25 jan 2024". You should get the response shown above, and a new row should be inserted in your Google Sheet!
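Under the hood, the agent needs a structured-output definition matching the sample response above. A minimal sketch, with field names taken from the example; the exact parser configuration in the template is an assumption:

```javascript
// Sketch of a structured-output schema matching the sample response above.
// Field names come from the example output; adapt to your parser node.
const expenseSchema = {
  type: "object",
  properties: {
    cost:  { type: "number", description: "Expense amount, e.g. 59.3" },
    descr: { type: "string", description: "Short description, e.g. 'car wash'" },
    date:  { type: "string", description: "ISO date, e.g. '2024-01-25'" },
    msg:   { type: "string", description: "The original chat message, verbatim" },
  },
  required: ["cost", "descr", "date", "msg"],
};
```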
Send daily weather forecasts from OpenWeatherMap to Telegram with smart formatting
🌤️ Daily Weather Forecast Bot

A comprehensive n8n workflow that fetches detailed weather forecasts from OpenWeatherMap and sends beautifully formatted daily summaries to Telegram.

📋 Features
- 📊 Daily Overview: Complete temperature range, rainfall totals, and wind conditions
- ⏰ Hourly Forecast: Weather predictions at key times (9AM, 12PM, 3PM, 6PM, 9PM)
- 🌡️ Smart Emojis: Context-aware weather icons and temperature indicators
- 💡 Smart Recommendations: Contextual advice (umbrella alerts, clothing suggestions, sun protection)
- 🌪️ Enhanced Details: Feels-like temperature, humidity levels, wind speed, UV warnings
- 📱 Rich Formatting: HTML-formatted messages with emojis for excellent readability
- 🕐 Timezone-Aware: Proper handling of Luxembourg timezone (CET/CEST)

🛠️ What This Workflow Does
1. Triggers daily at 7:50 AM to send morning weather updates
2. Fetches the 5-day forecast from the OpenWeatherMap API with 3-hour intervals
3. Processes and analyzes weather data with smart algorithms
4. Formats a comprehensive report with HTML styling and emojis
5. Sends it to Telegram with professional formatting and actionable insights

⚙️ Setup Instructions

OpenWeatherMap API
1. Sign up at OpenWeatherMap
2. Get your free API key (1000 calls/day included)
3. Replace API_KEY in the HTTP Request node URL

Telegram Bot
1. Message @BotFather on Telegram
2. Send the /newbot command and follow the instructions
3. Copy the bot token to n8n credentials
4. Get your chat ID by messaging the bot, then visiting: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
5. Update the chatId parameter in the Telegram node

Location Configuration
- Default location: Strassen, Luxembourg
- To change it: modify q=Strassen in the HTTP Request URL
- Format: q=CityName,CountryCode (e.g., q=Paris,FR)

🎯 Technical Details
- API Source: OpenWeatherMap 5-day forecast
- Schedule: Daily at 7:50 AM (configurable)
- Format: HTML with rich emoji formatting
- Error Handling: 3 retry attempts with 5-second delays
- Rate Limits: Uses only 1 API call per day
- Timezone: Europe/Luxembourg (handles CET/CEST automatically)

📊 Weather Data Analyzed
- Temperature ranges and "feels like" temperatures
- Precipitation forecasts and accumulation
- Wind speed and conditions
- Humidity levels and comfort indicators
- Cloud coverage and visibility
- UV index recommendations
- Time-specific weather patterns

💡 Smart Features
- Conditional Recommendations: Only shows relevant advice
- Night/Day Awareness: Different emojis for time of day
- Temperature Context: Color-coded temperature indicators
- Weather Severity: Appropriate icons for weather intensity
- Humidity Comfort: Comfort level indicators
- Wind Analysis: Descriptive wind condition text

🔧 Customization Options
- Schedule: Modify the trigger time in the Schedule node
- Location: Change the city in the HTTP Request URL
- Forecast Hours: Adjust the desiredHours array in the code (see the sketch after this section)
- Temperature Thresholds: Modify emoji temperature ranges
- Recommendation Logic: Customize advice triggers

📱 Sample Output

🌤️ Weather Forecast for Strassen, LU
📅 Monday, 2 June 2025

📊 Daily Overview
🌡️ Range: 12°C - 22°C
💧 Comfortable (65%)

⏰ Hourly Forecast
🕒 09:00 ☀️ 15°C
🕒 12:00 🌤️ 20°C
🕒 15:00 ☀️ 22°C (feels 24°C)
🕒 18:00 ⛅ 19°C
🕒 21:00 🌙 16°C

📡 Data from OpenWeatherMap | Updated: 07:50 CET

🚀 Getting Started
1. Import this workflow to your n8n instance
2. Add your OpenWeatherMap API key
3. Set up Telegram bot credentials
4. Test manually first
5. Activate for daily automated runs

📋 Requirements
- n8n instance (cloud or self-hosted)
- Free OpenWeatherMap API account
- Telegram bot token
- Basic understanding of n8n workflows

---

Perfect for: Daily weather updates, team notifications, personal weather tracking, smart home automation triggers.
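For reference, here is one way the forecast-hour selection could look inside the Code node. This is a minimal sketch assuming the standard OpenWeatherMap /forecast response shape (a list of 3-hour slots with Unix dt timestamps); the actual code shipped in the template may differ.

```javascript
// Pick the 3-hour forecast slots closest to the desired local hours.
// `desiredHours` follows the description above; the response's `list` array
// and `dt` field are standard OpenWeatherMap /forecast fields.
const desiredHours = [9, 12, 15, 18, 21];
const tz = "Europe/Luxembourg";

const picks = items[0].json.list.filter((slot) => {
  const hour = Number(
    new Intl.DateTimeFormat("en-GB", { hour: "2-digit", hour12: false, timeZone: tz })
      .format(new Date(slot.dt * 1000))
  );
  return desiredHours.includes(hour);
});

return picks.map((slot) => ({ json: slot }));
```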
Archive Spotify's Discover Weekly playlist
This workflow will archive your Spotify Discover Weekly playlist to an archive playlist named "Discover Weekly Archive", which you must create yourself. If you want to change the name of the archive playlist, you can edit value2 in the "Find Archive Playlist" node. It is configured to run at 8am on Mondays, a conservative value in case you forgot to set your GENERIC_TIMEZONE environment variable (see the docs here). Special thanks to erin2722 for creating the Spotify node and harshil1712 for help with the workflow logic.

To use this workflow, you'll need to:
- Create then select your credentials in each Spotify node
- Create the archive playlist yourself

Optionally, you may choose to:
- Edit the archive playlist name in the "Find Archive Playlist" node
- Adjust the Cron node to an earlier time if you know GENERIC_TIMEZONE is set
- Set up an error workflow like this one to be notified if anything goes wrong
Beginner data analysis: merge, filter & summarize in Google Sheets with GPT-4o
This beginner-friendly n8n workflow teaches essential data manipulation techniques using Google Sheets and AI. You'll learn how to:

✅ Merge two datasets by a shared column (Channel)
🔍 Filter rows based on performance metrics (Clicks, Spend)
🔀 Branch logic into "Great" vs. "Poor" outcomes (sketched below)
📊 Summarize results by team leader
🤖 Use an OpenAI-powered agent to generate a written analysis highlighting the best and worst performers

Perfect for marketers, analysts, or anyone learning how to clean, transform, and interpret data inside n8n.

Includes:
📁 Sample Google Sheet to copy
🛠 Setup instructions for Google Sheets & OpenAI
✨ AI summary powered by GPT-4o-mini

👋 Questions or Feedback?
Feel free to reach out — I'm happy to help!
Robert Breen
Founder, Ynteractive
🌐 ynteractive.com
📧 robert@ynteractive.com
📺 YouTube: YnteractiveTraining
🔗 LinkedIn: linkedin.com/in/robertbreen
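As a concrete illustration of the filter-and-branch step, here is a minimal Code-node sketch. The thresholds are hypothetical; the template's IF node defines its own cutoffs for Clicks and Spend.

```javascript
// Tag each row as "Great" or "Poor" based on performance metrics.
// The cutoff values below are illustrative placeholders only.
return items.map((item) => {
  const { Clicks, Spend } = item.json;
  const outcome = Clicks >= 100 && Spend <= 500 ? "Great" : "Poor";
  return { json: { ...item.json, outcome } };
});
```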
Create consistent AI characters with Google Nano Banana & upscaling via Kie.ai
Google NanoBanana Model Image Editor for Consistent AI Influencer Creation with Kie.ai

Image Generation & Enhancement Workflow

This n8n template demonstrates how to use Kie.ai's powerful image generation models to create and enhance images using AI, with automated story creation, image upscaling, and organized file management through Google Drive and Sheets.

Use cases include: AI-powered content creation for social media, automated story visualization with consistent characters, marketing material generation, and high-quality image enhancement workflows.

Good to know
- The workflow uses Kie.ai's google/nano-banana-edit model for image generation and nano-banana-upscale for 4x image enhancement
- Images are automatically organized in Google Drive with timestamped folders
- Progress is tracked in Google Sheets with status updates throughout the process
- The workflow includes face enhancement during upscaling for better portrait results
- All generated content is automatically saved and organized for easy access

How it works
1. Project Setup: Creates a timestamped folder structure in Google Drive and initializes a Google Sheet for tracking
2. Story Generation: Uses OpenAI GPT-4 to create detailed prompts for image generation based on predefined templates
3. Image Creation: Sends the AI-generated prompt along with 5 reference images to Kie.ai's nano-banana-edit model
4. Status Monitoring: Polls the Kie.ai API to monitor task completion with automatic retry logic
5. Image Enhancement: Upscales the generated image 4x using nano-banana-upscale with face enhancement
6. File Management: Downloads, uploads, and organizes all generated content in the appropriate Google Drive folders
7. Progress Tracking: Updates Google Sheets with status information and image URLs throughout the entire process

Key Features
- Automated Story Creation: AI-powered prompt generation for consistent, cinematic image creation
- Multi-Stage Processing: Image generation followed by intelligent upscaling
- Smart Organization: Automatic folder creation with timestamps and file management
- Progress Tracking: Real-time status updates in Google Sheets
- Error Handling: Built-in retry logic and failure state management
- Face Enhancement: Specialized enhancement for portrait images during upscaling

How to use
1. Manual Trigger: The workflow starts with a manual trigger (easily replaceable with webhooks, forms, or scheduled triggers)
2. Automatic Processing: Once triggered, the entire pipeline runs automatically
3. Monitor Progress: Check the Google Sheet for real-time status updates
4. Access Results: Find your generated and enhanced images in the organized Google Drive folders

Requirements
- Kie.ai account: for AI image generation and upscaling services
- OpenAI API: for intelligent prompt generation (GPT-4o mini)
- Google Drive: for file storage and organization
- Google Sheets: for progress tracking and status monitoring

Customizing this workflow
This workflow is highly adaptable for various use cases:
- Content Creation: Modify prompts for different styles (fashion, product photography, architectural visualization)
- Batch Processing: Add loops to process multiple prompts or reference images
- Social Media: Integrate with social platforms for automatic posting
- E-commerce: Adapt for product visualization and marketing materials
- Storytelling: Create sequential images for visual narratives or storyboards

The modular design makes it easy to add additional processing steps, change AI models, or integrate with other services as needed.
Workflow Components
- Folder Management: Dynamic folder creation with timestamp naming
- AI Integration: OpenAI for prompts, Kie.ai for image processing
- File Processing: Binary handling, URL management, and format conversion
- Status Tracking: Multi-stage progress monitoring with Google Sheets
- Error Handling: Comprehensive retry and failure management systems
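The status-monitoring step boils down to a poll-until-done loop. Here is a generic sketch; the real Kie.ai endpoint, paths, and response field names are configured in the template's HTTP Request nodes and will differ, so treat everything below as placeholders.

```javascript
// Generic task-polling sketch. `statusUrl` and the `state` field values
// are placeholders, not the actual Kie.ai API contract.
async function waitForTask(statusUrl, apiKey, { intervalMs = 10000, maxTries = 30 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetch(statusUrl, { headers: { Authorization: `Bearer ${apiKey}` } });
    const data = await res.json();
    if (data.state === "success") return data; // task finished: return result payload
    if (data.state === "fail") throw new Error("Task failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // still running: wait and retry
  }
  throw new Error("Timed out waiting for task");
}
```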
Personalized tour package recommendations with GPT-4o, Pinecone & Lovable UI
Personalized Tour Package Recommendations via n8n + Pinecone + Lovable UI

I've created an intelligent Travel Itinerary Planner that connects a Lovable front-end UI with a smart backend powered by n8n, Pinecone, and OpenAI to deliver personalized tour packages based on natural language queries.

What It Does
Users type in their travel destination and duration (e.g., "Paris 5 days trip" or "Bali Trip for 7 Days, would love water sports, adventures and trekking included, also some historical monuments") through a Lovable UI. This triggers a webhook in n8n, which processes the request, searches vectorized tour data in Pinecone, and generates a personalized itinerary using OpenAI's GPT. The results are then structured and sent back to the frontend UI for display in an interactive, reorderable format.

Workflow Architecture
Lovable UI ➝ Webhook ➝ Tour Recommendation Agent ➝ Vector Search ➝ OpenAI Response ➝ Structured Output ➝ Response to Lovable

Tools & Components Used
- Webhook: Acts as the entry point between the Lovable frontend and n8n. Captures the user query (destination, duration) and forwards it into the workflow.
- OpenAI Chat Model: Interprets the user query and generates a user-friendly, structured tour package from the matched results.
- Simple Memory: Keeps chat state and context for follow-up queries (extendable for future features like multi-step planning or saved itineraries).
- Question Answering with Vector Store: Searches vector embeddings of pre-loaded tour data and finds the most relevant tour packages by comparing query embeddings.
- Pinecone Vector Store: Stores tour packages and activity data in vectorized format, enabling fast and scalable semantic search across destinations, themes (e.g., "adventure", "cultural"), and duration.
- OpenAI Embeddings: Embeds all tour and activity documents stored in Pinecone, and converts user queries into embedding vectors for semantic search.
- Structured Output Parser: Parses the final OpenAI-generated response into a consistent, frontend-consumable JSON format.
- Frontend (Lovable UI): The user types a destination or travel package needs into the Tour Search; Lovable queries the n8n workflow and displays beautifully structured, editable itineraries.

How to Set It Up
1. Webhook Setup in n8n: Create a POST webhook node. Set the Webhook URL and connect it with the Lovable frontend (a sketch of the request follows below).
2. Pinecone & Embeddings: Convert your static tour package documents (PDFs, JSON, CSV, etc.) into embeddings using OpenAI, and store them in a Pinecone namespace (e.g., kuala-lumpur-3-days).
3. Configure the "Answer with Vector Store" Tool: Connect the tool to your Pinecone instance and pass the query embedding for matching.
4. Connect to OpenAI Chat: Use the GPT model to process the query plus context from Pinecone to generate an engaging itinerary description. Optionally chain a second model to format it into UI-consumable output.
5. Output Parser & Return: Use the Structured Output Parser to parse the response and pass it to the Respond to Webhook node for UI display.

Ideal Use Cases
- Smart itinerary planning for OTAs or DMCs
- Personalized travel recommendations in chatbots or apps
- Travel advisors and agents automating package generation

Benefits
- Highly relevant, contextual travel suggestions
- Natural query understanding via OpenAI
- Seamless frontend-backend integration via Webhook

If you're building personalized experiences for travelers using AI, give this approach a try! Let me know if you'd like the JSON for this workflow or help setting up the Pinecone data pipeline.
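For reference, a sketch of the request the Lovable frontend might send to the webhook. The URL and field names are illustrative; match them to your own UI and webhook configuration.

```javascript
// Illustrative POST from the frontend to the n8n webhook.
// Both the webhook path and the payload shape are placeholders.
const payload = {
  query: "Bali trip for 7 days with water sports, trekking and historical monuments",
};

const res = await fetch("https://your-n8n-host/webhook/tour-search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});

const itinerary = await res.json(); // structured output parsed by the workflow
```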
Baserow campaign database to Shopify with image upload & dynamic template update
Automating your marketing campaign management process can streamline your workflow and save you valuable time. With the combination of Baserow and n8n, you can efficiently handle your campaign data and seamlessly publish content to your Shopify store.

In this workflow template, I demonstrate how to leverage Baserow as a centralized platform for organizing your marketing campaign assets, including copy and images. By utilizing n8n, we automate the process of fetching images and campaign descriptions from Baserow and uploading them directly to your Shopify store. With this automated solution, you can expedite the publishing process, ensuring that your campaigns are launched swiftly across your sales channels. Additionally, this workflow serves as a foundational step towards further automation in campaign management, allowing you to dynamically generate and upload content to your Shopify store with ease.

This template will help you:
- Use n8n to get images for marketing campaigns from Baserow and upload them to your Shopify media library
- Dynamically inject data from Baserow into a template file
- Upload a template file to your Shopify theme

This template will demonstrate the following concepts in n8n:
- Use the Webhook node
- Use the IF node to control the execution flow of the workflow
- Do time calculations using expressions and JavaScript
- Use the GraphQL node to upload images to your Shopify media files (a sketch of the mutation follows below)
- Create a dynamic template file for your Shopify theme
- Use the HTTP Request node to upload your template file to your Shopify store

How to get started?
- Create a custom app in Shopify to get the credentials needed to connect n8n to Shopify. This is needed for the Shopify Trigger.
- Create Shopify Access Token API credentials in n8n for the Shopify trigger node.
- Create Header Auth credentials: use X-Shopify-Access-Token as the name and the Access Token from the Shopify app you created as the value. The Header Auth is necessary for the GraphQL nodes.
- You will need a running Baserow instance for this. You can also sign up for a free account at https://baserow.io/

Please make sure to read the notes in the template. For a detailed explanation please check the corresponding video: https://youtu.be/Ky-dYlljGiY
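For orientation, here is a minimal sketch of the kind of call the GraphQL node makes: Shopify's fileCreate mutation, pulling the image from a URL. The shop domain, API version, and Baserow image URL are placeholders, and the selected fields may need adjusting to your API version.

```javascript
// Sketch of uploading an image to the Shopify media library via the Admin
// GraphQL API's fileCreate mutation. All URLs and the token are placeholders.
const query = `
  mutation fileCreate($files: [FileCreateInput!]!) {
    fileCreate(files: $files) {
      files { fileStatus }
      userErrors { field message }
    }
  }`;

const res = await fetch("https://your-shop.myshopify.com/admin/api/2024-04/graphql.json", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Shopify-Access-Token": "<ACCESS_TOKEN>", // from your custom app
  },
  body: JSON.stringify({
    query,
    variables: { files: [{ originalSource: "https://your-baserow-host/media/campaign-hero.png" }] },
  }),
});
console.log(await res.json());
```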
Evaluations metric: answer similarity
This n8n template demonstrates how to calculate the evaluation metric "Similarity", which in this scenario measures the consistency of the agent. The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_similarity.py

How it works
This evaluation works best where questions are close-ended or about facts, where the answer can have little to no deviation. For our scoring, we generate embeddings for both the AI's response and the ground truth and calculate the cosine similarity between them (see the sketch below). A high score indicates LLM consistency with expected results, whereas a low score could signal model hallucination.

Requirements
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
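The scoring step itself is tiny. A minimal Code-node sketch, assuming both embeddings are plain number arrays returned by your embeddings model:

```javascript
// Cosine similarity between two embedding vectors of equal length.
// 1.0 = identical direction; values near 0 indicate unrelated answers.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// e.g. cosineSimilarity(responseEmbedding, groundTruthEmbedding) -> 0.97
```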
Build a RAG knowledge chatbot with OpenAI, Google Drive, and Supabase
🚀 Build Your Own Knowledge Chatbot Using Google Drive

Create a smart chatbot that answers questions using your Google Drive PDFs—perfect for support, internal docs, education, or research.

🛠️ Quick Setup Guide

Step 1: Prerequisites
- n8n instance (cloud or self-hosted)
- Google Drive account (with PDFs)
- Supabase account (vector database)
- OpenAI API key
- PostgreSQL database for chat memory (or remove that node)

Step 2: Supabase Setup
1. Create a Supabase account (it's free)
2. Create a project
3. Copy the SQL below and paste it into the Supabase SQL editor

```sql
-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text,          -- corresponds to Document.pageContent
  metadata jsonb,        -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
```

Step 3: Import & Configure n8n Workflow
1. Import this template into n8n
2. Add credentials: OpenAI API key, Google Drive OAuth2, Supabase URL & service key, PostgreSQL connection
3. Set your Google Drive folder ID in the triggers

Step 4: Test & Use
- Add a PDF to your Drive folder → check Supabase for new entries
- Start the workflow and chat → ask questions about your documents, e.g. "What can you help me with?"
- Multi-turn chat → context is maintained per user

⚡ Features
- Auto-syncs new/updated PDFs from Google Drive
- Extracts, chunks, and vectorizes text
- Finds relevant info and answers questions
- Maintains chat history per user

📝 Troubleshooting
- Check folder permissions & IDs if no docs are found
- Verify API keys & Supabase setup if you see errors
- Ensure PostgreSQL is connected for chat memory

Tags: RAG, Chatbot, Google Drive, Supabase, OpenAI, n8n
Setup Time: ~20 minutes
Process multiple prompts in parallel with Azure OpenAI Batch API
Process Multiple Prompts in Parallel with Azure OpenAI Batch API

Who is this for?
This workflow is designed for developers and data scientists who want to efficiently send multiple prompts to the Azure OpenAI Batch API and retrieve responses in a single batch process. It is particularly useful for applications that require processing large volumes of text data, such as chatbots, content generation, or data analysis.

What problem is this workflow solving?
Sending multiple prompts to the Azure OpenAI API can be time-consuming and inefficient if done sequentially. This workflow automates the process of batching requests, allowing users to submit multiple prompts at once and retrieve the results in a streamlined manner. This not only saves time but also optimizes resource usage.

What this workflow does
This workflow:
1. Accepts an array of requests, each containing a prompt and associated parameters.
2. Converts the requests into a JSONL format suitable for batch processing (see the sketch below).
3. Uploads the batch file to the Azure OpenAI API.
4. Creates a batch job to process the prompts.
5. Polls for the job status and retrieves the output once processing is complete.
6. Parses the output and returns the results.

Key Features of the Azure OpenAI Batch API
The Azure OpenAI Batch API is designed to handle large-scale and high-volume processing tasks efficiently. Key features include:
- Asynchronous Processing: Handle groups of requests with separate quotas, targeting a 24-hour turnaround at 50% less cost than global standard.
- Batch Requests: Send a large number of requests in a single file, avoiding disruption to online workloads.

Key Use Cases
- Large-Scale Data Processing: Quickly analyze extensive datasets in parallel.
- Content Generation: Create large volumes of text, such as product descriptions or articles.
- Document Review and Summarization: Automate the review and summarization of lengthy documents.
- Customer Support Automation: Handle numerous queries simultaneously for faster responses.
- Data Extraction and Analysis: Extract and analyze information from vast amounts of unstructured data.
- Natural Language Processing (NLP) Tasks: Perform tasks like sentiment analysis or translation on large datasets.
- Marketing and Personalization: Generate personalized content and recommendations at scale.

Setup
1. Azure OpenAI Credentials: Ensure you have your Azure OpenAI API credentials set up in n8n.
2. Configure the Workflow: Set the az_openai_endpoint in the "Setup defaults" node to your Azure OpenAI endpoint. Adjust the api-version in the "Set desired 'api-version'" node if necessary.
3. Run the Workflow: Trigger the workflow using the "Run example" node to see it in action.

How to customize this workflow to your needs
- Modify Prompts: Change the prompts in the "One query example" node to suit your application.
- Adjust Parameters: Update the parameters in the requests to customize the behavior of the OpenAI model.
- Add More Requests: You can add more requests to the input array to process additional prompts.
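For reference, here is a minimal Code-node sketch of the JSONL conversion step. Each request becomes one line with custom_id, method, url, and body, which is the shape the Azure OpenAI Batch API expects; the deployment name is a placeholder for your own batch deployment.

```javascript
// Convert the input `requests` array into Azure Batch JSONL.
// "gpt-4o-batch" is a placeholder: use your batch deployment's name.
const { requests } = $json;

const jsonl = requests
  .map((r) =>
    JSON.stringify({
      custom_id: r.custom_id,
      method: "POST",
      url: "/chat/completions",
      body: { model: "gpt-4o-batch", ...r.params },
    })
  )
  .join("\n");

return [{ json: { jsonl } }];
```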
Example Input

```json
[
  {
    "api-version": "2025-03-01-preview",
    "requests": [
      {
        "custom_id": "first-prompt-in-my-batch",
        "params": {
          "messages": [
            { "content": "Hey ChatGPT, tell me a short fun fact about cats!", "role": "user" }
          ]
        }
      },
      {
        "custom_id": "second-prompt-in-my-batch",
        "params": {
          "messages": [
            { "content": "Hey ChatGPT, tell me a short fun fact about bees!", "role": "user" }
          ]
        }
      }
    ]
  }
]
```

Example Output

```json
[
  {
    "custom_id": "first-prompt-in-my-batch",
    "response": {
      "body": {
        "choices": [
          { "message": { "content": "Did you know that cats can make over 100 different sounds?" } }
        ]
      }
    }
  },
  {
    "custom_id": "second-prompt-in-my-batch",
    "response": {
      "body": {
        "choices": [
          { "message": { "content": "Bees communicate through a unique dance called the 'waggle dance'." } }
        ]
      }
    }
  }
]
```

Additional Notes
- Job Management: You can cancel a job at any time; any remaining work will be canceled while already completed work is returned. You will be charged for any completed work.
- Data Residency: Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location.
- Exponential Backoff: If your batch jobs are large and hitting the enqueued token limit, certain regions support queuing multiple batch jobs with exponential backoff.

This template provides a comprehensive solution for efficiently processing multiple prompts using the Azure OpenAI Batch API, making it a valuable tool for developers and data scientists alike.
Create ideal customer profile from website content to Google Doc
Who's it for
Growth, marketing, sales, and founder teams that want a decision-ready Ideal Customer Profile (ICP)—grounded in their own site content.

How it works / What it does
1. On form submission collects the Website URL and Business Name, and redirects to the Google Drive folder after the final node.
2. Crawl and Scrape the Website Content crawls and scrapes 20 pages from the website.
3. ICP Creator builds a Markdown ICP with:
   A) Executive Summary
   B) One-Pager ICP
   C) Tiering & Lead Scoring
   D) Demand Gen & ABM Plays
   E) Evidence Log
   F) Section Confidence
   plus facts vs. inferences, confidence scores, and tables.
4. Markdown to Google Doc converts the Markdown into Google Docs batchUpdate requests (a simplified sketch follows below), which Update a document then applies to the empty doc.
5. Create a document + Update a document generate "ICP for <Business Name>" in your Drive folder and apply formatting.

How to set up
1) Add credentials: Firecrawl (Authorization header), OpenAI (Chat), Google Docs OAuth2.
2) Replace placeholders: {{APIKEY}}, {{google_drive_folder_id}}, {{google_drive_folder_url}}.
3) Publish and open the Form URL to test.

Requirements
Firecrawl API key • OpenAI API key • Google account with access to the target Drive folder.

Resources
- Google OAuth2 Credentials Setup: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
- OpenAI API key: https://docs.n8n.io/integrations/builtin/credentials/openai/
- Firecrawl API key: https://take.ms/lGcUp
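To give a feel for what the Markdown-to-Docs conversion produces, here is a heavily simplified sketch that only handles # headings; the template's converter covers much more (tables, lists, inline styles).

```javascript
// Turn Markdown lines into Google Docs batchUpdate requests: insert each
// line as text, then style lines starting with "# " as HEADING_1.
function headingRequests(markdown) {
  const requests = [];
  let index = 1; // a Docs body starts at index 1
  for (const line of markdown.split("\n")) {
    const text = line.replace(/^#+\s*/, "") + "\n";
    requests.push({ insertText: { location: { index }, text } });
    if (/^#\s/.test(line)) {
      requests.push({
        updateParagraphStyle: {
          range: { startIndex: index, endIndex: index + text.length },
          paragraphStyle: { namedStyleType: "HEADING_1" },
          fields: "namedStyleType",
        },
      });
    }
    index += text.length; // subsequent inserts land after this paragraph
  }
  return requests;
}
```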
Scan URLs with urlscan.io and send results via Gmail
Overview
Receive a URL via Webhook, submit it to urlscan.io, wait ~30 seconds for artifacts (e.g., screenshot), then email a clean summary with links to the result page, screenshot, and API JSON.

What this template does
1. Ingests a URL from a POST request.
2. Submits the URL to urlscan.io and captures the scan UUID.
3. Waits 30s to give urlscan time to generate the screenshot and result artifacts.
4. Sends a formatted HTML email via Gmail with all relevant links.

Nodes used
- Webhook (POST /urlscan)
- urlscan.io → Perform a scan
- Wait (30 seconds; configurable)
- Gmail → Send a message

Input

```json
{ "url": "https://example.com" }
```
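The links in the email can be derived directly from the scan UUID using urlscan.io's standard URL patterns:

```javascript
// Build the three result links from the UUID returned by the
// "Perform a scan" node (standard urlscan.io URL patterns).
const uuid = $json.uuid;

const links = {
  result:     `https://urlscan.io/result/${uuid}/`,
  screenshot: `https://urlscan.io/screenshots/${uuid}.png`,
  apiJson:    `https://urlscan.io/api/v1/result/${uuid}/`,
};

return [{ json: links }];
```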