Easy image captioning with Gemini 1.5 Pro
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro, a multimodal LLM that can accept and analyse images. It's a simple example of how easy it is to put powerful AI models to work on your repetitive tasks.

**How it works**
- For this demo, we import a public image from Pexels.com, a popular stock photography website, into the workflow using the HTTP Request node.
- With multimodal LLMs there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we resize the image using the Edit Image node to speed up processing.
- The image is passed to the Basic LLM node by defining a "user message" entry with the binary (data) type. The LLM node has the Gemini 1.5 Pro language model attached, and we prompt it to generate a caption title and text appropriate for the image it sees.
- The generated caption text is then positioned over the original image to complete the task. The positioning is calculated from the number of characters produced, using the Code node (a short sketch follows this section).
- An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/fauto,qauto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

**Requirements**
- Google Gemini API key.
- Access to Google Drive.

**Customising the workflow**
- Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it: try GPT-4o, Claude, or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap it out for other triggers, such as webhooks, to fit your use case.
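The positioning step is the part most people tweak. Below is a minimal sketch of the kind of logic the Code node could apply, assuming an 800x600 px resized image and a `caption_text` field coming from the LLM node; both the dimensions and the field name are illustrative, not taken from the workflow itself.

```javascript
// n8n Code node: estimate where to place the caption based on its length.
// Assumes an 800x600 resized image and a rough average glyph width; tune for your font.
const items = $input.all();

return items.map(item => {
  const caption = item.json.caption_text || '';
  const imageWidth = 800;
  const avgCharWidth = 10;                                   // rough estimate at the chosen font size
  const textWidth = Math.min(caption.length * avgCharWidth, imageWidth - 40);
  const x = Math.round((imageWidth - textWidth) / 2);        // centre the text horizontally
  const y = 520;                                             // fixed band near the bottom edge

  return {
    json: { ...item.json, overlayX: x, overlayY: y },
    binary: item.binary,                                     // keep the image for the compositing step
  };
});
```

The computed coordinates can then be fed into the Edit Image node's text overlay options.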
Stripe payment order sync – auto retrieve customer & product purchased
**Overview**
This automation template streamlines your payment processing by triggering automatically upon a successful Stripe payment. The workflow retrieves the complete payment session and filters the information down to the customer name, customer email, and the purchased product details. It is perfect for quickly integrating Stripe transactions into your inventory management, CRM, or notification systems.

**Step-by-step setup instructions**
1. Stripe account configuration: Ensure you have an active Stripe account and connect your Stripe credentials.
2. Retrieve product and customer data: Use Stripe's API within the automation to fetch the purchased product details and customer information such as email and full name (a minimal sketch of this filtering step follows below).
3. Integration and response: Map the retrieved data to your desired format, then trigger subsequent nodes or actions such as sending a confirmation email, updating a CRM system, or logging the transaction.

**Pre-conditions and requirements**
- Stripe account: A valid Stripe account with access to API keys and webhook configurations.
- API keys: Have your Stripe secret and publishable keys ready.

**Customization guidance**
- Data mapping: Customize the filtering node to match your specific data schema or to include additional data fields if needed.
- Additional actions: Integrate further nodes to handle post-payment actions like sending SMS notifications, updating order statuses, or generating invoices.

Enjoy seamless integration and enhanced order management with this automation template!
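For illustration, here is a minimal sketch of the filtering step as an n8n Code node, assuming the previous node returns a Stripe Checkout Session retrieved with line items expanded. The field names follow Stripe's Checkout Session schema, but adapt them if your retrieval call or data shape differs.

```javascript
// n8n Code node: keep only the fields downstream nodes need.
const items = $input.all();

return items.map(item => {
  const session = item.json;                          // Stripe Checkout Session
  const lineItems = session.line_items?.data || [];   // present when retrieved with expand[]=line_items

  return {
    json: {
      customerName: session.customer_details?.name || null,
      customerEmail: session.customer_details?.email || null,
      products: lineItems.map(li => ({
        name: li.description,
        quantity: li.quantity,
        amountTotal: li.amount_total,
      })),
    },
  };
});
```

Keeping the mapping in one place makes it easy to add fields (for example currency or payment status) later without touching downstream nodes.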
Extract and analyze Truth Social posts for stock market impact with Airtop & Slack
**Trump-o-meter: Extract and Evaluate Truth Social Posts**

**Use case**
Automatically extracting posts from Donald Trump's Truth Social account and estimating their potential impact on the U.S. stock market enables teams to monitor high-profile communications that may influence financial markets. This automation streamlines intelligence gathering for analysts, traders, and policy observers.

**What this automation does**
The automation retrieves up to 3 posts from Donald Trump's Truth Social profile and outputs structured information including:
- Author name
- Image URL
- Post text
- Post URL
- Estimated stock market impact:
  - Direction: positive, negative, or neutral
  - Magnitude: None, Small, Medium, Large

**How it works**
1. Creates a browser session on Truth Social using an Airtop profile.
2. Navigates to https://truthsocial.com/@realDonaldTrump.
3. Uses a natural language prompt with a defined JSON schema to extract structured data for up to 3 posts.
4. Splits the results into individual post items.
5. Filters posts that contain actual content and have a non-zero estimated market impact (a minimal filter sketch follows below).
6. Sends selected posts and impact summaries to a Slack channel.
7. Terminates the browser session to clean up.

**Setup requirements**
- Airtop API key (free to generate).
- An Airtop profile that is connected and logged into Truth Social.
- A Slack workspace and an authorized app with write permissions to a target channel.

**Next steps**
- Integrate with trading signals: Link the output to financial alert systems or dashboards for timely insights.
- Expand monitoring: Extend to other high-impact accounts (e.g., politicians, CEOs).
- Enhance analysis: Add sentiment scoring or topic classification for deeper context.

**Legal disclaimer**
This tool is intended solely for informational and analytical purposes. The market impact estimations provided are speculative and should not be construed as financial advice. Do not make investment decisions based on this automation. Always consult a licensed financial advisor before making any trades.

Read more about the Trump-o-meter automation.
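Step 5's filter can be expressed as a small Code (or IF) node. The sketch below assumes each extracted post carries a `post_text` field and an `estimated_market_impact` object with `direction` and `magnitude` keys, mirroring the structure described above; your JSON schema's exact field names may differ.

```javascript
// n8n Code node: drop empty posts and posts with no estimated market impact.
const items = $input.all();

return items.filter(item => {
  const post = item.json;
  const hasContent = typeof post.post_text === 'string' && post.post_text.trim().length > 0;
  const impact = post.estimated_market_impact || {};
  const hasImpact = Boolean(impact.magnitude) && impact.magnitude !== 'None';
  return hasContent && hasImpact;
});
```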
Gmail assistant with full Gmail history RAG using OpenAI
🧠 **RAG with Full Gmail History + Real-Time Email Updates in RAG using OpenAI & Qdrant**

> Summary:
> This workflow listens for new Gmail messages, extracts and cleans email content, generates embeddings via OpenAI, stores them in a Qdrant vector database, and then enables a Retrieval-Augmented Generation (RAG) agent to answer user queries against those stored emails. It's designed for teams or bots that need conversational access to past emails.

---

🧑‍🤝‍🧑 **Who's it for**
- Support teams who want to surface past customer emails in chatbots or help-desk portals
- Sales ops that need AI-powered summaries and quick lookup of email histories
- Developers building RAG agents over email archives

---

⚙️ **How it works / What it does**
1. Trigger: The Gmail Trigger polls every minute for new messages.
2. Fetch & clean: Get Mail Data pulls the full message metadata and body, and a Code node normalizes the body (removes line breaks, collapses spaces). A short sketch of this step follows this section.
3. Embed & store: The Embeddings OpenAI node computes vector embeddings, and the Qdrant Vector Store node inserts embeddings plus metadata into the emails_history collection.
4. Batch processing: SplitInBatches handles large inbox loads in chunks of 50.
5. RAG interaction: When a chat message is received, the RAG Agent uses the Qdrant Email Vector Store as a tool to retrieve relevant email snippets before responding.
6. Memory: A Simple Memory buffer ensures the agent retains recent context.

---

🛠️ **How to set up**
1. n8n instance: Deploy n8n (self-hosted or via Coolify/Docker).
2. Credentials: Create an OAuth2 credential in n8n for Gmail (with Gmail API scopes) and add your OpenAI API key in n8n credentials.
3. Qdrant: Stand up a Qdrant instance (self-hosted or Qdrant Cloud) and note your host, port, and API key (if any).
4. Import workflow: In n8n, go to Workflows → Import and paste the workflow JSON. Ensure each credential reference (Gmail & OpenAI) matches your n8n credential IDs.
5. Test: Click Execute Workflow or send a test email to your Gmail, then monitor the n8n logs: you should see new points in Qdrant and RAG responses.

---

📋 **Requirements**
- n8n (self-hosted or Cloud)
- Gmail API enabled on a Google Cloud project
- OpenAI API access (with Embedding & Chat endpoints)
- Qdrant (self-hosted or cloud) with a collection named emails_history

---

🎨 **How to customize the workflow**
- Change collection name: Update qdrantCollection.value in all Qdrant nodes if you prefer a different collection.
- Adjust polling frequency: In the Gmail Trigger node, switch from everyMinute to everyFiveMinutes, or use a webhook-style trigger.
- Metadata tags: In the Enhanced Default Data Loader, tweak the metadataValues to tag by folder, label, or sender domain.
- Batch size: In SplitInBatches, change batchSize to suit your inbox volume.
- RAG Agent prompt: Customize the systemMessage in the RAG Agent node to set the assistant's tone, instruct on date handling, or add additional tools.
- Additional tools: Chain other n8n nodes (e.g., Slack, Discord) after the RAG Agent to broadcast AI answers to team channels.
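As referenced in the Fetch & clean step, the normalization amounts to a couple of regular expressions. A minimal sketch of the Code node, assuming the Gmail node exposes the body on a `text` field (adjust the field name to match your Gmail node's actual output):

```javascript
// n8n Code node: normalize the email body before embedding.
const items = $input.all();

return items.map(item => {
  const raw = item.json.text || '';
  const cleaned = raw
    .replace(/\r?\n+/g, ' ')   // remove line breaks
    .replace(/\s{2,}/g, ' ')   // collapse repeated whitespace
    .trim();

  return { json: { ...item.json, cleanedBody: cleaned } };
});
```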
Reduce LLM Costs with Semantic Caching using Redis Vector Store and HuggingFace
**Stop Paying for the Same Answer Twice**

Your LLM is answering the same questions over and over. "What's the weather?" "How's the weather today?" "Tell me about the weather." Same answer, three API calls, triple the cost. This workflow fixes that.

**What does it do?**
Semantic caching with superpowers. When someone asks a question, it checks whether you've answered something similar before. Not exact matches: semantic similarity. If it finds a match, boom, instant cached response. No LLM call, no cost, no waiting.

- First time: "What's your refund policy?" → calls the LLM, caches the answer
- Next time: "How do refunds work?" → instant cached response (it knows these are the same!)
- Result: faster responses + way lower API bills

**The flow**
1. A question comes in through the chat interface.
2. A vector search checks Redis for semantically similar past questions.
3. Smart decision: Cache hit? Return instantly. Cache miss? Ask the LLM. (A sketch of this decision follows below.)
4. New answers get cached automatically for next time.
5. Conversation memory keeps context across the whole chat.

It's like having a really smart memo pad that understands meaning, not just exact words.

**Quick start**
You'll need:
- OpenAI API key (for the chat model)
- Hugging Face API key (for embeddings)
- Redis 8.x (for vector magic)

Get it running:
1. Drop in your credentials.
2. Hit the chat interface.
3. Watch your API costs drop as the cache fills up.

That's it. No complex setup, no configuration hell.

**Tune it your way**
The distanceThreshold in the "Analyze results from store" node is your control knob:
- Lower (0.2): strict matching, fewer false positives, more LLM calls
- Higher (0.5): loose matching, more cache hits, occasional weird matches
- Default (0.3): sweet spot for most use cases

Play with it. Find what works for your questions.

**Hack it up**
Some ideas to get you started:
- Add TTL: make cached answers expire after a day/week/month
- Category filters: different caches for different topics
- Confidence scores: show users when they got a cached vs. fresh answer
- Analytics dashboard: track cache hit rates and cost savings
- Multi-language: the cache works across languages (embeddings are multilingual!)
- Custom embeddings: swap OpenAI for local models or other providers

**Real talk 💡**
When it shines:
- Customer support (same questions, different words)
- Documentation chatbots (limited knowledge base)
- FAQ systems (the obvious use case)
- Internal tools (repetitive queries)

When to skip it:
- Real-time data queries (stock prices, weather, etc.)
- Highly personalized responses
- Questions that need fresh context every time

Pro tip: Start with a higher threshold (0.4-0.5) and tighten it as you see what gets cached. Better to cache too much at first than miss obvious matches.

Built with n8n, Redis, Hugging Face, and OpenAI. Open source, self-hosted, completely under your control.
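For reference, the cache-hit decision described in "The flow" and tuned in "Tune it your way" boils down to comparing the best match's distance against the threshold. A minimal sketch of an "Analyze results from store"-style Code node, assuming the vector search returns items carrying `distance` and `answer` fields (both names are illustrative and should be matched to your store node's output):

```javascript
// n8n Code node: decide between cache hit and LLM call.
const distanceThreshold = 0.3;       // tune: lower = stricter matching
const matches = $input.all();

// Pick the closest match (smallest distance).
const best = matches
  .filter(m => typeof m.json.distance === 'number')
  .sort((a, b) => a.json.distance - b.json.distance)[0];

if (best && best.json.distance <= distanceThreshold) {
  // Cache hit: return the stored answer and skip the LLM branch.
  return [{ json: { cacheHit: true, answer: best.json.answer } }];
}

// Cache miss: route the question to the LLM, then cache its answer downstream.
return [{ json: { cacheHit: false } }];
```

An IF node downstream can then branch on `cacheHit` to either return the stored answer or call the chat model.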
Create high-converting sales copy with Hormozi Framework, LangChain & Google Docs
Note: This workflow assumes you already have your product's Amazon reviews saved in a Google Sheet. If you still need those reviews, run my Amazon Reviews Scraper workflow first, then plug the resulting spreadsheet into this template.

**What it does**
Transforms any draft Google Doc into multiple high-converting sales pages. It blends Alex Hormozi's value-stacking tactics with persona targeting based on Maslow's Hierarchy of Needs, using your own customer reviews for proof and voice of customer (VOC).

**Perfect for**
- Growth and creative strategists
- Freelance copywriters and agencies
- Founders sharpening offers and funnels

**Apps used**
Google Sheets, Google Docs, LangChain OpenRouter LLM

**How it works**
1. A Form Trigger collects Drive folder IDs, the base copy URL, and options.
2. The workflow fetches the draft copy and the product feature doc.
3. It samples reviews, extracts VOC insights, and maps them to Maslow needs.
4. The LLM drafts headlines and hooks following Hormozi's $100M Offers principles.
5. Personas drive tone, objections, and urgency in each copy variant.
6. A loop writes one Google Doc per variant in your chosen folder.
7. Customer analysis docs are saved to a second folder for reuse.

**Setup**
1. Share two Drive folders and copy the IDs (the text after folders/ in the URL; a small helper sketch follows below).
2. Paste each ID into Customer Analysis Folder ID and Advertorial Copy Folder ID.
3. Provide File Name, Base copy (Google Docs URL), and Product Feature/USPs Doc.
4. Optional: Reviews Sheet URL, Number of reviews to use, Target City.
5. Set the Number of Copies you need (1-20).
6. Add Google Docs OAuth2 and Google Sheets OAuth2 credentials in n8n.

If you have any questions about running the workflow, feel free to reach out to me at my YouTube channel: https://www.youtube.com/@lifeofhunyao
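If you would rather paste full folder URLs than raw IDs, a small Code node can pull the ID out for you. This sketch assumes a standard drive.google.com folder URL arriving on an illustrative `folderUrl` field; it is an optional convenience, not part of the template itself.

```javascript
// n8n Code node: extract the folder ID from a Google Drive folder URL.
const items = $input.all();

return items.map(item => {
  const url = item.json.folderUrl || '';                    // illustrative field name
  const match = url.match(/folders\/([a-zA-Z0-9_-]+)/);     // ID is the segment after "folders/"

  return { json: { ...item.json, folderId: match ? match[1] : null } };
});
```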
Automatically triage & improve Todoist tasks with GPT-4.1-mini
**How it works (high-level)**
This workflow automatically triages new tasks created in Todoist in the last 5 minutes. It improves the task description, assigns a priority (P1–P4), and sets a realistic due date based on your current workload.

**Main flow steps**
1. Schedule Trigger: runs at a chosen interval.
2. Get many tasks (Todoist): fetches all tasks created in the last 5 minutes.
3. AI Agent (LLM): receives the new task plus clear rules to:
   - Rewrite the task description in an imperative style.
   - Score and set the priority (1–4) using Impact × Urgency × Risk (a scoring sketch follows below).
   - Schedule a due date that respects workload and avoids overbooking.
4. getopentasks: provides the agent with the full list of open tasks to check daily capacity.
5. update_task: applies the improved description, chosen priority, and due date back into Todoist.

**Setup steps**
Time required: ~5-10 minutes.
1. Configure Todoist credentials (API token) and OpenAI credentials in the respective nodes.
2. Adjust the Schedule Trigger to how often you want the system to check for new tasks.
3. Optionally, fine-tune the scoring and scheduling rules inside the AI Agent system prompt.

ℹ️ More detailed instructions, reasoning frameworks, and constraints are included as sticky notes inside the workflow itself.
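To make the scoring rule concrete, here is a minimal sketch of the Impact × Urgency × Risk calculation expressed as a Code node you could use to sanity-check the agent's output. The 1-5 sub-scores and the band thresholds are illustrative defaults, not values taken from the workflow's prompt.

```javascript
// n8n Code node: combine Impact, Urgency, and Risk (each 1-5) into a P1-P4 priority.
const items = $input.all();

return items.map(item => {
  const { impact = 3, urgency = 3, risk = 1 } = item.json;   // illustrative sub-scores
  const score = impact * urgency * risk;                      // ranges from 1 to 125

  let priority;
  if (score >= 60) priority = 'P1';        // high impact, urgent, risky
  else if (score >= 30) priority = 'P2';
  else if (score >= 12) priority = 'P3';
  else priority = 'P4';

  return { json: { ...item.json, score, priority } };
});
```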
Carbon footprint tracker with ScrapeGraphAI analysis and Google Drive ESG reports
**Carbon Footprint Tracker with ScrapeGraphAI Analysis and ESG Reporting Automation**

🎯 **Target Audience**
- Sustainability managers and ESG officers
- Environmental compliance teams
- Corporate social responsibility (CSR) managers
- Energy and facilities managers
- Supply chain sustainability coordinators
- Environmental consultants
- Green building certification teams
- Climate action plan coordinators
- Regulatory compliance officers
- Corporate reporting and disclosure teams

🚀 **Problem Statement**
Manual carbon footprint calculation and ESG reporting is complex, time-consuming, and often inaccurate due to fragmented data sources and outdated emission factors. This template solves the challenge of automatically collecting environmental data, calculating accurate carbon footprints, identifying reduction opportunities, and generating comprehensive ESG reports using AI-powered data collection and automated sustainability workflows.

🔧 **How it Works**
This workflow automatically collects energy and transportation data using ScrapeGraphAI, calculates comprehensive carbon footprints across all three scopes, identifies reduction opportunities, and generates automated ESG reports for sustainability compliance and reporting.

Key components:
- Schedule Trigger: runs automatically every day at 8:00 AM to collect environmental data
- Energy Data Scraper: uses ScrapeGraphAI to extract energy consumption data and emission factors
- Transport Data Scraper: collects transportation emission factors and fuel efficiency data
- Footprint Calculator: calculates the comprehensive carbon footprint across Scope 1, 2, and 3 emissions
- Reduction Opportunity Finder: identifies cost-effective carbon reduction opportunities
- Sustainability Dashboard: creates comprehensive sustainability metrics and KPIs
- ESG Report Generator: automatically generates ESG compliance reports
- Create Reports Folder: organizes reports in Google Drive
- Save Report to Drive: stores final reports for stakeholder access

📊 **Carbon Footprint Analysis Specifications**
The template calculates and tracks the following emission categories (a minimal calculation sketch follows at the end of this section):

| Emission Scope | Category | Data Sources | Calculation Method | Example Output |
|----------------|----------|--------------|--------------------|----------------|
| Scope 1 (Direct) | Natural Gas | EPA emission factors | Consumption × 11.7 lbs CO2/therm | 23,400 lbs CO2 |
| Scope 1 (Direct) | Fleet Fuel | EPA fuel economy data | Miles ÷ MPG × 19.6 lbs CO2/gallon | 11,574 lbs CO2 |
| Scope 2 (Electricity) | Grid Electricity | EPA emission factors | kWh × 0.92 lbs CO2/kWh | 46,000 lbs CO2 |
| Scope 3 (Indirect) | Employee Commute | EPA transportation data | Miles × 0.77 lbs CO2/mile | 19,250 lbs CO2 |
| Scope 3 (Indirect) | Air Travel | EPA aviation factors | Miles × 0.53 lbs CO2/mile | 26,500 lbs CO2 |
| Scope 3 (Indirect) | Supply Chain | Estimated factors | Electricity × 0.1 multiplier | 4,600 lbs CO2 |

🛠️ **Setup Instructions**
Estimated setup time: 25-30 minutes

Prerequisites:
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Drive API access for report storage
- Organizational energy and transportation data
- ESG reporting requirements and standards

Step-by-step configuration:
1. Install community nodes:
```bash
# Install required community nodes
npm install n8n-nodes-scrapegraphai
```
2. Configure ScrapeGraphAI credentials: navigate to Credentials in your n8n instance, add new ScrapeGraphAI API credentials, enter your API key from the ScrapeGraphAI dashboard, and test the connection to ensure it's working.
3. Set up the Schedule Trigger: configure the daily schedule (default: 8:00 AM UTC), adjust the timezone to match your business hours, and set an appropriate frequency for your reporting needs.
4. Configure data sources: update the Energy Data Scraper with your energy consumption sources, configure the Transport Data Scraper with your transportation data, set up organizational data inputs (employees, consumption, etc.), and customize emission factors for your region and industry.
5. Customize carbon calculations: update the Footprint Calculator with your organizational data, configure scope boundaries and calculation methodologies, set up industry-specific emission factors, and adjust for renewable energy and offset programs.
6. Configure reduction analysis: update the Reduction Opportunity Finder with your investment criteria, set up cost-benefit analysis parameters, configure priority scoring algorithms, and define implementation timelines and effort levels.
7. Set up report generation: configure the Google Drive integration for report storage, set up ESG report templates and formats, define stakeholder access and permissions, and test report generation and delivery.
8. Test and validate: run the workflow manually with test data, verify all calculation steps complete successfully, check data accuracy and emission factor validity, and validate ESG report compliance and formatting.

🔄 **Workflow Customization Options**
- Modify data collection: add more energy data sources (renewables, waste, etc.), include additional transportation modes (rail, shipping, etc.), integrate with building management systems, and add real-time monitoring and IoT data sources.
- Extend the calculation framework: add more Scope 3 categories (waste, water, etc.), implement industry-specific calculation methodologies, include carbon offset and credit tracking, and add lifecycle assessment (LCA) capabilities.
- Customize reduction analysis: add more sophisticated ROI calculations, implement scenario modeling and forecasting, include regulatory compliance tracking, and add stakeholder engagement metrics.
- Output customization: add integration with sustainability reporting platforms, implement automated stakeholder notifications, create executive dashboards and visualizations, and add compliance monitoring and alert systems.

📈 **Use Cases**
- ESG compliance reporting: automate sustainability disclosure requirements
- Carbon reduction planning: identify and prioritize reduction opportunities
- Regulatory compliance: meet environmental reporting mandates
- Stakeholder communication: generate transparent sustainability reports
- Investment due diligence: provide ESG data for investors and lenders
- Supply chain sustainability: track and report on Scope 3 emissions

🚨 **Important Notes**
- Respect data source terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update emission factors for accuracy
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Ensure compliance with local environmental reporting regulations
- Validate calculations against industry standards and methodologies
- Maintain proper documentation for audit and verification purposes

🔧 **Troubleshooting**
Common issues:
- ScrapeGraphAI connection errors: verify API key and account status
- Data source access issues: check website accessibility and rate limits
- Calculation errors: verify emission factors and organizational data
- Report generation failures: check Google Drive permissions and quotas
- Schedule trigger failures: check timezone and cron expression
- Data accuracy issues: validate against manual calculations and industry benchmarks

Support resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- EPA emission factor databases and methodologies
- GHG Protocol standards and calculation guidelines
- ESG reporting frameworks and compliance requirements
- Sustainability reporting best practices and standards
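As referenced next to the specifications table, the Footprint Calculator's core arithmetic can be sketched as a Code node using the emission factors above. The consumption figures below are illustrative placeholders chosen so the per-category results roughly match the example outputs in the table; they are not organizational defaults from the template.

```javascript
// n8n Code node: Scope 1/2/3 footprint using the emission factors from the table above.
// All factors are in lbs CO2 per unit; replace the inputs with your organizational data.
const input = {
  naturalGasTherms: 2000,
  fleetMiles: 13000,
  fleetMpg: 22,
  electricityKwh: 50000,
  commuteMiles: 25000,
  airTravelMiles: 50000,
};

const scope1 =
  input.naturalGasTherms * 11.7 +                       // natural gas
  (input.fleetMiles / input.fleetMpg) * 19.6;           // fleet fuel

const scope2 = input.electricityKwh * 0.92;             // grid electricity

const scope3 =
  input.commuteMiles * 0.77 +                           // employee commute
  input.airTravelMiles * 0.53 +                         // air travel
  scope2 * 0.1;                                         // supply-chain estimate from the table

return [{
  json: {
    scope1LbsCo2: Math.round(scope1),
    scope2LbsCo2: Math.round(scope2),
    scope3LbsCo2: Math.round(scope3),
    totalLbsCo2: Math.round(scope1 + scope2 + scope3),
  },
}];
```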
Automated Google Sheet to CSV conversion via Slack messages
**Step 1: Slack Trigger**
The workflow starts whenever your Slack bot is mentioned or receives an event in a channel. The message that triggered it (including text and channel info) is passed into the workflow.

**Step 2: Extract the Sheet ID**
The workflow looks inside the Slack message for a Google Sheets link. If it finds one, it extracts the unique spreadsheet ID from that link (a small extraction sketch follows this section). It also keeps track of the Slack channel the message came from. If no link is found, the workflow stops quietly.

**Step 3: Read Data from the Google Sheet**
Using the sheet ID, the workflow connects to Google Sheets and reads the data from the chosen tab (the specific sheet inside the spreadsheet). This gives the workflow all the rows and columns of data from that tab.

**Step 4: Convert Data to CSV**
The rows pulled from Google Sheets are then converted into a CSV file. At this point, the workflow has the spreadsheet data neatly packaged as a file.

**Step 5: Upload CSV to Slack**
Finally, the workflow uploads the CSV file back into Slack. It can either be sent to a fixed channel or directly to the same channel where the request came from. Slack users in that channel will see the CSV as a file upload.

**How it works**
1. The workflow is triggered when your Slack bot is mentioned or receives a message.
2. It scans the message for a Google Sheets link. If a valid link is found, the workflow extracts the unique sheet ID.
3. It then connects to Google Sheets, reads the data from the specified tab, and converts it into a CSV file.
4. Finally, the CSV file is uploaded back into Slack so the requesting user (and others in the channel) can download it.

**How to use**
1. In Slack, mention your bot and include a Google Sheets link in your message.
2. The workflow will automatically pick up the link and process it.
3. Within a short time, the workflow will upload a CSV file back into the same Slack channel.
4. You can then download or share the CSV file directly from Slack.

**Requirements**
- Slack app & credentials: your bot must be installed in Slack with permissions to receive mentions and upload files.
- Google Sheets access: the Google account connected in n8n must have at least read access to the sheet.
- n8n setup: the workflow must be imported into n8n and connected to your Slack and Google Sheets credentials.
- Correct sheet tab: the workflow needs to know which tab of the spreadsheet to read (set by name or by sheet ID).

**Customising this workflow**
- Channel targeting: by default, the file can be sent back to the channel where the request came from. You can also set it to always post in a fixed channel.
- File naming: change the uploaded file name (e.g., include the sheet title or today's date).
- Sheet selection: adjust the configuration to read a specific tab, or allow the user to specify the tab in their Slack message.
- Error handling: add extra steps to send a Slack message if no valid link is detected, or if the Google Sheet cannot be accessed.
- Formatting: extend the workflow to clean, filter, or enrich the data before converting it into CSV.
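Step 2's extraction is a single regular expression over the message text. A minimal Code-node sketch, assuming the Slack trigger puts the message text and channel on `text` and `channel` fields (your trigger's exact output structure may differ):

```javascript
// n8n Code node: pull the spreadsheet ID out of a Google Sheets link in the Slack message.
const items = $input.all();

return items.map(item => {
  const text = item.json.text || '';                 // Slack message text
  const match = text.match(/docs\.google\.com\/spreadsheets\/d\/([a-zA-Z0-9-_]+)/);

  return {
    json: {
      sheetId: match ? match[1] : null,              // null = no valid link, stop quietly downstream
      channel: item.json.channel || null,            // keep the source channel for the upload step
    },
  };
});
```

An IF node can then check whether `sheetId` is set before the Google Sheets read runs.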
Send daily mortgage rate updates from Mortgage News Daily to messaging platforms
**AI-Powered Mortgage Rate Updates with Client Messaging**

Keep your clients informed without the repetitive work. This workflow automatically pulls the latest mortgage rates, cleans the data, and uses AI to craft polished messages you can send directly to clients. Whether you want professional emails, quick SMS-style updates, or CRM-ready messages, this setup saves time while making you look on top of the market.

**How it works**
1. Daily Trigger: runs on a schedule you choose (default: multiple times per day).
2. Fetch Rates: pulls the latest mortgage rates from Mortgage News Daily (you can swap in another source).
3. Clean Data: prepares and formats the raw rate data for messaging (a small formatting sketch follows below).
4. AI Messaging: uses Google AI Studio (Gemini) to generate text/email content that's clear, professional, and client-ready. You can customize the prompt to adjust tone or style, and include variables (like client names or CRM fields) for personalized outreach.
5. Send Updates: delivers the AI-crafted message to Discord by default, for you to copy and send to your clients or upload to your bulk iMessage or email tool. It can also be adapted for Slack, Telegram, WhatsApp, or Gmail.

**Why use this**
- Save hours: no more copy-pasting rates into client messages.
- Look prepared: clients see you as proactive, not reactive.
- Customizable: use AI prompts to match your personal voice, include client-specific details, or change the delivery channel.
- Scalable: works for one agent or an entire brokerage team.

With this workflow, by the time your client asks "what are rates today?", they'll already have a polished update waiting in their inbox or chat. 🚀
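The Clean Data step referenced above can be as simple as flattening the scraped rate rows into one compact summary string for the AI prompt. In this sketch the `product`, `rate`, and `change` field names are illustrative and should be matched to whatever your Fetch Rates step actually returns:

```javascript
// n8n Code node: shape scraped rate rows into a compact summary for the AI prompt.
const items = $input.all();

const lines = items.map(item => {
  const { product = '', rate = '', change = 0 } = item.json;   // illustrative field names
  const arrow = change > 0 ? '▲' : change < 0 ? '▼' : '→';
  return `${product}: ${rate}% ${arrow} ${Math.abs(change)}`;
});

return [{
  json: {
    rateSummary: lines.join('\n'),
    date: new Date().toISOString().slice(0, 10),   // e.g. "2024-05-01" for the message header
  },
}];
```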
Validate and enrich JotForm leads with Reoon email verification and Apollo in sheets
**Description**
This workflow automatically validates and enriches contact form submissions from JotForm, ensuring you only store high-quality leads with complete business information.

**Who's it for**
Marketing teams, sales professionals, and business owners who collect leads through forms and want to automatically verify email validity and enrich contact data before adding leads to their CRM or database.

**What it does**
When someone submits a contact form on JotForm, this workflow:
1. Captures the submission data (name, email, phone, message)
2. Creates a new record in Google Sheets
3. Verifies the email address using Reoon's email verification API
4. Saves validation metrics (deliverability, spam trap detection, disposable email check)
5. Filters out unsafe or invalid emails
6. Enriches validated contacts with professional data from Apollo (LinkedIn URL, job title, company name)
7. Updates the Google Sheet with the enriched information

**How it works**
1. JotForm Trigger: listens for new form submissions
2. Initial Storage: creates a contact record in Google Sheets with the basic form data
3. Email Verification: sends the email to the Reoon API for comprehensive validation
4. Save Verification Results: updates the sheet with email quality metrics
5. Safety Filter: only passes emails marked as "safe" to enrichment (see the sketch after this section)
6. Contact Enrichment: queries the Apollo API to find professional information
7. Final Update: saves the enriched data (LinkedIn, title, company) back to the sheet

**Requirements**
Services you'll need:
- JotForm account (free plan available)
- Reoon Email Verifier API access
- Apollo.io account for contact enrichment
- Google account for Google Sheets access

Setup steps:
1. Copy the Google Sheet template (make your own copy)
2. Create a JotForm with the fields: First Name, Last Name, E-mail, Phone, Message
3. Get your Reoon API key from their dashboard
4. Get your Apollo.io API key from account settings
5. Connect your Google Sheets account in n8n

**How to customize**
- Change verification level: modify the "mode" parameter in the Reoon API call (options: quick, power)
- Adjust filtering criteria: update the IF node to filter by different email quality metrics
- Add more enrichment: Apollo returns additional data fields you can map to your sheet
- Notification layer: add a Send Email or Slack node after enrichment to notify your team of high-quality leads
- CRM integration: replace or supplement Google Sheets with HubSpot, Salesforce, or Pipedrive nodes

This workflow provides a complete lead qualification pipeline that saves time and ensures only high-quality, validated contacts make it into your database, complete with enriched professional information.
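Step 5's safety filter is a single comparison. A minimal Code-node sketch of the same check, assuming Reoon's verification result is stored on the item with a `status` field whose value is "safe" for deliverable addresses; verify the exact field name and values against your own Reoon response before relying on it:

```javascript
// n8n Code node: pass only contacts whose email Reoon marked as safe.
const items = $input.all();

return items.filter(item => {
  const verification = item.json;               // Reoon verification result for this contact
  return verification.status === 'safe';        // assumed field/value; adjust to your API response
});
```

The same condition can of course live in the IF node mentioned under "How to customize" if you prefer to keep it configurable without code.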
Curate & translate news from RSS using Google Gemini, Sheets, and Slack
**🤖 Automated Multi-lingual News Curator & Archiver**

**Overview**
This workflow automates news monitoring by fetching RSS feeds, rewriting content using AI, translating it (EN/ZH/KO), and archiving it.

**Who is this for?**
Content curators, localization teams, and travel bloggers.

**How it works**
1. Fetch & filter: pulls the NHK RSS feed and filters for keywords (e.g., "Tokyo").
2. AI processing: Google Gemini rewrites articles, extracts locations, and translates the text.
3. Archive & notify: saves the structured data to Google Sheets and alerts Slack.

**Setup requirements**
- Credentials: Google Gemini, Google Sheets, Slack.
- Google Sheet: create the headers title, summary, location, en, zh, ko, url.
- Slack: configure the channel IDs.

**Customization**
- RSS Read: change the feed URL.
- If node: update the filter keywords.
- AI Agent: adjust the system prompts for tone.

**Fetch & Filter**
Runs on a schedule to fetch the latest RSS items. Filters articles based on specific keywords (e.g., "Tokyo" or "Season") before processing.

**AI Analysis & Parsing**
Uses Google Gemini to rewrite the news, extract specific locations, and translate the content. The Code node cleans the JSON output for the database (a cleaning sketch follows below).

**Archive & Notify**
Appends the structured data to Google Sheets and sends a formatted notification to Slack (or alerts if an article was skipped).

**Output Example (JSON)**
The translation agent outputs data in this format:

```json
{
  "en": "Tokyo Tower is...",
  "zh": "东京塔是...",
  "ko": "도쿄 타워는..."
}
```
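The cleaning step mentioned under AI Analysis & Parsing can be as simple as locating the JSON object inside the model's reply and parsing it. A minimal Code-node sketch, assuming the agent's reply arrives on a `text` field (the field name is illustrative; match it to your AI Agent node's output):

```javascript
// n8n Code node: isolate the JSON object in the model output and parse it for the sheet.
const items = $input.all();

return items.map(item => {
  const raw = item.json.text || '';                 // LLM output field (adjust to your agent node)
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  const jsonText = start >= 0 && end > start ? raw.slice(start, end + 1) : raw;

  let parsed;
  try {
    parsed = JSON.parse(jsonText);                  // expected keys: en, zh, ko
  } catch (e) {
    parsed = { error: 'Could not parse model output', raw };
  }

  return { json: parsed };
});
```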