Import JSON data into Google Sheets and CSV file
This workflow gets data from an API and exports it into Google Sheets and a CSV file.
Send a random recipe once a day to Telegram
This Telegram bot is designed to send one random recipe a day. This specific bot filters for vegan recipes only, but you can choose your own diet type and send only recipes for that diet. Credentials you need: a Telegram bot, Airtable (for listing who has joined your bot; this is needed to send one random recipe a day), and a recipe (or other) API. This one uses Spoonacular. I hope you enjoy your bot!
Download and compress folder from S3 to ZIP file
This workflow downloads all files from a specific folder in an S3 bucket and compresses them into a single ZIP so you can download it via n8n or process it further. Fill in your credentials and settings in the nodes marked with "*". It can serve as a blueprint or as a manual download tool for S3 folders. Since I found it rather tricky to compress all binary files into one ZIP file, I figured it might make an interesting template. Hint: this is the expression that collects every binary key so they can be compressed dynamically (used in the "Compress" node): {{ Object.keys($binary).join(',') }} Enjoy the workflow! ❤️ https://let-the-work-flow.com Workflow Automation & Development
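To illustrate why that expression works, here is a minimal JavaScript sketch with made-up binary property names (in n8n, an item's binary data is an object keyed by property name, e.g. after a multi-file S3 download):

```javascript
// Illustrative stand-in for an n8n item's binary data object.
// The file names below are examples, not taken from a real execution.
const $binary = {
  data: { fileName: "report.pdf", mimeType: "application/pdf" },
  data_1: { fileName: "logo.png", mimeType: "image/png" },
  data_2: { fileName: "notes.txt", mimeType: "text/plain" },
};

// The Compress node expects a comma-separated list of binary property
// names; joining the object keys produces exactly that.
const binaryKeys = Object.keys($binary).join(",");
console.log(binaryKeys); // → "data,data_1,data_2"
```

Because the keys are read dynamically, the expression keeps working no matter how many files the S3 folder contains.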
Generate videos from chat with Google Vertex AI (Veo3)
Generate Videos from Chat with Google Vertex AI (Veo3) - Beginner Friendly

Description
Turn any text prompt into a short AI-generated video directly from an n8n chat. This workflow connects a chat trigger to Google Vertex AI's Veo3 model, sending your prompt as input and polling until the rendered video is ready. Once complete, the video is converted into a downloadable file you can use anywhere. Perfect for experimenting with AI-driven media or automating creative video generation inside your workflows. Watch a step-by-step guide for this type of workflow here: www.youtube.com/@automatewithmarc

How It Works
- Chat Trigger – Start by typing your prompt into an n8n chat.
- Post to Vertex AI (Veo3) – Sends the prompt to the Veo3 API with parameters like aspect ratio, duration, and resolution.
- Wait + Poll Loop – Repeatedly checks the operation status until the video is finished.
- If + Edit Fields – Extracts the base64 video response and metadata.
- Convert to File – Turns the video into a binary file for download or use in further automations.

Why You'll Love It
⚡ Generate custom AI videos in minutes.
🗣️ Natural workflow – just type your idea in chat.
🎥 Flexible parameters – adjust resolution, aspect ratio, and duration.
🔗 Ready for integration – feed the output into Google Drive, Slack, or any connected app.

Requirements
- Google Cloud project with Vertex AI API enabled.
- Google OAuth credentials in n8n.
- n8n (Cloud or self-hosted).
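The Wait + Poll step can be sketched in plain JavaScript. `fetchOperation` below is a hypothetical stand-in for the Vertex AI long-running-operation status call, not a real client method:

```javascript
// Sketch of a poll loop for a long-running operation that reports
// { done, video }. Interval and attempt limits are illustrative.
async function pollUntilDone(fetchOperation, { intervalMs = 100, maxAttempts = 20 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const op = await fetchOperation();
    if (op.done) return op; // finished; the response carries the base64 video
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait, then poll again
  }
  throw new Error("Video generation timed out");
}

// Usage with a fake operation that completes on the third status check.
let checks = 0;
const fakeOperation = async () =>
  ++checks < 3 ? { done: false } : { done: true, video: "base64..." };
pollUntilDone(fakeOperation, { intervalMs: 1 }).then((op) => console.log(op.done));
```

In the workflow itself, the Wait node plays the role of the `setTimeout`, and the If node plays the role of the `op.done` check.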
Implement intelligent message buffering for AI chats with Redis and GPT-4-mini
This workflow solves a critical problem in AI chat implementations: handling multiple rapid messages naturally without creating processing bottlenecks. Unlike traditional approaches where every user waits in the same queue, this solution implements intelligent conditional buffering that allows each conversation to flow independently.

Key Features:
- Aggregates rapid user messages (like when someone types multiple lines quickly) into a single context
- Only the first message in a burst waits; subsequent messages skip the queue entirely
- Each user session operates independently with isolated Redis queues
- Reduces LLM API calls by 45% through intelligent message batching
- Maintains conversation memory for contextual responses

Perfect for: customer service bots, AI assistants, support systems, and any chat application where users naturally send multiple messages in quick succession. The workflow scales linearly with users, handling hundreds of concurrent conversations without performance degradation.

Some Use Cases:
- Customer support systems handling multiple concurrent conversations
- AI assistants that need to understand complete user thoughts before responding
- Educational chatbots where students ask multi-part questions
- Sales bots that need to capture complete customer inquiries
- Internal company AI agents processing complex employee requests
- Any scenario where users naturally communicate in message bursts

Why This Template? Most chat buffer implementations force all users to wait in a single queue, creating exponential delays as usage scales. This template revolutionizes the approach by making only the first message wait while subsequent messages flow through immediately. The result: natural conversations that scale effortlessly from one to hundreds of users without compromising response quality or speed.

Prerequisites: n8n instance (v1.0.0 or higher), Redis database connection, OpenAI API key (or alternative LLM provider), basic understanding of webhook configuration.

Tags: ai-chat, redis, buffer, scalable, conversation, langchain, openai, message-aggregation, customer-service, chatbot
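The conditional-buffering idea can be sketched with an in-memory Map standing in for per-session Redis lists (all names are illustrative, not the template's actual node code):

```javascript
// sessionId -> array of queued messages (stand-in for Redis LPUSH/LRANGE/DEL)
const buffers = new Map();

function onMessage(sessionId, text) {
  const existing = buffers.get(sessionId);
  if (existing) {
    existing.push(text); // burst follower: enqueue and return immediately
    return { role: "follower" };
  }
  buffers.set(sessionId, [text]); // burst leader: this call owns the wait
  return { role: "leader" };
}

// After the leader's wait expires, drain the buffer into one LLM call.
function drain(sessionId) {
  const messages = buffers.get(sessionId) ?? [];
  buffers.delete(sessionId);
  return messages.join("\n"); // aggregated context for a single LLM request
}

// Sessions stay isolated; only the first message per burst leads.
onMessage("alice", "hi");
onMessage("alice", "can you help me");
onMessage("bob", "hello");
console.log(drain("alice")); // → "hi\ncan you help me"
```

Only the leader blocks; followers enqueue and exit, which is why one slow conversation never delays another.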
Save Telegram text, voice & audio to Notion with DeepSeek & OpenAI summaries
Tired of manually copying and pasting Telegram messages into Notion? This n8n workflow solves that!

What it does
This workflow automates the process of saving your Telegram activity to Notion. Whether it's text chats, important voice memos, or shared audio files, "TeleNotion Scribe" captures it all. But it doesn't stop there: it also leverages AI to generate clear, concise summaries of your messages, giving you instant context and saving you time.

Key Features:
- Seamless Telegram integration: automatically triggers on new Telegram messages.
- Versatile content capture: saves text messages, voice notes, and audio files.
- AI-powered summarization: get instant summaries of your chats with advanced language models.
- Notion database automation: creates organized entries in your Notion database.
- Customizable: easily adapt the workflow to your specific Notion database structure.
- Time-saving: eliminate manual data entry and streamline your workflow.
- Improved organization: keep all your Telegram information neatly organized in Notion.

Who is this for?
- Project managers: track team communications and decisions.
- Researchers: log observations and data from chat groups.
- Note-takers: capture meeting discussions and action items.
- Anyone who wants to save and organize their Telegram chats!

What you'll get: the complete n8n workflow JSON file. Stop letting valuable information slip through the cracks. "TeleNotion Scribe" transforms your Telegram chats into actionable data!

Requirements: n8n instance (cloud or self-hosted), Telegram API credentials, Notion API integration token, OpenAI API key, DeepSeek API key.
Monitor Twitter accounts & generate intelligence summaries with Gemini AI & Telegram
This workflow automates the process of monitoring Twitter accounts for intelligence gathering. It fetches new tweets from specified accounts via RSS, uses a powerful AI model (Google Gemini) to analyze the content based on your custom prompts, and sends formatted alerts to a Telegram chat for high-priority findings.

Key Features:
- Scheduled execution: runs automatically at your desired interval.
- Dynamic configuration: manage which Twitter accounts to follow and which AI prompts to use directly from a Postgres database.
- AI-powered analysis: leverages Google Gemini to extract summaries and keywords and to assign an importance level to each tweet.
- Duplicate prevention: keeps track of the last processed tweet to ensure you only get new updates.
- Customizable alerts: sends well-structured, easy-to-read notifications to Telegram.

Setup Required:
- Postgres database: set up a table to store your configuration (see the Sticky Note in the workflow for the required schema).
- RSSHub: you need access to an RSSHub instance to convert Twitter user timelines into RSS feeds.
- Credentials: add your credentials for Postgres, Google AI (Gemini), and your Telegram Bot in n8n.
- Configuration: update the placeholder values in the RSS and Telegram nodes (e.g., your RSSHub URL, your Telegram Chat ID).
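The duplicate-prevention step can be sketched as follows, assuming newest-first RSS items with a `guid` field and a stored last-seen id. The in-memory object stands in for the Postgres configuration table; all names are illustrative:

```javascript
// Last processed tweet id per monitored account (stand-in for Postgres).
const lastSeen = { nasa: "tweet-102" };

function filterNewItems(account, items) {
  const marker = lastSeen[account];
  const idx = items.findIndex((it) => it.guid === marker);
  // Items are newest-first: everything before the marker is new.
  const fresh = idx === -1 ? items : items.slice(0, idx);
  if (fresh.length > 0) lastSeen[account] = fresh[0].guid; // advance marker
  return fresh;
}

const feed = [
  { guid: "tweet-104", title: "newest" },
  { guid: "tweet-103", title: "newer" },
  { guid: "tweet-102", title: "already processed" },
];
console.log(filterNewItems("nasa", feed).length); // → 2
```

Persisting the marker after each run is what guarantees a tweet is analyzed and alerted on at most once.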
Monitor CISA critical vulnerability alerts with RSS feed & Slack notifications
---

How It Works: The 5-Node Monitoring Flow

This concise workflow efficiently captures, filters, and delivers crucial cybersecurity-related mentions.

Monitor: Cybersecurity Keywords (X/Twitter Trigger)
This is the entry point of your workflow. It actively searches X (formerly Twitter) for tweets containing the specific keywords you define.
- Function: Continuously polls X for tweets that match your specified queries (e.g., your company name, "Log4j", "CVE-2024-XXXX", "ransomware").
- Process: As soon as a matching tweet is found, it triggers the workflow to begin processing that information.

Format Notification (Code Node)
This node transforms the raw tweet data into a clean, actionable message for your alerts.
- Function: Extracts key details from the raw tweet and structures them into a clear, concise message.
- Process: It pulls out the tweet's text, the user's handle (@screen_name), and the direct URL to the tweet, then combines them into a user-friendly notificationMessage. You can also include basic filtering logic here if needed.

Valid Mention? (If Node)
This node acts as a quick filter to reduce noise and prevent irrelevant alerts from reaching your team.
- Function: Serves as a simple conditional check to validate the mention's relevance.
- Process: It evaluates the notificationMessage against specific criteria (e.g., ensuring it doesn't contain common spam words like "bot"). If the mention passes this basic validation, the workflow continues; otherwise, it quietly ends for that particular tweet.

Send Notification (Slack Node)
This is the delivery mechanism for your alerts, ensuring your team receives instant, visible notifications.
- Function: Delivers the formatted alert message directly to your designated communication channel.
- Process: The notificationMessage is sent straight to your specified Slack channel (e.g., cyber-alerts or security-ops).
End Workflow (No-Op Node)
This node simply marks the successful completion of the workflow's execution path.
- Function: Indicates the end of the workflow's process for a given trigger.

---

How to Set Up

Implementing this simple cybersecurity monitor in your n8n instance is quick and straightforward.

Prepare Your Credentials
Before building the workflow, ensure all necessary accounts are set up and their respective credentials are ready for n8n.
- X (Twitter) API: You'll need an X (Twitter) developer account to create an application and obtain your Consumer Key/Secret and Access Token/Secret. Use these to set up your Twitter credential in n8n.
- Slack API: Set up your Slack credential in n8n. You'll also need the Channel ID of the Slack channel where you want your security alerts to be posted (e.g., security-alerts or it-ops).

Import the Workflow JSON
In your n8n instance, go to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the workflow JSON into the import dialog and import the workflow.

Configure the Nodes
Customize the imported workflow to fit your specific monitoring needs.
- Monitor: Cybersecurity Keywords (X/Twitter): Click on this node and select your newly created Twitter credential. CRITICAL: Modify the "Query" parameter to include your specific brand names, relevant CVEs, or general cybersecurity terms. For example: "YourCompany" OR "CVE-2024-1234" OR "phishing alert". Use OR to combine multiple terms.
- Send Notification (Slack): Click on this node and select your Slack credential. Replace "YOUR_SLACK_CHANNEL_ID" with the actual Channel ID you noted earlier for your security alerts.
- Optional: You can adjust the "Valid Mention?" node's condition if you find specific patterns of false positives in your search results that you want to filter out.

Test and Activate
Verify that your workflow is working correctly before setting it live.
- Manual Test: Click the "Test Workflow" button (usually in the top right corner of the n8n editor) to execute the workflow once.
- Verify Output: Check your specified Slack channel to confirm that any detected mentions are sent as notifications in the correct format. If no matching tweets are found, you won't see a notification, which is expected.
- Activate: Once you're satisfied with the test results, toggle the "Active" switch to ON. Your workflow will then automatically monitor X (Twitter) at the specified polling interval.

---
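As a sketch of what the Format Notification Code node might contain: the tweet field names below follow the classic X API v1.1 shape (`text`, `id_str`, `user.screen_name`) and are assumptions, not taken from the actual workflow JSON:

```javascript
// Build the notificationMessage the Slack node will post.
function formatNotification(tweet) {
  const url = `https://x.com/${tweet.user.screen_name}/status/${tweet.id_str}`;
  const notificationMessage =
    `🚨 New mention by @${tweet.user.screen_name}:\n` +
    `${tweet.text}\n${url}`;
  return { notificationMessage, url };
}

// Example tweet payload (illustrative values).
const sample = {
  text: "Critical: CVE-2024-1234 actively exploited",
  id_str: "17890",
  user: { screen_name: "secresearcher" },
};
console.log(formatNotification(sample).notificationMessage);
```

The "Valid Mention?" If node then checks this `notificationMessage` string before anything reaches Slack.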
My solution for the "Agentic Arena Community Contest" (RAG, Qdrant, Mistral OCR)
🤖📈 This workflow is my personal solution for the Agentic Arena Community Contest, where the goal is to build a Retrieval-Augmented Generation (RAG) AI agent capable of answering questions based on a provided PDF knowledge base.

---

Key Advantages

✅ End-to-End RAG Implementation: Fully automates the ingestion, processing, and retrieval of knowledge from PDFs into a vector database.
✅ Accuracy through Multi-Layered Retrieval: Combines embeddings, Qdrant search, and Cohere reranking to ensure the agent retrieves the most relevant policy information.
✅ Robust Evaluation System: Includes an automated correctness evaluation pipeline powered by GPT-4.1 as a judge, ensuring transparent scoring and continuous improvement.
✅ Citation-Driven Compliance: The AI agent is instructed to provide citations for every answer, making it suitable for high-stakes use cases like policy compliance.
✅ Scalability and Modularity: Can easily integrate with different data sources (Google Drive, APIs, other storage systems) and be extended to new use cases.
✅ Seamless Collaboration with Google Sheets: Both the evaluation set and the results are integrated with Google Sheets, enabling easy monitoring, iteration, and reporting.
✅ Cloud and Self-Hosted Flexibility: Works with self-hosted Qdrant on Hetzner, Mistral Cloud for OCR, and OpenAI/Cohere APIs, combining local control with powerful cloud AI services.

---

How it Works

Knowledge Base Ingestion (The "Setup" Execution): When started manually, the workflow first clears an existing Qdrant vector database collection. It then searches a specified Google Drive folder for PDF files. For each PDF found, it performs the following steps:
- Uploads the file to the Mistral AI API.
- Processes the PDF using Mistral's OCR service to extract text and convert it into a structured markdown format.
- Splits the text into manageable chunks.
- Generates embeddings for each text chunk using OpenAI's model.
- Stores the embeddings in the Qdrant vector store, creating a searchable knowledge base.

Agent Evaluation (The "Testing" Execution): The workflow is triggered by an evaluation Google Sheet containing questions and correct answers. For each question, the core AI Agent is activated. This agent:
- Uses the RAG tool to search the pre-populated Qdrant vector store for relevant information from the PDFs.
- Employs a Cohere reranker to refine the search results for the highest-quality context.
- Leverages a GPT-4.1 model to generate an answer based strictly on the retrieved context.

The agent's answer is then passed to an "LLM as a Judge" (another GPT-4.1 instance), which compares it to the ground-truth answer from the evaluation sheet. The judge provides a detailed score (1-5) based on factual correctness and citation accuracy. Finally, both the agent's answer and the correctness score are saved back to a Google Sheet for review.

---

Set up Steps

To implement this solution, you need to configure the following components and credentials:

Configure Core AI Services:
- OpenAI API credentials: required for the main AI agent, the judge LLM, and generating embeddings.
- Mistral AI API credentials: necessary for the OCR service that processes PDF files.
- Cohere API credentials: used for the reranker node that improves retrieval quality.
- Google service accounts: set up OAuth for Google Sheets (to read questions and save results) and Google Drive (to access the PDF source files).

Set up the Vector Database (Qdrant): This workflow uses a self-hosted Qdrant instance. You must deploy and configure your own Qdrant server. Update the Qdrant Vector Store and RAG nodes with the correct API endpoint URL and credentials for your Qdrant instance. Ensure the collection name (agentic-arena) is created or matches your setup.

Connect Data Sources:
- PDF Source: In the "Search PDFs" node, update the folderId parameter to point to your own Google Drive folder containing the contest PDFs.
- Evaluation Sheet: In the "Eval Set" node, update the documentId to point to your own copy of the evaluation Google Sheet containing the test questions and answers.
- Results Sheet: In the "Save Eval" node, update the documentId to point to the Google Sheet where you want to save the evaluation results.

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
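The "splits the text into manageable chunks" step of the ingestion can be sketched as a fixed-size splitter with overlap. The sizes below are illustrative, not the workflow's actual settings:

```javascript
// Split text into overlapping fixed-size chunks for embedding.
function splitIntoChunks(text, chunkSize = 20, overlap = 5) {
  const chunks = [];
  const step = chunkSize - overlap; // each chunk re-reads `overlap` chars of the previous one
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // the last chunk reached the end
  }
  return chunks;
}

// Each chunk would then be embedded (OpenAI) and upserted into Qdrant.
const sampleText = "abcdefghij".repeat(5); // 50 characters
const chunks = splitIntoChunks(sampleText);
console.log(chunks.length); // → 3
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.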
Generate AI photos with Gemini & auto-post to FB, Instagram & X with approval
Social Media Photo Creation Bot with Approval Loop

Create & Share AI Photos with Telegram, Gemini & Post to Facebook, Instagram & X

Description
This n8n workflow turns your Telegram messenger into a complete AI photo content pipeline. You send your photo idea as a text or voice message to a Telegram bot, collaborate with an AI to refine the prompt and social media caption, let Gemini generate the image, and then automatically publish it after your approval to Facebook, Instagram, and X (Twitter), including status tracking and Telegram confirmations.

---

What You Need to Get Started

This workflow connects several external services. You will need the following credentials:

Telegram Bot API Key
Create a bot via BotFather and copy the bot token. This is used by the Listen for incoming events node and the other Telegram nodes.

OpenAI API Key
Required for Speech to Text (OpenAI Whisper) to transcribe voice notes. Also used by the AI Agent model (OpenAI Chat Model) for prompt creation.

Google Gemini API Key
Used by the Generate an image node (model: models/gemini-2.5-flash-image) to create the AI image.

Google Drive & Sheets Access
The generated image is temporarily stored in a Google Drive folder (Upload image1) and later retrieved by Blotato. Prompts and post texts are logged to Google Sheets (Save Prompt & Post-Text) for tracking.

Blotato API Key
The layer for social media publishing. Uploads the image as a media asset (Upload media1) and creates posts for Facebook, Instagram, and X.

---

How the Workflow Operates – Step by Step

Input & Initial Processing (Telegram + Voice Handling)
This phase receives your messages and prepares the input for the AI.

| Node Name | Role in Workflow |
| :--- | :--- |
| Listen for incoming events | Telegram Trigger node that starts the workflow on any incoming message. |
| Voice or Text | Set node that structures the incoming message into a unified text field. |
| A Voice? | IF node that checks if the message is a voice note. |
| Get Voice File | If voice is detected, this downloads the audio file from Telegram. |
| Speech to Text | Uses OpenAI Whisper to convert the voice note into a text transcript. |

The output of this stage is always a clean text string containing your image idea.

AI Core & Refinement Loop (Prompt + Caption via AI)
Here, the AI drafts the image prompt (for Gemini) and the social media caption (for all platforms) and enters an approval loop with you.

| Node Name | Role in Workflow |
| :--- | :--- |
| AI Agent | Central logic agent. Creates a videoPrompt (used for image generation) and socialMediaText based on your idea, and asks for feedback. |
| OpenAI Chat Model | The LLM backing the agent (e.g., GPT-4.1-mini). |
| Window Buffer Memory | Stores recent turns, allowing the agent to maintain context during revisions. |
| Send questions or proposal to user | Sends the AI's suggestion back to you for review. |
| Approved from user? | IF node that checks if the output is the approved JSON (meaning you replied with "ok" or "approved"). |
| Parse AI Output | Code node that extracts the videoPrompt and socialMediaText fields from the agent's final JSON output. |

Content Generation & Final Approval
Once the prompt and caption are set, the image is created and sent to you for final approval before publishing.

| Node Name | Role in Workflow |
| :--- | :--- |
| Inform user about processing | Telegram node to confirm: "Okay. Your image is being prepared now..." |
| Save Prompt & Post-Text | Google Sheets node that logs the videoPrompt and socialMediaText. |
| Generate an image | Gemini node that creates the image based on the videoPrompt. |
| Send a photo message | Sends the generated image to Telegram for review. |
| Send message and wait for response | Telegram node that waits for your response to the image (e.g., "Good?" / "Approve"). |
| Upload image1 | Temporarily saves the generated image to Google Drive. |
| Download image from Drive | Downloads the image back from Drive. |
| If1 | IF node that checks if the image was approved in the previous step (approved == true). |

Upload & Publishing (Blotato)
After final approval, the image is uploaded to Blotato, and posts for the social media platforms are created.

| Node Name | Role in Workflow |
| :--- | :--- |
| Upload media1 | Blotato Media node. Uploads the approved image as a media asset and returns a public url. |
| Create instagram Post | Creates an Instagram post using the media URL and socialMediaText. |
| Create x post | Creates an X (Twitter) post using the media URL and socialMediaText. |
| Create FB post | Creates a Facebook post using the media URL and socialMediaText. |

Status Monitoring & Retry Loops (X, Facebook, Instagram)
An independent loop runs for each platform, polling Blotato until the post is either published or failed.

| Node Name | Role in Workflow |
| :--- | :--- |
| Wait, Wait1, Wait2 | Initial pauses after post creation. |
| Check Post Status, Get post1, Check Post Status1 | Blotato Get operations to fetch the current status of the post. |
| Published to X?, Published to Facebook?, Published to Instagram? | IF nodes checking for the "published" status. |
| Confirm publishing to X, Confirm publishing to Facebook, Confirm publishing to Instagram | Telegram nodes that notify you of successful publication (often including the post link). |
| In Progress?, In Progress?1, In Progress?2 | IF nodes that check for "in-progress" status and loop back to the Wait nodes (give Blotato another 5 s). |
| Send X Error Message, Send Facebook Error Message, Send Instagram Error Message | Telegram nodes that notify you if a failure occurs. |

---

🛠️ Personalizing Your Content Bot

The workflow is highly adaptable to your personal brand and platform preferences:

Tweak the AI Prompt & Behavior
- Where: In the AI Agent node, within the System Message.
- Options: Change the tone (casual, professional, humorous) and the level of detail required for the prompt generation or the social media captions.

Change Gemini Model or Image Options
- Where: In the Generate an image node.
- Options: Swap the model or adjust image options like aspect ratio or style based on Gemini's API capabilities.

Modify Which Platforms You Post To
- Where: In the Blotato nodes: Create instagram Post, Create x post, Create FB post.
- Options: Disable or delete branches for unused platforms, or add new platforms supported by Blotato.
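A sketch of what the Parse AI Output Code node might do, assuming the agent's final message is JSON (optionally wrapped in a markdown code fence) with the videoPrompt and socialMediaText fields named in the tables above. The fence-stripping step is an assumption about the model's output format:

```javascript
// Literal ``` built indirectly so this sample stays readable in markdown.
const FENCE = "`".repeat(3);

function parseAiOutput(raw) {
  // Remove an optional ```json ... ``` wrapper, then parse the JSON body.
  const cleaned = raw.split(FENCE + "json").join("").split(FENCE).join("").trim();
  const parsed = JSON.parse(cleaned);
  return {
    videoPrompt: parsed.videoPrompt ?? "",
    socialMediaText: parsed.socialMediaText ?? "",
  };
}

// Example agent reply (illustrative values).
const agentReply =
  FENCE + 'json\n{"videoPrompt": "a cat on a surfboard", "socialMediaText": "Surfs up!"}\n' + FENCE;
console.log(parseAiOutput(agentReply).videoPrompt); // → "a cat on a surfboard"
```

Defaulting missing fields to empty strings keeps the downstream Gemini and Blotato nodes from receiving undefined values.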
Send daily weather updates via a push notification using Spontit
No description available.
Automate real estate client folder creation with Google Sheets and Drive
What this workflow does
This workflow automates backend setup tasks for real estate client portals. When a new property transaction is added to your Google Sheets database with a buyer email but no document folder assigned, the workflow automatically creates a dedicated Google Drive folder, updates the spreadsheet with the folder URL, and adds an initial task prompting the client to upload documents. This automation eliminates manual folder creation and task assignment, ensuring every new transaction has its documentation infrastructure ready from day one. Your clients can access their dedicated folder directly from the portal, keeping all property-related documents organized and accessible in one place.

Key benefits
- Eliminate manual setup: no more creating folders and tasks individually for each transaction
- Consistent client experience: every buyer gets the same professional onboarding process
- Organized documentation: each transaction has its own Google Drive folder, automatically shared with the client
- Time savings: focus on closing deals instead of administrative setup

Setup requirements
Important: you must make a copy of the reference Google Sheets spreadsheet to your own Google account before using this workflow. Your spreadsheet needs at minimum two tabs:
- Transactions tab: columns for ID, Buyer Email, Documents URL, Property Address, and Status
- Tasks tab: columns for Transaction ID, Task Name, Task Description, and Status

Configuration steps
1. Authenticate your Google Sheets and Google Drive accounts in n8n.
2. Update the Google Sheets trigger node to point to your copied spreadsheet.
3. Set the parent folder ID in the "Create Client Documents Folder" node (where transaction folders should be created).
4. Customize the initial task name and description in the "Add Initial Upload Task" node.
5. Verify all sheet names match your spreadsheet tabs.

The workflow triggers every minute, checking for new transactions that meet the criteria (has a buyer email, missing a documents URL).
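The trigger's selection criteria can be sketched as a simple row filter over the Transactions tab (column names as listed above; the sample rows are illustrative):

```javascript
// A transaction qualifies when it has a buyer email but no documents URL yet.
function needsFolder(row) {
  const hasEmail =
    typeof row["Buyer Email"] === "string" && row["Buyer Email"].trim() !== "";
  const missingUrl = !row["Documents URL"] || row["Documents URL"].trim() === "";
  return hasEmail && missingUrl;
}

const rows = [
  { ID: 1, "Buyer Email": "buyer@example.com", "Documents URL": "" },
  { ID: 2, "Buyer Email": "", "Documents URL": "" },
  { ID: 3, "Buyer Email": "other@example.com", "Documents URL": "https://drive.google.com/..." },
];
console.log(rows.filter(needsFolder).map((r) => r.ID)); // → [ 1 ]
```

Because the Documents URL column is filled in once the folder is created, a processed row never matches the filter again, which is what makes the every-minute polling idempotent.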