AI news research team: 24/7 newsletter automation with citations using Perplexity
[Video overview](https://youtu.be/sKJAypXDTLA)

Purpose of workflow
This AI-powered workflow automatically generates detailed, well-researched newsletters by monitoring and analyzing specified news topics (like Bitcoin, Nvidia, etc.). It uses a team of AI research agents to gather, analyze, and compile information with automatic citations, saving significant time in newsletter creation.

How it works
Multi-agent system:
- Research Leader: analyzes topics and creates the content outline
- Project Planner: breaks down research into specific tasks
- Research Assistants: conduct detailed research on assigned subtopics
- Editor: combines the research and polishes the final output

Key features:
- Automated daily monitoring of specified news topics
- Real-time information gathering using Perplexity AI
- Auto-citation functionality for source verification
- Flexible time window filtering (day/week/month)
- Options for detailed or simple reports
- Direct email delivery of completed newsletters

Step-by-step setup
Perplexity API setup:
1. Create an account at perplexity.ai
2. Navigate to the API tab
3. Generate an API key
4. Set up credentials with 'Bearer' authentication

Workflow configuration:
- Connect the Google Sheet containing your news monitoring topics
- Configure the schedule trigger for daily execution
- Set up email delivery settings
- Define report type preferences (detailed/simple)
- Specify the time window for news gathering

Integration:
- Connect with newsletter tools like Kit
- Import the generated content as a starting point
- Edit and customize as needed before publishing
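For orientation, here is a minimal sketch (plain JavaScript, Node 18+) of the kind of request the research agents send to Perplexity. The endpoint follows Perplexity's public docs; the model name and environment variable are placeholders to replace with your own:

```javascript
// Minimal sketch of a Perplexity chat-completions call with Bearer auth,
// as configured in the setup steps above. Verify the model name against
// what your Perplexity plan supports.
async function askPerplexity(question) {
  const res = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'sonar', // placeholder model name
      messages: [
        { role: 'system', content: 'You are a news research assistant. Cite sources.' },
        { role: 'user', content: question },
      ],
    }),
  });
  const data = await res.json();
  // The response includes citations alongside the completion, which is what
  // drives the workflow's auto-citation feature.
  return data.choices[0].message.content;
}

askPerplexity('Summarize the last 24 hours of Bitcoin news.').then(console.log);
```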
Transfer data from Postgres to Excel
Reads data from Postgres, converts it to XLS, and saves it to disk.
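Outside n8n, the same pipeline can be sketched with the pg and xlsx (SheetJS) packages. This is an illustrative equivalent, not the template itself, which uses the Postgres, Spreadsheet File, and Write Binary File nodes; the table name and connection string are placeholders:

```javascript
// Illustrative equivalent outside n8n, using the 'pg' and 'xlsx' (SheetJS)
// packages. Table name and connection string are placeholders.
const { Client } = require('pg');
const XLSX = require('xlsx');

async function exportTable() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query('SELECT * FROM my_table');
  await client.end();

  // Build a worksheet from the row objects and write an .xls file to disk.
  const sheet = XLSX.utils.json_to_sheet(rows);
  const book = XLSX.utils.book_new();
  XLSX.utils.book_append_sheet(book, sheet, 'Export');
  XLSX.writeFile(book, 'export.xls');
}

exportTable().catch(console.error);
```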
Backs up n8n workflows to Nextcloud
A temporary solution that uses n8n's undocumented REST API to back up workflows to Nextcloud, with file versioning.
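As a rough sketch of the idea, here is the same backup loop written against n8n's current public API (the template predates it and calls an undocumented endpoint) and Nextcloud's WebDAV PUT; all URLs, paths, and credentials below are placeholders:

```javascript
// Rough sketch: fetch all workflows from n8n's public API and PUT each one
// into Nextcloud via WebDAV. Nextcloud keeps previous file revisions
// automatically, which provides the versioning mentioned above.
async function backupWorkflows() {
  const res = await fetch(`${process.env.N8N_URL}/api/v1/workflows`, {
    headers: { 'X-N8N-API-KEY': process.env.N8N_API_KEY },
  });
  const { data: workflows } = await res.json();

  for (const wf of workflows) {
    const auth = Buffer.from(`${process.env.NC_USER}:${process.env.NC_PASS}`).toString('base64');
    await fetch(
      `${process.env.NC_URL}/remote.php/dav/files/${process.env.NC_USER}/n8n-backups/${wf.id}.json`,
      {
        method: 'PUT',
        headers: { Authorization: `Basic ${auth}`, 'Content-Type': 'application/json' },
        body: JSON.stringify(wf, null, 2),
      }
    );
  }
}

backupWorkflows().catch(console.error);
```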
Extract and merge Twitter (X) threads using TwitterAPI.io
Twitter (X) Thread Fetcher: Extract and Merge Tweets from Threads

What it does
- Thread detection: automatically detects whether the provided Twitter link is a single tweet or a thread.
- Tweet extraction: fetches and returns the content of a single tweet, or gathers all tweets in a thread by retrieving the first tweet and all connected replies.
- Thread merging: merges all tweets in the correct order to reconstruct the complete thread, filtering out any empty results.
- Seamless integration: easily trigger this workflow from other n8n workflows to automate Twitter thread extraction from various sources.

How it works
- Accepts a Twitter link as input, either a single tweet or a thread.
- If the link is for a single tweet, fetches and returns the tweet content.
- If the link is for a thread, fetches the first tweet, then iteratively retrieves all connected replies that form the thread, ensuring only relevant tweets are included.
- Merges the first tweet and all subsequent thread tweets in order, filters out any empty results, and returns the complete thread (see the merge sketch after this description).
- Uses twitterapi.io for all Twitter API requests.

Set up steps
Setup typically takes just a few minutes. You'll need to configure your twitterapi.io credentials. You can trigger this workflow manually for testing or call it from another workflow to automate thread fetching from sources like Notion, spreadsheets, or other platforms. For best results, create a separate workflow that gathers Twitter links from your preferred source, then trigger this workflow to fetch and return the full thread.

> Detailed configuration instructions and node explanations are included as sticky notes within the workflow canvas.

Benefits
- Light speed: fetches a 15-tweet thread in just 3 seconds for rapid results.
- Cost effective: processes a 15-tweet thread for only $0.0027, making it highly affordable. (Cost may vary depending on the density of replies in the thread.)
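A minimal Code-node-style sketch of the merge step, assuming illustrative field names (text, createdAt) that you should map to the actual twitterapi.io response shape:

```javascript
// Sketch of the merge step as an n8n Code node: keep non-empty tweets and
// restore chronological order. Field names are illustrative.
const tweets = $input.all().map((item) => item.json);

const thread = tweets
  .filter((t) => t && t.text && t.text.trim().length > 0) // drop empty results
  .sort((a, b) => new Date(a.createdAt) - new Date(b.createdAt)); // first tweet to last reply

return [{
  json: {
    tweetCount: thread.length,
    fullThread: thread.map((t) => t.text).join('\n\n'),
  },
}];
```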
Monitor changes in Google Sheets every 45 mins
Based on your use case, you might want to trigger a workflow when new data gets added to your database. This workflow sends a message to Mattermost when new data is added to Google Sheets. The Interval node triggers the workflow every 45 minutes; adjust the timing to fit your needs, or use the Cron node instead. To fetch new tweets from Twitter, replace the Google Sheets node with the Twitter node and update the Function node accordingly.
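A sketch of what the Function node's new-row detection could look like, assuming each row carries a unique id column; it keeps a small seen-set in workflow static data (which only persists for active, trigger-started workflows):

```javascript
// Sketch of a Function node that passes only rows not seen on a previous run.
// Assumes a unique 'id' column; adapt the key to your sheet's columns.
const staticData = getWorkflowStaticData('global');
const seen = new Set(staticData.seenIds || []);

const newRows = items.filter((item) => !seen.has(item.json.id));
newRows.forEach((item) => seen.add(item.json.id));
staticData.seenIds = [...seen];

// Only new rows continue on to the Mattermost node.
return newRows;
```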
Send an email using AWS SES
A companion workflow for the AWS SES node documentation.
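For comparison, the same send outside n8n with AWS SDK v3 looks roughly like this; addresses and region are placeholders, and the workflow itself only needs the SES node plus AWS credentials configured in n8n:

```javascript
// Sketch: sending an email with @aws-sdk/client-ses. The Source address
// must be a verified SES identity; all values below are placeholders.
const { SESClient, SendEmailCommand } = require('@aws-sdk/client-ses');

async function main() {
  const ses = new SESClient({ region: 'us-east-1' });
  await ses.send(
    new SendEmailCommand({
      Source: 'sender@example.com',
      Destination: { ToAddresses: ['recipient@example.com'] },
      Message: {
        Subject: { Data: 'Hello from SES' },
        Body: { Text: { Data: 'Sent via the AWS SES companion workflow.' } },
      },
    })
  );
}

main().catch(console.error);
```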
Generate 3D models & textures from images with Hunyuan3D AI
Generate 3D Models & Textures from Images with Hunyuan3D AI

This workflow connects n8n → Replicate API to generate 3D-like outputs using the ndreca/hunyuan3d-2.1-test model. It handles everything: sending the request, waiting for processing, checking status, and returning results.

---

⚡ Section 1: Trigger & Setup

⚙️ Nodes
1️⃣ On Clicking "Execute"
What it does: Starts the workflow manually in n8n.
Why it's useful: Great for testing or one-off runs before automation.
2️⃣ Set API Key
What it does: Stores your Replicate API key.
Why it's useful: Keeps authentication secure and reusable across HTTP nodes.

💡 Beginner Benefit
No coding needed — just paste your API key once.
Easy to test: press Execute, and you're live.

---

🤖 Section 2: Send Job to Replicate

⚙️ Nodes
3️⃣ Create Prediction (HTTP Request)
What it does: Sends a POST request to Replicate's API with:
- Model version (70d0d816...ae75f)
- Input image URL
- Parameters like steps, seed, generate_texture, remove_background
Why it's useful: This kicks off the AI generation job on Replicate's servers.
4️⃣ Extract Prediction ID (Code)
What it does: Grabs the prediction ID from the API response and builds a status-check URL.
Why it's useful: Every job has a unique ID — this lets us track progress later.

💡 Beginner Benefit
You don't need to worry about JSON parsing — the workflow extracts the ID automatically.
Everything is reusable if you run multiple generations.

---

⏳ Section 3: Poll Until Complete

⚙️ Nodes
5️⃣ Wait (2s)
What it does: Pauses for 2 seconds before checking the job status.
Why it's useful: Prevents spamming the API with too many requests.
6️⃣ Check Prediction Status (HTTP Request)
What it does: GET request to see if the job is finished.
7️⃣ Check If Complete (IF Node)
What it does: If status = succeeded → process results. If not → loops back to Wait and checks again.

💡 Beginner Benefit
Handles waiting logic for you — no manual refreshing needed.
Keeps looping until the AI job is really done.

---

📦 Section 4: Process the Result

⚙️ Nodes
8️⃣ Process Result (Code)
What it does: Extracts:
- status
- output (final generated file/URL)
- metrics (performance stats)
- timestamps (created_at, completed_at)
- model info
Why it's useful: Packages the response neatly for storage, email, or sending elsewhere.

💡 Beginner Benefit
Get clean, structured data ready for saving or sending.
Can be extended easily: push output to Google Drive, Notion, or Slack.

---

📊 Workflow Overview

| Section | What happens | Key Nodes | Benefit |
| --- | --- | --- | --- |
| ⚡ Trigger & Setup | Start workflow + set API key | Manual Trigger, Set | Easy one-click start |
| 🤖 Send Job | Send input & get prediction ID | Create Prediction, Extract ID | Launches AI generation |
| ⏳ Poll Until Complete | Waits + checks status until ready | Wait, Check Status, IF | Automated loop, no manual refresh |
| 📦 Process Result | Collects output & metrics | Process Result | Clean result for next steps |

---

🎯 Overall Benefits
✅ Fully automates Replicate model runs
✅ Handles waiting, retries, and completion checks
✅ Clean final output with status + metrics
✅ Beginner-friendly — just add API key + input image
✅ Extensible: connect results to Google Sheets, Gmail, Slack, or databases

---

✨ In short: This is a no-code AI image-to-3D content generator powered by Replicate and automated by n8n.

---
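A compact sketch of the same create-and-poll loop in plain JavaScript (Node 18+). The model version hash is truncated in the description, so VERSION is a placeholder, and the auth header convention should be checked against Replicate's current docs:

```javascript
// Sketch of the create-and-poll pattern the workflow builds from
// HTTP Request, Wait, and IF nodes.
async function runPrediction() {
  const headers = {
    Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
    'Content-Type': 'application/json',
  };

  // Section 2: create the prediction.
  const create = await fetch('https://api.replicate.com/v1/predictions', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      version: 'VERSION', // full hunyuan3d-2.1 version hash (truncated above)
      input: { image: 'https://example.com/chair.png', steps: 50, generate_texture: true },
    }),
  });
  let prediction = await create.json();

  // Section 3: wait 2 seconds, re-check status, loop until a terminal state.
  while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
    await new Promise((r) => setTimeout(r, 2000));
    prediction = await (await fetch(prediction.urls.get, { headers })).json();
  }

  // Section 4: status, output URL(s), and performance metrics.
  console.log(prediction.status, prediction.output, prediction.metrics);
}

runPrediction();
```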
Create daily newsletter digests from Gmail using GPT-4.1-mini
Summary
Every day at a set time, this workflow fetches yesterday's newsletters from Gmail, summarizes each email into concise topics with an LLM, merges all topics, renders a clean HTML digest, and emails it to your inbox.

What this workflow does
- Triggers on a daily schedule (default 16:00, server time)
- Fetches Gmail messages since yesterday using a custom search query with optional sender filters
- Retrieves and decodes each email's HTML, subject, sender name, and date
- Prompts an LLM (GPT-4.1-mini) to produce a consistent JSON summary of topics per email
- Merges topics from all emails into a single list
- Renders a styled HTML email with enumerated items
- Sends the HTML digest to a specified recipient via Gmail

Apps and credentials
- Gmail OAuth2: Gmail account (read and send)
- OpenAI: OpenAI account

Typical use cases
- Daily/weekly newsletter rollups delivered as one email
- Curated digests from specific media or authors
- Team briefings that are easy to read and forward

How it works (node-by-node)
1. Schedule Trigger: fires at the configured hour (default 16:00).
2. Get many messages (Gmail → getAll, returnAll: true): uses a filter like
   =(from:@.com) OR (from:@.com) OR (from:@.com -"") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}
   (the sender placeholders are left for you to fill in) and returns a list of message IDs from the past day.
3. Loop Over Items (Split in Batches): iterates through each message ID.
4. Get a message (Gmail → get): retrieves the full message payload for the current email.
5. Get message data (Code): extracts HTML from Gmail's MIME parts, normalizes the sender to just the display name, formats the date as DD.MM.YYYY, and passes html, subject, from, date forward.
6. Clean (Code): converts DD.MM.YYYY → MM.DD (for prompt brevity) and passes html, subject, from, date to the LLM.
7. Message a model (OpenAI, model: gpt-4.1-mini, JSON output): the prompt instructs the model to produce JSON of the form { "topics": [ { "title", "descr", "subject", "from", "date" } ] }, split multi-news blocks into separate topics, combine or ignore specific blocks for particular senders (placeholders), and keep the subject untranslated while writing other values in the target language. Subject/from/date/html are injected from the current email.
8. Loop Over Items (continues): processes all emails in the time window.
9. Merge (Code): flattens the topics arrays from all processed emails into one combined topics list (see the sketch below).
10. Create template (Code): builds a complete HTML email with enumerated items (title, one-line description), the original subject and "from — date", safely escaped HTML with preserved line breaks, and inline, email-friendly styles.
11. Send a message (Gmail → send): sends the final HTML to your recipient with a custom subject.
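A minimal sketch of the Merge (Code) node, assuming each OpenAI item returns a topics array as the JSON prompt requests:

```javascript
// Sketch of the Merge (Code) node: flatten each processed email's topics
// array into one combined list.
const allTopics = [];
for (const item of $input.all()) {
  allTopics.push(...(item.json.topics || []));
}
return [{ json: { topics: allTopics } }];
```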
Node map

| Node | Type | Purpose |
|---|---|---|
| Schedule Trigger | Trigger | Run at a specific time each day |
| Get many messages | Gmail (getAll) | Search emails since yesterday with filters |
| Loop Over Items | Split in Batches | Iterate messages one-by-one |
| Get a message | Gmail (get) | Fetch full message payload |
| Get message data | Code | Extract HTML/subject/from/date; normalize sender and date |
| Clean | Code | Reformat date and forward fields to LLM |
| Message a model | OpenAI | Summarize email into JSON topics |
| Merge | Code | Merge topics from all emails |
| Create template | Code | Render a styled HTML email digest |
| Send a message | Gmail (send) | Deliver the digest email |

Before you start
- Connect Gmail OAuth2 in n8n (ensure it has both read and send permissions)
- Add your OpenAI API key
- Import the provided workflow JSON into n8n

Setup instructions

1) Schedule
- Schedule Trigger node: set your preferred hour (server time). Default is 16:00.

2) Gmail
- Get many messages: adjust filters.q to your senders/labels and window. Example: =(from:news@publisher.com) OR (from:briefs@media.com -"promo") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}. You can use label: or category: to narrow scope.
- Send a message: sendTo = your email; subject = your subject line; message = {{ $json.htmlBody }} (already produced by Create template). The HTML body uses inline styles for broad email client support.

3) OpenAI
- Message a model: model gpt-4.1-mini (swap to gpt-4o-mini or your preferred).
- Update prompt placeholders: language → your target language; sender rules → special cases (combine blocks, ignore sections).

How to use
The workflow runs daily at the scheduled time, compiling a digest from yesterday's emails. You'll receive one HTML email with all topics neatly listed. Adjust the time window or filters to change what gets included.

Customization ideas
- Time window control: after: {{ $now.minus({ days: X }) }} and/or add before:
- Filter by labels: q = label:Newsletters after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}
- Language: set the language in the LLM prompt
- Template: edit "Create template" to add a header, footer, hero section, logo/branding
- Links: include links parsed from HTML (add an HTML parser step in "Get message data")
- Subject line: make it dynamic, e.g., "Digest for {{ $now.toFormat('dd.MM.yyyy') }}"
- Sender: use a dedicated Gmail account or alias for deliverability and separation

Limits and notes
- Gmail's size limit for outgoing emails is ~25 MB; large digests may need pruning
- LLM usage incurs cost and latency proportional to email size and count
- HTML rendering varies across clients; inline styles are used for compatibility
- The schedule uses the n8n server's timezone; adjust if your server runs in a different TZ

Privacy and safety
- Emails are sent to OpenAI for summarization; ensure this aligns with your data policies
- Limit the Gmail search scope to only the newsletters you want processed
- Avoid including sensitive emails in the search window

Sample output (email body)
1. Title 1
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY
2. Title 2
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY

Tips and troubleshooting
- No emails found? Check filters.q and the time window (after:)
- Model returns empty JSON? Simplify the prompt or try another model
- Odd characters in output? The template escapes HTML and preserves line breaks; verify your input encoding
- Delivery issues? Use a verified sender, set a clear subject, and avoid spammy keywords

Tags
gmail, openai, llm, newsletters, digest, summarization, email, automation

Changelog
v1: Initial release with scheduled time window, sender filters, LLM summarization, topic merging, and HTML email template rendering
Auto-reply to FAQs on WhatsApp using Postgres (Module "FAQ")
Who is this for?
This workflow is designed for businesses or individuals who want to automate responses to frequently asked questions via WhatsApp, while managing their question-and-answer database in Postgres.

What problem is this workflow solving?
This workflow addresses the challenge of efficiently managing and automating responses to customer inquiries. It helps reduce manual effort and ensures quick access to information, while providing an option for customers to request live assistance when needed.

What this workflow does
- Stores questions and answers in a Postgres database.
- Shares a link to a bot with customers, enabling them to browse the available questions and answers.
- Lets a customer who does not find an answer to their query request a consultation with a manager.

Setup
1. Create tables in the Postgres DB: replace "n8n" in the provided SQL script with the name of your schema, then run the script in your database to set up the necessary tables. (The script is available in the workflow.)
2. Add credentials: set up the WhatsApp integration (OAuth and Account credentials) and connect the Postgres database by adding the necessary credentials.

How to customize this workflow to your needs
- Modify the Postgres schema name in the SQL script to match your database configuration.
- Update the questions and answers in the database to suit the information you want to provide via the bot.
- Customize the WhatsApp integration settings to match your account credentials and API details.
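As an illustration of the reply side, a Code node following the Postgres query could format the FAQ list for WhatsApp roughly like this (the question column name is an assumption; match it to the tables the SQL script creates):

```javascript
// Sketch of a Code node that turns FAQ rows from the Postgres node into a
// numbered WhatsApp menu. Column names are assumptions.
const faqs = $input.all().map((item) => item.json);

const menu = faqs
  .map((faq, i) => `${i + 1}. ${faq.question}`)
  .join('\n');

return [{
  json: {
    reply: `Hi! Reply with a number to get the answer:\n${menu}\n\n0. Talk to a manager`,
  },
}];
```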
Repurpose YouTube videos and publish via Blotato with Telegram, Sheets and GPT-4.1-mini
💥 Automate YouTube Video Creation and Publishing with Blotato

Who is this for?
This workflow is designed for YouTube creators, content marketers, automation builders, and agencies who want to repurpose existing YouTube videos into new original content and automate the publishing process. It is especially useful for users already working with Telegram, Google Sheets, OpenAI, and Blotato.

---

What problem is this workflow solving? / Use case
Creating YouTube content at scale is time-consuming: extracting ideas from existing videos, rewriting scripts, generating SEO metadata, tracking content, and publishing videos all require manual work across multiple tools. This workflow solves that by:
- Automating content analysis and rewriting
- Centralizing tracking and approvals in Google Sheets
- Automating YouTube publishing via Blotato

---

What this workflow does
This workflow automates the full YouTube video repurposing and publishing pipeline:
1. Receives a YouTube video URL and instructions via Telegram
2. Logs the request in Google Sheets
3. Extracts the YouTube video ID (see the sketch at the end of this description)
4. Retrieves the video transcript via RapidAPI
5. Cleans and normalizes the transcript
6. Generates a new original video script using OpenAI
7. Generates SEO metadata (title, description, tags) in strict JSON format
8. Updates Google Sheets with the generated content
9. Waits for approval (status = ready)
10. Uploads the final video to Blotato
11. Publishes the video on YouTube
12. Updates the status to publish in Google Sheets

---

Setup
To use this workflow, you need to configure the following services:

Google services
- Enable the Google Sheets API in Google Cloud Console
- Create OAuth2 credentials
- Add credentials in n8n: Google Sheets OAuth2 API (credential name: Google Sheets account)
- My Google Sheets: copy

RapidAPI (YouTube transcript)
- Sign up at RapidAPI
- Subscribe to "YouTube Video Summarizer GPT AI"
- Get your API key
- Update it in the Workflow Configuration node

Blotato (video publishing)
- Sign up at Blotato
- Get API credentials
- Add credentials in n8n: Blotato API (credential name: Blotato account)
- Connect your YouTube account via Blotato

---

How to customize this workflow to your needs
You can easily adapt this workflow by:
- Changing the output language (output_lang) in the configuration node
- Modifying the OpenAI prompts to match your tone or niche
- Adjusting the Google Sheets columns or approval logic
- Replacing YouTube with another platform supported by Blotato
- Extending the workflow to generate shorts, reels, or multi-platform posts

The workflow is modular and designed to be extended without breaking the core logic.

🎥 Watch This Tutorial

---

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
📄 Documentation: Notion Guide
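A sketch of the video-ID extraction step as a Code node; the input field name is an assumption, and the regex covers the common URL shapes (watch?v=, youtu.be/, shorts/, embed/):

```javascript
// Sketch of the video-ID extraction step. The field name 'videoUrl' is
// illustrative; adapt it to the Telegram input your workflow receives.
const url = $input.first().json.videoUrl;
const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return [{ json: { videoId: match[1] } }];
```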
Convert URL-encoded webhook data from amoCRM to structured array
This template processes webhooks received from amoCRM in a URL-encoded format and transforms the data into a structured array that n8n can easily interpret. By default, n8n does not automatically parse URL-encoded webhook payloads into usable JSON. This template bridges that gap, enabling seamless data manipulation and integration with subsequent processing nodes.

Key features:
- Input handling: processes URL-encoded data received from amoCRM webhooks.
- Data transformation: converts complex, nested keys into a structured JSON array.
- Ease of use: simplifies access to specific fields for further workflow automation.

Setup guide:
1. Webhook Trigger node: configure the Webhook Trigger node to receive data from amoCRM.
2. URL-encoding parsing: use the provided nodes to transform the input URL-encoded data into a structured array.
3. Access transformed data: use the resulting JSON structure in subsequent nodes of your workflow, such as filtering, updating records, or triggering external systems.

Example data transformation:

Sample input (URL-encoded), as typically received from amoCRM:
$json.body['leads[update][0][custom_fields][0][id]']

Output (structured JSON), after processing:
{{ $json.leads.update['0'].custom_fields['0'].id }}

This output lets you work with clean, structured JSON, simplifying field extraction and workflow continuation.

Code explanation:
This workflow parses URL-encoded key-value pairs using n8n nodes to restructure the data into a nested JSON object. By doing so, the template improves transparency, ensures data integrity, and makes further automation tasks straightforward.
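A minimal hand-rolled version of the unflattening idea, written as an n8n Code node; the actual template may structure its nodes differently:

```javascript
// Sketch: turn bracketed keys like "leads[update][0][custom_fields][0][id]"
// into a nested object, matching the access pattern shown above.
const flat = $input.first().json.body;
const result = {};

for (const [rawKey, value] of Object.entries(flat)) {
  // "leads[update][0][id]" -> ["leads", "update", "0", "id"]
  const path = rawKey.replace(/\]/g, '').split('[');
  let node = result;
  path.forEach((segment, i) => {
    if (i === path.length - 1) {
      node[segment] = value;
    } else {
      node[segment] = node[segment] || {};
      node = node[segment];
    }
  });
}

return [{ json: result }];
```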
📦 AI-powered damage reporting tool for logistics with Gmail, Telegram and GPT
Tags: Logistics, Supply Chain, Warehouse Operations, Paperless Processes, Quality Management

Context
Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen.

> Let us use n8n to help small companies digitalise their logistics and supply chains!

This workflow helps warehouse operators generate a complete damage report without needing to write anything manually. In warehouse operations, damaged pallets must be reported quickly and consistently. You can automate the entire process using AI to analyse photos of the damage.

📬 For business inquiries, you can find me on LinkedIn.

Example of damage report
1. The process starts with instructions sent to the operator.
2. A photo of the damaged pallets is shared with the bot.
3. A complete report is generated and sent by email.

🎥 Tutorial
A complete tutorial (with explanations of every node) is available on YouTube: https://youtu.be/3Xdo1pzd8rw

Who is this template for?
This template is ideal for companies with limited IT resources:
- Warehouse operators who need a fast reporting tool
- Quality teams who want consistent and structured reports
- 3PLs and logistics providers looking to digitalise damage claims
- Manufacturers and retailers with high inbound pallet volumes
- Anyone using Telegram on the warehouse floor for quick interactions

What does this workflow do?
This workflow acts as an AI-powered damaged goods reporting assistant using Telegram, OpenAI Vision and Gmail:
1. An operator sends a picture of the damaged pallet via Telegram.
2. The workflow downloads the image and sends it to GPT-4o for damage analysis.
3. The bot replies and asks for a photo of the pallet barcode.
4. The barcode picture is processed by GPT-4o Mini to extract the pallet number.
5. The workflow combines both results (damage analysis + pallet ID).
6. It generates an HTML email report with: damage summary, observed issues, severity level and recommended actions.
7. The report is automatically sent via Gmail to the configured recipient.
8. The operator receives a confirmation message in Telegram.

The process does not require any data input from the operator: they only need to take pictures!

Next Steps
Before running the workflow, follow the sticky notes and configure:
- Connect your Telegram Bot API
- Add your OpenAI API key in the AI nodes
- Connect your Gmail credentials
- Update the recipient email in the "Send Report by Email" node

Submitted: 20 November 2025
Template designed with n8n version 1.116.2
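To make the assembly step concrete, here is a sketch of how the combined results can be turned into the HTML report; the node names and field names are assumptions to align with your own AI nodes:

```javascript
// Sketch of the report-assembly step: merge the damage analysis and the
// extracted pallet ID into the HTML email body. Node names ('Analyze Damage',
// 'Extract Pallet ID') and field names are assumptions.
const damage = $('Analyze Damage').first().json;      // GPT-4o vision output
const barcode = $('Extract Pallet ID').first().json;  // GPT-4o Mini output

const htmlBody = `
  <h2>Damage Report: Pallet ${barcode.palletId}</h2>
  <p><strong>Severity:</strong> ${damage.severity}</p>
  <p><strong>Summary:</strong> ${damage.damageSummary}</p>
  <ul>${damage.issues.map((issue) => `<li>${issue}</li>`).join('')}</ul>
  <p><strong>Recommended actions:</strong> ${damage.recommendedActions}</p>
`;

return [{ json: { htmlBody, subject: `Damage report for pallet ${barcode.palletId}` } }];
```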