Clone viral TikToks with AI avatars & auto-post to 9 platforms using Perplexity & Blotato
Who is this for?
This workflow is perfect for:
- Content creators looking to repurpose viral content
- Social media managers who want to scale short-form content across multiple platforms
- Entrepreneurs and marketers aiming to save time and boost visibility with AI-powered automation

What problem is this workflow solving?
Reproducing viral video formats with your own branding and pushing them to multiple platforms is time-consuming and hard to scale. This workflow solves that by:
- Cloning a viral TikTok video’s structure
- Generating a new version with your avatar
- Rewriting the script, caption, and overlay text
- Auto-posting it to 9 social media platforms — without manual uploads

What this workflow does
From a simple Telegram message with a TikTok link, the workflow:
1. Downloads the TikTok video and extracts its thumbnail, audio, and caption
2. Transcribes the audio and saves the original text to Google Sheets
3. Uses Perplexity AI to suggest a new content idea in the same niche
4. Rewrites the script, caption, and overlay using GPT-4o
5. Generates a new video with your avatar using Captions.ai
6. Adds subtitles and overlay text with JSON2Video
7. Saves metadata to Google Sheets for tracking
8. Sends the final video to Telegram for preview
9. Auto-publishes the video to Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, X (Twitter), Pinterest, and Bluesky via Blotato

Setup
1. Connect your Telegram bot to the trigger node.
2. Add your OpenAI, Perplexity, Cloudinary, Captions.ai, and Blotato API keys.
3. Make sure your Google Sheet is ready with the appropriate columns.
4. Replace the default avatar name in the Captions.ai node with yours.
5. Fill in your social media account IDs in the "Assign Platform IDs" node.
6. Test by sending a TikTok URL to your Telegram bot.

How to customize this workflow to your needs
- Change the avatar output style: adjust resolution, voice, or avatar ID.
- Refine the script structure: tweak the GPT instructions for a different tone or format.
- Swap Perplexity with ChatGPT or Claude if needed.
- Filter by platform: disable any Blotato nodes you don’t need.
- Add an approval step: insert a Telegram confirmation node before publishing.
- Adjust the subtitle style or overlay text font in JSON2Video.

📄 Documentation: Notion Guide

---

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
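As a rough illustration of the "Assign Platform IDs" step, an n8n Code node could hold a platform-to-account-ID map and fan out one posting job per platform. All IDs and field names below are placeholders, not Blotato's actual API shape:

```javascript
// Hypothetical mapping of the 9 target platforms to Blotato account IDs.
// Replace each placeholder with your own account ID.
const platformIds = {
  instagram: "ig-12345",
  youtube: "yt-12345",
  tiktok: "tt-12345",
  facebook: "fb-12345",
  linkedin: "li-12345",
  threads: "th-12345",
  twitter: "tw-12345",
  pinterest: "pi-12345",
  bluesky: "bs-12345",
};

// Build one posting job per platform for the finished video.
function buildPostJobs(videoUrl, caption, ids = platformIds) {
  return Object.entries(ids).map(([platform, accountId]) => ({
    platform,
    accountId,
    mediaUrl: videoUrl,
    text: caption,
  }));
}
```

Disabling a platform is then just a matter of removing its entry from the map.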
Send multiple emails in Gmail directly via Google Sheets
[Video for workflow process](https://www.canva.com/design/DAF8VnLBBQA/BKog1CSHs7goAYse3mEzQ/watch?utm_content=DAF8VnLBBQA&utm_campaign=designshare&utm_medium=link&utm_source=editor)

In today's fast-paced digital world, businesses are constantly seeking ways to streamline their processes and enhance customer engagement. One powerful tool that facilitates these goals is n8n, an automation platform that lets users build workflows to automate repetitive tasks.

Benefits of the Workflow:
- Efficiency: By automating the process of sending emails to customers based on data from Google Sheets, this workflow significantly reduces manual effort and saves time.
- Accuracy: Emails reach the right recipients at the right time, because items are filtered on specific conditions and the current date.
- Personalization: Emails can be personalized from the information in the Google Sheet, enhancing customer engagement.
- Real-time Updates: The workflow writes the status of each sent email back to the Google Sheet, giving real-time insight into the communication process.
- Consistency: Automation keeps communication with customers consistent, ensuring a seamless experience.

Workflow Overview:
1. The workflow begins with the "Google Sheets Trigger" node, which monitors a specified Google Sheet for new rows.
2. When a new row is detected, the "Filter Status (Waiting for sending)" node filters items based on specific conditions.
3. The "Filter Items by Current Date" node then keeps only the items matching the current date.
4. The filtered items are forwarded to the "Gmail" node, where personalized emails are composed and sent to recipients based on the Google Sheet data.
5. Finally, the "Google Sheets" node updates the sheet with the status of the sent emails and other relevant information.

Copy this template to get started: Google Sheets

Workflow Nodes Documentation:
- Schedule Trigger
- Filter Items by Current Date
- Gmail
- Google Sheets
- Filter Status (Waiting for sending)
- Set data
- Merge feild

Conclusion:
This n8n workflow presents a powerful solution for automating email communication based on Google Sheets data. By leveraging automation, businesses can improve operational efficiency, accuracy, and customer engagement. The seamless integration of nodes streamlines the communication process and ensures timely, personalized interactions with customers.
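The "Filter Items by Current Date" step can be sketched as a small Code-node function. The column names (`status`, `sendDate`) are assumptions standing in for whatever headers your sheet actually uses:

```javascript
// Minimal sketch of the date filter: keep only rows whose sendDate
// (YYYY-MM-DD) matches today and whose status is still pending.
// "status" and "sendDate" are placeholder column names.
function filterDueRows(rows, today = new Date().toISOString().slice(0, 10)) {
  return rows.filter(
    (row) => row.status === "Waiting for sending" && row.sendDate === today
  );
}
```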
ChatGPT Automatic Code Review in Gitlab MR
Who this template is for
This template is for any engineer who wants to automate code reviews or simply get a second opinion on a merge request.

How it works
This workflow automatically reviews the changes in a GitLab merge request using the power of AI. It triggers whenever you comment "+0" on a GitLab merge request, fetches the code changes, analyzes them with GPT, and replies to the MR discussion.

Set up Steps
1. Set up a webhook for note_events in your GitLab repository (see here on how to do it).
2. Configure your ChatGPT credentials.
3. Comment "+0" on a merge request to trigger an automatic review by ChatGPT.
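The trigger condition can be sketched as a guard over GitLab's note-event webhook payload. This is a hedged illustration following GitLab's documented note event schema, not the workflow's exact expression:

```javascript
// Fire only when a GitLab note_events webhook carries a "+0" comment
// on a merge request (not on an issue, commit, or snippet).
function shouldTriggerReview(payload) {
  return (
    payload.object_kind === "note" &&
    payload.object_attributes?.noteable_type === "MergeRequest" &&
    payload.object_attributes?.note?.trim() === "+0"
  );
}
```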
Automate blog creation in brand voice with AI
This n8n template demonstrates a simple approach to using AI to automate the generation of blog content that aligns with your organisation's brand voice and style, using examples of previously published articles. In a way, it's quick-and-dirty "training" that can get your automated content generation strategy up and running for very little effort and cost while you evaluate your AI content pipeline.

How it works
- In this demonstration, the n8n.io blog is used as the source of existing published content, and the 5 latest articles are imported via the HTTP node.
- The HTML node extracts the article bodies, which are then converted to markdown for our LLMs.
- We use LLM nodes to (1) understand the article structure and writing style and (2) identify the brand voice characteristics used in the posts. These are then used as guidelines in our final LLM node when generating new articles.
- Finally, a draft is saved to Wordpress for human editors to review or use as a starting point for their own articles.

How to use
- Update Step 1 to fetch data from your desired blog, or change it to fetch existing content in a different way.
- Update Step 5 to provide your new article instruction. For optimal output, theme topics relevant to your brand.

Requirements
- A source of text-heavy content is required to accurately break down the brand voice and article style. Don't have your own? Maybe try your competitors'.
- OpenAI for the LLM — though I recommend exploring other models which may give subjectively better results.
- Wordpress for the blog, but feel free to use other preferred publishing platforms.

Customising this workflow
- Ideally, you'd want to "train" your agent on material similar to your intended output, i.e. your social media posts may not get the best results from your blog content due to differing formats.
- Typically, this brand voice extraction exercise should run once and then be cached somewhere for reuse later. This saves on generation time and the overall cost of the workflow.
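The final generation step combines the two analysis outputs into one prompt. A minimal sketch of that assembly (the wording and function name are illustrative, not the template's actual prompt):

```javascript
// Merge the structure analysis and brand-voice analysis into a single
// system prompt for the article-writing LLM node.
function buildWriterPrompt(structureNotes, voiceNotes, topic) {
  return [
    "You are a writer for our blog.",
    "Follow this article structure:\n" + structureNotes,
    "Match this brand voice:\n" + voiceNotes,
    "Write a new article about: " + topic,
  ].join("\n\n");
}
```

Caching `structureNotes` and `voiceNotes` after the first run is what the "run once and cache" customisation above refers to.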
🛻 AI agent for logistics order processing with GPT-4o, Gmail and Google Sheet
Tags: Supply Chain, Logistics, AI Agents

Context
Hey! I’m Samir, a Supply Chain Data Scientist from Paris and the founder of LogiGreen Consulting. We design tools to help companies improve their logistics processes using data analytics, AI, and automation—to reduce costs and minimize environmental impacts.

>Let’s use N8N to improve logistics operations!

📬 For business inquiries, you can add me on LinkedIn

Who is this template for?
This workflow template is designed for logistics or manufacturing operations that receive orders by email.

[](https://www.youtube.com/watch?v=kQ8dO_30SB0)

The example above illustrates the challenge we want to tackle: using an AI Agent to parse the order information and load it into a Google Sheet. If you want to understand how I built this workflow, check my detailed tutorial:

[](https://www.youtube.com/watch?v=kQ8dO_30SB0) 🎥 Step-by-Step Tutorial

How does it work?
1. The workflow is connected to a Gmail Trigger that opens every email with "Inbound Order" in its subject.
2. The email is parsed by an AI Agent equipped with OpenAI's GPT to collect all the order information.
3. The results are written to a Google Sheet.

These order lines can then be transferred to warehouse teams to prepare order receiving.

What do I need to get started?
You’ll need:
- Gmail and Google Drive accounts with the API credentials to access them via n8n
- An OpenAI API key (GPT-4o) for the chat model
- A Google Sheet with these columns: PONUMBER, EXPECTEDDELIVERY DATE, SKU_ID, QUANTITY

Next Steps
Follow the sticky notes in the workflow to configure each node and start using AI to support your logistics operations. 🚀

Curious how N8N can transform your logistics operations?
📬 Let’s connect on LinkedIn

Notes
- An example email is included in the template so you can try it with your own mailbox.
- This workflow was built using N8N version 1.82.1

Submitted: March 28, 2025
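A hedged sketch of how the agent's parsed output could be flattened into sheet rows matching the columns above. The input shape (`poNumber`, `expectedDelivery`, `lines`) is an assumption about the agent's structured output, not the template's actual schema:

```javascript
// Turn one parsed order into one row per order line, keyed by the
// Google Sheet column names from the template.
function toSheetRows(order) {
  return order.lines.map((line) => ({
    PONUMBER: order.poNumber,
    "EXPECTEDDELIVERY DATE": order.expectedDelivery,
    SKU_ID: line.sku,
    QUANTITY: line.quantity,
  }));
}
```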
Extract text from images with Telegram Bot & OCR Tesseractjs
Description
This n8n workflow enables users to send an image to a Telegram bot and receive the extracted text using Tesseract OCR (via the n8n-nodes-tesseractjs community node). It's a quick and straightforward way to convert images into readable text directly through chat.

How it Works
1. The workflow listens for new image messages coming in via the Telegram bot.
2. Once an image is received, it downloads the image file from Telegram (which initially arrives as application/octet-stream).
3. The image data, now properly identified, is sent to the Tesseract OCR node to extract the text.
4. Finally, the recognized text is sent back as a reply to the Telegram user.

Setup Steps
1. Install Community Node: Ensure you have installed n8n-nodes-tesseractjs in your n8n instance.
2. Connect Telegram Bot: Configure the Telegram Trigger node with your Telegram bot.
3. Bot Token: Add your Telegram bot token to the Send Message node to send replies.
4. Deploy & Test: Activate the workflow and send an image to your Telegram bot to test.
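The "properly identified" step above can be sketched as a Code-node function that retags the downloaded binary before OCR. The item shape mirrors n8n's binary data layout but is an assumption here; the binary property name (`data`) may differ in your workflow:

```javascript
// Telegram downloads arrive as application/octet-stream; retag them
// with an image MIME type so the Tesseract node treats them as images.
function fixMimeType(item, mimeType = "image/jpeg") {
  const binary = item.binary?.data;
  if (binary && binary.mimeType === "application/octet-stream") {
    binary.mimeType = mimeType;
  }
  return item;
}
```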
Web site scraper for LLMs with Airtop
Recursive Web Scraping

Use Case
Automating web scraping with recursive depth is ideal for collecting content across multiple linked pages—perfect for content aggregation, lead generation, or research projects.

What This Automation Does
This automation reads a list of URLs from a Google Sheet, scrapes each page, stores the content in a document, and adds newly discovered links back to the sheet. It continues this process for a specified number of iterations based on the defined scraping depth.

Input Parameters:
- Seed URL: The starting URL for the scraping process. Example: https://example.com/
- Links must contain: Restricts followed links to those that contain this string. Example: https://example.com/
- Depth: The number of iterations (layers of links) to scrape beyond the initial set. Example: 3

How It Works
1. Starts by reading the Seed URL from the Google Sheet.
2. Scrapes each page and saves its content to the specified document.
3. Extracts new links from each page that match the "Links must contain" string and appends them to the Google Sheet.
4. Repeats steps 2–3 for the number of times specified by Depth - 1.

Setup Requirements
- Airtop API Key — free to generate.
- Credentials for Google Docs (requires creating a project in the Google Console). Read how to.
- Credentials for Google Sheets.

Next Steps
- Add Filtering Rules: Filter which links to follow based on domain, path, or content type.
- Combine with a Scheduler: Run this automation on a schedule to continuously explore newly discovered pages.
- Export Structured Data: Extend the process to store extracted data in a CSV or database for analysis.

Read more about website scraping for LLMs
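The depth-limited crawl above can be sketched as a small breadth-first loop. This is an illustration of the control flow only; a real run scrapes pages via Airtop, so the page-fetching function is injected here:

```javascript
// One depth-limited crawl: follow links containing `mustContain`,
// skip anything already seen, and stop after `depth` iterations.
// `fetchLinks(url)` stands in for the scrape-and-extract-links step.
function crawl(seedUrl, mustContain, depth, fetchLinks) {
  const seen = new Set([seedUrl]);
  let frontier = [seedUrl];
  for (let i = 0; i < depth && frontier.length > 0; i++) {
    const next = [];
    for (const url of frontier) {
      for (const link of fetchLinks(url)) {
        if (link.includes(mustContain) && !seen.has(link)) {
          seen.add(link);
          next.push(link); // discovered this round; scraped next round
        }
      }
    }
    frontier = next;
  }
  return [...seen];
}
```

The dedup set plays the same role as checking the Google Sheet before appending a link.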
AI email assistant: Prioritize Gmail with ChatGPT summaries and Slack digests
This n8n workflow acts as an AI-powered inbox assistant that automatically summarizes and classifies Gmail emails, prioritizes important messages, and sends a daily digest to Slack. It’s ideal for startup founders and small teams juggling investor intros, customer leads, and support queries — all from a busy Gmail inbox.

Each email is processed using ChatGPT to generate a concise summary, classify the message (e.g., Support, Investor, Spam), and determine its urgency. High and medium priority messages are forwarded to Slack instantly. Lower priority emails are logged to Google Sheets for review. A daily 7 PM digest summarizes the day’s most important messages.

💡 Use Cases
- Preventing missed investor or lead emails
- Lightweight CRM alternative using Google Sheets
- Slack summaries of critical Gmail activity

🔧 How It Works
1. Gmail node fetches new messages
2. ChatGPT summarizes and extracts urgency + type
3. High/medium urgency → sent to Slack + labeled in Gmail
4. Low urgency → logged in Google Sheets
5. Cron node triggers a daily 7 PM Slack summary

✅ Requirements
- OpenAI API Key (GPT-4 or GPT-4o recommended)
- Gmail access with read and label permissions
- Slack Bot Token or Webhook URL
- Google Sheets integration (optional)

🛠 Customization Ideas
- Replace Slack with Telegram or WhatsApp
- Route investor leads to Airtable or Notion
- Add multi-language support in the ChatGPT prompt
- Create weekly summaries via email
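The routing rule (steps 3 and 4 above) can be sketched in a few lines. The shape of the classified email object is an assumption about what the ChatGPT step returns:

```javascript
// Route a classified email: high/medium urgency goes to Slack,
// everything else is logged to the Google Sheet.
function routeEmail(email) {
  const urgent = ["high", "medium"].includes(email.urgency);
  return urgent ? "slack" : "sheet";
}
```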
Build a WhatsApp assistant with memory, Google Suite & multi-AI research and imaging
The "WhatsApp Productivity Assistant with Memory and AI Imaging" is a comprehensive n8n workflow that transforms WhatsApp into a powerful, multi-talented AI assistant. It handles a wide range of tasks by understanding user messages, analyzing images, and connecting to various external tools and services. The assistant can hold natural conversations, remember past interactions using a MongoDB vector store (RAG), and decide which tool is best suited for a user's request. Whether you need to check your schedule, research a topic, get the latest news, create an image, or analyze a picture you send, this workflow orchestrates it all seamlessly through a single WhatsApp chat interface.

The workflow is structured into several interconnected components:

1. WhatsApp Trigger & Incoming Message Processing: The entry point, starting when a message (text or image) is received via WhatsApp. A Route Message by Type (Image/Text) node routes the message based on its content type, and a Typing.... node sends a typing indicator to the user for a better experience. If an image is received, it is downloaded, processed via an HTTP Request, and analyzed by the Analyze image node. The Code1 node then standardizes both text and image-analysis output into a single, unified input for the main AI agent.

2. Core AI Agent: The brain of the operation. The AI Agent1 node receives the user's input, maintains short-term conversational memory using Simple Memory, and uses a powerful language model (gpt-oss-120b2 or gpt-oss-120b1) to decide which tool or sub-agent to use. It orchestrates all the other agents and tools.

3. Productivity Tools Agent: Connects the assistant to your personal productivity suite, with sub-agents and tools for managing Google Calendar, Google Tasks, and Gmail, so you can schedule events, manage to-dos, and read emails. It leverages a language model (gpt-4.1-mini or gemini-2.5-flash) for understanding and executing commands within these tools.

4. Research Tool Agent: Handles all research-related queries. It has access to multiple search tools (Brave Web Search, Brave News Search, Wikipedia, Tavily, and a custom perprlexcia search) to find the most accurate and up-to-date information from the web. It uses a language model (gpt-oss-120b or gpt-4.1-nanoChat Model1) for reasoning.

5. Long-Term Memory Webhook: A dedicated sub-workflow (Webhook2) that processes conversation history, extracts key information using Extract Memory Info, and stores it in a MongoDB Atlas Vector Store for long-term memory. This allows the AI agent to remember past preferences and facts.

6. Image Generation Webhook: A specialized sub-workflow (Webhook3) triggered when a user asks to create an image. It uses a dedicated AI Agent with MongoDB Atlas Vector Store1 for contextual image-prompt generation, Clean Prompt Text1 to refine the prompt, an HTTP Request to an external image generation API (e.g., Together.xyz), and then converts and sends the generated image back to the user via WhatsApp.

---

Use Cases
- Personal Assistant: Schedule appointments, create tasks, read recent emails, and manage your daily agenda directly from WhatsApp.
- Information Retrieval: Ask any factual, news, or research-based question and get real-time answers from various web sources.
- Creative Content Generation: Ask the AI to generate images based on your descriptions for logos, artwork, or social media content.
- Smart Communication: Engage in natural, contextual conversations with an AI that remembers past interactions.
- Image Analysis: Send an image and ask the AI to describe its contents or answer questions about it.

---

Pre-conditions
Before importing and running this template, you will need:
- Self-hosted n8n Instance: This template requires a self-hosted n8n instance, as it uses webhooks that need to be publicly accessible.
- WhatsApp Business Account: A Meta Developer Account configured for WhatsApp Business Platform API access.
- MongoDB Atlas Account: A MongoDB Atlas cluster with a database and collection set up for the vector store.
- Google Cloud Project: Configured with API access for Google Calendar, Google Tasks, and Gmail.
- API Keys/Accounts for:
  - OpenWeatherMap: For weather forecasts.
  - Groq, OpenRouter, or Vercel AI Gateway: For various language models (e.g., gpt-oss-120b, gpt-5-nano, gpt-4o-mini).
  - Mistral Cloud: For embedding models (e.g., codestral-embed-2505).
  - Brave Search: For web and news searches.
  - Tavily API: For structured search results.
  - Together.xyz or a similar image generation API: For creating images.
  - Perplexity API (or self-hosted instance): For the perprlexcia tool (the current URL http://self hoseted perplexcia/api/search implies a self-hosted or custom endpoint).
- Publicly Accessible URLs: Your n8n instance and any custom webhook endpoints (like perprlexcia) must be publicly accessible.

---

Requirements (n8n Credentials)
You will need to set up the following credentials within your n8n instance:
- WhatsApp OAuth account: For the WhatsApp Trigger node.
- WhatsApp account: For the Send message2, Send message3, Download media, and Typing.... nodes.
- Google Palm Api account: For the Analyze image, Google Gemini Chat Model, gemini-2.5-flash, and Google Gemini Chat Model5 nodes.
- OpenWeatherMap account: For the Get Weather Forecast node.
- Groq account: For the gpt-oss-120b node.
- Google Calendar OAuth2Api account: For the Google Calendar tools.
- MongoDB account: For the MongoDB Atlas Vector Store nodes.
- OpenRouter account: For the gpt-5-nano and gpt-4.1-nanoChat Model1 nodes.
- Gmail account: For the Get many messages and Get a message nodes (ensure correct Gmail OAuth2 setup for each).
- Google Tasks account: For the Google Tasks tools.
- Bearer Auth account: For HTTP Request5 (used in media download).
- Brave Search account: For the Brave Web Search and Brave News Search nodes.
- Vercel Ai Gateway Api account: For the gpt-4.1-mini, gpt-oss-120b, gpt-oss-120b2, and gpt-4.1-nano nodes.
- HTTP Header Auth account: For the Tavily web search (create a new one named "Tavily API Key" with Authorization: Bearer YOUR_TAVILY_API_KEY) and for the HTTP Request to Together.xyz (e.g., "Together.xyz API Key").
- Mistral Cloud account: For the codestral-embed-2505, codestral-embed-, and codestral-embed-2506 nodes.
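The Code1 normalization step described above (unifying text and image-analysis input for the main agent) could look roughly like this. Field names are illustrative, not the workflow's exact ones:

```javascript
// Produce one uniform payload for the core AI agent, whether the user
// sent plain text or an image that has already been analyzed.
function normalizeIncoming(message, imageAnalysis = null) {
  if (imageAnalysis) {
    return {
      from: message.from,
      type: "image",
      text: `The user sent an image. Analysis: ${imageAnalysis}`,
    };
  }
  return { from: message.from, type: "text", text: message.text };
}
```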
Send a message to Telegram on a new item saved to Reader
What is it
This workflow builds a simple bot that sends a message to a Telegram channel every time a new item is saved to the Reader. Thanks to existing n8n nodes, it can easily be modified to send the notification another way.

Warning: this is only for folks who already have access to the Reader; it won't work if you don't.

This workflow also uses a file to store the last update time, so it doesn't re-sync everything every time.

Setup
The config node:
- It contains the Telegram channel id
- It also contains the file used as storage

To get the header auth, you have to:
1. Go to the Reader
2. Open the devtools: Option + ⌘ + J (on macOS), or Shift + CTRL + J (on Windows/Linux)
3. Go to Network and find a profile_details/ request, then click on it
4. Go to Request Headers
5. Copy the value of "Cookie"
6. In n8n, set the name of the Header Auth account to Cookie and the value to the one you copied
🏛️ Daily US Congress members stock trades report via Firecrawl + OpenAI + Gmail
📬 What This Workflow Does
This workflow automatically scrapes recent high-value congressional stock trades from Quiver Quantitative, summarizes the key transactions, and delivers a neatly formatted report to your inbox — every single day. It combines Firecrawl's powerful content extraction, OpenAI's GPT formatting, and n8n's automation engine to turn raw HTML data into a digestible, human-readable email.

Watch the full tutorial on how to build this workflow here: https://www.youtube.com/watch?v=HChQSYsWbGo&t=947s&pp=0gcJCb4JAYcqIYzv

🔧 How It Works
1. 🕒 Schedule Trigger — Fires daily at a set hour (e.g., 6 PM) to begin the data pipeline.
2. 🔥 Firecrawl Extract API (POST) — Targets the Quiver Quantitative “Congress Trading” page and sends a structured prompt asking for all trades over $50K in the past month.
3. ⏳ Wait Node — Allows time for Firecrawl to finish processing before retrieving results.
4. 📥 Firecrawl Get Result API (GET) — Retrieves the extracted and structured data.
5. 🧠 OpenAI Chat Model (GPT-4o) — Formats the raw trading data into a readable summary that includes the date of transaction, the stock/asset traded, the amount, and the Congress member’s name and political party.
6. 📧 Gmail Node — Sends the summary to your inbox with the subject “Congress Trade Updates - QQ”.

🧠 Why This is Useful
Congressional trading activity often reveals valuable signals — especially when high-value trades are made. This workflow:
- Saves time manually tracking Quiver Quant updates
- Converts complex tables into a daily, readable email
- Keeps investors, researchers, and newsrooms in the loop — hands-free

🛠 Requirements
- Firecrawl API Key (with extract access)
- OpenAI API Key
- Gmail OAuth2 credentials
- n8n (self-hosted or cloud)

💬 Sample Output: Congress Trade Summary – May 21
- Nancy Pelosi (D) sold TSLA for $85,000 on April 28
- John Raynor (R) purchased AAPL worth $120,000 on May 2
- ... and more

🪜 Setup Steps
1. Add your Firecrawl, OpenAI, and Gmail credentials in n8n.
2. Adjust the schedule node to your desired time.
3. Customize the OpenAI system prompt if you want a different summary style.
4. Deploy the workflow — and enjoy your daily edge.
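The formatting step can be illustrated with a small function turning structured trade records into the one-line-per-trade summary shown in the sample output. The field names are assumptions about what the Firecrawl extract step returns:

```javascript
// Format extracted trade records into a readable summary,
// one line per trade.
function formatTrades(trades) {
  return trades
    .map(
      (t) =>
        `${t.member} (${t.party}) ${t.action} ${t.ticker} for ` +
        `$${t.amount.toLocaleString("en-US")} on ${t.date}`
    )
    .join("\n");
}
```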
Notify on new emails with invoices in Slack
This workflow checks for new emails in a mailbox; if the email body contains the word "invoice", it sends the attachment to Mindee. It then posts a message to Slack to let the team know a payment needs to be made. If the value of the invoice is over 1000, it will also email the finance manager.

To use this workflow you will need to:
- Configure the IMAP node to select the correct mailbox.
- Configure the Mindee node to use your credentials.
- Configure the Send Email node to use the correct mail server and to send to the correct people.
- Configure the Slack node with your Slack credentials and the channel you want to post the message to.
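The branching described above can be sketched as a single routing function. The parsed-invoice shape (`total`) is a placeholder standing in for Mindee's actual output:

```javascript
// Decide where an email goes: nothing if it isn't an invoice,
// Slack always for invoices, plus the finance manager when the
// invoice value exceeds 1000.
function routeInvoiceEmail(email, invoice) {
  if (!/invoice/i.test(email.body)) return [];
  const targets = ["slack"];
  if (invoice.total > 1000) targets.push("finance-email");
  return targets;
}
```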