Generate AI viral videos with Seedance and upload to TikTok, YouTube & Instagram
Generate AI videos with Seedance & Blotato, upload to TikTok, YouTube & Instagram

Who is this for?
This template is ideal for creators, content marketers, social media managers, and AI enthusiasts who want to automate the production of short-form, visually captivating videos for platforms like TikTok, YouTube Shorts, and Instagram Reels — all without manual editing or publishing.

What problem is this workflow solving?
Creating engaging videos requires:
- Generating creative ideas
- Writing detailed scene prompts
- Producing realistic video clips and sound effects
- Editing and stitching the final video
- Publishing across multiple platforms
This workflow automates the entire process, saving hours of manual work and ensuring consistent, AI-driven content output ready for social distribution.

What this workflow does
This end-to-end AI video automation workflow:
- Generates a creative idea using OpenAI and LangChain
- Creates detailed video prompts with Seedance AI
- Generates video clips via Wavespeed AI
- Generates sound effects with Fal AI
- Stitches the final video using Fal AI's ffmpeg API
- Logs metadata and video links to Google Sheets
- Uploads the video to Blotato
- Auto-publishes to TikTok, YouTube, Instagram, and other platforms

Setup
- Add your OpenAI API key in the LLM nodes
- Set up Seedance and Wavespeed AI credentials for video prompt and clip generation
- Add your Fal AI API key for the sound and stitching steps
- Connect your Google Sheets account for tracking ideas and outputs
- Set your Blotato API key and fill in the platform account IDs in the Assign Social Media IDs node
- Adjust the Schedule Trigger to control when the automation runs

How to customize this workflow to your needs
- Change the AI prompts to target your niche (e.g., ASMR, product videos, humor)
- Add a Telegram or Slack step for video preview before publishing
- Tweak the scene structure or video duration to match your style
- Disable platforms you don't want by turning off the specific HTTP Request nodes
- Edit the sound generation prompts for different moods or effects

📄 Documentation: Notion Guide

---

Need help customizing? Contact me for consulting and support: Linkedin / Youtube
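Under the hood, the idea-generation step described above is a single chat-completion request. Here is a minimal Node.js sketch, assuming a generic OpenAI chat model and an illustrative prompt; the real prompt and model live in the workflow's LLM nodes:

```javascript
// Minimal sketch (Node.js 18+): the "generate a creative idea" step as a plain
// OpenAI Chat Completions call. Model name and prompt are illustrative
// assumptions, not taken from the workflow itself.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function generateVideoIdea(niche) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption: any chat-capable model works here
      messages: [
        { role: "system", content: "You write one-sentence ideas for short-form viral videos." },
        { role: "user", content: `Give me one video idea for the ${niche} niche.` },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

generateVideoIdea("ASMR").then(console.log);
```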
AI-powered WhatsApp chatbot 🤖📲 for text, voice, images & PDFs with memory 🧠
This workflow is a highly advanced multimodal AI assistant designed to operate through WhatsApp. It can understand and respond to text, images, voice messages, and PDF documents by combining OpenAI models with smart logic to adapt to the content received.

---

🎯 Core Features

📥 1. Automatic Message Type Detection
Using the Input type node, the bot detects whether the user has sent:
- Text
- Voice messages
- Images
- Files (PDF)
- Other unsupported content

💬 2. Smart Text Message Handling
Text messages are processed by an OpenAI GPT-4o-mini agent with a customized system prompt. Replies are concise, accurate, and formatted for mobile readability.

🖼️ 3. Image Analysis & Description
Images are downloaded, converted to base64, and analyzed by an image-aware AI model. The output is a rich, structured description, designed for visually impaired users or visual content interpretation.

🎙️ 4. Voice Message Transcription & Reply
Audio messages are downloaded and transcribed using OpenAI Whisper. The transcribed text is analyzed and answered by the AI. Optionally, the AI reply can be converted back to voice using OpenAI's text-to-speech and sent as an audio message.

📄 5. PDF Document Extraction & Summary
Only PDFs are allowed (filtered via MIME type). The document's content is extracted and combined with the user's message. The AI then provides a relevant summary or answer.

🧠 6. Contextual Memory
Each user has a personalized session ID with a memory window of 10 interactions. This ensures a more natural and contextual conversation flow.

---

How It Works
This workflow is designed to handle incoming WhatsApp messages and process different types of inputs (text, audio, images, and PDF documents) using AI-powered analysis. Here's how it functions:
- Trigger: The workflow starts with the WhatsApp Trigger node, which listens for incoming messages (text, audio, images, or documents).
- Input Routing: The Input type (Switch node) checks the message type and routes it to the appropriate processing branch:
  - Text: Directly forwards the message to the AI agent for response generation.
  - Audio: Downloads the audio file, transcribes it using OpenAI, and sends the transcription to the AI agent.
  - Image: Downloads the image, analyzes it with OpenAI's GPT-4 model, and generates a detailed description.
  - PDF Document: Downloads the file, extracts text, and processes it with the AI agent.
  - Unsupported Formats: Sends an error message if the input is not supported.
- AI Processing: The AI Agent1 node, powered by OpenAI, processes the input (text, transcribed audio, image description, or PDF content) and generates a response.
- Response Handling: For audio inputs, the AI's response is converted back into speech (using OpenAI's TTS) and sent as a voice message. For other inputs, the response is sent as a text message via WhatsApp.
- Memory: The Simple Memory node maintains conversation context for follow-up interactions.

Setup Steps
To deploy this workflow in n8n, follow these steps:
1. Configure WhatsApp API Credentials: Set up WhatsApp Business API credentials (Meta Developer Account). Add the credentials in the WhatsApp Trigger, Get Image/Audio/File URL, and Send Message nodes.
2. Set Up OpenAI Integration: Provide an OpenAI API key in the Analyze Image, Transcribe Audio, Generate Audio Response, and AI Agent1 nodes.
3. Adjust Input Handling (Optional): Modify the Switch node ("Input type") to handle additional message types if needed. Update the "Only PDF File" IF node to support other document formats.
4. Test & Deploy: Activate the workflow and test with different message types (text, audio, image, PDF). Ensure responses are correctly generated and sent back via WhatsApp.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
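To make the voice branch above concrete, here is a minimal Node.js sketch of the download-and-transcribe step, assuming the WhatsApp Cloud API's media URL (which must be fetched with your Meta access token) and OpenAI's whisper-1 transcription endpoint; the n8n nodes perform the equivalent calls with credentials configured in the UI:

```javascript
// Minimal sketch (Node.js 18+) of the voice branch: download a media file by URL
// and transcribe it with OpenAI Whisper. Media handling is simplified; in the real
// workflow the WhatsApp Cloud API returns a short-lived URL that requires the
// Meta bearer token.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const WHATSAPP_TOKEN = process.env.WHATSAPP_TOKEN;

async function transcribeVoiceMessage(mediaUrl) {
  // 1. Download the audio bytes (WhatsApp media URLs require the bearer token).
  const audioRes = await fetch(mediaUrl, {
    headers: { Authorization: `Bearer ${WHATSAPP_TOKEN}` },
  });
  const audioBlob = await audioRes.blob();

  // 2. Send the file to OpenAI's transcription endpoint.
  const form = new FormData();
  form.append("file", audioBlob, "voice-message.ogg");
  form.append("model", "whisper-1");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_API_KEY}` },
    body: form,
  });
  const data = await res.json();
  return data.text; // transcribed text, ready to hand to the AI agent
}
```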
AI research assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)
AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)

👥 Who's it for
This workflow is perfect for anyone who wants to receive AI-powered research summaries directly on Telegram. It is ideal for people who ask frequent product, tech, or decision-making questions and want up-to-date answers sourced from the web.

🤖 What it does
Users send a question via Telegram. An AI agent (DeepSeek R1) reformulates and understands the intent, while a second agent (GPT-4o mini) performs live research using SerpAPI. The most relevant answers, including links and images, are delivered back via Telegram.

⚙️ How it works
- 📲 Telegram Trigger – Starts when a user sends a message to your Telegram bot.
- 🧠 DeepSeek R1 Agent – Understands, clarifies, or reformulates the user query.
- 🧠 Research AI Agent (GPT-4o mini + SerpAPI) – Searches the web and summarizes the best results.
- 📤 Send Telegram Message – Sends the response back to the same user.

📋 Requirements
- Telegram bot (via BotFather) with API token set in n8n credentials
- OpenAI account with API key and balance for GPT-4o mini
- SerpAPI account (100 free searches/month) with API key
- DeepSeek account with API key and balance

🛠️ How to set up
- Create your Telegram bot using BotFather and connect it using the Telegram Trigger node
- Set up DeepSeek credentials and add a Chat Model AI Agent node using DeepSeek R1 to reformulate the user's question
- Set up OpenAI credentials and add a second ChatGPT AI Agent node using GPT-4o mini
- In the GPT-4o node, enable the SerpAPI Tool and add your SerpAPI API key
- Pass the reformulated question from DeepSeek to the GPT-4o agent for live search and summarization
- Format the response (text, links, optional images)
- Send the final reply to the user using the Telegram Send Message node
- Ensure your n8n instance is publicly accessible
- Test the workflow by sending a message to your Telegram bot ✅
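For reference, the live-research step performed by the SerpAPI Tool boils down to a single search request. A minimal Node.js sketch, assuming SerpAPI's Google engine and default parameters (the n8n SerpAPI Tool node wraps this call for the agent):

```javascript
// Minimal sketch (Node.js 18+) of the research step: query SerpAPI and keep only
// the organic result titles, links, and snippets, which is roughly what the
// GPT-4o mini agent summarizes. Parameter names follow SerpAPI's Google Search API.
const SERPAPI_KEY = process.env.SERPAPI_KEY;

async function searchWeb(query) {
  const url = new URL("https://serpapi.com/search.json");
  url.searchParams.set("engine", "google");
  url.searchParams.set("q", query);
  url.searchParams.set("api_key", SERPAPI_KEY);

  const res = await fetch(url);
  const data = await res.json();

  // Reduce the payload to what the summarizing agent actually needs.
  return (data.organic_results || []).map((r) => ({
    title: r.title,
    link: r.link,
    snippet: r.snippet,
  }));
}

searchWeb("best budget mirrorless camera").then(console.log);
```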
From Google Drive to Instagram, TikTok & YouTube with AI descriptions & Airtable tracking
Description
This automation template is designed for content creators, social media managers, and influencers who want to streamline their video publishing workflow. It automatically detects new videos uploaded to a specific Google Drive folder, generates AI-powered descriptions based on video audio content, and simultaneously publishes them across Instagram, TikTok, and YouTube while tracking everything in Airtable.

Note: This workflow uses the upload-post.com API (free trial, no credit card required) for multi-platform video distribution and requires API tokens for each service. The AI-generated descriptions are created using OpenAI's transcription and chat models to analyze video audio content.

Who Is This For?
- Content Creators & Influencers: Automatically publish your videos across all major social platforms without manual work.
- Social Media Managers: Maintain consistent posting schedules across multiple platforms with AI-generated, platform-optimized descriptions.
- Marketing Teams: Scale video content distribution with automated workflows that include tracking and status monitoring.
- Video Producers: Focus on creating content while the system handles the tedious task of multi-platform publishing and description generation.

What Problem Does This Workflow Solve?
Publishing the same video content across Instagram, TikTok, and YouTube is time-consuming and repetitive. You need to manually upload each video, write unique descriptions, and track publication status. This workflow addresses these challenges by:
- Automated Video Distribution: Detects new videos in Google Drive and automatically uploads them to all three platforms simultaneously.
- AI-Powered Content Generation: Uses OpenAI to transcribe video audio and generate engaging, platform-appropriate descriptions automatically.
- Centralized Tracking: Maintains detailed records in Airtable, including upload status, URLs, and metadata for each platform.
- Error Monitoring: Provides real-time error notifications via Telegram to ensure you're always aware of any issues.

How It Works
- Video Upload Detection: The workflow monitors a specific Google Drive folder for new video uploads using automated triggers.
- Content Analysis: Downloads the video, extracts audio, and uses OpenAI to transcribe it and generate compelling descriptions.
- Airtable Integration: Creates and updates records to track video metadata, descriptions, and publication status.
- Multi-Platform Publishing: Simultaneously uploads the video to Instagram, TikTok, and YouTube using the upload-post.com API.
- Status Tracking: Updates Airtable records with publication status and platform-specific URLs for each successful upload.
Setup
- Google Drive Configuration: Set up the Google Drive trigger to monitor your specific folder. Configure OAuth2 credentials for Google Drive access.
- OpenAI Integration: Add your OpenAI API key to enable audio transcription and description generation.
- Airtable Setup: Create an Airtable base with fields for Video Name, Description, Platform Status, URLs, and Upload Date. Add your Airtable API token and configure the base/table IDs in the "Set Variables" node.
- Upload-Post.com Account: Create an account at upload-post.com to get your API token. Configure the token in the HTTP request nodes for each platform and set your user ID in the variables section.
- Platform Accounts: Ensure your Instagram, TikTok, and YouTube accounts are connected to upload-post.com.
- Error Notifications (Optional): Configure Telegram bot credentials for error notifications.

Requirements
- Accounts: Google Drive, OpenAI, Airtable, upload-post.com, Telegram (optional)
- API Keys & Credentials: Google Drive OAuth2, OpenAI API Key, Airtable API Token, upload-post.com API Token
- Platform Setup: Instagram, TikTok, and YouTube accounts connected to upload-post.com

Transform your video publishing workflow from hours of manual work to a fully automated system that handles everything from content analysis to multi-platform distribution and tracking.
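The Airtable tracking described above amounts to one create call when a video is detected and one update call after publishing. Below is a minimal Node.js sketch; the table and field names are only assumptions that mirror the setup section, and in n8n the Airtable node performs these calls for you:

```javascript
// Minimal sketch (Node.js 18+) of the Airtable tracking step: create one record
// per detected video, then update it once publishing succeeds. Table and field
// names ("Videos", "Video Name", "Platform Status", "URLs") are assumptions;
// adjust them to your own base.
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN;
const BASE_ID = process.env.AIRTABLE_BASE_ID; // e.g. appXXXXXXXXXXXXXX

async function airtable(path, options = {}) {
  const res = await fetch(`https://api.airtable.com/v0/${BASE_ID}/${path}`, {
    ...options,
    headers: {
      Authorization: `Bearer ${AIRTABLE_TOKEN}`,
      "Content-Type": "application/json",
    },
  });
  return res.json();
}

// Create the tracking record when a new video is detected in Google Drive.
async function trackNewVideo(name, description) {
  const data = await airtable("Videos", {
    method: "POST",
    body: JSON.stringify({
      fields: { "Video Name": name, Description: description, "Platform Status": "Pending" },
    }),
  });
  return data.id; // record id, used later to update the status
}

// Mark the record as published once upload-post.com confirms the uploads.
async function markPublished(recordId, urls) {
  return airtable(`Videos/${recordId}`, {
    method: "PATCH",
    body: JSON.stringify({
      fields: { "Platform Status": "Published", URLs: urls.join("\n") },
    }),
  });
}
```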
Create a pizza ordering chatbot with GPT-3.5 - Menu, orders & status tracking
Pizza Ordering Chatbot with OpenAI - Menu, Orders & Status Tracking

Introduction
This workflow template is designed to automate order processing for a pizza store using OpenAI and n8n. The chatbot acts as a virtual assistant to handle customer inquiries related to menu details, order placement, and order status tracking.

Features
The chatbot provides an interactive experience for customers by performing the following functions:
- Menu Inquiry: When a customer asks about the menu, the chatbot responds with a list of available pizzas, prices, and additional options.
- Order Placement: If a customer places an order, the chatbot confirms order details, provides a summary, informs the customer that the order is being processed, and expresses gratitude.
- Order Status Tracking: If a customer asks about their order status, the chatbot retrieves details such as order date, pizza type, and quantity, providing real-time updates.

Prerequisites
Before setting up the workflow, ensure you have the following:
- OpenAI account (Sign up here)
- OpenAI API key to interact with GPT-3.5
- n8n instance running locally or on a server (Installation Guide)

Configuration Steps

Step 1: Set Up OpenAI API Credentials
1. Log in to OpenAI's website.
2. Navigate to API Keys under your account settings.
3. Click Create API Key and copy the key for later use.

Step 2: Configure OpenAI Node in n8n
1. Open n8n and create a new workflow.
2. Click Add Node and search for OpenAI.
3. Select OpenAI from the list.
4. In the OpenAI node settings, click "Create New" under the Credentials section.
5. Enter a name for the credentials (e.g., "PizzaBot OpenAI Key").
6. Paste your API Key into the field.
7. Click Save.

Step 3: Set Up the Chatbot Logic
1. Connect the AI Agent Builder Node to the OpenAI Node and HTTP Request Node.
2. Configure the OpenAI Node with the following settings:
   - Model: gpt-3.5-turbo
   - Prompt: Provide dynamic text based on customer inquiries (e.g., "List available pizzas," "Place an order for Margherita pizza," "Check my order status").
   - Temperature: Adjust based on desired creativity (recommended: 0.7).
   - Max Tokens: Limit response length (recommended: 150).
3. Add multiple HTTP Request nodes:
   - Get Products: Fetch stored menu data and return details.
   - Order Product: Capture order details, generate an order ID, and confirm with the customer.
   - Get Order: Retrieve order details based on the order ID and display progress.

Step 4: Testing and Deployment
1. Click Execute Workflow to test the chatbot.
2. Open the Chat Message node, then copy the chat URL to access the chatbot in your browser.
3. Interact with the chatbot by asking different queries (e.g., "What pizzas do you have?" or "I want to order a Pepperoni pizza").
4. Verify responses and adjust prompts or configurations as needed.
5. Deploy the workflow and integrate it with a messaging platform (e.g., Telegram, WhatsApp, or a website chatbot).

Conclusion
This n8n workflow enables a fully functional pizza ordering chatbot using OpenAI's GPT-3.5. Customers can view menus, place orders, and track their order status efficiently. You can further customize the chatbot by refining prompts, adding new features, or integrating with external databases for order management. 🚀 Happy automating!
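For context, the OpenAI node configuration from Step 3 corresponds to a chat-completion request like the one below. The system prompt is an illustrative assumption; in the actual workflow the AI Agent also decides when to call the Get Products, Order Product, and Get Order HTTP nodes:

```javascript
// Minimal sketch (Node.js 18+) of the chatbot's LLM call using the Step 3 settings:
// gpt-3.5-turbo, temperature 0.7, max_tokens 150.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function askPizzaBot(userMessage) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      temperature: 0.7,
      max_tokens: 150,
      messages: [
        {
          role: "system",
          // Assumption: a simple stand-in for the workflow's actual system prompt.
          content: "You are a pizza store assistant. Answer menu questions, take orders, and report order status.",
        },
        { role: "user", content: userMessage },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

askPizzaBot("What pizzas do you have?").then(console.log);
```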
Add data from a photo to Google Sheets
Automatically add expense receipt data from photos to Google Sheets using Telegram, the Mindee API, Twilio, and n8n.
Create Atlassian Confluence page from template
How it works
- Creates a new page in Confluence based on a page template, also defined in Confluence
- Replaces any number of placeholders with data from your workflow
- Generic implementation for maximum flexibility

Set up steps
All parameters you need to change are defined in the Set node:
- Set your Atlassian domain
- Set the template id you want to use as the basis for new pages
- Set the target space and parent page for new pages created from that template

🎥 Explainer video has all the details. =)

Feedback
Any feedback is welcome. If you have ideas for improvements, let me know.
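For illustration, here is a minimal Node.js sketch of the same sequence against the Confluence Cloud REST API: fetch the template body, substitute placeholders, and create the page. The template endpoint and the {{placeholder}} syntax are assumptions; check them against your Confluence setup and the workflow's HTTP Request nodes:

```javascript
// Minimal sketch (Node.js 18+): create a Confluence page from a page template.
// Endpoints follow the Confluence Cloud REST API (v1); treat the template endpoint
// and placeholder syntax as assumptions.
const DOMAIN = "your-domain.atlassian.net"; // assumption: your Atlassian domain
const AUTH =
  "Basic " +
  Buffer.from(`${process.env.ATLASSIAN_EMAIL}:${process.env.ATLASSIAN_TOKEN}`).toString("base64");

async function createPageFromTemplate(templateId, spaceKey, parentId, title, values) {
  // 1. Fetch the template's storage-format body.
  const tpl = await fetch(`https://${DOMAIN}/wiki/rest/api/template/${templateId}`, {
    headers: { Authorization: AUTH, Accept: "application/json" },
  }).then((r) => r.json());

  // 2. Replace placeholders like {{customerName}} with data from the workflow.
  let body = tpl.body.storage.value;
  for (const [key, value] of Object.entries(values)) {
    body = body.replaceAll(`{{${key}}}`, value);
  }

  // 3. Create the new page in the target space under the given parent page.
  return fetch(`https://${DOMAIN}/wiki/rest/api/content`, {
    method: "POST",
    headers: { Authorization: AUTH, "Content-Type": "application/json" },
    body: JSON.stringify({
      type: "page",
      title,
      space: { key: spaceKey },
      ancestors: [{ id: parentId }],
      body: { storage: { value: body, representation: "storage" } },
    }),
  }).then((r) => r.json());
}
```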
Generate text images from the Free DummyJSON API using the HTTP request node
Who is this for?
This workflow template is ideal for marketers, designers, content creators, and developers who need to generate custom text-based images dynamically. Whether you want to create social media graphics, placeholder images, or text-based LinkedIn carousels, this workflow provides a simple, no-code solution using an API that requires no authentication.

What problem does this workflow solve?
Creating text-based images often requires design software or complex integrations with graphic tools. This workflow eliminates that hassle by allowing users to generate images with custom text, font styles, colors, and background colors using a simple HTTP request. It's perfect for automating image generation without relying on external tools or manual effort.

What this workflow does
This workflow leverages an HTTP request to a free API that generates text-based images dynamically. Here's what it enables you to do:
- Define custom image text
- Set image dimensions (width x height)
- Choose a background color and text color using hex codes
- Select a font family and font size
- Specify the image format (PNG, JPG, or WebP)
The generated image can be used immediately, making it ideal for automating content creation workflows.

Setup
1. Open the workflow in n8n.
2. Modify the Set node to define your preferred image properties:
   - text: The message displayed on the image.
   - size: Image dimensions (e.g., 500x300 pixels).
   - backgroundColor: Hex color code for the background.
   - textColor: Hex color code for the text.
   - fontFamily: Select from available font options (e.g., Pacifico, Ubuntu).
   - fontSize: Define the text size.
   - type: Choose the image format (PNG, JPG, or WebP).
3. Execute the workflow to generate an image.
4. The HTTP request returns the generated image, ready for use.

How to customize this workflow
- Adjust the Set node values to match your desired design.
- Use dynamic data for text, allowing personalized images based on user input.
- Automate image delivery by adding email or social media posting nodes.
- Integrate this workflow into larger automation sequences, such as content marketing pipelines.
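The HTTP request the workflow sends looks roughly like the following Node.js sketch, built from the same properties the Set node defines. The URL pattern follows DummyJSON's image endpoint; verify the parameter names against dummyjson.com/docs/image if your results differ:

```javascript
// Minimal sketch (Node.js 18+) of the image-generation request, assembled from the
// Set node properties described in the Setup section. URL pattern and query
// parameters are assumptions based on DummyJSON's image API docs.
async function generateTextImage() {
  const props = {
    text: "Hello from n8n",
    size: "500x300",
    backgroundColor: "1e293b", // hex code, no leading '#'
    textColor: "ffffff",
    fontFamily: "pacifico",
    fontSize: 32,
    type: "png",
  };

  const url =
    `https://dummyjson.com/image/${props.size}/${props.backgroundColor}/${props.textColor}` +
    `?text=${encodeURIComponent(props.text)}` +
    `&fontFamily=${props.fontFamily}&fontSize=${props.fontSize}&type=${props.type}`;

  const res = await fetch(url);
  const image = Buffer.from(await res.arrayBuffer()); // binary image data, ready for downstream nodes
  return { url, image };
}

generateTextImage().then(({ url }) => console.log("Image URL:", url));
```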
Cron routines with Telegram
Executes scheduled routines and triggers alerts via Telegram.
Add a task to Google Tasks
No description available.
Automate AI video ad generation with Google Veo 3, Gemini, and Airtable
This n8n template from Intuz provides a complete and automated solution for transforming a static product image and a creative idea into a dynamic, AI-generated video ad. Using Google's state-of-the-art Veo 3 model, this workflow manages the entire creative process from concept to a final, downloadable video file.

Who's this workflow for?
- E-commerce Brands & Marketers
- Advertising Agencies
- Social Media Content Creators
- Product Managers

How it works
1. Submit a Creative Brief: The workflow starts when a user submits a creative idea via a simple web form (e.g., "A Pepsi can exploding into a vibrant disco party").
2. Upload a Product Image: The user is then prompted to upload a corresponding image (e.g., a high-quality photo of the Pepsi can).
3. Log the Project in Airtable: The idea and the uploaded image are saved to an Airtable base, which acts as the central tracking system for all video generation projects.
4. AI Creative Analysis: Google Gemini analyzes both the user's text prompt and the uploaded image. It acts as an "AI Creative Director," generating a detailed video brief that reinterprets the static image according to the user's creative vision.
5. Generate Video with Veo 3: The detailed creative brief is sent to Google's Veo 3 AI video generation model. The workflow initiates a long-running task to create the video.
6. Retrieve the Final Video: After a brief waiting period, the workflow polls the Veo 3 API to retrieve the finished video, converts it into a binary file, and makes it available for download directly from the n8n execution log.

Key Requirements to Use This Template
- n8n Instance & Required Nodes: An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
- Google Cloud Account: A Google Cloud Project with the Vertex AI API enabled. You must have access to both the Gemini and Veo 3 models within your project. You will need a Gemini API Key and a Google OAuth2 credential configured for the Vertex AI scope.
- Airtable Account: An Airtable base with a table set up to track the video projects. It should have columns for Image Prompt, Image (Attachment), Video (Attachment/URL), and Status.

Setup Instructions
1. Airtable Configuration (Crucial): In the Create a record, Get a record, and Update record nodes, connect your Airtable credentials and update the Base ID and Table ID to match your setup. In the Uploading Image in Airtable (HTTP Request) node, you must edit the URL and the "Authorization" header to include your Base ID, Table ID, and Personal Access Token.
2. Google AI Configuration (Gemini & Veo): In the Analyze image (Google Gemini) node, select your Gemini API credentials. In both the Generate Video Veo 3 and Get the the Video (HTTP Request) nodes, you must replace [Project ID] and [Location] in the URLs with your own Google Cloud Project ID and region (e.g., us-central1), and select your Google OAuth2 credentials for authentication.
3. Customize Video Parameters (Optional): In the Parse Request (Code) node, you can modify the JavaScript code to change video generation settings like aspectRatio, durationSeconds, and resolution.
4. Execute the Workflow: Activate the workflow and open the Form URL from the Prompt your Idea node to start the process.
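For orientation, the generate-then-poll pattern in steps 5 and 6 looks roughly like the Node.js sketch below. The model name, the :predictLongRunning / :fetchPredictOperation methods, and the response shape are assumptions based on Vertex AI's long-running prediction pattern; defer to the URLs already configured in the workflow's HTTP Request nodes rather than trusting these verbatim.

```javascript
// Minimal sketch (Node.js 18+) of Veo 3 video generation on Vertex AI:
// start a long-running task, poll until done, then decode the base64 video.
// Endpoint methods, model name, and response fields are assumptions.
import { writeFile } from "node:fs/promises";

const PROJECT = "[Project ID]";  // as in the workflow setup
const LOCATION = "[Location]";   // e.g. us-central1
const TOKEN = process.env.GCP_ACCESS_TOKEN; // OAuth2 token with Vertex AI scope
const BASE = `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT}/locations/${LOCATION}/publishers/google/models/veo-3.0-generate-preview`;

const call = (method, body) =>
  fetch(`${BASE}:${method}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  }).then((r) => r.json());

async function generateAd(prompt, imageBase64) {
  // 1. Kick off the long-running video generation task.
  const op = await call("predictLongRunning", {
    instances: [{ prompt, image: { bytesBase64Encoded: imageBase64, mimeType: "image/png" } }],
    parameters: { aspectRatio: "9:16", durationSeconds: 8, resolution: "720p" },
  });

  // 2. Poll until the operation reports completion.
  let result;
  do {
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // wait between polls
    result = await call("fetchPredictOperation", { operationName: op.name });
  } while (!result.done);

  // 3. Decode the returned base64 video into a binary file.
  const videoB64 = result.response.videos[0].bytesBase64Encoded;
  await writeFile("ad.mp4", Buffer.from(videoB64, "base64"));
}
```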
Sample Videos

Connect with us
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
Fetch a YouTube playlist and send new items to Raindrop
This simple workflow fetches a YouTube playlist every n minutes and sends the new items to a collection in Raindrop. You can connect any application at the end of the flow. Make sure to authenticate to YouTube using Google Auth and to Raindrop using your API credentials. Update the Playlist ID and the Raindrop collection.
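For reference, the two API calls behind this workflow look roughly like the Node.js sketch below, assuming an API key for the YouTube Data API (the workflow itself uses Google Auth, but a key works for public playlists) and a Raindrop access token:

```javascript
// Minimal sketch (Node.js 18+): list the items of a YouTube playlist (YouTube Data
// API v3) and save each video as a bookmark in a Raindrop collection. Detecting
// only *new* items is left out; the n8n workflow handles that with its trigger logic.
const YT_KEY = process.env.YOUTUBE_API_KEY;
const RAINDROP_TOKEN = process.env.RAINDROP_TOKEN;

async function listPlaylistItems(playlistId) {
  const url = new URL("https://www.googleapis.com/youtube/v3/playlistItems");
  url.searchParams.set("part", "snippet");
  url.searchParams.set("playlistId", playlistId);
  url.searchParams.set("maxResults", "50");
  url.searchParams.set("key", YT_KEY);
  const data = await fetch(url).then((r) => r.json());
  return data.items.map((item) => ({
    title: item.snippet.title,
    link: `https://www.youtube.com/watch?v=${item.snippet.resourceId.videoId}`,
  }));
}

async function saveToRaindrop(bookmark, collectionId) {
  return fetch("https://api.raindrop.io/rest/v1/raindrop", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${RAINDROP_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ ...bookmark, collection: { $id: collectionId } }),
  }).then((r) => r.json());
}

// Example: copy one playlist into collection 12345678 (replace with your own IDs).
listPlaylistItems("PLxxxxxxxxxxxxxxxx").then((items) =>
  Promise.all(items.map((i) => saveToRaindrop(i, 12345678)))
);
```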