Generate cheap viral AI videos for TikTok with Google Veo3 Fast and Postiz
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
This workflow automates the entire process of generating short AI videos using Google Veo3 Fast, enhancing them with SEO-optimized titles, and uploading them directly to TikTok via Postiz, all triggered from a central Google Sheet.
This setup ensures a seamless pipeline from video creation to TikTok upload, with minimal manual intervention.
Benefits
- Full automation from prompt input to social media publishing.
- Cheaper video generation using Veo3 Fast vs traditional AI video tools.
- Centralized management through Google Sheets – no coding required for end users.
- SEO-enhanced titles with GPT-4o to boost engagement.
- Scheduled or manual triggering, perfect for batch operations.
- No manual uploads – integration with Postiz means content is published hands-free.
How It Works
This workflow automates the process of generating AI videos using Google Veo3 Fast, saving them to Google Drive, and uploading them to TikTok via Postiz. Here’s how it functions:
- Trigger: The workflow can be started manually or scheduled (e.g., every 5 minutes) to check for new video requests in a Google Sheet.
- Video Generation:
- The workflow retrieves a video prompt and duration from the Google Sheet.
- It sends the prompt to Google Veo3 Fast via the Fal.ai API to generate the video.
- The system periodically checks the video generation status until it completes (see the sketch after this list).
- Post-Processing:
- Once the video is ready, it is downloaded and uploaded to Google Drive.
- A YouTube-optimized title is generated using GPT-4o Mini based on the video prompt.
- TikTok Upload:
- The video is uploaded to Postiz, a social media scheduling tool.
- Postiz then publishes the video to the connected TikTok account with the generated title.
- Tracking: The Google Sheet is updated with the video URL for record-keeping.
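To make the Video Generation step concrete, here is a minimal TypeScript sketch of the submit-and-poll pattern the workflow builds from HTTP Request and Wait nodes. It assumes Fal.ai's queue API; the model slug (fal-ai/veo3/fast), request body fields, and response shape are assumptions to verify against the Fal.ai documentation, not values taken from this template.

```typescript
// Minimal sketch of the submit-and-poll pattern the workflow implements with
// HTTP Request + Wait nodes. Endpoint layout follows Fal.ai's queue API; the
// model slug and response fields below are assumptions — verify against the docs.
const FAL_KEY = process.env.FAL_KEY!;               // same key used in the "Create Video" node
const MODEL = "fal-ai/veo3/fast";                   // assumed Veo3 Fast model slug

async function generateVideo(prompt: string, duration: string): Promise<string> {
  // 1. Submit the generation request (the "Create Video" node)
  const submit = await fetch(`https://queue.fal.run/${MODEL}`, {
    method: "POST",
    headers: { Authorization: `Key ${FAL_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, duration }),
  });
  const { request_id } = await submit.json();

  // 2. Poll the status endpoint until the job completes (the Wait + status-check loop)
  while (true) {
    const res = await fetch(`https://queue.fal.run/${MODEL}/requests/${request_id}/status`, {
      headers: { Authorization: `Key ${FAL_KEY}` },
    });
    const { status } = await res.json();
    if (status === "COMPLETED") break;
    await new Promise((r) => setTimeout(r, 30_000)); // wait ~30s between checks
  }

  // 3. Fetch the result and return the video URL (then download / upload to Drive)
  const result = await fetch(`https://queue.fal.run/${MODEL}/requests/${request_id}`, {
    headers: { Authorization: `Key ${FAL_KEY}` },
  });
  const { video } = await result.json();            // assumed response shape: { video: { url } }
  return video.url;
}
```

Inside n8n, the Wait node plays the role of the delay between checks, and the loop is realized by routing the "not completed" branch of an If node back to the status-check request.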
Set Up Steps
To configure this workflow, follow these steps:
1. Prepare the Google Sheet:
   - Create a Google Sheet with the following columns:
     - PROMPT: Description of the video.
     - DURATION: Length of the video.
     - VIDEO: Leave empty; auto-filled by the workflow with the generated video URL.
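An example row (illustrative values only; the VIDEO column is filled in by the workflow):

| PROMPT | DURATION | VIDEO |
| --- | --- | --- |
| A golden retriever surfing a wave at sunset, cinematic style | 8 | (auto-filled) |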
2. Obtain API keys:
   - Sign up at Fal.ai to get an API key for Google Veo3 Fast.
   - Replace YOURAPIKEY in the "Create Video" node's HTTP header (Authorization: Key YOURAPIKEY).
3. Configure Postiz for TikTok:
   - Create a Postiz account (free trial available).
   - Connect your TikTok account in Postiz and note the Channel ID.
   - Replace XXX in the "TikTok" node with your TikTok Channel ID.
   - Set the Postiz API key in the "Upload Video to Postiz" node.
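If you want to sanity-check your Postiz key and Channel ID outside n8n, the upload-then-publish pattern used by the "Upload Video to Postiz" and "TikTok" nodes looks roughly like the sketch below. The base URL, endpoint paths, and payload fields are assumptions to verify against the Postiz API documentation; only the general pattern comes from this template.

```typescript
// Rough sketch of the upload-then-publish pattern used by the Postiz nodes.
// Base URL, endpoint paths, and payload fields are assumptions — check the Postiz API docs.
const POSTIZ_KEY = process.env.POSTIZ_API_KEY!;     // key set in the "Upload Video to Postiz" node
const CHANNEL_ID = process.env.TIKTOK_CHANNEL_ID!;  // the value that replaces XXX in the "TikTok" node

async function publishToTikTok(videoUrl: string, title: string): Promise<void> {
  // 1. Hand the generated video to Postiz (hypothetical media-upload endpoint)
  const upload = await fetch("https://api.postiz.com/public/v1/upload", {
    method: "POST",
    headers: { Authorization: POSTIZ_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ url: videoUrl }),
  });
  const media = await upload.json();

  // 2. Create a post on the connected TikTok channel with the generated title
  await fetch("https://api.postiz.com/public/v1/posts", {
    method: "POST",
    headers: { Authorization: POSTIZ_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      type: "now",
      posts: [{
        integration: { id: CHANNEL_ID },          // the TikTok Channel ID from Postiz
        value: [{ content: title, media: [media] }],
      }],
    }),
  });
}
```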
4. Set up Google Drive & Sheets access:
   - Ensure the workflow has OAuth access to:
     - Google Sheets (to read/write video data).
     - Google Drive (to store generated videos).
5. Schedule or run manually:
   - The workflow can be triggered manually or scheduled (e.g., every 5 minutes) to process new video requests.
Note: This workflow requires self-hosted n8n due to community node dependencies.
Need help customizing?
Contact me for consulting and support, or add me on LinkedIn.
n8n Workflow: Generate Viral AI Videos for TikTok with Google Veo3 and Postiz
This n8n workflow automates the content-generation side of producing cheap viral AI videos for TikTok by leveraging Google Sheets and OpenAI. It includes logic to handle different content-generation scenarios and prepare data for further processing.
What it does
This workflow automates the following steps:
- Manual or Scheduled Trigger: The workflow can be initiated manually or on a predefined schedule.
- Retrieve Data from Google Sheets: It reads data from a specified Google Sheet, likely containing prompts, topics, or other parameters for content generation.
- Conditional Logic (If): It evaluates the retrieved data based on a condition.
- If True Path: If the condition is met, it proceeds to use OpenAI for content generation.
- OpenAI Content Generation: Interacts with the OpenAI API to generate text or other AI-powered content (see the sketch after this list).
- Edit Fields (Set): Transforms or sets specific fields in the data, likely preparing the generated content for the next steps.
- Google Drive Action: Performs an action related to Google Drive, which could involve storing generated content, retrieving assets, or other file management.
- If False Path: If the condition is not met, it introduces a Wait period.
- Wait: Pauses the workflow for a specified duration, possibly to retry or await external changes.
- HTTP Request: Makes an HTTP request, which could be used to fetch additional data, trigger another service, or post a notification.
- Sticky Note: A sticky note is present in the workflow, likely for documentation or temporary notes.
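As an illustration of the OpenAI Content Generation step, turning a video prompt into an SEO-friendly title is a single chat-completion call. This is a minimal sketch assuming the standard Chat Completions endpoint with gpt-4o-mini; the system prompt below is illustrative, not the one shipped in the workflow.

```typescript
// Minimal sketch of the title-generation step, using OpenAI's Chat Completions API.
// The system prompt below is illustrative — the workflow ships its own prompt.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;

async function generateTitle(videoPrompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "Write one short, SEO-optimized, click-worthy title for a short-form video. Return only the title.",
        },
        { role: "user", content: videoPrompt },
      ],
      max_tokens: 60,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}
```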
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running n8n instance.
- Google Sheets Account: Configured credentials for Google Sheets to read data.
- OpenAI API Key: Credentials for OpenAI to generate content.
- Google Drive Account: Configured credentials for Google Drive for file operations.
- HTTP Request Endpoint (Optional): If the "HTTP Request" node is used, you'll need the URL and any necessary authentication for the target API.
Setup/Usage
- Import the Workflow: Import the provided JSON into your n8n instance.
- Configure Credentials:
- Set up your Google Sheets credentials.
- Set up your OpenAI credentials.
- Set up your Google Drive credentials.
- Configure Nodes:
- Google Sheets: Specify the spreadsheet ID and sheet name from which to read data.
- If: Adjust the condition in the "If" node according to your specific logic for content generation.
- OpenAI: Configure the OpenAI model, prompt, and any other parameters for content generation.
- Edit Fields (Set): Modify the fields being set or transformed as needed.
- Google Drive: Configure the operation (e.g., upload file, create folder) and relevant parameters.
- Wait: Adjust the wait duration if the "If" condition evaluates to false.
- HTTP Request: Configure the URL, method, headers, and body for the HTTP request.
- Activate the Workflow: Enable the workflow.
- Execute:
- Manual Trigger: Click "Execute Workflow" in the n8n UI.
- Schedule Trigger: The workflow will run automatically based on the configured schedule.
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities. In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (ie. images) in an AI request is supported by n8n's basic LLM node. How it works This template is intended to run as 2 parts: first to generate the base screenshots and next to run the visual regression test which captures fresh screenshots. Starting with a list of webpages captured in a Google sheet, base screenshots are captured for each using a external web scraping service called Apify.com (I prefer Apify but feel free to use whichever web scraping service available to you) These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing. Phase 2 of the workflow, we'll use a scheduled trigger to fire sometime in the future which will reuse our web scraping service to generate fresh screenshots of our desired webpages. Next, re-download our base screenshots in parallel and with both old and new captures, we'll pass these to our LLM node. In the LLM node's options, we'll define 2 "user message" inputs with the type of binary (data) for our images. Finally, we'll prompt our LLM with our testing criteria and capture the regressions detected. Note, results will vary depending on which LLM you use. A final report can be generated using the LLM's output and is uploaded to Linear. Requirements Apify.com API key for web screenshotting service Google Drive and Sheets access to store list of webpages and captures Customising this workflow Have your own preferred web screenshotting service? Feel free to swap out Apify with your service of choice. If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.