Automate AI video ad generation with Google Veo 3, Gemini, and Airtable
This n8n template from Intuz provides a complete and automated solution for transforming a static product image and a creative idea into a dynamic, AI-generated video ad.
Using Google's state-of-the-art Veo 3 model, this workflow manages the entire creative process from concept to a final, downloadable video file.
Who's this workflow for?
- E-commerce Brands & Marketers
- Advertising Agencies
- Social Media Content Creators
- Product Managers
How it works
1. Submit a Creative Brief: The workflow starts when a user submits a creative idea via a simple web form (e.g., "A Pepsi can exploding into a vibrant disco party").
2. Upload a Product Image: The user is then prompted to upload a corresponding image (e.g., a high-quality photo of the Pepsi can).
3. Log the Project in Airtable: The idea and the uploaded image are saved to an Airtable base, which acts as the central tracking system for all video generation projects.
4. AI Creative Analysis: Google Gemini analyzes both the user's text prompt and the uploaded image. It acts as an "AI Creative Director," generating a detailed video brief that reinterprets the static image according to the user's creative vision.
5. Generate Video with Veo 3: The detailed creative brief is sent to Google's Veo 3 AI video generation model. The workflow initiates a long-running task to create the video.
6. Retrieve the Final Video: After a brief waiting period, the workflow polls the Veo 3 API to retrieve the finished video, converts it into a binary file, and makes it available for download directly from the n8n execution log.
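In practice, step 6 usually comes down to a small Code-node helper that pulls the base64-encoded video out of the polled operation result and hands it to n8n as binary data. A minimal sketch, assuming the finished operation exposes the video under a `response.videos[0].bytesBase64Encoded` field (verify this path against your actual Veo response):

```javascript
// Sketch of step 6: turn a polled Veo operation result into a binary file.
// The response field names below are assumptions - check your actual API output.
function videoToBinary(operationResult) {
  const b64 = operationResult?.response?.videos?.[0]?.bytesBase64Encoded;
  if (!b64) {
    throw new Error("Video not ready yet - keep polling the operation");
  }
  return Buffer.from(b64, "base64"); // n8n can expose this as downloadable binary data
}

// Example with a tiny fake payload standing in for a finished operation:
const fake = {
  response: { videos: [{ bytesBase64Encoded: Buffer.from("MP4?").toString("base64") }] },
};
const buf = videoToBinary(fake);
```

In the template, the "Convert to File" step plays this role; the sketch only illustrates why the base64-to-binary conversion is needed before the file can appear in the execution log.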
Key Requirements to Use This Template
n8n Instance & Required Nodes:
- An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain).
- If you are using a self-hosted version of n8n, please ensure this package is installed.
Google Cloud Account:
- A Google Cloud Project with the Vertex AI API enabled.
- You must have access to both the Gemini and Veo 3 models within your project.
- You will need a Gemini API Key and a Google OAuth2 Credential configured for the Vertex AI scope.
Airtable Account:
- An Airtable base with a table set up to track the video projects. It should have columns for Image Prompt, Image (Attachment), Video (Attachment/URL), and Status.
Setup Instructions
1. Airtable Configuration (Crucial):
- In the Create a record, Get a record, and Update record nodes, connect your Airtable credentials and update the Base ID and Table ID to match your setup.
- In the Uploading Image in Airtable (HTTP Request) node, you must edit the URL and the "Authorization" header to include your Base ID, Table ID, and Personal Access Token.
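For orientation, the pieces you need to edit in that HTTP Request node fit the standard Airtable Web API pattern. The IDs below are placeholders, and the exact endpoint path may differ from what the template's node already uses, so treat this as a hedged illustration rather than the definitive configuration:

```javascript
// Illustrative shape of the Airtable HTTP request configuration.
// All IDs and the token are placeholders - substitute your own values.
function airtableRequestConfig(baseId, tableId, personalAccessToken) {
  return {
    url: `https://api.airtable.com/v0/${baseId}/${tableId}`,
    headers: {
      Authorization: `Bearer ${personalAccessToken}`, // Personal Access Token, not the legacy API key
      "Content-Type": "application/json",
    },
  };
}

const cfg = airtableRequestConfig("appXXXXXXXXXXXXXX", "tblYYYYYYYYYYYYYY", "patZZZZ.ZZZZ");
```

Base IDs start with `app`, table IDs with `tbl`, and Personal Access Tokens with `pat`; that prefix check is a quick way to confirm you pasted the right value into the right slot.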
2. Google AI Configuration (Gemini & Veo):
- In the Analyze image (Google Gemini) node, select your Gemini API credentials.
- In both the Generate Video Veo 3 and Get the Video (HTTP Request) nodes:
  - Replace [Project ID] and [Location] in the URLs with your own Google Cloud Project ID and region (e.g., us-central1).
  - Select your Google OAuth2 credentials for authentication.
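The substitution looks like this in practice. The endpoint pattern below follows the standard Vertex AI publisher-model URL, but the model identifier is a placeholder, so keep whatever URL the template's nodes already contain and only swap in your own project and region:

```javascript
// Illustration of the [Project ID] / [Location] substitution described above.
// The model name is an assumption - do not change what the template already uses.
function veoUrl(projectId, location, model = "veo-3.0-generate-001") {
  return (
    `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${location}/publishers/google/models/${model}:predictLongRunning`
  );
}

const url = veoUrl("my-gcp-project", "us-central1");
```

Note that the region appears twice: once in the hostname and once in the path. A mismatch between the two is a common cause of 404 errors from Vertex AI.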
3. Customize Video Parameters (Optional):
- In the Parse Request (Code) node, you can modify the JavaScript code to change video generation settings like aspectRatio, durationSeconds, and resolution.
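A minimal sketch of the kind of logic the Parse Request node can hold. The defaults and form-field names are assumptions, so align them with the actual code in the node before editing:

```javascript
// Sketch of customizable video parameters in the Parse Request Code node.
// Defaults and input field names are illustrative assumptions.
function buildVideoParameters(formInput = {}) {
  return {
    aspectRatio: formInput.aspectRatio ?? "16:9", // e.g. "9:16" for vertical social ads
    durationSeconds: formInput.durationSeconds ?? 8,
    resolution: formInput.resolution ?? "1080p",
  };
}

// Example: override only the aspect ratio, keep the other defaults
const params = buildVideoParameters({ aspectRatio: "9:16" });
```

Keeping the defaults in one function like this means a single edit changes the settings for every video the workflow generates.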
4. Execute the Workflow:
- Activate the workflow. Open the Form URL from the Prompt your Idea node to start the process.
Sample Videos
Connect with us
- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz
For Custom Workflow Automation
Click here: Get Started
n8n AI Video Ad Generation Workflow
This n8n workflow automates the generation of AI-powered video ads by integrating with Google Gemini and Airtable. It provides a structured way to input ad parameters, generate video scripts and voiceovers, and then create the final video ad.
What it does
This workflow simplifies the process of creating video ads through a series of automated steps:
- Triggers on Form Submission: The workflow starts when a new ad request is submitted via an n8n form.
- Generates Video Script with Google Gemini: It uses the submitted ad details to prompt Google Gemini to generate a creative video script.
- Generates Voiceover with Google Gemini: Based on the generated script, Google Gemini then creates a voiceover for the video ad.
- Creates Video Ad (Placeholder): A placeholder HTTP Request node indicates where an external service (like Google Veo 3, not explicitly configured in this JSON) would be called to generate the actual video ad using the script and voiceover.
- Stores Ad Details in Airtable: The final ad details, including the generated script, voiceover, and potentially a link to the generated video, are saved to an Airtable base.
- Handles Binary Data: The workflow includes a "Convert to File" node, which converts generated output (such as the voiceover or video file) into binary data for download or storage.
- Includes Wait and Code Nodes: A "Wait" node is present, likely for managing API rate limits or allowing time for external processing. A "Code" node provides flexibility for custom logic or data manipulation.
- Sticky Notes for Documentation: The workflow uses sticky notes for internal documentation, indicating areas for further development or explanation.
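The Wait and Code pair mentioned above typically implements a simple poll-and-retry loop for the long-running video job. A hedged sketch, assuming the external service reports completion via a `done` flag (adapt the field name to whichever service you wire in):

```javascript
// Sketch: decide whether to loop back through the Wait node or stop.
// The `done` status field is an assumption about the external service's response.
function shouldKeepWaiting(operation, attempt, maxAttempts = 10) {
  if (operation.done) return false; // video is ready - proceed to download
  if (attempt >= maxAttempts) {
    throw new Error("Timed out waiting for video generation");
  }
  return true; // loop back through the Wait node and poll again
}
```

Capping the attempts prevents a stalled external job from keeping the workflow execution alive indefinitely.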
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running n8n instance.
- Airtable Account: With an API key and a configured base/table to store ad data.
- Google Gemini API Key: For generating video scripts and voiceovers.
- Video Generation Service (e.g., Google Veo 3): An API endpoint or integration for a service capable of generating videos from scripts and voiceovers. (Note: This is represented by a generic HTTP Request node in the current JSON and needs to be configured).
Setup/Usage
- Import the Workflow: Download the provided JSON and import it into your n8n instance.
- Configure Credentials:
- Airtable: Set up your Airtable credentials (API Key) in n8n.
- Google Gemini: Configure your Google Gemini credentials (API Key) in n8n.
- Configure the n8n Form Trigger: Customize the form fields to collect necessary ad generation parameters (e.g., product name, target audience, ad length, style).
- Configure Google Gemini Nodes:
- Generate Video Script: Adjust the prompt in the "Google Gemini" node to guide the script generation based on your form inputs.
- Generate Voiceover: Ensure the input for the second "Google Gemini" node correctly uses the script generated in the previous step.
- Configure HTTP Request (Video Generation):
- Edit the "HTTP Request" node to connect to your chosen video generation service (e.g., Google Veo 3).
- Provide the necessary API endpoint, authentication, and payload (including the generated script and voiceover).
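One plausible shape for that payload, built in a Code node before the HTTP Request. Every field name here is illustrative; match them to the request schema of whichever video-generation service you actually connect:

```javascript
// Hypothetical payload builder for the placeholder video-generation request.
// Field names (prompt, audio, aspectRatio) are assumptions, not a real API schema.
function buildVideoRequestBody(script, voiceoverUrl, options = {}) {
  return {
    prompt: script,
    audio: voiceoverUrl,
    aspectRatio: options.aspectRatio ?? "16:9",
  };
}

const body = buildVideoRequestBody(
  "A can of soda explodes into a vibrant disco party",
  "https://example.com/voiceover.mp3"
);
```

Centralizing the payload in one function makes it easy to swap services later: only the field mapping changes, not the upstream script and voiceover steps.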
- Configure Airtable Node:
- Select your Airtable base and table.
- Map the output fields from the video generation and Gemini nodes to the appropriate columns in your Airtable table (e.g., Video Script, Voiceover URL, Video Ad URL).
- Activate the Workflow: Once configured, activate the workflow to start automating your AI video ad generation.
- Test the Workflow: Submit a test entry through the n8n form to ensure all steps execute correctly and data is stored as expected.