
Turn YouTube videos into summaries, transcripts, and visual insights

Colleen Brady
2/3/2026

Who is this for?

This workflow is built for anyone who works with YouTube content, whether you're:

  • A learner looking to understand a video’s key points
  • A content creator repurposing video material
  • A YouTube manager looking to update titles and descriptions
  • A social media strategist searching for the most shareable clips

Don't just ask questions about what's said. Find out what's going on in a video too.

Video Overview: https://www.youtube.com/watch?v=Ovg_KfKxnC8

What problem does this solve?

YouTube videos hold valuable insights, but watching and processing them manually takes time. This workflow automates:

  • Quick content extraction: Summarize key ideas without watching full videos
  • Visual analysis: Understand what’s happening beyond spoken words
  • Clip discovery: Identify the best moments for social sharing

How the workflow works

This n8n-powered automation:

  1. Uses Google’s Gemini 1.5 Flash AI for intelligent video analysis
  2. Provides multiple content analysis templates tailored to different needs

What makes this workflow powerful?

The easiest place to start is by requesting a summary or transcript. From there, you can refine the prompts to match your specific use case and the type of video content you’re working with.

But what's even more amazing? You can ask questions about what’s happening in the video — and get detailed insights about the people, objects, and scenes. It's jaw-dropping.

This workflow is versatile — the actions adapt based on the values set. That means you can use a single workflow to:

  • Extract transcripts
  • Generate an extended YouTube description
  • Write a summary blog post

You can also modify the trigger based on how you want to run the workflow — use a webhook, connect it to an event in Airtable, or leave it as-is for on-demand use. The output can then be sent anywhere: Notion, Airtable, CMS platforms, or even just stored for reference.
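For instance, if you swap in a Webhook trigger, the workflow can be started with a plain HTTP call. A hedged sketch; the webhook path and payload fields below are illustrative, not part of the template:

```javascript
// Hypothetical call to a Webhook trigger added to this workflow.
// Both the URL path and the payload fields are placeholders.
const res = await fetch("https://your-n8n-host/webhook/youtube-analysis", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    videoUrl: "https://www.youtube.com/watch?v=Ovg_KfKxnC8",
    analysisType: "summary", // e.g. "transcript" or "clips"
  }),
});
console.log(await res.json());
```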

How to set it up

  1. Connect your Google API key
  2. Paste a YouTube video URL
  3. Select an analysis method
  4. Run the workflow and get structured results

Analysis Templates

  • Basic & Timestamped Transcripts: Extract spoken content
  • Summaries: Get concise takeaways
  • Visual Scene Analysis: Detect objects, settings, and people
  • Clip Finder: Locate shareable moments
  • Actionable Insights: Extract practical information
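To make these concrete, here is one way the analysis prompts might be organized. The wording below is illustrative only; the template's actual prompts may differ:

```javascript
// Illustrative prompt map; edit the strings to fit your own use case.
const analysisTemplates = {
  transcript: "Transcribe the spoken content of this video verbatim.",
  timestampedTranscript:
    "Transcribe the video, prefixing each segment with its MM:SS timestamp.",
  summary: "Summarize the key ideas of this video as concise bullet points.",
  sceneAnalysis:
    "Describe what is visible: people, objects, settings, and on-screen text.",
  clipFinder:
    "List the most shareable moments with start/end timestamps and a one-line hook.",
  insights: "Extract practical, actionable takeaways from this video.",
};
```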

Customization Options

  • Modify templates to fit your needs
  • Connect with external platforms
  • Adjust formatting preferences

Advanced Configuration

This workflow is designed for use with gemini-1.5-flash. You can later update the flow to work with a different model, or modify the HTTP Request node to change which API endpoint is used.
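As a reference for that kind of change, here is a minimal sketch of the sort of call the HTTP Request node makes against Google's generateContent endpoint. The payload shape (passing the YouTube URL as a file_data part) and the GOOGLE_API_KEY placeholder are assumptions; check your copy of the workflow for the exact body it sends:

```javascript
// Sketch of a generateContent call for video analysis. Swap `model` to
// target a different Gemini model; GOOGLE_API_KEY is a placeholder.
const model = "gemini-1.5-flash";
const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${process.env.GOOGLE_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: "Summarize the key points of this video." },
          { file_data: { file_uri: "https://www.youtube.com/watch?v=Ovg_KfKxnC8" } },
        ],
      }],
    }),
  }
);
const data = await res.json();
console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);
```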

It has also been designed so you can use this flow on its own or add it to a new or existing workflow.

This workflow helps you get the most out of YouTube content — quickly and efficiently.

n8n Workflow: Basic Workflow Structure Example

This n8n workflow provides a foundational example of how to structure a workflow using common core nodes like HTTP Request, If, Set, Switch, and Code. It's designed to illustrate basic data manipulation, conditional logic, and external API interaction within an n8n environment.

Description

This workflow serves as a template for building more complex automations. It demonstrates how to initiate a workflow manually, make an HTTP request, apply conditional logic to route data, transform data fields, and execute custom JavaScript code.

What it does

  1. Starts Manually: The workflow is triggered manually, allowing for on-demand execution.
  2. Makes an HTTP Request: It performs an HTTP request, which can be configured to interact with any external API or web service.
  3. Applies Conditional Logic (If): Data from the HTTP request is then passed through an "If" node, allowing the workflow to branch based on specific conditions (e.g., success/failure of the HTTP request, specific data values).
  4. Transforms Data (Edit Fields - Set): On one branch (likely the "true" or success path), an "Edit Fields (Set)" node is used to manipulate or add new fields to the incoming data.
  5. Applies Multi-Conditional Logic (Switch): Another "Switch" node is used to introduce more complex branching based on multiple possible values of a specific data field.
  6. Executes Custom Code: A "Code" node is included to demonstrate how to run custom JavaScript logic, which can be used for advanced data processing, formatting, or integration tasks (a minimal sketch follows this list).
  7. Includes Sticky Note: A "Sticky Note" is present for documentation purposes, allowing users to add comments or explanations directly within the workflow canvas.
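Step 6 is where custom logic lives. A minimal sketch of what such a Code node might contain, using n8n's $input helper; the field names are illustrative, not part of the template:

```javascript
// n8n Code node (mode: "Run Once for All Items").
// $input.all() returns the items from the previous node, each with data
// under .json. The statusCode/processedAt fields here are illustrative.
const results = [];
for (const item of $input.all()) {
  results.push({
    json: {
      ...item.json,
      processedAt: new Date().toISOString(),
      status: item.json.statusCode === 200 ? "ok" : "failed",
    },
  });
}
return results;
```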

Prerequisites/Requirements

  • n8n Instance: You need a running n8n instance (cloud or self-hosted) to import and execute this workflow.
  • No specific external services are strictly required for this basic structure to run, but the HTTP Request node will likely need to be configured to interact with a specific API for a real-world use case.

Setup/Usage

  1. Import the Workflow:
    • Copy the provided JSON code.
    • In your n8n instance, go to "Workflows" and click "New".
    • Click on the "Import from JSON" button (usually a cloud icon with an arrow pointing down) and paste the JSON code.
    • Click "Import".
  2. Configure Nodes (Optional but Recommended):
    • HTTP Request (Node 19): Configure the URL, method, headers, and body according to the API you wish to call.
    • If (Node 20): Define the conditions for branching based on the output of the HTTP Request node.
    • Edit Fields (Set) (Node 38): Specify which fields to add, modify, or remove.
    • Switch (Node 112): Define the expression and cases for multi-conditional routing.
    • Code (Node 834): Write your custom JavaScript code to process data as needed.
  3. Execute the Workflow:
    • Click the "Execute Workflow" button (play icon) in the top right corner of the n8n editor. Since it starts with a "Manual Trigger," this will initiate the flow.

This workflow serves as an excellent starting point for understanding fundamental n8n concepts and can be easily expanded upon for specific automation needs.

Related Templates

Automate Reddit brand monitoring & responses with GPT-4o-mini, Sheets & Slack

How it Works

This workflow automates intelligent Reddit marketing by monitoring brand mentions, analyzing sentiment with AI, and engaging authentically with communities. Every 24 hours, the system searches Reddit for posts containing your configured brand keywords across all subreddits, finding up to 50 of the newest mentions to analyze.

Each discovered post is sent to OpenAI's GPT-4o-mini model for comprehensive analysis. The AI evaluates sentiment (positive/neutral/negative), assigns an engagement score (0-100), determines relevance to your brand, and generates contextual, helpful responses that add genuine value to the conversation. It also classifies the response type (educational/supportive/promotional) and provides reasoning for whether engagement is appropriate.

The workflow filters posts using a multi-criteria system: only posts that are relevant to your brand, score above 60 in engagement quality, and warrant a response type other than "pass" proceed to engagement. This prevents spam and ensures every interaction is meaningful.

Selected posts are processed one at a time through a loop to respect Reddit's rate limits. For each worthy post, the AI-generated comment is posted, and the complete interaction data is logged to Google Sheets, including timestamp, post details, sentiment, engagement scores, and success status. This creates a permanent audit trail and analytics database.

At the end of each run, the workflow aggregates all data into a daily summary report with total posts analyzed, comments posted, engagement rate, sentiment breakdown, and the top 5 engagement opportunities ranked by score. This report is automatically sent to Slack with formatted metrics, giving your team instant visibility into your Reddit marketing performance.

Who is this for?

  • Brand managers and marketing teams needing automated social listening and engagement on Reddit
  • Community managers responsible for authentic brand presence across multiple subreddits
  • Startup founders and growth marketers who want to scale Reddit marketing without hiring a team
  • PR and reputation teams monitoring brand sentiment and responding to discussions in real time
  • Product marketers seeking organic engagement opportunities in product-related communities
  • Any business that wants to build an authentic Reddit presence while avoiding spammy marketing tactics

Setup Steps

Setup time: approx. 30-40 minutes (credential configuration, keyword setup, Google Sheets creation, Slack integration).

Requirements:

  • Reddit account with OAuth2 application credentials (create at reddit.com/prefs/apps)
  • OpenAI API key with GPT-4o-mini access
  • Google account with a new Google Sheet for tracking interactions
  • Slack workspace with posting permissions to a marketing/monitoring channel
  • Brand keywords and subreddit strategy prepared

  1. Create a Reddit OAuth application: Visit reddit.com/prefs/apps, create a "script" type app, and obtain your client ID and secret
  2. Configure Reddit credentials in n8n: Add Reddit OAuth2 credentials with your app credentials and authorize access
  3. Set up the OpenAI API: Obtain an API key from platform.openai.com and configure it in n8n OpenAI credentials
  4. Create the Google Sheet: Set up a new sheet with columns: timestamp, postId, postTitle, subreddit, postUrl, sentiment, engagementScore, responseType, commentPosted, reasoning
  5. Configure these nodes:
    • Brand Keywords Config: Edit the JavaScript code to include your brand name, product names, and relevant industry keywords (see the sketch after this description)
    • Search Brand Mentions: Adjust the limit (default 50) and sort preference based on your needs
    • AI Post Analysis: Customize the prompt to match your brand voice and engagement guidelines
    • Filter Engagement-Worthy: Adjust the engagementScore threshold (default 60) based on your quality standards
    • Loop Through Posts: Configure max iterations and batch size for rate-limit compliance
    • Log to Google Sheets: Replace YOURSHEETID with your actual Google Sheets document ID
    • Send Slack Report: Replace YOURCHANNELID with your Slack channel ID
  6. Test the workflow: Run it manually first to verify all connections work, and adjust the AI prompts
  7. Activate daily runs: Once tested, activate the Schedule Trigger to run automatically every 24 hours

Node Descriptions

  • Daily Marketing Check: Schedule trigger that runs the workflow automatically every 24 hours
  • Brand Keywords Config: JavaScript Code node defining the brand keywords to monitor
  • Search Brand Mentions: Reddit node that searches all subreddits for brand keyword mentions
  • AI Post Analysis: OpenAI analyzes sentiment and relevance and generates contextual, helpful comment responses
  • Filter Engagement-Worthy: Conditional node that passes only high-quality, relevant posts worth engaging
  • Loop Through Posts: Split In Batches node that processes each post individually, respecting limits
  • Post Helpful Comment: Reddit node that posts the AI-generated comment to worthy discussions
  • Log to Google Sheets: Appends all interaction data to the spreadsheet for permanent tracking
  • Generate Daily Summary: JavaScript that aggregates metrics and sentiment breakdown into a comprehensive daily report
  • Send Slack Report: Posts the formatted daily summary with metrics to the team Slack channel
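The "Brand Keywords Config" node mentioned above is a plain JavaScript Code node. A minimal sketch of what it might contain; the structure is an assumption based on the description, and the values are placeholders:

```javascript
// n8n Code node: define the brand keywords the Reddit search will monitor.
// All values below are placeholders; replace them with your own terms.
const keywords = ["YourBrand", "YourBrand app", "yourbrand.com"];

// Emit one item per keyword so a downstream Reddit node can search each term.
return keywords.map((keyword) => ({ json: { keyword } }));
```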

By Daniel Shashko

Screen DPDP consent manager registrations with GPT-4o, Google Sheets and Gmail

📘 Description

This workflow automates the complete DPDP-aligned Consent Manager Registration screening pipeline, from intake to eligibility evaluation and final compliance routing. Every incoming registration request is normalized, validated, logged, evaluated by an AI compliance engine (GPT-4o), and then routed into either approval or rejection flows. It intelligently handles missing documentation (treated as a minor issue), evaluates financial/technical/operational capacity, generates structured eligibility JSON, updates registration records in Google Sheets, and sends outcome-specific emails to applicants and compliance teams. The workflow creates a full audit trail while reducing manual screening workload and ensuring consistent eligibility decisions.

⚙️ What This Workflow Does (Step-by-Step)

  • ▶️ Receive Consent Registration Event (Webhook): Collects incoming Consent Manager registration applications and triggers the processing pipeline.
  • 🧹 Extract & Normalize Registration Payload (Code Node): Cleans the body payload and extracts key fields: action, organizationName, applicationType, contactEmail, netWorth, technicalCapacity, operationalCapacity, documentAttached, submittedAt.
  • 🔍 Validate Registration Payload Structure (IF Node): Checks the presence of mandatory fields. Valid → continue to eligibility evaluation. Invalid → log in the audit sheet.
  • 📄 Log Invalid Registration Requests to Sheet (Google Sheets): Stores malformed or incomplete submissions for audit, follow-up, and retry handling.
  • 📝 Write Initial Registration Entry to Sheet (Google Sheets): Creates the initial intake row in the master registration sheet before applying eligibility logic.
  • 🧠 Configure GPT-4o — Eligibility Evaluation Model (Azure OpenAI): Prepares the AI model used to determine whether the applicant meets DPDP's eligibility criteria.
  • 🤖 AI Eligibility Evaluator (DPDP Compliance): Analyzes applicant data and evaluates eligibility based on financial capacity, technical capability, operational readiness, and documentation status. Missing documents are NOT a rejection condition. Returns strictly formatted JSON with: eligible, riskLevel, decisionReason, missingItems, recommendedNextSteps.
  • 🧼 Parse AI Eligibility JSON Output (Code Node): Converts AI output into valid JSON by removing markdown artifacts and ensuring safe parsing (see the sketch after this description).
  • 🔎 Validate Eligibility Status (IF Node): Routes the outcome. Eligible → approval workflow. Ineligible → rejection email.
  • 📧 Send Rejection Email to Applicant (Gmail): Sends a structured rejection email listing issues and re-submission instructions.
  • 🔗 Merge Registration + Eligibility Summary (Code Node): Combines raw registration data with AI eligibility results into one unified JSON package.
  • 📬 Send Approval Email to Compliance Team (Gmail): Notifies compliance officers that an applicant passed eligibility and is ready for verification.
  • 🧩 Prepare Status Update Fields (Set Node): Constructs the final status value (e.g., "passed") for updating the database.
  • 📘 Update Registration Status in Sheet (Google Sheets): Updates the applicant's record using contactEmail as the key, marking the final eligibility status.

🧩 Prerequisites

  • Azure OpenAI (GPT-4o) credentials
  • Gmail OAuth connection
  • Google Sheets OAuth connection
  • Valid webhook endpoint for intake

💡 Key Benefits

  ✔ Fully automates DPDP Consent Manager registration screening
  ✔ AI-driven eligibility evaluation with standardized JSON output
  ✔ Smart handling of missing documents without unnecessary rejections
  ✔ Automatic routing to approval or rejection flows
  ✔ Complete audit logs for all submissions
  ✔ Reduces manual review time and improves consistency

👥 Perfect For

  • DPDP compliance teams
  • Regulatory operations units
  • SaaS platforms handling consent manager onboarding
  • Organizations managing structured eligibility workflows
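The "Parse AI Eligibility JSON Output" step above is a common pattern worth showing: LLMs often wrap JSON in markdown fences that break JSON.parse. A hedged sketch of such a Code node; the input field path and fallback shape are assumptions:

```javascript
// n8n Code node: strip markdown code fences from the model output, then
// parse it safely. Adjust the input path to match the Azure OpenAI node.
const raw = $input.first().json.message?.content ?? "";
const fence = new RegExp("`{3}(?:json)?", "g"); // matches opening/closing fences
const cleaned = raw.replace(fence, "").trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  // Safe default so the downstream IF node still routes predictably.
  parsed = { eligible: false, riskLevel: "high", decisionReason: "Unparseable AI output" };
}
return [{ json: parsed }];
```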

By Rahul Joshi

Voice AI chatbot with OpenAI, RAG (Qdrant) & Guardrails for WordPress

This workflow implements a complete Voice AI Chatbot system for WordPress that integrates speech recognition, guardrails for safety, retrieval-augmented generation (RAG), Qdrant vector search, and audio responses. It is designed to be connected to a WordPress Voicebot AI plugin through a webhook endpoint.

Key Advantages

✅ Complete Voice AI Pipeline
The workflow handles audio input, STT, intelligent processing, and TTS output, all within a single automated process.

✅ Safe and Policy-Compliant
Thanks to the Guardrails module, the system automatically detects harmful or disallowed requests, blocks them, and responds safely. This protects both the user and the business.

✅ Contextual and Memory-Based Conversations
The Window Buffer Memory tied to unique session IDs enables continuous conversation flow, natural dialogue, and better understanding of context.

✅ Company-Specific Knowledge via RAG
By integrating Qdrant as a vector store, the system can retrieve business documentation, give accurate and up-to-date answers, and support personalized content. This makes the chatbot far more powerful than a standard LLM.

✅ Modular and Extensible Architecture
Because everything is modular inside n8n, you can swap OpenAI with other models, add new tools or knowledge sources, and change prompts or capabilities without redesigning the entire workflow.

✅ Easy WordPress Integration
The workflow connects directly to a WordPress Voicebot plugin, meaning no custom backend development, simple deployment, and fast integration for websites.

✅ Automatic Indexing of Documents
The second workflow section fetches Google Drive files, converts them into embeddings, and indexes them into Qdrant. This lets you maintain your knowledge base with almost no manual work.

How It Works

This workflow creates a WordPress voice-enabled AI chatbot that processes audio inputs and provides contextual responses using RAG (Retrieval-Augmented Generation) from a Qdrant vector database. The system operates as follows:

Audio Processing Pipeline:
  • Receives audio input via webhook and converts speech to text using OpenAI's STT (Speech-to-Text)
  • Applies guardrails to detect inappropriate content or jailbreak attempts using a separate GPT-4.1-mini model
  • Routes safe queries to the AI agent and blocks unsafe content with a default response

AI Agent with Contextual Memory:
  • Uses the OpenAI Chat Model with window buffer memory to maintain conversation context
  • Equips the agent with two tools: a Calculator for computations and a RAG tool for business knowledge retrieval
  • The RAG system queries a Qdrant vector store containing company documents using OpenAI embeddings

Response Generation:
  • Generates appropriate text responses based on query type and available knowledge
  • Converts approved responses to audio using OpenAI's TTS (Text-to-Speech) with the "onyx" voice
  • Returns binary audio responses to the webhook caller

Set Up Steps

Vector Database Preparation:
  • Create a Qdrant collection via HTTP request with the specified vector configuration (see the sketch after this description)
  • Clear existing collection data before adding new documents
  • Set up Google Drive integration to source documents from specific folders

Document Processing Pipeline:
  • Search and retrieve files from the Google Drive folder "Test Negozio"
  • Process documents through recursive text splitting (500 chunk size, 50 overlap)
  • Generate embeddings using OpenAI and store them in the Qdrant vector store
  • Implement batch processing with 5-second delays between operations

System Configuration:
  • Configure the webhook endpoint for receiving audio inputs
  • Set up multiple OpenAI accounts for different functions (STT, TTS, guardrails, main agent)
  • Establish Qdrant API connections for vector storage and retrieval
  • Implement session-based memory management using session IDs from webhook headers

WordPress Integration:
  • Install the provided Voicebot AI Agent WordPress plugin
  • Configure the plugin with the webhook URL to connect to this n8n workflow
  • The system is now ready to receive audio queries and respond with voice answers

The workflow handles both real-time voice queries and background document processing, creating a comprehensive voice assistant solution with business-specific knowledge retrieval capabilities.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
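The Qdrant collection creation mentioned in the setup is a single REST call. A sketch using Qdrant's collections API; the host, collection name, and the 1536-dimension size (OpenAI's text-embedding-ada-002) are assumptions:

```javascript
// PUT /collections/{name} creates a Qdrant collection with a vector config.
// Host, collection name, and vector size are assumptions; match your setup.
const res = await fetch("http://localhost:6333/collections/voicebot_docs", {
  method: "PUT",
  headers: {
    "Content-Type": "application/json",
    "api-key": process.env.QDRANT_API_KEY ?? "", // not needed on unsecured local instances
  },
  body: JSON.stringify({
    vectors: { size: 1536, distance: "Cosine" },
  }),
});
console.log(await res.json());
```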

By Davide