Repurpose YouTube videos into multiple content types with OpenRouter AI and Airtable
YouTube Content Repurposing Automation

## Who's it for

This workflow is for content creators, marketers, agencies, coaches, and businesses who want to maximize their YouTube content ROI by automatically generating multiple content assets from single videos. It's especially useful for professionals who want to:

- Repurpose YouTube videos into blogs, social posts, newsletters, and tutorials without manual effort
- Scale their content production across multiple channels and platforms
- Create consistent, high-quality content derivatives while saving time and resources
- Build automated content systems that generate multiple revenue streams
- Maintain an active presence across social media, email, and blog platforms simultaneously

## What problem is this workflow solving

Content creators face significant challenges when trying to maximize their video content:

- Time-intensive manual repurposing: Converting one YouTube video into multiple content formats traditionally requires hours of manual writing, editing, and formatting across different platforms.
- Inconsistent content quality: Manual repurposing often leads to varying quality levels and missed opportunities to optimize content for specific platforms.
- High costs for content services: Hiring ghostwriters or content agencies to repurpose videos can cost thousands of dollars monthly.
- Scaling bottlenecks: Manual processes prevent creators from efficiently scaling their content across multiple channels and formats.

This workflow solves these problems by automatically extracting YouTube video transcripts, using AI to generate multiple high-quality content formats (tutorials, blog posts, social media content, newsletters), and organizing everything in Airtable for easy management and distribution.

## How it works

1. Automated Video Processing: Starts with a manual trigger and retrieves YouTube URLs from your Airtable configuration, processing only videos marked as "selected" while filtering out those marked for deletion.
2. Intelligent Transcript Extraction: Uses the Scrape Creators API to extract video transcripts, automatically cleaning and formatting the text for optimal AI processing and content generation.
3. Multi-Format Content Generation: Leverages OpenRouter models, so you can easily test different AI models and choose the one that delivers the best results for your needs. Generated formats include:
   - Step-by-step tutorials with code snippets and technical details
   - YouTube scripts with hooks, titles, and conclusions
   - Blog posts optimized for lead generation
   - Structured summaries with key takeaways
   - LinkedIn posts with engagement triggers
   - Newsletter content for email marketing
   - Twitter/X posts for social media
4. Smart Content Filtering: Processes only the content types you've selected in Airtable, ensuring efficient resource usage and faster execution times.
5. Automated Content Organization: Matches and combines all generated content pieces by URL, then updates your Airtable with complete, ready-to-use content assets organized by type and source video (see the sketch below).
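To illustrate step 5, here is a minimal sketch of the kind of Code-node logic that regroups generated content pieces by source URL before the final Airtable update. The field names (url, contentType, content) are assumptions for illustration, not the template's actual schema.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items" mode): regroup the
// per-format outputs so each source video becomes one Airtable row update.
const grouped = {};

for (const item of $input.all()) {
  const { url, contentType, content } = item.json; // field names assumed
  if (!grouped[url]) grouped[url] = { url };
  // Each content type (tutorial, blog_post, linkedin, ...) becomes one field
  // on the row that the Airtable update node writes back.
  grouped[url][contentType] = content;
}

// Emit one item per source video, in n8n's { json } item shape.
return Object.values(grouped).map(json => ({ json }));
```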
## How to set up

### Required credentials

- OpenRouter API key
- Airtable Personal Access Token
- Scrape Creators API key (for YouTube transcript extraction and processing)

### Airtable base setup

Create an Airtable base with one main table, Videos, containing these fields:

- title (Single line text): Video title for reference
- url (URL): YouTube video URL to process
- Status (Single select): Options: "selected", "delete", "processed"
- output (Multiple select): Content types to generate: summary, tutorial, blog-post, linkedin, newsletter, tweeter, youtube
- summary (Long text): Generated video summary
- tutorial (Long text): Generated step-by-step tutorial
- keytakeaways (Long text): Extracted key insights
- blog_post (Long text): Generated blog post content
- linkedin (Long text): LinkedIn post content
- newsletter (Long text): Email newsletter content
- tweeter (Long text): Twitter/X post content
- youtube_titles (Long text): YouTube video title suggestions
- youtube_hook (Long text): Video opening hooks
- youtube_steps (Long text): Video step breakdowns
- youtube_conclusion (Long text): Video ending/CTAs

### API configuration

Scrape Creators setup:

1. Sign up for the Scrape Creators API
2. Obtain your API key from the dashboard
3. Configure the HTTP Request node with your credentials
4. Set the endpoint to: https://api.scrapecreators.com/v1/youtube/video/transcript (a request sketch appears at the end of this description)

OpenRouter setup:

1. Create an OpenRouter account and generate an API key

### Workflow configuration

1. Import the workflow JSON into your n8n instance
2. Update all credential references with your API keys
3. Configure the Airtable nodes with your base and table IDs
4. Test the workflow with a single video URL first

### Requirements

- n8n instance (self-hosted or cloud)
- Active API subscriptions for OpenRouter (or the LLM of your choice), Airtable, and Scrape Creators
- YouTube video URLs: must be publicly accessible videos with available transcripts
- Airtable account: the free tier is sufficient for most use cases

## How to customize the workflow

### Modify content generation prompts

Edit the LLM Chain nodes to customize content style and format:

- Tutorial node: Adjust technical depth and formatting preferences
- Blog post node: Modify tone, length, and CTA strategies
- LinkedIn node: Customize engagement hooks and professional tone
- Newsletter node: Tailor subject lines and email marketing approach

### Adjust AI model selection

Update the OpenRouter Chat Model node to use different models.

### Add new content formats

Create additional LLM Chain nodes for new content types:

- Instagram captions
- TikTok scripts
- Podcast descriptions
- Course outlines
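For reference, here is a hedged, plain-JavaScript sketch of the transcript request the HTTP Request node is configured to make. The x-api-key header name, the url query parameter, and the response shape are assumptions, not confirmed details of the Scrape Creators contract; verify them against the dashboard documentation.

```javascript
// Hypothetical sketch of the transcript call (any runtime with global fetch,
// e.g. Node 18+). Header name, query parameter, and response shape assumed.
const videoUrl = 'https://www.youtube.com/watch?v=VIDEO_ID';

const res = await fetch(
  'https://api.scrapecreators.com/v1/youtube/video/transcript?url=' +
    encodeURIComponent(videoUrl),
  { headers: { 'x-api-key': process.env.SCRAPE_CREATORS_API_KEY } }
);
if (!res.ok) throw new Error(`Transcript request failed: ${res.status}`);

const data = await res.json();
// Downstream LLM prompts expect one plain-text transcript, so join segments
// if the API returns an array of { text } chunks (shape assumed here).
const transcript = Array.isArray(data.transcript)
  ? data.transcript.map(seg => seg.text).join(' ')
  : String(data.transcript ?? '');
console.log(transcript.slice(0, 200));
```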
Customer pain analysis & AI briefing with Anthropic, Reddit, X, and SerpAPI
The competitive edge, delivered. This Customer Intelligence Engine simultaneously analyzes the web, Reddit, and X/Twitter to generate a professional, actionable executive briefing.

---

🎯 Problem Statement

Traditional market research for Customer Intelligence (CI) is manual, slow, and often relies on surface-level social media scraping or expensive external reports. Service companies, like HVAC providers, struggle to efficiently synthesize vast volumes of online feedback (Reddit discussions, real-time tweets, web articles) to accurately diagnose systemic service gaps (e.g., scheduling friction, poor automated systems). This inefficiency leads to delayed strategic responses and missed opportunities to invest in high-impact solutions like AI voice agents.

---

✨ Solution

This workflow deploys a sophisticated Multisource Intelligence Pipeline that runs on a scheduled or ad-hoc basis. It uses parallel processing to ingest data from three distinct source types (SERP API, Reddit, and X/Twitter), employs a zero-cost Hybrid Categorization method to semantically identify operational bottlenecks, and uses the Anthropic LLM to synthesize the findings into a clear, executive-ready strategic brief. The data is logged for historical analysis while the brief is dispatched for immediate action.

---

⚙️ How It Works (Multi-Step Execution)

Ingestion and Parallel Processing (The Data Fabric)

1. Trigger: The workflow is initiated either on an ad-hoc basis via an n8n Form Trigger or on a schedule (Time Trigger).
2. Parallel Ingestion: The workflow immediately splits into three parallel branches to fetch data simultaneously:
   - SERP API: Captures authoritative content and industry commentary (*Strategic Context*).
   - Reddit (Looping Structure): Fetches posts from multiple subreddits via an Aggregate Node workaround to get authentic user experiences (*Qualitative Signal*).
   - X/Twitter (HTTP Request): Bypasses standard rate limits to capture real-time social complaints (*Sentiment Signal*).

Analysis and Fusion (The Intelligence Layer)

3. Cleanup and Labeling (Function Nodes): Each branch uses dedicated Function Nodes to filter noise (e.g., low-score posts) and normalize the data by adding a source tag (e.g., 'Reddit').
4. Merge: A Merge Node (Append Mode) fuses all three parallel streams into a single, unified dataset.
5. Hybrid Categorization (Function Node): A single Function Node applies the Hybrid Categorization Logic. This cost-free step semantically assigns a pain_point category (e.g., 'Call Hold/Availability') and a sentiment_score to every item, transforming raw text into labeled metrics (see the sketch after this step list).

Dispatch and Reporting (The Executive Output)

6. Aggregation and Split (Function Node): The final Function Node calculates the total counts, deduplicates the final results, and generates the comprehensive summaryString.
7. Data Logging: The aggregated counts and metrics are appended to Google Sheets for historical logging.
8. LLM Input Retrieval (Function Node): A final Function Node retrieves the summary data using the $items() helper (the serial-route workaround).
9. AI Briefing: The *Message a model (Anthropic)* node receives the summaryString and uses a strict HTML system prompt to synthesize the strategic brief, identifying the top pain points and suggesting AI features.
10. Delivery: The Gmail node sends the final, professional HTML brief to the executive team.
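For concreteness, here is a minimal sketch of what the Hybrid Categorization node can look like. It is written for n8n's modern Code node ($input.all()); the legacy Function node uses the items variable instead. The category names, keyword lists, and sentiment lexicon are illustrative assumptions, not the template's exact taxonomy.

```javascript
// Minimal sketch of the zero-cost Hybrid Categorization step.
// Categories, keywords, and the negative-word lexicon are assumptions.
const CATEGORIES = {
  'Call Hold/Availability': ['hold', 'wait', 'no answer', 'voicemail', 'hours'],
  'Scheduling Friction':    ['schedule', 'appointment', 'reschedule', 'booking'],
  'Automated System Pain':  ['ivr', 'robot', 'automated', 'press 1', 'menu'],
};
const NEGATIVE = ['terrible', 'worst', 'frustrated', 'never', 'waste', 'rude'];

return $input.all().map(item => {
  const text = (item.json.text || '').toLowerCase(); // field name assumed

  // Assign the first category whose keyword list matches; default otherwise.
  let pain_point = 'Other';
  for (const [category, keywords] of Object.entries(CATEGORIES)) {
    if (keywords.some(k => text.includes(k))) { pain_point = category; break; }
  }

  // Crude lexicon score: more negative hits means a lower score.
  const hits = NEGATIVE.filter(w => text.includes(w)).length;
  const sentiment_score = Math.max(-1, -0.25 * hits); // 0 neutral, -1 very negative

  return { json: { ...item.json, pain_point, sentiment_score } };
});
```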
---

🛠️ Setup Steps

Credentials

- Anthropic: Configure credentials for the Language Model (Claude) used in the Message a model node.
- SERP API, Reddit, and X/Twitter: Configure API keys/credentials for the data ingestion nodes.
- Google Services: Set up OAuth2 credentials for Google Sheets (for logging data) and Gmail (for email dispatch).

Configuration

- Form Configuration: If using the Form Trigger, ensure the Target Keywords and Target Subreddits are mapped correctly to the ingestion nodes.
- Data Integrity: Due to the serial route, ensure the Function (Get LLM Summary) node correctly retrieves the LLM_SUMMARY_HOLDER field from the preceding node's output memory (a minimal sketch follows the Benefits list below).

---

✅ Benefits

- Proactive CI & Strategy: Shifts market research from manual, reactive browsing to a proactive, scheduled data diagnostic.
- Cost Efficiency: Utilizes a zero-cost Hybrid Categorization method (Function Node) for intent analysis, avoiding expensive per-item LLM token costs.
- Actionable Output: Delivers a fully synthesized, HTML-formatted executive brief, ready for immediate presentation and strategic sales positioning.
- High Reliability: Employs parallel ingestion, API workarounds, and serial routing to ensure the complex workflow runs consistently and without failure.
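As referenced in the Data Integrity note above, here is a minimal sketch of the Get LLM Summary retrieval step. It assumes the upstream node is named 'Aggregate Summary' and stores the summary in LLM_SUMMARY_HOLDER; both names are assumptions, so match them to your actual workflow.

```javascript
// Hypothetical "Get LLM Summary" Function node using the $items() helper
// (the serial-route workaround described above). Node and field names assumed.
const upstream = $items('Aggregate Summary');
const summaryString = upstream[0].json.LLM_SUMMARY_HOLDER;

if (!summaryString) {
  // Fail loudly so a broken serial route is caught before the Anthropic call.
  throw new Error('LLM_SUMMARY_HOLDER is empty; check the aggregation node.');
}

// Hand exactly one item to the Message a model (Anthropic) node.
return [{ json: { summaryString } }];
```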