Growth AI


n8n automation expert eliminating repetitive tasks for businesses. I specialize in native API integrations (HubSpot, Pipedrive, Notion, Gmail, Slack) with custom workflows tailored to your tech stack, plus GDPR-compliant European hosting with AI agents. From lead qualification to content generation, I architect battle-tested automations that run 24/7, making your manual routines disappear painlessly.

Total Views: 20,370
Templates: 15

Templates by Growth AI

Monitor social media trends across Reddit, Instagram & TikTok with Apify

Who's it for
Social media managers, content creators, brand managers, and marketing teams who need to track keyword performance and trending content across TikTok, Instagram, and Reddit for competitive analysis and content inspiration.

What it does
This workflow automatically monitors trending content across three major social media platforms using specified keywords. It scrapes posts from TikTok, Instagram, and Reddit, calculates engagement scores using platform-specific metrics, ranks content by performance, and generates a comprehensive HTML email report with the top-performing posts across all platforms.

How it works
The workflow follows a sequential multi-platform scraping process:
- Reddit scraping: searches for keyword-based posts and comments with engagement metrics
- Instagram monitoring: analyzes hashtag-based content with likes and comments data
- TikTok analysis: tracks hashtag performance including views, likes, shares, and comments
- Score calculation: applies platform-specific scoring algorithms based on engagement metrics
- Unified ranking: combines and ranks all content across platforms by engagement score
- Report generation: creates a detailed HTML email report with top performers and analytics

Requirements
- Apify account with API access
- Gmail account for report delivery
- Platform-specific scrapers: Reddit Scraper Lite, Instagram Scraper, TikTok Scraper

How to set up
Step 1: Configure Apify credentials
- Set up Apify HTTP header authentication in n8n
- Ensure access to the required scrapers: Reddit (trudax~reddit-scraper-lite), Instagram (apify~instagram-scraper), TikTok (clockworks~tiktok-scraper)

Step 2: Customize search parameters
Reddit configuration:
- Search terms: modify the "searches" array with your keywords
- Content type: posts and comments (searchComments can be enabled)
- Sort method: "top" (alternatives: hot, new, relevance)
- Time period: "month" (alternatives: hour, day, week, year, all)
- Result limits: maxItems: 50, maxPostCount: 25
Instagram configuration:
- Hashtag URLs: update directUrls with target hashtags
- Results type: "posts" (alternatives: stories, reels)
- Time filter: "onlyPostsNewerThan": "7 days"
- Result limit: resultsLimit: 15
TikTok configuration:
- Hashtags: update the hashtags array with target keywords
- Results per page: resultsPerPage: 20
- Time filter: "oldestPostDateUnified": "7 days"

Step 3: Set up email reporting
- Configure Gmail OAuth2 credentials
- Update the recipient email address in the "Send a message" node
- Customize the email subject and styling as needed

Step 4: Adjust scoring algorithms
Current scoring formulas:
- Reddit: (upvotes × 1) + (comments × 2)
- Instagram: (likes × 1) + (comments × 2)
- TikTok: (likes × 1) + (comments × 2) + (shares × 3) + (views ÷ 1000)
Modify the code nodes to adjust scoring based on your priorities.

How to customize the workflow
Keyword and hashtag targeting:
- Multiple keywords: add arrays of search terms for broader monitoring
- Brand-specific terms: include brand names, product names, and competitor terms
- Seasonal tracking: adjust keywords based on campaigns or seasonal trends
- Negative filtering: exclude irrelevant content with filtering logic
Platform-specific customization:
- Reddit: target specific subreddits, enable comment scraping for deeper insights, track specific users' activity and influence
- Instagram: track story mentions and hashtag usage, monitor specific account performance, add geo-targeted hashtag monitoring
- TikTok: identify viral sounds and effects, track trending creators in your niche, follow hashtag challenge performance
Scoring and ranking:
- Weighted metrics: adjust multipliers based on platform importance
- Recency factors: give bonus points to newer content
- Quality filters: exclude low-engagement or spam content
- Sentiment analysis: integrate sentiment scoring for brand monitoring
Reporting enhancements:
- Multiple recipients: send reports to different team members
- Scheduled execution: add scheduling triggers for automated monitoring
- Data export: save results to spreadsheets or databases
- Alert thresholds: set up notifications for high-performing content

Engagement scoring methodology
Reddit scoring emphasizes community engagement through upvotes and discussion: comments are weighted higher (×2) because they indicate deeper engagement, and low-quality or spam posts are filtered out. Instagram scoring balances visual appeal (likes) with engagement depth (comments), focuses on recent content to capture trending moments, and excludes carousel sub-items to avoid duplicate counting. TikTok scoring is a multi-factor algorithm considering all engagement types: views are normalized (÷1000) to balance with the other metrics, and shares are heavily weighted (×3) because they indicate viral potential.

Level classification
Content is automatically categorized into performance tiers:
- High: score ≥ 10,000 (viral or highly engaging content)
- Medium: score ≥ 1,000 (good engagement, worth monitoring)
- Low: score < 1,000 (baseline engagement)

Results interpretation
The email report includes:
- Cross-platform leaderboard: top 15 posts ranked by engagement score
- Platform breakdown: performance summary by social network
- Engagement metrics: detailed scoring and classification
- Direct links: clickable access to original content
- Author tracking: creator identification for influencer outreach
Actionable insights: identify high-performing content formats and topics, monitor competitor content performance, spot emerging topics before they peak, and find creators driving engagement in your niche.

Use cases
- Brand monitoring and competitive analysis: track how your brand performs across platforms, monitor competitor content and engagement rates, detect negative sentiment or issues early, and understand your brand's social media positioning
- Content strategy optimization: identify which content formats perform best, research effective hashtags for your niche, analyze when high-engagement content is published, and spot emerging trends for proactive content creation
- Influencer and partnership identification: discover influential voices in your industry, assess potential collaborators' engagement rates, track sponsored content and brand partnerships, and identify active community members and advocates

Workflow limitations
- API rate limiting: subject to Apify scraper limitations and quotas
- Platform restrictions: some content may be private or restricted
- Real-time delays: 30-second waits between platform scrapes prevent rate limiting
- Manual execution: currently triggered manually (easily schedulable)
- Single-keyword focus: the current setup is optimized for one keyword at a time
- Platform availability: dependent on third-party scrapers and their maintenance
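The scoring formulas and performance tiers above can be sketched in a few lines. This is an illustrative reconstruction of what the workflow's code nodes compute, not the exact node code; field names are assumptions.

```python
# Platform-specific engagement scores, as described in the setup instructions.
def reddit_score(upvotes, comments):
    return upvotes * 1 + comments * 2

def instagram_score(likes, comments):
    return likes * 1 + comments * 2

def tiktok_score(likes, comments, shares, views):
    # Shares weighted x3 (viral potential); views normalized by 1000.
    return likes * 1 + comments * 2 + shares * 3 + views / 1000

def classify(score):
    # Performance tiers used in the email report.
    if score >= 10_000:
        return "High"
    if score >= 1_000:
        return "Medium"
    return "Low"

# Unified ranking: combine posts from all platforms and sort by score.
posts = [
    {"platform": "reddit", "score": reddit_score(upvotes=840, comments=120)},
    {"platform": "tiktok",
     "score": tiktok_score(likes=9000, comments=400, shares=150, views=250_000)},
]
ranking = sorted(posts, key=lambda p: p["score"], reverse=True)
```

Adjusting the multipliers in these functions is exactly what "Step 4: Adjust scoring algorithms" refers to.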

By Growth AI
5,455 views

Automated content strategy with Google Trends, News, Firecrawl & Claude AI

Automated trend monitoring for content strategy

Who's it for
Content creators, marketers, and social media managers who want to stay ahead of emerging trends and generate relevant content ideas based on data-driven insights.

What it does
This workflow automatically identifies trending topics related to your industry, collects recent news articles about these trends, and generates content suggestions. It turns raw trend data into actionable editorial opportunities by analyzing search volume growth and current news coverage.

How it works
The workflow follows a three-step automation process:
- Trend analysis: examines searches related to your topics and identifies those with the strongest recent growth
- Article collection: searches Google News for current articles about the emerging trends and scrapes their full content
- Content generation: creates personalized content suggestions based on the collected articles and trend data
The system automatically excludes geo-localized searches to provide a global perspective on trends, though this can be customized.

Requirements
- SerpAPI account (for trend and news data)
- Firecrawl API key (for scraping article content from Google News results)
- Google Sheets access
- AI model API key (for content analysis and recommendations; you can use any LLM provider you prefer)

How to set up
Step 1: Prepare your tracking sheet — duplicate the Google Sheets template, rename your copy, and ensure it is accessible.
Step 2: Configure API credentials — before running the workflow, set up the following credentials in n8n: SerpAPI (trend analysis and Google News search), Firecrawl API (article scraping), AI model API (Anthropic Claude, OpenAI GPT, or any other LLM provider), and Google Sheets OAuth2 (accessing and updating your tracking spreadsheet).
Step 3: Configure your monitoring topics — in your Google Sheet's "Query" tab, the Query column holds the main topics/keywords to monitor for trending queries (e.g., "digital marketing", "artificial intelligence", "sustainable fashion"), and the "Query to avoid" column optionally lists specific queries to exclude from trend analysis (brand names, irrelevant terms, or overly specific searches that don't match your content strategy). This step is crucial: these queries are the foundation for discovering related trending topics.
Step 4: Configure the workflow — in the "Get Query" node, paste your duplicated Google Sheets URL into the "Document" field and ensure your sheet contains your monitoring topics in the Query column.
Step 5: Customize language and location settings — the workflow ships configured for French content and a France location. You can modify these settings in the SerpAPI nodes: language (hl) — change "fr" to your preferred language code; geographic location (geo/gl) — change "FR" to your target country code; date range — currently "today 1-m" (last month), adjustable.
Step 6: Adjust filtering (optional) — the "Sorting Queries" node excludes geo-localized queries by default. You can modify the AI agent's instructions to include location-specific queries or change the filtering criteria to suit your requirements. Any queries listed in the "Query to avoid" column are also excluded automatically.
Step 7: Configure scheduling (optional) — the workflow includes an automated scheduler that runs monthly, on the 1st of each month at 8 AM (cron "0 8 1 * *"). Modify the cron expression in the Schedule Trigger node to change the frequency (daily, weekly, monthly), the time of execution, or the day of the month.

How to customize the workflow
- Trend count: the workflow processes up to 10 related queries per topic, then AI-filters them to the most relevant non-geolocalized ones
- Article collection: currently collects exactly 3 news articles per query for analysis
- Content style: customize the AI prompts in the content-generation nodes to match your brand voice
- Output format: modify the Google Sheets structure to include additional data points
- AI model: replace the Anthropic model with your preferred LLM provider
- Scraping options: configure Firecrawl settings to extract specific content elements from articles

Results interpretation
For each monitored topic, the workflow generates a separate sheet named by month and topic (e.g., "January Digital Marketing") containing four columns:
- Query: the trending search term, ranked by growth
- Évolution: growth percentage over the last month
- News: links to 3 relevant news articles
- Idée: AI-generated content suggestions based on comprehensive article analysis
The workflow provides a monthly retrospective analysis, helping you identify emerging topics before your competitors do and fill your content calendar with high-potential subjects.

Workflow limitations
- Processes up to 10 related queries per topic, with AI filtering
- Collects exactly 3 news articles per query
- Results are automatically organized in monthly sheets
- Requires a stable internet connection for API calls
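The SerpAPI settings from Step 5 map onto a small set of request parameters. A minimal sketch of what the workflow's trend-lookup node would send, assuming SerpAPI's Google Trends engine; the parameter names (engine, q, hl, geo, date, data_type) follow SerpAPI's public documentation, but verify them against your actual node configuration:

```python
# Build the query parameters for a SerpAPI Google Trends related-queries lookup.
def trends_params(topic, language="fr", country="FR", window="today 1-m"):
    return {
        "engine": "google_trends",       # SerpAPI engine selector
        "q": topic,                      # the monitored topic from the sheet
        "hl": language,                  # interface language ("fr" by default here)
        "geo": country,                  # target country ("FR" by default here)
        "date": window,                  # "today 1-m" = the last month
        "data_type": "RELATED_QUERIES",  # ask for related (rising) queries
    }
```

Changing `hl`, `geo`, or `date` here corresponds exactly to the Step 5 customizations described above.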

By Growth AI
4,146 views

Batch scrape website URLs from Google Sheets to Google Docs with Firecrawl

Note: this workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Firecrawl batch scraping to Google Docs

Who's it for
AI chatbot developers, content managers, and data analysts who need to extract and organize content from multiple web pages for knowledge-base creation, competitive analysis, or content-migration projects.

What it does
This workflow automatically scrapes content from a list of URLs and converts each page into a structured Google Doc in markdown format. It is designed for efficient batch processing of multiple pages, making it ideal for building AI knowledge bases, analyzing competitor content, or migrating website content to documentation systems.

How it works
The workflow follows a systematic scraping process:
- URL input: reads a list of URLs from a Google Sheets template
- Data validation: filters out empty rows and already-processed URLs
- Batch processing: loops through each URL sequentially
- Content extraction: uses Firecrawl to scrape and convert content to markdown
- Document creation: creates an individual Google Doc for each scraped page
- Progress tracking: updates the spreadsheet to mark completed URLs
- Final notification: provides a completion summary with access to the scraped content

Requirements
- Firecrawl API key (for web scraping)
- Google Sheets access
- Google Drive access (for document creation)
- The provided Google Sheets template

How to set up
Step 1: Prepare your template — copy the Google Sheets template, create your own version for personal use, ensure the sheet has a tab named "Page to doc", and list all URLs you want to scrape in the "URL" column.
Step 2: Configure API credentials in n8n — Firecrawl API (web content scraping and markdown conversion), Google Sheets OAuth2 (reading URLs and updating progress), and Google Drive OAuth2 (creating content documents).
Step 3: Set up your Google Drive folder — the workflow saves scraped content to a specific Drive folder. The default is "Contenu scrapé" (scraped content), folder ID 1ry3xvQ9UqM2Rf9C4-AoJdg1lfB9inh_5; create your own folder and update the folder ID in the "Create file markdown scraping" node.
Step 4: Choose your trigger method — Option A (chat interface): use the default chat trigger and send your Google Sheets URL through the chat interface. Option B (manual trigger): replace the chat trigger with a manual trigger and set the Google Sheets URL as a variable in the "Get URL" node.

How to customize the workflow
URL source:
- Sheet name: change "Page to doc" to your preferred tab name
- Column structure: modify the field mappings if you use different column names
- URL validation: adjust the filtering criteria for URL format requirements
- Batch size: the workflow processes all URLs sequentially (no batch-size limit)
Scraping configuration:
- Firecrawl options: add specific scraping parameters (wait times, JavaScript rendering)
- Content format: currently outputs markdown (can be modified for other formats)
- Error handling: the workflow continues processing even if individual URLs fail
- Retry logic: add retry mechanisms for failed scraping attempts
Output:
- Document naming: currently uses the URL as the document name (customizable)
- Folder organization: create subfolders for different content types
- File format: switch from Google Docs to other formats (PDF, TXT, etc.)
- Content structure: add headers, metadata, or formatting to the scraped content
Progress tracking:
- Status columns: add more detailed status tracking (failed, retrying, etc.)
- Metadata capture: store scraping timestamps, content length, etc.
- Error logging: track which URLs failed and why
- Completion statistics: generate summary reports of scraping results

Use cases
- AI knowledge-base creation: scrape e-commerce product descriptions and specifications for chatbot training, convert help articles into structured knowledge-base content, extract FAQ information for automated support systems, and gather company about, services, and team pages
- Content analysis and migration: analyze competitor website content and structure, extract existing content for audits and optimization, back up content before site redesigns or platform changes, and gather content for SEO keyword and structure analysis
- Research and documentation: collect information from multiple industry sources, gather content for academic research, document website terms, policies, and disclaimers for legal compliance, and track content changes across multiple sites

Workflow features
Smart processing logic:
- Duplicate prevention: skips URLs already marked as "Scrapé" (scraped)
- Empty-row filtering: automatically ignores rows without URLs
- Sequential processing: handles one URL at a time to avoid rate limiting
- Progress updates: real-time status updates in the source spreadsheet
Error handling and resilience:
- Graceful failures: continues processing the remaining URLs if individual scrapes fail
- Status tracking: clear indication of completed vs. pending URLs
- Completion notification: summary message with a link to the scraped-content folder
- Manual restart capability: can resume processing from where it left off

Results interpretation
Each scraped page produces an individual Google Doc named with the source URL, containing clean, structured markdown with the original URL and scraping timestamp preserved, all stored in the designated Google Drive folder. The source spreadsheet shows the original URL list, a status column ("OK" for completed, empty for pending), real-time progress updates during execution, and a final completion summary with access instructions.

Workflow limitations
- Sequential processing: one URL at a time (prevents rate limiting but is slower for large lists)
- Google Drive dependency: requires Google Drive for document storage
- Firecrawl rate limits: subject to Firecrawl API limitations and quotas
- Single output format: currently only Google Docs (easily customizable)
- Manual setup: the Google Sheets template must be prepared before use
- No content deduplication: creates separate documents even for similar content
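The validation step (skip empty rows, skip already-scraped URLs) can be sketched as a simple filter. This is an illustrative reconstruction, assuming sheet columns named "URL" and "Status" with "Scrapé" as the done-marker described above; adapt the names to your actual sheet:

```python
# Filter spreadsheet rows down to the URLs that still need scraping.
def urls_to_process(rows):
    """Keep rows that have a URL and are not already marked as scraped."""
    return [
        row["URL"]
        for row in rows
        if row.get("URL") and row.get("Status") != "Scrapé"
    ]

rows = [
    {"URL": "https://example.com/a", "Status": "Scrapé"},  # done -> skipped
    {"URL": "", "Status": ""},                             # empty row -> skipped
    {"URL": "https://example.com/b", "Status": ""},        # pending -> processed
]
```

In the workflow itself this logic lives in the filtering nodes between the sheet read and the processing loop.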

By Growth AI
1,811 views

Generate UGC videos from product images with Gemini and VEO3

n8n UGC Video Generator — Setup Instructions

Transform product images into professional UGC videos with AI
This n8n workflow automatically converts product images into professional User-Generated Content (UGC) videos using Gemini 2.5 Flash, Claude 4 Sonnet, and VEO3 Fast.

Who's it for
- Content creators looking to scale video production
- E-commerce businesses needing authentic product videos
- Marketing agencies creating UGC campaigns for clients
- Social media managers requiring quick video content

How it works
The workflow operates in four distinct phases:
- Phase 0: Setup — configure all required API credentials and services
- Phase 1: Image enhancement — AI analyzes and optimizes your product image
- Phase 2: Script generation — creates authentic dialogue scripts based on your input
- Phase 3: Video production — generates and merges professional video segments

Requirements
Essential services and APIs:
- Telegram bot token (create via @BotFather)
- OpenRouter API with Gemini 2.5 Flash access
- Anthropic API for Claude 4 Sonnet
- KIE.AI account with VEO3 Fast access
- n8n instance (cloud or self-hosted)
Technical prerequisites: a basic understanding of n8n workflows, API key management experience, and Telegram bot creation knowledge.

How to set up
Step 1: Service configuration
- Telegram bot: message @BotFather on Telegram, use the /newbot command, follow the instructions, and save the bot token for later
- OpenRouter: sign up at openrouter.ai, purchase credits for Gemini 2.5 Flash access, and generate and save an API key
- Anthropic: create an account at console.anthropic.com, add credits, and generate a Claude API key
- KIE.AI: register at kie.ai, subscribe to the VEO3 Fast plan, and obtain a bearer token
Step 2: n8n credential setup — configure these credentials in your n8n instance: Telegram API (credential name: telegramApi, with your Telegram bot token), OpenRouter API (credential name: openRouterApi, with your OpenRouter key), Anthropic API (credential name: anthropicApi, with your Anthropic key), and HTTP Bearer Auth (credential name: httpBearerAuth, with your KIE.AI bearer token).
Step 3: Workflow configuration — copy the provided JSON workflow and import it into your n8n instance; locate the "Edit Fields" node and replace "Your Telegram Token" with your actual bot token; ensure all Telegram nodes have proper webhook configurations and test webhook connectivity.
Step 4: Testing and validation — test individual nodes (verify each API connection, check credential configurations, confirm node responses), then test end to end: send a test image to your Telegram bot, follow the complete workflow process, and verify the final video output.

How to customize the workflow
- Image enhancement prompts: edit the HTTP Request node for Gemini, adjust the prompt text to match your style preferences, and test different aspect ratios (current: 1:1 square format)
- Script generation: modify the Basic LLM Chain node prompt, adjust the video segment duration (current: 7-8 seconds each), and change the dialogue style and tone requirements
- Video generation settings: update the VEO3 API parameters in the HTTP Request1 node, modify the aspect ratio (current: 16:9), and adjust model settings and seeds for consistency
- Output: change the final video format in the MediaFX node, modify the Telegram message templates, and add additional processing steps before delivery

Workflow operation
- Phase 1 (image reception and enhancement): the user sends a product image via Telegram, the system prompts for enhancement instructions, Gemini analyzes and optimizes the image, and the enhanced square-format image is returned
- Phase 2 (analysis and script creation): the system requests a dialogue concept from the user, the AI analyzes the image details and environment, and Claude generates a realistic two-segment script that respects the physical constraints of the original image
- Phase 3 (video generation): two separate videos are generated using VEO3, the system monitors generation status, the videos are merged into a single flowing sequence, and the final video is delivered via Telegram

Troubleshooting
Common issues:
- API rate limits: implement delays between requests
- Webhook failures: verify URL configurations and SSL certificates
- Video generation timeouts: increase the wait node duration
- Credential errors: double-check all API keys and permissions
Error handling: the workflow includes automatic error detection; failed video generation triggers an error message, status checking prevents infinite loops, and alternative outputs cover different scenarios.

Advanced features
- Batch processing: modify the trigger to handle multiple images, add queue management for high-volume usage, and implement user session tracking
- Custom branding: add watermarks or logos to generated videos, customize color schemes and styling, and include brand-specific dialogue templates
- Analytics integration: track usage metrics and success rates, monitor API costs and optimization opportunities, and implement user behavior analytics

Cost optimization
- API usage management: monitor token consumption across services, cache repeated requests, and use lower-cost models during testing
- Efficiency improvements: optimize image sizes before processing, implement smart retry mechanisms, and use batch processing where possible

This workflow turns static product images into engaging, professional UGC videos automatically, saving hours of manual video creation while maintaining output quality suited to social media platforms.
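The "monitors generation status" step in Phase 3 is, in essence, a poll-with-timeout loop (in n8n, a Wait node plus an HTTP Request in a cycle). A generic sketch follows; `check_status` is a hypothetical stand-in for the KIE.AI status endpoint, and the status strings are assumptions, since the real API's response fields will differ:

```python
import time

def wait_for_video(check_status, job_id, poll_seconds=30, timeout_seconds=600):
    """Poll a video-generation job until it finishes, fails, or times out."""
    waited = 0
    while waited < timeout_seconds:
        status = check_status(job_id)  # assumed: "pending" | "done" | "failed"
        if status == "done":
            return True
        if status == "failed":
            # Mirrors the workflow's error branch: surface a clear message
            # instead of looping forever.
            raise RuntimeError(f"generation failed for job {job_id}")
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError(f"job {job_id} not finished after {timeout_seconds}s")
```

Raising the timeout here corresponds to the "increase the wait node duration" fix listed under troubleshooting.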

By Growth AI
1,631 views

Build multi-client agentic RAG document processing pipeline with Supabase Vector DB

Ultimate n8n Agentic RAG Template
Author: Cole Medin

What is this?
This template provides a complete implementation of an Agentic RAG (Retrieval-Augmented Generation) system in n8n that can easily be extended for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve its retrieval, and dynamically switch between different tools based on the specific question.

Why Agentic RAG?
Standard RAG has significant limitations:
- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

What makes this template powerful:
- Intelligent tool selection: switches between RAG lookups, SQL queries, or full-document retrieval based on the question
- Complete document context: accesses entire documents when needed instead of just chunks
- Accurate numerical analysis: uses SQL for precise calculations on spreadsheet/tabular data
- Cross-document insights: connects information across your entire knowledge base
- Multi-file processing: handles multiple documents in a single workflow loop
- Efficient storage: uses JSONB in Supabase to store tabular data without creating a new table for each CSV

Getting started
1. Run the table-creation nodes first to set up your database tables in Supabase
2. Upload your documents through Google Drive (or swap in a different file-storage solution)
3. The agent processes them automatically (chunking text, storing tabular data in Supabase)
4. Start asking questions that leverage the agent's multiple reasoning approaches

Customization
This template provides a solid foundation that you can extend by tuning the system prompt for your specific use case, adding document metadata such as summaries, implementing more advanced RAG techniques, and optimizing for larger knowledge bases.

I intend to make a local version of this agent very soon!
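The "JSONB instead of one table per CSV" idea can be sketched concretely: every uploaded CSV lands in a single shared table, with each spreadsheet row serialized into a JSONB column. The table and column names below are illustrative, not the template's actual schema:

```python
import csv
import io
import json

# One shared table for all tabular data, keyed by which file a row came from.
CREATE_SQL = """
CREATE TABLE IF NOT EXISTS document_rows (
    id SERIAL PRIMARY KEY,
    dataset_id TEXT,
    row_data JSONB
);
"""

def csv_to_jsonb_rows(dataset_id, csv_text):
    """Turn CSV text into insert-ready records with JSON-serialized rows."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"dataset_id": dataset_id, "row_data": json.dumps(row)}
        for row in reader
    ]

rows = csv_to_jsonb_rows("sales.csv", "product,amount\nwidget,19.99\n")
```

The payoff is that the agent's SQL tool can query any uploaded spreadsheet through one table (e.g. filtering on `dataset_id` and extracting fields with Postgres's JSONB operators) instead of needing schema changes per file.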

By Growth AI
1,245 views

Automated news monitoring with Claude 4 AI analysis for Discord & Google News

Who's it for Marketing teams, business intelligence professionals, competitive analysts, and executives who need consistent industry monitoring with AI-powered analysis and automated team distribution via Discord. What it does This intelligent workflow automatically monitors multiple industry topics, scrapes and analyzes relevant news articles using Claude AI, and delivers professionally formatted intelligence reports to your Discord channel. The system provides weekly automated monitoring cycles with personalized bot communication and comprehensive content analysis. How it works The workflow follows a sophisticated 7-phase automation process: Scheduled Activation: Triggers weekly monitoring cycles (default: Mondays at 9 AM) Query Management: Retrieves monitoring topics from centralized Google Sheets configuration News Discovery: Executes comprehensive Google News searches using SerpAPI for each configured topic Content Extraction: Scrapes full article content from top 3 sources per topic using Firecrawl AI Analysis: Processes scraped content using Claude 4 Sonnet for intelligent synthesis and formatting Discord Optimization: Automatically segments content to comply with Discord's 2000-character message limits Automated Delivery: Posts formatted intelligence reports to Discord channel with branded "Claptrap" bot personality Requirements Google Sheets account for query management SerpAPI account for Google News access Firecrawl account for article content extraction Anthropic API access for Claude 4 Sonnet Discord bot with proper channel permissions Scheduled execution capability (cron-based trigger) How to set up Step 1: Configure Google Sheets query management Create monitoring sheet: Set up Google Sheets document with "Query" sheet Add search topics: Include industry keywords, competitor names, and relevant search terms Sheet structure: Simple column format with "Query" header containing search terms Access permissions: Ensure n8n has read access to the Google 
Sheets document Step 2: Configure API credentials Set up the following credentials in n8n: Google Sheets OAuth2: For accessing query configuration sheet SerpAPI: For Google News search functionality with proper rate limits Firecrawl API: For reliable article content extraction across various websites Anthropic API: For Claude 4 Sonnet access with sufficient token limits Discord Bot API: With message posting permissions in target channel Step 3: Customize scheduling settings Cron expression: Default set to "0 9 1" (Mondays at 9 AM) Frequency options: Adjust for daily, weekly, or custom monitoring cycles Timezone considerations: Configure according to team's working hours Execution timing: Ensure adequate processing time for multiple topics Step 4: Configure Discord integration Set up Discord delivery settings: Guild ID: Target Discord server (currently: 919951151888236595) Channel ID: Specific monitoring channel (currently: 1334455789284364309) Bot permissions: Message posting, embed suppression capabilities Brand personality: Customize "Claptrap" bot messaging style and tone Step 5: Customize content analysis Configure AI analysis parameters: Analysis depth: Currently processes top 3 articles per topic Content format: Structured markdown format with consistent styling Language settings: Currently configured for French output (easily customizable) Quality controls: Error handling for inaccessible articles and content How to customize the workflow Query management expansion Topic categories: Organize queries by industry, competitor, or strategic focus areas Keyword optimization: Refine search terms based on result quality and relevance Dynamic queries: Implement time-based or event-triggered query modifications Multi-language support: Add international keyword variations for global monitoring Advanced content processing Article quantity: Modify from 3 to more articles per topic based on analysis needs Content filtering: Add quality scoring and relevance filtering for 
article selection Source preferences: Implement preferred publisher lists or source quality weighting Content enrichment: Add sentiment analysis, trend identification, or competitive positioning Discord delivery enhancements Rich formatting: Implement Discord embeds, reactions, or interactive elements Multi-channel distribution: Route different topics to specialized Discord channels Alert levels: Add priority-based messaging for urgent industry developments Archive functionality: Create searchable message threads or database storage Integration expansions Slack compatibility: Replace or supplement Discord with Slack notifications Email reports: Add formatted email distribution for executive summaries Database storage: Implement persistent storage for historical analysis and trending API endpoints: Create webhook endpoints for third-party system integration AI analysis customization Analysis templates: Create topic-specific analysis frameworks and formatting Competitive focus: Enhance competitor mention detection and analysis depth Trend identification: Implement cross-topic trend analysis and strategic insights Summary levels: Create executive summaries alongside detailed technical analysis Advanced monitoring features Intelligent content curation The system provides sophisticated content management: Relevance scoring: Automatic ranking of articles by topic relevance and publication authority Duplicate detection: Prevents redundant coverage of the same story across different sources Content quality assessment: Filters low-quality or promotional content automatically Source diversity: Ensures coverage from multiple perspectives and publication types Error handling and reliability Graceful degradation: Continues processing even if individual articles fail to scrape Retry mechanisms: Automatic retry logic for temporary API failures or network issues Content fallbacks: Uses article snippets when full content extraction fails Notification continuity: Ensures Discord 
delivery even with partial content processing Results interpretation Intelligence report structure Each monitoring cycle delivers: Topic-specific summaries: Individual analysis for each configured search query Source attribution: Complete citation with publication date, source, and URL Structured formatting: Consistent presentation optimized for quick scanning Professional analysis: AI-generated insights maintaining factual accuracy and business context Performance analytics Monitor system effectiveness through: Processing metrics: Track successful article extraction and analysis rates Content quality: Assess relevance and usefulness of delivered intelligence Team engagement: Monitor Discord channel activity and report utilization System reliability: Track execution success rates and error patterns Use cases Competitive intelligence Market monitoring: Track competitor announcements, product launches, and strategic moves Industry trends: Identify emerging technologies, regulatory changes, and market shifts Partnership tracking: Monitor alliance formations, acquisitions, and strategic partnerships Leadership changes: Track executive movements and organizational restructuring Strategic planning support Market research: Continuous intelligence gathering for strategic decision-making Risk assessment: Early warning system for industry disruptions and regulatory changes Opportunity identification: Spot emerging markets, technologies, and business opportunities Brand monitoring: Track industry perception and competitive positioning Team collaboration enhancement Knowledge sharing: Centralized distribution of relevant industry intelligence Discussion facilitation: Provide common information baseline for strategic discussions Decision support: Deliver timely intelligence for business planning and strategy sessions Competitive awareness: Keep teams informed about competitive landscape changes Workflow limitations Language dependency: Currently optimized for French analysis 
output (easily customizable) Processing capacity: Limited to 3 articles per query (configurable based on API limits) Platform specificity: Configured for Discord delivery (adaptable to other platforms) Scheduling constraints: Fixed weekly schedule (customizable via cron expressions) Content access: Dependent on article accessibility and website compatibility with Firecrawl API dependencies: Requires active subscriptions and proper rate limit management for all integrated services
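The graceful-degradation behaviour described above (retries for transient failures, snippet fallback when full extraction fails) can be sketched outside n8n as follows. This is an illustrative sketch, not the workflow's actual Code node: `scrape` stands in for the Firecrawl request, and all names are assumptions.

```python
import time

def fetch_article(url, scrape, snippet, retries=3, base_delay=1.0):
    """Retry transient scrape failures with exponential backoff,
    then fall back to the article snippet so the report stays complete."""
    for attempt in range(retries):
        try:
            return scrape(url)  # hypothetical wrapper around the Firecrawl call
        except Exception:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return snippet  # content fallback keeps the Discord report intact
```

The same shape applies to any of the integrated APIs: the retry loop absorbs temporary failures, and the fallback guarantees notification continuity even with partial content processing.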

By Growth AI
880

Generate website sitemaps & visual trees with Firecrawl and Google Sheets

This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Website sitemap generator and visual tree creator Who's it for Web developers, SEO specialists, UX designers, and digital marketers who need to analyze website structure, create visual sitemaps, or audit site architecture for optimization purposes. What it does This workflow automatically generates a comprehensive sitemap from any website URL and creates an organized hierarchical structure in Google Sheets. It follows the website's sitemap to discover all pages, then organizes them by navigation levels (Level 1, Level 2, etc.) with proper parent-child relationships. The output can be further processed to create visual tree diagrams and mind maps. How it works The workflow follows a five-step automation process: URL Input: Accepts website URL via chat interface Site Crawling: Uses Firecrawl to discover all pages following the website's sitemap only Success Validation: Checks if crawling was successful (some sites block external crawlers) Hierarchical Organization: Processes URLs into a structured tree with proper level relationships Google Sheets Export: Creates a formatted spreadsheet with the complete site architecture The system respects robots.txt and follows only sitemap-declared pages to ensure ethical crawling. 
Requirements Firecrawl API key (for website crawling and sitemap discovery) Google Sheets access Google Drive access (for template duplication) How to set up Step 1: Prepare your template (recommended) It's recommended to create your own copy of the base template: Access the base Google Sheets template Make a copy for your personal use Update the workflow's "Copy template" node with your template's file ID (replace the default ID: 12lV4HwgudgzPPGXKNesIEExbFg09Tuu9gyC_jSS1HjI) This ensures you have control over the template formatting and can customize it as needed Step 2: Configure API credentials Set up the following credentials in n8n: Firecrawl API: For crawling websites and discovering sitemaps Google Sheets OAuth2: For creating and updating spreadsheets Google Drive OAuth2: For duplicating the template file Step 3: Configure Firecrawl settings (optional) The workflow uses optimized Firecrawl settings: ignoreSitemap: false - Respects the website's sitemap sitemapOnly: true - Only crawls URLs listed in sitemap files These settings ensure ethical crawling and faster processing Step 4: Access the workflow The workflow uses a chat trigger interface - no manual configuration needed Simply provide the website URL you want to analyze when prompted How to use the workflow Basic usage Start the chat: Access the workflow via the chat interface Provide URL: Enter the website URL you want to analyze (e.g., "https://example.com") Wait for processing: The system will crawl, organize, and export the data Receive your results: Get an automatic direct clickable link to your generated Google Sheets - no need to search for the file Error handling Invalid URLs: If the provided URL is invalid or the website blocks crawling, you'll receive an immediate error message Graceful failure: The workflow stops without creating unnecessary files when errors occur Common causes: Incorrect URL format, robots.txt restrictions, or site security settings File organization Automatic naming: 
Generated files follow the pattern "[Website URL] - n8n - Arborescence" Google Drive storage: Files are automatically organized in your Google Drive Instant access: Direct link provided immediately upon completion Advanced processing for visual diagrams Step 1: Copy sitemap data Once your Google Sheets is ready: Copy all the hierarchical data from the generated spreadsheet Prepare it for AI processing Step 2: Generate ASCII tree structure Use any AI model with this prompt: Create a hierarchical tree structure from the following website sitemap data. Return ONLY the tree structure using ASCII tree formatting with ├── and └── characters. Do not include any explanations, comments, or additional text - just the pure tree structure. The tree should start with the root domain and show all pages organized by their hierarchical levels. Use proper indentation to show parent-child relationships. Here is the sitemap data: [PASTE THE SITEMAP DATA HERE] Requirements: Use ASCII tree characters (├── └── │) Show clear hierarchical relationships Include all pages from the sitemap Return ONLY the tree structure, no other text Start with the root domain as the top level Step 3: Create visual mind map Visit the Whimsical Diagrams GPT Request a mind map creation using your ASCII tree structure Get a professional visual representation of your website architecture Results interpretation Google Sheets output structure The generated spreadsheet contains: Niv 0 to Niv 5: Hierarchical levels (0 = homepage, 1-5 = navigation depth) URL column: Complete URLs for reference Hyperlinked structure: Clickable links organized by hierarchy Multi-domain support: Handles subdomains and different domain structures Data organization features Automatic sorting: Pages organized by navigation depth and alphabetical order Parent-child relationships: Clear hierarchical structure maintained Domain separation: Main domains and subdomains processed separately Clean formatting: URLs decoded and formatted for 
readability Workflow limitations Sitemap dependency: Only discovers pages listed in the website's sitemap Crawling restrictions: Some websites may block external crawlers Level depth: Limited to 5 hierarchical levels for clarity Rate limits: Respects Firecrawl API limitations Template dependency: Requires access to the base template for duplication Use cases SEO audits: Analyze site structure for optimization opportunities UX research: Understand navigation patterns and user paths Content strategy: Identify content gaps and organizational issues Site migrations: Document existing structure before redesigns Competitive analysis: Study competitor site architectures Client presentations: Create visual site maps for stakeholder reviews
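The hierarchical-organization step can be shown concretely: each URL's navigation level is derived from its path depth (capped at the sheet's five levels), and pages are then sorted by depth and alphabetically. This is an assumed sketch of the logic, not the workflow's actual Code node.

```python
from urllib.parse import urlparse

def url_level(url: str) -> int:
    """0 = homepage, 1-5 = navigation depth (capped at 5, matching Niv 0-5)."""
    path = urlparse(url).path.strip("/")
    return min(len(path.split("/")), 5) if path else 0

def organize(urls):
    """Sort pages by depth, then alphabetically, as in the generated sheet."""
    return sorted((url_level(u), u) for u in urls)
```

For example, `https://example.com/blog/2024/post` lands at level 3, under its level-2 parent `/blog/2024` and level-1 parent `/blog`.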

By Growth AI
784

WhatsApp AI assistant with Claude & GPT4O: multi-format processing & productivity suite

WhatsApp AI Personal Assistant - n8n Workflow Instructions Who's it for This workflow is designed for business professionals, entrepreneurs, and individuals who want to transform their WhatsApp into a powerful AI-powered personal assistant. Perfect for users who need to manage emails, calendar events, document searches, and various productivity tasks through a single messaging interface. What it does This comprehensive n8n workflow creates an intelligent WhatsApp bot that can process multiple message types (text, voice, images, PDF documents) and execute complex tasks using integrated tools including Gmail, Google Calendar, Google Drive, Airtable, Discord, and internet search capabilities. The assistant maintains conversation context and can handle sophisticated requests through natural language processing. How it works Phase 1: Message Reception and Classification The workflow begins when a message is received through the WhatsApp Trigger. A Switch node automatically classifies the incoming message type (text, audio, image, or document) and routes it to the appropriate processing pathway. 
Phase 2: Content Processing by Format Text Messages: Direct extraction and formatting for AI processing Voice Messages: Retrieves audio URL from WhatsApp API Downloads audio file with authenticated requests Transcribes speech to text using OpenAI Whisper Formats transcribed content for AI agent Images: Downloads image from WhatsApp API Analyzes visual content using GPT-4O-mini vision model Generates detailed French descriptions covering composition, objects, people, and atmosphere Combines user requests with AI analysis PDF Documents: Validates file format (rejects non-PDF files) Downloads and extracts text content Processes document text for AI analysis Phase 3: AI Assistant Processing The processed content is handled by a Claude Sonnet 4-powered agent with access to: SerpAPI for internet searches Airtable database for email contact management Gmail integration for email operations Google Calendar for event scheduling and management Google Drive for document searches Discord messaging for notifications Calculator for mathematical operations PostgreSQL chat memory for conversation context Phase 4: Response Delivery The system intelligently determines response format: For voice inputs: Converts AI response to speech using OpenAI TTS For other inputs: Sends text responses directly Handles technical requirements like MIME type compatibility for WhatsApp Requirements API Credentials Required: WhatsApp Business API (Trigger and messaging) OpenAI API (GPT-4O-mini, Whisper, TTS) Anthropic API (Claude Sonnet 4) Google APIs (Gmail, Calendar, Drive OAuth2) Airtable API (Database operations) Discord Bot API (Messaging) SerpAPI (Internet search) PostgreSQL Database (Conversation memory) Self-hosted n8n Instance This workflow requires a self-hosted n8n installation as it uses community nodes and advanced integrations not available in n8n Cloud. 
How to set up Prerequisites Setup Deploy n8n on a server with public access Obtain WhatsApp Business API credentials Create developer accounts for all required services Set up a PostgreSQL database for conversation memory Credential Configuration Configure the following credentials in n8n: WhatsApp API credentials for both trigger and messaging nodes OpenAI API key with access to GPT-4O-mini, Whisper, and TTS Anthropic API key for Claude Sonnet 4 Google OAuth2 credentials for Gmail, Calendar, and Drive Airtable Personal Access Token Discord Bot token SerpAPI key PostgreSQL database connection WhatsApp Configuration Configure webhook URLs in WhatsApp Business API settings Set up phone number verification Configure message templates if required Tool Configuration Airtable: Set up email database with 'Nom' and 'Mails' columns Google Calendar: Configure calendar access permissions Google Drive: Set up appropriate folder permissions Discord: Configure bot permissions and channel access Testing and Validation Test each message type (text, audio, image, PDF) Verify all tool integrations work correctly Test conversation memory persistence Validate response delivery in both text and audio formats How to customize the workflow Modify AI Assistant Personality Edit the system message in the "Agent personnel" node to customize the assistant's behavior, tone, and capabilities according to your needs. 
Add New Tools Integrate additional n8n tool nodes to extend functionality: CRM systems (Salesforce, HubSpot) Project management tools (Notion, Trello) File storage services (Dropbox, OneDrive) Communication platforms (Slack, Microsoft Teams) Customize Content Processing Modify image analysis prompts for specific use cases Add document format support beyond PDF Implement content filtering or moderation Add language detection and multi-language support Enhance Memory and Context Implement user-specific memory sessions Add conversation summaries for long interactions Create user preference storage Implement conversation analytics Response Customization Add multimedia response capabilities Implement response templates for common queries Add typing indicators or read receipts Create custom response formatting Security Enhancements Implement user authentication Add rate limiting for API calls Create audit logs for sensitive operations Implement data encryption for stored conversations Performance Optimization Add caching for frequently accessed data Implement queue management for high-volume usage Add error handling and retry mechanisms Create monitoring and alerting systems Important Notes This workflow processes sensitive data; ensure proper security measures are in place Monitor API usage limits across all integrated services Regularly backup conversation memory data Test thoroughly before deploying to production Consider implementing user access controls for business environments Keep all API credentials secure and rotate them regularly Troubleshooting Audio Issues: Verify MIME type handling in the "Fix mimeType for Audio" node WhatsApp Delivery: Check webhook configurations and phone number verification Tool Failures: Validate all API credentials and permissions Memory Issues: Monitor PostgreSQL database performance and storage Response Delays: Optimize tool timeout settings and add proper error handling
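The Phase 1 classification described above (a Switch node routing text, audio, image, and document messages to separate branches) can be sketched as a plain function. The payload field names here are illustrative assumptions, not the actual WhatsApp Business API schema.

```python
def classify_message(msg: dict) -> str:
    """Route an incoming payload to a processing branch, mirroring the
    Switch node described above. Field names are illustrative."""
    if "text" in msg:
        return "text"
    if "audio" in msg:
        return "transcribe"   # Whisper speech-to-text branch
    if "image" in msg:
        return "vision"       # image-analysis branch
    if msg.get("document", {}).get("mime_type") == "application/pdf":
        return "pdf_extract"
    return "reject"           # non-PDF documents are rejected
```

Note that the document branch checks the MIME type before extraction, which is where the non-PDF rejection described in Phase 2 happens.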

By Growth AI
777

Automated Google Ads campaign reporting to Google Sheets with Airtable

Google Ads automated reporting to spreadsheets with Airtable Who's it for Digital marketing agencies, PPC managers, and marketing teams who manage multiple Google Ads accounts and need automated monthly performance reporting organized by campaign types and conversion metrics. What it does This workflow automatically retrieves Google Ads performance data from multiple client accounts and populates organized spreadsheets with campaign metrics. It differentiates between e-commerce (conversion value) and lead generation (conversion count) campaigns, then organizes data by advertising channel (Performance Max, Search, Display, etc.) with monthly tracking for budget and performance analysis. How it works The workflow follows an automated data collection and reporting process: Account Retrieval: Fetches client information from Airtable (project names, Google Ads IDs, campaign types) Active Filter: Processes only accounts marked as "Actif" for budget reporting Campaign Classification: Routes accounts through e-commerce or lead generation workflows based on "Typologie ADS" Google Ads Queries: Executes different API calls depending on campaign type (conversion value vs. 
conversion count) Data Processing: Organizes metrics by advertising channel (Performance Max, Search, Display, Video, Shopping, Demand Gen) Dynamic Spreadsheet Updates: Automatically fills the correct monthly column in client spreadsheets Sequential Processing: Handles multiple accounts with wait periods to avoid API rate limits Requirements Airtable account with client database Google Ads API access with developer token Google Sheets API access Client-specific spreadsheet templates (provided) How to set up Step 1: Prepare your reporting template Copy the Google Sheets reporting template Create individual copies for each client Ensure proper column structure (months B-M for January-December) Link template URLs in your Airtable database Step 2: Configure your Airtable database Set up the following fields in your Airtable: Project names: Client project identifiers ID GADS: Google Ads customer IDs Typologie ADS: Campaign classification ("Ecommerce" or "Lead") Status - Prévisionnel budgétaire: Account status ("Actif" for active accounts) Automation budget: URLs to client-specific reporting spreadsheets Step 3: Set up API credentials Configure the following authentication: Airtable Personal Access Token: For client database access Google Ads OAuth2: For advertising data retrieval Google Sheets OAuth2: For spreadsheet updates Developer Token: Required for Google Ads API access Login Customer ID: Manager account identifier Step 4: Configure Google Ads API settings Update the HTTP request nodes with your credentials: Developer Token: Replace "[Your token]" with your actual developer token Login Customer ID: Replace "[Your customer id]" with your manager account ID API Version: Currently using v18 (update as needed) Step 5: Set up scheduling Default schedule: Runs on the 3rd of each month at 5 AM Cron expression: 0 5 3 * * Recommended timing: Early month execution for complete previous month data Processing delay: 1-minute waits between accounts to respect API limits How to
customize the workflow Campaign type customization E-commerce campaigns: Tracks: Cost and conversion value metrics Query: metrics.conversions_value for revenue tracking Use case: Online stores, retail businesses Lead generation campaigns: Tracks: Cost and conversion count metrics Query: metrics.conversions for lead quantity Use case: Service businesses, B2B companies Advertising channel expansion Current channels tracked: Performance Max: Automated campaign type Search: Text ads on search results Display: Visual ads on partner sites Video: YouTube and video partner ads Shopping: Product listing ads Demand Gen: Audience-focused campaigns Add new channels by modifying the data processing code nodes. Reporting period adjustment Current setting: Last month data (DURING LAST_MONTH) Alternative periods: Last 30 days, specific date ranges, quarterly reports Custom timeframes: Modify the Google Ads query date parameters Multi-account management Sequential processing: Handles multiple accounts automatically Error handling: Continues processing if individual accounts fail Rate limiting: Built-in waits prevent API quota issues Batch size: No limit on number of accounts processed Data organization features Dynamic monthly columns Automatic detection: Determines previous month column (B-M) Column mapping: January=B, February=C, ..., December=M Data placement: Updates correct month automatically Multi-year support: Handles year transitions seamlessly Campaign performance breakdown Each account populates 10 rows of data: Performance Max Cost (Row 2) Performance Max Conversions/Value (Row 3) Demand Gen Cost (Row 4) Demand Gen Conversions/Value (Row 5) Search Cost (Row 6) Search Conversions/Value (Row 7) Video Cost (Row 8) Video Conversions/Value (Row 9) Shopping Cost (Row 10) Shopping Conversions/Value (Row 11) Data processing logic Cost conversion: Automatically converts micros to euros (÷1,000,000) Precision rounding: Rounds to 2 decimal places for clean presentation Zero 
handling: Shows 0 for campaign types with no activity Data validation: Handles missing or null values gracefully Results interpretation Monthly performance tracking Historical data: Year-over-year comparison across all channels Channel performance: Identify best-performing advertising types Budget allocation: Data-driven decisions for campaign investments Trend analysis: Month-over-month growth or decline patterns Account-level insights Multi-client view: Consolidated reporting across all managed accounts Campaign diversity: Understanding which channels clients use most Performance benchmarks: Compare similar account types and industries Resource allocation: Focus on high-performing accounts and channels Use cases Agency reporting automation Client dashboards: Automated population of monthly performance reports Budget planning: Historical data for next month's budget recommendations Performance reviews: Ready-to-present data for client meetings Trend identification: Spot patterns across multiple client accounts Internal performance tracking Team productivity: Track account management efficiency Campaign optimization: Identify underperforming channels for improvement Growth analysis: Monitor client account growth and expansion Forecasting: Use historical data for future performance predictions Strategic planning Budget allocation: Data-driven distribution across advertising channels Channel strategy: Determine which campaign types to emphasize Client retention: Proactive identification of declining accounts New business: Performance data to support proposals and pitches Workflow limitations Monthly execution: Designed for monthly reporting (not real-time) API dependencies: Requires stable Google Ads and Sheets API access Rate limiting: Sequential processing prevents parallel account handling Template dependency: Requires specific spreadsheet structure for proper data placement Previous month focus: Optimized for completed month data (run early in new month) Manual 
credential setup: Requires individual configuration of API tokens and customer IDs
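Two pieces of the data-processing logic above are easy to show concretely: the dynamic monthly-column mapping (January=B through December=M) and the micros-to-euros conversion. A minimal sketch, assuming the column layout of the provided template; this is illustrative, not the workflow's actual Code node.

```python
def month_column(month: int) -> str:
    """Map a month number (1-12) to its spreadsheet column (B-M)."""
    if not 1 <= month <= 12:
        raise ValueError("month must be 1-12")
    return chr(ord("B") + month - 1)

def micros_to_euros(cost_micros: int) -> float:
    """Google Ads reports costs in micros; divide by 1,000,000
    and round to 2 decimal places for clean presentation."""
    return round(cost_micros / 1_000_000, 2)
```

So a March run writing February's data targets column C, and a reported cost of 12,345,678 micros is stored as 12.35 euros.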

By Growth AI
725

Automated task tracking & notifications with Motion and Airtable

Automated project status tracking with Airtable and Motion Who's it for Project managers, team leads, and agencies who need to automatically monitor project completion status across multiple clients and send notifications when specific milestones are reached. What it does This workflow automatically tracks project progress by connecting Airtable project databases with Motion task management. It monitors specific tasks within active projects and triggers email notifications when key milestones are completed. The system is designed to handle multiple projects simultaneously and can be customized for various notification triggers. How it works The workflow follows a structured monitoring process: Data Retrieval: Fetches project information from Airtable (project names and Motion workspace IDs) Motion Integration: Connects to Motion API using HTTP requests to retrieve project details Project Filtering: Identifies only active projects with "Todo" status containing "SEO" in the name Task Monitoring: Checks for specific completed tasks (e.g., "Intégrer les articles de blog") Conditional Notifications: Sends email alerts only when target tasks are marked as "Completed" Database Updates: Updates Airtable with last notification timestamps Requirements Airtable account with project database Motion account with API access Gmail account for email notifications HTTP request authentication for Motion API How to set up Step 1: Configure your Airtable database Ensure your Airtable contains the following fields: Project names: Names of projects to monitor Motion Workspace ID: Workspace identifiers for Motion API calls Status - Calendrier éditorial: Project status field (set to "Actif" for active projects) Last sent - Calendrier éditorial: Timestamp tracking for notification frequency Email addresses: Client and team member contact information Step 2: Set up API credentials Configure the following authentication in n8n: Airtable Personal Access Token: For database access Motion API: 
HTTP header authentication for Motion integration Gmail OAuth2: For email notification sending Step 3: Configure Motion API integration Base URL: Uses Motion API v1 endpoints Project retrieval: Fetches projects using workspace ID parameter Task monitoring: Searches for specific task names and completion status Custom filtering: Targets projects with "SEO" in name and "Todo" status Step 4: Customize scheduling Default schedule: Runs daily between 10th-31st of each month at 8 AM Cron expression: 0 8 10-31 * * (modify as needed) Frequency options: Can be adjusted for weekly, daily, or custom intervals Step 5: Set up email notifications Configure Gmail settings: Recipients: Project managers, clients, and collaborators Subject line: Dynamic formatting with project name and month Message template: HTML-formatted email with professional signature Sender name: Customizable organization name How to customize the workflow Single project, multiple tasks monitoring To adapt for monitoring one project with several different tasks: Modify the filter conditions to target your specific project Add multiple HTTP requests for different task names Create conditional branches for each task type Set up different notification templates per task Multi-project customization Database fields: Add custom fields in Airtable for different project types Filtering logic: Modify conditions to match your project categorization Motion workspace: Support multiple workspaces per client Notification rules: Set different notification frequencies per project Alternative notification methods Replace or complement Gmail with: Slack notifications: Send updates to team channels Discord integration: Alert development teams SMS notifications: Urgent milestone alerts Webhook integrations: Connect to custom internal systems Teams notifications: Enterprise communication Task monitoring variations Multiple task types: Monitor different milestones (design, development, testing) Task dependencies: Check completion of
prerequisite tasks Progress tracking: Monitor task progress percentages Deadline monitoring: Alert on approaching deadlines Conditional logic features Smart filtering system Active project detection: Only processes projects marked as "Actif" Date-based filtering: Prevents duplicate notifications using timestamp comparison Status verification: Confirms task completion before sending notifications Project type filtering: Targets specific project categories (SEO projects in this example) Notification frequency control Monthly notifications: Prevents spam by tracking last sent dates Conditional execution: Only sends emails when tasks are actually completed Database updates: Automatically records notification timestamps Loop management: Processes multiple projects sequentially Results interpretation Automated monitoring outcomes Project status tracking: Real-time monitoring of active projects Milestone notifications: Immediate alerts when key tasks complete Database synchronization: Automatic updates of notification records Team coordination: Ensures all stakeholders are informed of progress Email notification content Each notification includes: Project identification: Clear project name and context Completion confirmation: Specific task that was completed Calendar reference: Links to editorial calendars or project resources Professional formatting: Branded email template with company signature Action items: Clear next steps for recipients Use cases Agency project management Client deliverable tracking: Monitor when content is ready for client review Milestone notifications: Alert teams when phases complete Quality assurance: Ensure all deliverables meet completion criteria Client communication: Automated updates on project progress Editorial workflow management Content publication: Track when articles are integrated into websites Editorial calendar: Monitor content creation and publication schedules Team coordination: Notify writers, editors, and publishers of status 
changes Client approval: Alert clients when content is ready for review Development project tracking Feature completion: Monitor when development milestones are reached Testing phases: Track QA completion and deployment readiness Client delivery: Automate notifications for UAT and launch phases Team synchronization: Keep all stakeholders informed of progress Workflow limitations Motion API dependency: Requires stable Motion API access and proper authentication Single task monitoring: Currently tracks one specific task type per execution Email-only notifications: Default setup uses Gmail (easily expandable) Monthly frequency: Designed for monthly notifications (customizable) Project naming dependency: Filters based on specific naming conventions Manual configuration: Requires setup for each new project type or workspace
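The conditional logic above (only active "SEO" projects in "Todo" status whose target task is completed trigger an email) can be sketched as a single predicate. Field names and response shape are assumptions about the Motion API payload, not the workflow's actual code.

```python
def should_notify(project: dict, target_task: str = "Intégrer les articles de blog") -> bool:
    """Return True only when the project matches the filter
    (name contains 'SEO', status 'Todo') and the target task is completed."""
    if project.get("status") != "Todo" or "SEO" not in project.get("name", ""):
        return False
    return any(
        t.get("name") == target_task and t.get("status") == "Completed"
        for t in project.get("tasks", [])
    )
```

Adapting the workflow to other milestones amounts to changing `target_task` (or looping over several task names) and adjusting the name/status filter.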

By Growth AI
451

Monitor & filter French procurement tenders with BOAMP API and Google Sheets

French Public Procurement Tender Monitoring Workflow Overview This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP - Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching. Who is this for? Companies seeking French public procurement opportunities Consultants monitoring specific market sectors Organizations tracking government contracts in France What it does The workflow operates in two main phases: Phase 1: Automated Tender Collection Retrieves all tenders from the BOAMP API based on your configuration Filters by market type (Works, Services, Supplies) Stores complete tender data in Google Sheets Handles pagination automatically for large datasets Phase 2: Intelligent Keyword Filtering Downloads and extracts text from tender PDF documents Searches for your specified keywords within tender content Saves matching tenders to a separate "Target" sheet for easy review Tracks processing status to avoid duplicates Requirements n8n instance (self-hosted or cloud) Google account with Google Sheets access Google Sheets API credentials configured in n8n Setup Instructions Step 1: Duplicate the Configuration Spreadsheet Access the template spreadsheet: Configuration Template Click File → Make a copy Save to your Google Drive Note the URL of your new spreadsheet Step 2: Configure Your Preferences Open your copied spreadsheet and configure the Config tab: Market Types - Check the categories you want to monitor: Travaux (Works/Construction) Services Fournitures (Supplies) Search Period - Enter the number of days to look back (e.g., "30" for the last 30 days) Keywords - Enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité") Step 3: Import the Workflow Copy the workflow JSON from this template In n8n, click Workflows → Import from File/URL Paste the JSON and 
import Step 4: Update Google Sheets Connections Replace all Google Sheets node URLs with your spreadsheet URL: Nodes to update: Get config (2 instances) Get keyword Get Offset Get All Append row in sheet Update offset Reset Offset Ok Target offre For each node: Open the node settings Update the Document ID field with your spreadsheet URL Verify the Sheet Name matches your spreadsheet tabs Step 5: Configure Schedule Triggers The workflow has two schedule triggers: Schedule Trigger1 (Phase 1 - Tender Collection) Default: 0 8 1 * * (1st day of month at 8:00 AM) Adjust based on how frequently you want to collect tenders Schedule Trigger (Phase 2 - Keyword Filtering) Default: 0 10 1 * * (1st day of month at 10:00 AM) Should run after Phase 1 completes To modify: Open the Schedule Trigger node Click Cron Expression Adjust timing as needed Step 6: Test the Workflow Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node Verify tenders appear in your "All" sheet Execute Phase 2 by triggering the Schedule Trigger node Check the "Target" sheet for matching tenders How the Workflow Works Phase 1: Tender Collection Process Configuration Loading - Reads your preferences from Google Sheets Offset Management - Tracks pagination position for API calls API Request - Fetches up to 100 tenders per batch from BOAMP Market Type Filtering - Keeps only selected market categories Data Storage - Formats and saves tenders to the "All" sheet Pagination Loop - Continues until all tenders are retrieved Offset Reset - Prepares for next execution Phase 2: Keyword Matching Process Keyword Loading - Retrieves search terms from configuration Tender Retrieval - Gets unprocessed tenders from "All" sheet Sequential Processing - Loops through each tender individually PDF Extraction - Downloads and extracts text from tender documents Keyword Analysis - Searches for matches with accent/case normalization Status Update - Marks tender as processed Match Evaluation - Determines if
keywords were found Target Storage - Saves relevant tenders with match details Customization Options Adjust API Parameters In the HTTP Request node, you can modify: limit: Number of records per batch (default: 100) Additional filters in the where parameter Modify Keyword Matching Logic Edit the Get query node to adjust: Text normalization (accent removal, case sensitivity) Match proximity requirements Context length around matches Change Data Format Update the Format Results node to modify: Date formatting PDF URL generation Field mappings Spreadsheet Structure Your Google Sheets should contain these tabs: Config - Your configuration settings Offset - Pagination tracking (managed automatically) All - Complete tender database Target - Filtered tenders matching your keywords Troubleshooting No tenders appearing in "All" sheet: Verify your configuration period isn't too restrictive Check that at least one market type is selected Ensure API is accessible (test the HTTP Request node) PDF extraction errors: Some PDFs may be malformed or protected Check the URL generation in Format Results node Verify PDF URLs are accessible in a browser Duplicate tenders in Target sheet: Ensure the "Ok" status is being written correctly Check the Filter node is excluding processed tenders Verify row_number matching in update operations Keywords not matching: Keywords are case-insensitive and accent-insensitive Verify your keywords are spelled correctly Check the extracted text contains your terms Performance Considerations Phase 1 processes 100 tenders per iteration with a 10-second wait between batches Phase 2 processes tenders sequentially to avoid overloading PDF extraction Large datasets (1000+ tenders) may take significant time to process Consider running Phase 1 less frequently if tender volume is manageable Data Privacy All data is stored in your Google Sheets No external databases or third-party storage BOAMP API is publicly accessible (no authentication required) Ensure your 
Google Sheets permissions are properly configured Support and Updates This workflow retrieves data from the BOAMP public API. If API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
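Phase 1's offset-based pagination loop described above can be sketched as follows. This is a minimal illustration, not code from the template: the function name and the page-fetch callback are hypothetical, and the real workflow persists the offset in the Offset sheet and fetches pages with an HTTP Request node.

```javascript
// Sketch of an offset-based pagination loop (illustrative names).
// fetchPage(limit, offset) stands in for the HTTP Request node call;
// the real workflow also waits 10 seconds between batches.
async function fetchAllTenders(fetchPage, limit = 100) {
  const all = [];
  let offset = 0;
  while (true) {
    const batch = await fetchPage(limit, offset);
    all.push(...batch);
    if (batch.length < limit) break; // last page reached, like the Offset Reset step
    offset += limit;                 // advance, like the Update offset node
  }
  return all;
}
```

Stopping when a batch comes back smaller than the limit avoids one extra empty request on most runs; the template instead resets the stored offset once all tenders are retrieved.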

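The accent- and case-insensitive keyword matching used in Phase 2 can be sketched like this. The function names and sample strings are illustrative assumptions, not taken from the template's Get query node:

```javascript
// Lowercase and strip diacritics (é -> e, à -> a) so that keyword
// matching is case- and accent-insensitive, as described above.
function normalize(text) {
  return text
    .toLowerCase()
    .normalize('NFD')                  // decompose accented characters
    .replace(/[\u0300-\u036f]/g, '');  // drop the combining marks
}

// Return the subset of keywords found in a tender's extracted PDF text.
function findMatches(pdfText, keywords) {
  const haystack = normalize(pdfText);
  return keywords.filter((kw) => haystack.includes(normalize(kw)));
}
```

For example, `findMatches('Marché public: rénovation énergétique', ['Énergétique', 'voirie'])` matches the first keyword despite the accent and capitalization differences.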
By Growth AI

Generate SEO content with Claude AI & competitor analysis using Apify

SEO Content Generation Workflow (Basic Version): n8n Template Instructions

Who's it for
This workflow is designed for SEO professionals, content marketers, digital agencies, and businesses that need to generate optimized meta tags, H1 headings, and content briefs at scale. It is perfect for teams managing multiple clients or large keyword lists who want to automate competitor analysis and SEO content creation without the complexity of vector databases.

How it works
The workflow automates the entire SEO content creation process by analyzing your target keywords against top competitors, then generating optimized meta elements and comprehensive content briefs. It uses AI-powered analysis combined with real competitor data to create SEO-friendly content tailored to your specific business context. The system processes keywords in batches, performs Google searches, scrapes competitor content, analyzes heading structures, and generates personalized SEO content using your company information for maximum relevance.

Requirements

Required services and credentials:
- Google Sheets API: for reading configuration and updating results
- Anthropic API: for AI content generation (Claude Sonnet 4)
- Apify API: for Google search results
- Firecrawl API: for competitor website scraping

Template spreadsheet: copy this template spreadsheet and configure it with your information: Template Link

How to set up

Step 1: Copy and Configure Template
1. Make a copy of the template spreadsheet
2. Fill in the Client Information sheet:
   - Client name: your company or client's name
   - Client information: a brief business description
   - URL: website address
   - Tone of voice: content style preferences
   - Restrictive instructions: topics or approaches to avoid
3. Complete the SEO sheet with your target pages:
   - Page: the page you're optimizing (e.g., "Homepage", "Product Page")
   - Keyword: the main search term to target
   - Awareness level: the user's familiarity with your business
   - Page type: the category (homepage, blog, product page, etc.)
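The workflow later enforces fixed character budgets on the generated meta elements (65 for the meta title, 165 for the meta description, 70 for the H1). A minimal validation helper, with hypothetical names not taken from the template, could look like this:

```javascript
// Character budgets stated in the workflow description
// (meta title 65, meta description 165, H1 70). Illustrative helper only.
const LIMITS = { title: 65, metaDescription: 165, h1: 70 };

// Return a list of human-readable problems, empty if all limits are respected.
function checkLimits(elements) {
  return Object.entries(LIMITS)
    .filter(([field, max]) => (elements[field] || '').length > max)
    .map(([field, max]) => `${field} exceeds ${max} characters`);
}
```

A check like this could be dropped into a Code node between generation and the sheet update to flag rows whose AI output overruns the limits instead of silently writing them.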
Step 2: Import Workflow
1. Import the n8n workflow JSON file
2. Configure all required API credentials in n8n:
   - Google Sheets OAuth2
   - Anthropic API key
   - Apify API key
   - Firecrawl API key

Step 3: Test Configuration
1. Activate the workflow
2. Send your Google Sheets URL to the chat trigger
3. Verify that all sheets are readable and the credentials work
4. Test with a single keyword row first

Workflow Process Overview

Phase 0: Setup and Configuration
- Copy the template spreadsheet
- Configure client information and SEO parameters
- Set up API credentials in n8n

Phase 1: Data Input and Processing
- The chat trigger receives the Google Sheets URL
- The system reads the client configuration and SEO data
- Filters for valid keywords and empty H1 fields
- Initiates batch processing

Phase 2: Competitor Research and Analysis
- Searches Google for the top 10 results per keyword using Apify
- Scrapes the first 5 competitor websites using Firecrawl
- Extracts heading structures (H1-H6) from competitor pages
- Analyzes competitor meta tags and content organization
- Processes markdown content to identify heading hierarchies

Phase 3: Meta Tags and H1 Generation
- AI analyzes the keyword context and competitor data using Claude
- Incorporates client information for personalization
- Generates an optimized meta title (65 characters maximum)
- Creates a compelling meta description (165 characters maximum)
- Produces a user-focused H1 (70 characters maximum)
- Uses structured output parsing for consistent formatting

Phase 4: Content Brief Creation
- Analyzes search-intent percentages (informational, transactional, navigational)
- Develops a content strategy based on the competitor analysis
- Creates a detailed MECE page structure with H2 and H3 sections
- Suggests rich media elements (images, videos, infographics, tables)
- Provides writing recommendations and detail-level scoring (1-10 scale)
- Ensures SEO optimization while maintaining user relevance

Phase 5: Data Integration and Updates
- Combines all generated content into a unified structure
- Updates Google Sheets with the new SEO elements
- Preserves existing data while adding new content
- Continues batch processing for the remaining keywords

Key Differences from the Advanced Version
This basic version focuses on core SEO functionality without additional complexity:
- No vector database: removes the Supabase integration for a simpler setup
- Streamlined architecture: fewer dependencies and configuration steps
- Essential features only: core competitor analysis and content generation
- Faster setup: reduced time to deployment
- Lower costs: fewer API services required

How to customize the workflow

Adjusting AI models:
- Replace Anthropic Claude with other LLM providers in the agent nodes
- Modify the system prompts for different content styles or languages
- Adjust the character limits for meta elements in the structured output parser

Modifying competitor analysis:
- Change the number of competitors analyzed (currently 5) by adding or removing Scrape nodes
- Adjust the scraping parameters in the Firecrawl nodes for different content types
- Modify the heading extraction logic in the JavaScript Code nodes

Customizing output format:
- Update the Google Sheets column mapping in the final Code node
- Modify the structured output parser schema for different data structures
- Change the batch size in the Split in Batches node

Adding quality controls:
- Insert validation nodes between workflow phases
- Add error handling and retry logic to critical nodes
- Implement content quality scoring mechanisms

Extending functionality:
- Add keyword research capabilities with additional APIs
- Include image optimization suggestions
- Integrate social media content generation
- Connect to CMS platforms for direct publishing

Best Practices

Setup and testing:
- Always test with small batches before processing large keyword lists
- Monitor API usage and costs across all services
- Regularly update the system prompts based on output quality
- Maintain clean data in your Google Sheets template

Content quality:
- Review generated content before publishing
- Customize the system prompts to match your brand voice
- Use descriptive node names for easier workflow maintenance
- Keep the competitor analysis current by running the workflow regularly

Performance optimization:
- Process keywords in small batches to avoid timeouts
- Set appropriate retry policies for external API calls
- Monitor workflow execution times and optimize bottlenecks

Troubleshooting

API errors:
- Check the credential configuration in the n8n settings
- Verify API usage limits and billing status
- Ensure proper authentication for each service

Scraping failures:
- The Firecrawl nodes have error handling enabled to continue on failures
- Some websites may block scraping; this is normal behavior
- Check that the competitor URLs are accessible and valid

Empty results:
- Verify the keyword formatting in Google Sheets
- Ensure the competitor websites contain the expected content structure
- Check that the meta tags are properly formatted in the system prompts

Sheet update errors:
- Ensure proper column mapping in the final Code node
- Verify the Google Sheets permissions and sharing settings
- Check that the target sheet names match exactly

Processing stops:
- Review the batch processing limits and timeout settings
- Check for errors in individual nodes using the execution logs
- Verify that all required fields are populated in the input data

Template Structure

Required sheets:
- Client Information: business details and configuration
- SEO: target keywords and page information
- Results sheet: where the generated content will be written

Expected columns:
- Keywords: target search terms
- Description: a brief page description
- Type de page: the page category
- Awareness level: the user's familiarity level
- title, meta-desc, h1, brief: generated output columns

This streamlined version provides all the essential SEO content generation capabilities while being easier to set up and maintain than the advanced version with its vector database integration.
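The heading extraction step in Phase 2 (pulling H1-H6 structures out of the markdown returned by the scraper) can be sketched as below. This is an assumption about how such a Code node might work, not the template's actual logic; the function name is illustrative.

```javascript
// Extract ATX-style headings (# through ######) from scraped markdown,
// as a competitor-page scraper like Firecrawl typically returns it.
function extractHeadings(markdown) {
  const headings = [];
  for (const line of markdown.split('\n')) {
    const m = line.match(/^(#{1,6})\s+(.*)$/); // 1-6 leading # marks a heading
    if (m) headings.push({ level: m[1].length, text: m[2].trim() });
  }
  return headings;
}
```

Running it over a competitor page's markdown yields the heading hierarchy (level plus text) that the AI agent can then compare against your target keyword.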

By Growth AI