
Generate product videos automatically with Gemini, FAL and Google Workspace

📄 What this workflow does

This workflow turns your n8n instance into an automated product-video generator driven by Google Sheets. When a new row is added with status = run, it:

- Downloads the product image from Google Drive.
- Converts the image to base64 and sends it to Gemini, which creates a branded ad-style variant.
- Saves the generated image back into a designated Google Drive folder.
- Sends the image to FAL (image-to-video) to generate a short promotional video clip.
- Polls FAL's response_url until the video is ready.
- Uploads the video to Google Drive (videos folder).
- Updates the original Google Sheet row with the video link and sets status = finished.
- Handles API latency via wait/polling and logs failures into the sheet if needed.

👤 Who is this for

- Marketing teams automating creative asset production.
- E-commerce businesses needing quick product promo videos.
- Agencies creating branded ad content at scale.

✅ Requirements

- An n8n instance.
- A Google Sheet with at least these columns: STT, link_image, note, status, link_video.
- Google Sheets & Google Drive OAuth2 credentials connected in n8n.
- Gemini API key (for ad-style image generation).
- FAL API key (for image-to-video).

⚙️ How to set up

1. Import the provided workflow JSON into n8n.
2. Connect Google Sheets credentials and point to your sheet (documentId + gid).
3. Connect Google Drive credentials and update the folder IDs in the two Upload File nodes (images/videos).
4. Add the Gemini and FAL API keys in the respective HTTP Request headers (via Credentials).
5. Test: add a row with link_image, note, and status = run. The workflow should generate and save a video, then update the sheet with the link.

🔁 How it works

1. Trigger → Google Sheets Trigger fires on rowAdded where status = run.
2. Pre-processing → Download the product image from Google Drive → extract base64.
3. LLM Image Generation → Gemini generates an ad-style variant based on note.
4. Storage → Upload the generated image into the "images" Drive folder.
5. Video Creation → FAL converts the branded image into a short video.
6. Polling → Wait node + HTTP Request check the job status until the video is completed.
7. Write-back → Upload the final video into the "videos" Drive folder, update the sheet with link_video, and set status = finished.
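The polling step is the part most likely to need tuning. As a minimal sketch of that logic in one place, assuming FAL's queue API returns a status field and a video URL on completion (the field names status, response_url, and video.url, and the "Key" auth scheme, are assumptions, not taken from the template):

```javascript
// Minimal sketch of the polling step, e.g. inside an n8n Code node (Node 18+,
// where fetch is available). Field names and auth scheme are assumptions based
// on fal.ai's queue API, not taken from the template itself.
const FAL_KEY = 'YOUR_FAL_API_KEY';      // placeholder credential
const statusUrl = $json.response_url;    // returned by the initial FAL request

async function pollUntilReady(url, { intervalMs = 10000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, { headers: { Authorization: `Key ${FAL_KEY}` } });
    const body = await res.json();
    if (body.status === 'COMPLETED') return body;        // video is ready
    if (body.status === 'FAILED') throw new Error('FAL job failed');
    await new Promise((r) => setTimeout(r, intervalMs)); // wait, then re-check
  }
  throw new Error('Timed out waiting for the FAL video');
}

const result = await pollUntilReady(statusUrl);
return [{ json: { videoUrl: result.video?.url } }];
```

In the template itself this loop is built from a Wait node plus an HTTP Request and an IF node; the sketch only collapses the same logic into a single Code node for readability.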

By Cong Nguyen

Qualify leads with Salesforce, Explorium data & Claude AI analysis of API usage

Inbound Agent - AI-Powered Lead Qualification with Product Usage Intelligence

This n8n workflow automatically qualifies and scores inbound leads by combining their product usage patterns with deep company intelligence. The workflow pulls new leads from your CRM, analyzes which API endpoints they've been testing, enriches them with firmographic data, and generates comprehensive qualification reports with personalized talking points, giving your sales team everything they need to prioritize and convert high-quality leads.

Demo: Template Demo

Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

Salesforce
- Type: OAuth2 or Username/Password
- Used for: Pulling lead reports and creating follow-up tasks
- Alternative CRM options: HubSpot, Zoho, Pipedrive
- Get credentials at Salesforce Setup

Databricks (or Analytics Platform)
- Type: HTTP Request with Bearer Token
- Header: Authorization
- Value: Bearer YOUR_DATABRICKS_TOKEN
- Used for: Querying product usage and API endpoint data
- Alternative options: Datadog, Mixpanel, Amplitude, custom data warehouse

Explorium API
- Type: Generic Header Auth
- Header: Authorization
- Value: Bearer YOUR_API_KEY
- Used for: Business matching and firmographic enrichment
- Get your API key at Explorium Dashboard

Explorium MCP
- Type: HTTP Header Auth
- Used for: Real-time company intelligence and supplemental research
- Connect to: https://mcp.explorium.ai/mcp

Anthropic API
- Type: API Key
- Used for: AI-powered lead qualification and analysis
- Get your API key at Anthropic Console

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

---

Workflow Overview

Node 1: When clicking 'Execute workflow'
Manual trigger that initiates the lead qualification process.
- Type: Manual Trigger
- Purpose: On-demand execution for testing or manual runs
- Alternative trigger options: Schedule Trigger (run automatically hourly, daily, or weekly), Webhook (trigger on CRM updates or new lead events), CRM Trigger (real-time activation when leads are created)

Node 2: GET SF Report
Pulls lead data from a pre-configured Salesforce report.
- Method: GET
- Endpoint: Salesforce Analytics Reports API
- Authentication: Salesforce OAuth2
- Returns: Raw Salesforce report data including lead contact information, company names, lead source and status, created dates, and custom fields
- CRM alternatives: This node can be replaced with HubSpot, Zoho, or any CRM's reporting API

Node 3: Extract Records
Parses the Salesforce report structure and extracts individual lead records (see the sketch after this list).
- Extraction logic: Navigates the report's factMap['T!T'].rows structure and maps data cells to named fields

Node 4: Extract Tenant Names
Prepares tenant identifiers for usage data queries.
- Purpose: Formats tenant names as SQL-compatible strings for the Databricks query
- Output: Comma-separated, quoted list: 'tenant1', 'tenant2', 'tenant3'

Node 5: Query Databricks
Queries your analytics platform to retrieve API usage data for each lead.
- Method: POST
- Endpoint: /api/2.0/sql/statements
- Authentication: Bearer token in headers
- Warehouse ID: Your Databricks cluster ID
- Platform alternatives: Datadog (query logs via Logs API), Mixpanel (event segmentation API), Amplitude (behavioral cohorts API), custom warehouse (PostgreSQL, Snowflake, BigQuery queries)

Node 6: Split Out
Splits the Databricks result array into individual items for processing.
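As a concrete illustration of Node 3's extraction logic, a Code-node sketch along these lines would walk the report structure. The factMap['T!T'].rows path comes from the node description above; the column-metadata lookup via reportExtendedMetadata is an assumption about the Salesforce Analytics response shape:

```javascript
// Sketch of the "Extract Records" logic. The factMap['T!T'].rows path follows
// the node description above; the column-metadata lookup is an assumption
// about the Salesforce Analytics Reports API response.
const report = $input.first().json;
const columnInfo = report.reportExtendedMetadata?.detailColumnInfo ?? {};
const columnNames = Object.keys(columnInfo); // API names of the detail columns

const rows = report.factMap?.['T!T']?.rows ?? [];
return rows.map((row) => {
  const record = {};
  row.dataCells.forEach((cell, i) => {
    record[columnNames[i] ?? `col_${i}`] = cell.label; // map each cell to its column name
  });
  return { json: record };
});
```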
- Field: result.data_array
- Purpose: Transform a single response with multiple rows into separate items

Node 7: Rename Keys
Normalizes column names from the database query to readable field names.
- Mapping: 0 → TenantNames, 1 → endpoints, 2 → endpointsNum

Node 8: Extract Business Names
Prepares company names for Explorium enrichment.

Node 9: Loop Over Items
Iterates through each company for individual enrichment.

Node 10: Explorium API: Match Businesses
Matches company names to Explorium's business entity database.
- Method: POST
- Endpoint: /v1/businesses/match
- Authentication: Header Auth (Bearer token)
- Returns: business_id (unique Explorium identifier), matched_businesses (array of potential matches), match confidence scores

Node 11: Explorium API: Firmographics
Enriches matched businesses with comprehensive company data.
- Method: POST
- Endpoint: /v1/businesses/firmographics/bulk_enrich
- Authentication: Header Auth (Bearer token)
- Returns: company name, website, description; industry categories (NAICS, SIC, LinkedIn); size (employee count range, revenue range); location (headquarters address, city, region, country); company age and founding information; social profiles (LinkedIn, Twitter); logo and branding assets

Node 12: Merge
Combines API usage data with firmographic enrichment data.

Node 13: Organize Data as Items
Structures the merged data into clean, standardized lead objects (see the sketch below).
- Maps API usage by tenant name
- Maps enrichment data by company name
- Combines with the original lead information
- Creates a complete lead profile for analysis
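A hypothetical sketch of the Node 13 join, keyed by company name. The node references match the overview above, but the Explorium field names (name, industry) are assumptions about its response:

```javascript
// Sketch of "Organize Data as Items": join usage rows with Explorium
// firmographics by company name. Node names follow the overview; the
// Explorium field names are assumptions.
const usage = $('Rename Keys').all().map((i) => i.json);
const firmo = $('Explorium API: Firmographics').all().map((i) => i.json);

const byCompany = Object.fromEntries(
  firmo.map((f) => [String(f.name ?? '').toLowerCase(), f]),
);

return usage.map((u) => {
  const company = byCompany[String(u.TenantNames ?? '').toLowerCase()] ?? {};
  return {
    json: {
      tenant: u.TenantNames,
      endpoints: u.endpoints,
      endpointsNum: u.endpointsNum,
      companyName: company.name,
      industry: company.industry,
    },
  };
});
```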
Node 14: Loop Over Items1
Iterates through each qualified lead for AI analysis.
- Batch size: 1 (analyzes leads individually)
- Purpose: Generate personalized qualification reports

Node 15: Get many accounts1
Fetches the associated Salesforce account for context.
- Resource: Account
- Operation: Get All
- Filter: Match by company name
- Limit: 1 record
- Purpose: Link lead qualification back to the Salesforce account for task creation

Node 16: AI Agent
Analyzes each lead to generate comprehensive qualification reports.

Input data: lead contact information, API usage patterns (which endpoints were tested), firmographic data (company profile), lead source and status.

Analysis process:
- Evaluates lead quality based on usage, company fit, and signals
- Identifies which Explorium APIs the lead explored
- Assesses company size, industry, and potential value
- Detects quality signals (legitimate company email, active usage) and red flags
- Determines the optimal sales approach and timing
- Connected to Explorium MCP for supplemental company research if needed

Output: a structured qualification report with:
- Lead Score: High Priority, Medium Priority, Low Priority, or Nurture
- Quick Summary: Executive overview of lead potential
- API Usage Analysis: Endpoints used, usage insights, potential use case
- Company Profile: Overview, fit assessment, potential value
- Quality Signals: Positive indicators and concerns
- Recommended Actions: Next steps, timing, and approach
- Talking Points: Personalized conversation starters based on actual API usage

Node 18: Clean Outputs
Formats the AI qualification report for Salesforce task creation.

Node 19: Update Salesforce Records
Creates follow-up tasks in Salesforce with qualification intelligence.
- Resource: Task
- Operation: Create
- Authentication: Salesforce OAuth2
- Alternative output options: HubSpot (create tasks or update deal stages), Outreach/SalesLoft (add to sequences with custom messaging), Slack (send qualification reports to sales channels), Email (send reports to account owners), Google Sheets (log qualified leads for tracking)

---

Workflow Flow Summary

1. Trigger: Manual execution or scheduled run
2. Pull Leads: Fetch new/updated leads from the Salesforce report
3. Extract: Parse lead records and tenant identifiers
4. Query Usage: Retrieve API endpoint usage data from the analytics platform
5. Prepare: Format data for enrichment
6. Match: Identify companies in the Explorium database
7. Enrich: Pull comprehensive firmographic data
8. Merge: Combine usage patterns with company intelligence
9. Organize: Structure complete lead profiles
10. Analyze: AI evaluates each lead with quality scoring
11. Format: Structure qualification reports for the CRM
12. Create Tasks: Automatically populate Salesforce with actionable intelligence

This workflow eliminates manual lead research and qualification, automatically analyzing product engagement patterns alongside company fit to help sales teams prioritize and personalize their outreach to the highest-value inbound leads.

---

Customization Options

Flexible Triggers
Replace the manual trigger with:
- Schedule: Run hourly/daily to continuously qualify new leads
- Webhook: Real-time qualification when leads are created
- CRM Trigger: Activate on specific lead status changes

Analytics Platform Integration
The Databricks query can be adapted for:
- Datadog: Query application logs and events
- Mixpanel: Analyze user behavior and feature adoption
- Amplitude: Track product engagement metrics
- Custom databases: PostgreSQL, MySQL, Snowflake, BigQuery

CRM Flexibility
Works with multiple CRMs:
- Salesforce: Full integration (pull reports, create tasks)
- HubSpot: Contact properties and deal updates
- Zoho: Lead enrichment and task creation
- Pipedrive: Deal qualification and activity creation

Enrichment Depth
Add more Explorium endpoints:
- Technographics: Tech stack and product usage
- News & Events: Recent company announcements
- Funding Data: Investment rounds and financial events
- Hiring Signals: Job postings and growth indicators

Output Destinations
Route qualification reports to:
- CRM updates: Salesforce, HubSpot (update lead scores/fields)
- Task creation: Any CRM task/activity system
- Team notifications: Slack, Microsoft Teams, Email
- Sales tools: Outreach and SalesLoft sequences
- Reporting: Google Sheets, Data Studio dashboards

AI Model Options
Swap AI providers:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

---

Setup Notes

- Salesforce report configuration: Create a report with the required fields (name, email, company, tenant ID) and use its API endpoint
- Tenant identification: Ensure your product usage data includes identifiers that link to CRM leads
- Usage data query: Customize the SQL query to match your database schema and table structure (see the sketch below)
- MCP configuration: Explorium MCP requires Header Auth; configure the credentials properly
- Lead scoring logic: Adjust the AI system prompts to match your ideal customer profile and qualification criteria
- Task assignment: Configure Salesforce task assignment rules or add logic to route to specific sales reps

This workflow acts as an intelligent lead qualification system that combines behavioral signals (what they're testing) with firmographic fit (who they are) to give sales teams actionable intelligence for every inbound lead.
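As a starting point for the "Usage data query" setup note, here is a hypothetical sketch of the Node 5 call against the Databricks SQL Statement Execution API. The warehouse ID, workspace host, table, and column names are placeholders to adapt to your own schema, and the tenantInList field is assumed to come from the "Extract Tenant Names" step:

```javascript
// Sketch of the "Query Databricks" call. Warehouse ID, host, table, and
// column names are placeholders; tenantInList is assumed upstream output.
const body = {
  warehouse_id: 'YOUR_WAREHOUSE_ID',
  statement: `
    SELECT tenant_name,
           collect_set(endpoint)    AS endpoints,
           count(DISTINCT endpoint) AS endpoints_num
    FROM   analytics.api_usage
    WHERE  tenant_name IN (${$json.tenantInList})
    GROUP  BY tenant_name`,
  wait_timeout: '30s',
};

const res = await fetch(
  'https://YOUR_WORKSPACE.cloud.databricks.com/api/2.0/sql/statements',
  {
    method: 'POST',
    headers: {
      Authorization: 'Bearer YOUR_DATABRICKS_TOKEN',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  },
);
return [{ json: await res.json() }];
```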

By explorium

Job post to sales lead pipeline with Scrape.do, Apollo.io & OpenAI

Lead Sourcing by Job Posts For Outreach With Scrape.do API & OpenAI & Google Sheets

Overview

This n8n workflow automates the complete lead generation process: it scrapes job postings from Indeed, enriches company data via Apollo.io, identifies decision-makers, and generates personalized LinkedIn outreach messages using OpenAI. It integrates with Scrape.do for reliable web scraping, Apollo.io for B2B data enrichment, OpenAI for AI-powered personalization, and Google Sheets for centralized data storage.

Perfect for: Sales teams, recruiters, business development professionals, and marketing agencies looking to automate their outbound prospecting pipeline.

---

Workflow Components

⏰ Schedule Trigger

| Property | Value |
|----------|-------|
| Type | Schedule Trigger |
| Purpose | Automatically initiates workflow on a recurring schedule |
| Frequency | Weekly (Every Monday) |
| Time | 00:00 UTC |

Function: Ensures consistent, hands-off lead generation by running the pipeline automatically without manual intervention.

---

🔍 Scrape.do Indeed API

| Property | Value |
|----------|-------|
| Type | HTTP Request (GET) |
| Purpose | Scrapes job listings from Indeed via Scrape.do proxy API |
| Endpoint | https://api.scrape.do |
| Output Format | Markdown |

Request Parameters:

| Parameter | Value | Description |
|-----------|-------|-------------|
| token | API Token | Scrape.do authentication |
| url | Indeed Search URL | Target job search page |
| super | true | Uses residential proxies |
| geoCode | us | US-based content |
| render | true | JavaScript rendering enabled |
| device | mobile | Mobile viewport for cleaner HTML |
| output | markdown | Lightweight text output |

Function: Fetches Indeed job listings with anti-bot bypass, returning clean markdown for easy parsing.

---

📋 Parse Indeed Jobs

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Extracts structured job data from markdown |
| Mode | Run once for all items |

Extracted Fields:

| Field | Description | Example |
|-------|-------------|---------|
| jobTitle | Position title | "Senior Data Engineer" |
| jobUrl | Indeed job link | "https://indeed.com/viewjob?jk=abc123" |
| jobId | Indeed job identifier | "abc123" |
| companyName | Hiring company | "Acme Corporation" |
| location | City, State | "San Francisco, CA" |
| salary | Pay range | "$120,000 - $150,000" |
| jobType | Employment type | "Full-time" |
| source | Data source | "Indeed" |
| dateFound | Scrape date | "2025-01-15" |

Function: Parses the markdown using regex patterns, filters invalid entries, and deduplicates by company name (see the sketch below).
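A hypothetical sketch of what this parsing node could look like. The regex targets markdown links of the form [Job Title](https://.../viewjob?jk=ID); the exact patterns depend on Scrape.do's markdown output and will need tuning, the `data` field holding the markdown is an assumption, and the company-name lookup is a heuristic rather than the template's actual logic:

```javascript
// Sketch of the "Parse Indeed Jobs" Code node. Patterns and field names are
// assumptions to adapt to the real Scrape.do markdown output.
const md = $input.first().json.data ?? ''; // assumption: markdown lives in `data`
const jobs = [];
const seen = new Set();

const linkRe = /\[([^\]]+)\]\((https?:\/\/[^\s)]*viewjob\?jk=([a-z0-9]+)[^\s)]*)\)/gi;
let m;
while ((m = linkRe.exec(md)) !== null) {
  const [, jobTitle, jobUrl, jobId] = m;
  // Heuristic: assume the company name appears on the line after the link.
  const tail = md.slice(m.index + m[0].length, m.index + m[0].length + 200);
  const companyName = (tail.match(/\n([^\n|]{2,60})\n/) || [])[1]?.trim();
  if (!companyName || seen.has(companyName)) continue; // dedupe by company
  seen.add(companyName);
  jobs.push({
    jobTitle,
    jobUrl,
    jobId,
    companyName,
    source: 'Indeed',
    dateFound: new Date().toISOString().slice(0, 10),
  });
}
return jobs.map((j) => ({ json: j }));
```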
--- ๐Ÿข Apollo Organization Search | Property | Value | |----------|-------| | Type | HTTP Request (POST) | | Purpose | Enriches company data via Apollo.io API | | Endpoint | https://api.apollo.io/v1/organizations/search | | Authentication | HTTP Header Auth (x-api-key) | Request Body: json { "qorganizationname": "Company Name", "page": 1, "per_page": 1 } Response Fields: | Field | Description | |-------|-------------| | id | Apollo organization ID | | name | Official company name | | website_url | Company website | | linkedin_url | LinkedIn company page | | industry | Business sector | | estimatednumemployees | Company size | | founded_year | Year established | | city, state, country | Location details | | short_description | Company overview | Function: Retrieves comprehensive company intelligence including LinkedIn profiles, industry classification, and employee count. --- ๐Ÿ“ค Extract Apollo Org Data | Property | Value | |----------|-------| | Type | Code Node (JavaScript) | | Purpose | Parses Apollo response and merges with original data | | Mode | Run once for each item | Function: Extracts relevant fields from Apollo API response and combines with job posting data for downstream processing. --- ๐Ÿ‘ฅ Apollo People Search | Property | Value | |----------|-------| | Type | HTTP Request (POST) | | Purpose | Finds decision-makers at target companies | | Endpoint | https://api.apollo.io/v1/mixed_people/search | | Authentication | HTTP Header Auth (x-api-key) | Request Body: json { "organizationids": ["apolloorg_id"], "person_titles": [ "CTO", "Chief Technology Officer", "VP Engineering", "Head of Engineering", "Engineering Manager", "Technical Director", "CEO", "Founder" ], "page": 1, "per_page": 3 } Response Fields: | Field | Description | |-------|-------------| | first_name | Contact first name | | last_name | Contact last name | | title | Job title | | email | Email address | | linkedin_url | LinkedIn profile URL | | phone_number | Direct phone | Function: Identifies key stakeholders and decision-makers based on configurable title filters. --- ๐Ÿ“ Format Leads | Property | Value | |----------|-------| | Type | Code Node (JavaScript) | | Purpose | Structures lead data for outreach | | Mode | Run once for all items | Function: Combines person data with company context, creating comprehensive lead profiles ready for personalization. --- ๐Ÿค– Generate Personalized Message (OpenAI) | Property | Value | |----------|-------| | Type | OpenAI Node | | Purpose | Creates custom LinkedIn connection messages | | Model | gpt-4o-mini | | Max Tokens | 150 | | Temperature | 0.7 | System Prompt: You are a professional outreach specialist. Write personalized LinkedIn connection request messages. Keep messages under 300 characters. Be friendly, professional, and mention a specific reason for connecting based on their role and company. User Prompt Variables: | Variable | Source | |----------|--------| | Name | $json.fullName | | Title | $json.title | | Company | $json.companyName | | Industry | $json.industry | | Job Context | $json.jobTitle | Function: Generates unique, contextual outreach messages that reference specific hiring activity and company details. --- ๐Ÿ”— Merge Lead + Message | Property | Value | |----------|-------| | Type | Code Node (JavaScript) | | Purpose | Combines lead data with generated message | | Mode | Run once for each item | Function: Merges OpenAI response with lead profile, creating the final enriched record. 
---

💾 Save Leads to Sheet

| Property | Value |
|----------|-------|
| Type | Google Sheets Node |
| Purpose | Stores final lead data with personalized messages |
| Operation | Append rows |
| Target Sheet | "Leads" |

Data Mapping:

| Column | Data |
|--------|------|
| First Name | Lead's first name |
| Last Name | Lead's last name |
| Title | Job title |
| Company | Company name |
| LinkedIn URL | Profile link |
| Country | Location |
| Industry | Business sector |
| Date Added | Timestamp |
| Source | "Indeed + Apollo" |
| Personalized Message | AI-generated outreach text |

Function: Creates an actionable lead database ready for outreach campaigns.

---

Workflow Flow

```
⏰ Schedule Trigger
        │
        ▼
🔍 Scrape.do Indeed API ──► Fetches job listings with JS rendering
        │
        ▼
📋 Parse Indeed Jobs ──► Extracts company names, job details
        │
        ▼
📊 Add New Company ──► Saves to Google Sheets (Companies)
        │
        ▼
🏢 Apollo Org Search ──► Enriches company data
        │
        ▼
📤 Extract Apollo Org Data ──► Parses API response
        │
        ▼
👥 Apollo People Search ──► Finds decision-makers
        │
        ▼
📝 Format Leads ──► Structures lead profiles
        │
        ▼
🤖 Generate Personalized Message ──► AI creates custom outreach
        │
        ▼
🔗 Merge Lead + Message ──► Combines all data
        │
        ▼
💾 Save Leads to Sheet ──► Final storage (Leads)
```

---

Configuration Requirements

API Keys & Credentials

| Credential | Purpose | Where to Get |
|------------|---------|--------------|
| Scrape.do API Token | Web scraping with anti-bot bypass | scrape.do/dashboard |
| Apollo.io API Key | B2B data enrichment | apollo.io/settings/integrations |
| OpenAI API Key | AI message generation | platform.openai.com |
| Google Sheets OAuth2 | Data storage | n8n Credentials Setup |

n8n Credential Setup

| Credential Type | Configuration |
|-----------------|---------------|
| HTTP Header Auth (Apollo) | Header: x-api-key, Value: Your Apollo API key |
| OpenAI API | API Key: Your OpenAI API key |
| Google Sheets OAuth2 | Complete OAuth flow with Google |

---

Key Features

🔍 Intelligent Job Scraping
- Anti-Bot Bypass: Residential proxy rotation via Scrape.do
- JavaScript Rendering: Full headless browser for dynamic content
- Mobile Optimization: Cleaner HTML with mobile viewport
- Markdown Output: Lightweight, easy-to-parse format

🏢 B2B Data Enrichment
- Company Intelligence: Industry, size, location, LinkedIn
- Decision-Maker Discovery: Title-based filtering
- Contact Information: Email, phone, LinkedIn profiles
- Real-Time Data: Fresh information from Apollo.io

🤖 AI-Powered Personalization
- Contextual Messages: References specific hiring activity
- Character Limit: Optimized for LinkedIn (300 chars)
- Variable Temperature: Balanced creativity and consistency
- Role-Specific: Tailored to recipient's title and company

📊 Automated Data Management
- Dual Sheet Storage: Companies + Leads separation
- Timestamp Tracking: Historical records
- Deduplication: Prevents duplicate entries
- Ready for Export: CSV-compatible format

---

Use Cases

🎯 Sales Prospecting
- Identify companies actively hiring in your target market
- Find decision-makers at companies investing in growth
- Generate personalized cold outreach at scale
- Track pipeline from discovery to contact

👥 Recruiting & Talent Acquisition
- Monitor competitor hiring patterns
- Identify companies building specific teams
- Connect with hiring managers directly
- Build talent pipeline relationships
📈 Market Intelligence
- Track industry hiring trends
- Monitor competitor expansion signals
- Identify emerging market opportunities
- Benchmark salary ranges by role

🤝 Partnership Development
- Find companies investing in complementary areas
- Identify potential integration partners
- Connect with technical leadership
- Build strategic relationship pipeline

---

Technical Notes

| Specification | Value |
|---------------|-------|
| Processing Time | 2-5 minutes per run (depending on job count) |
| Jobs per Run | ~25 unique companies |
| API Calls per Run | 1 Scrape.do + ~25 Apollo Org + ~25 Apollo People + ~75 OpenAI |
| Data Accuracy | 90%+ for company matching |
| Success Rate | 99%+ with proper error handling |

Rate Limits to Consider

| Service | Free Tier Limit | Recommendation |
|---------|-----------------|----------------|
| Scrape.do | 1,000 credits/month | ~40 runs/month |
| Apollo.io | 100 requests/day | Add Wait nodes if needed |
| OpenAI | Based on usage | Monitor costs (~$0.01-0.05/run) |
| Google Sheets | 300 requests/minute | No issues expected |

---

Setup Instructions

Step 1: Import Workflow
1. Copy the JSON workflow configuration
2. In n8n: Workflows → Import from JSON
3. Paste the configuration and save

Step 2: Configure Scrape.do
1. Sign up at scrape.do
2. Navigate to Dashboard → API Token
3. Copy your token (it is embedded in the URL query parameter, already configured)
4. To customize the search, change the url parameter in the "Scrape.do Indeed API" node:
   - q=data+engineer (search term)
   - l=Remote (location)
   - fromage=7 (last 7 days)

Step 3: Configure Apollo.io
1. Sign up at apollo.io
2. Go to Settings → Integrations → API Keys
3. Create a new API key
4. In n8n: Credentials → Add Credential → Header Auth (Name: x-api-key, Value: Your Apollo API key)
5. Select this credential in both Apollo HTTP nodes

Step 4: Configure OpenAI
1. Go to platform.openai.com
2. Create a new API key
3. In n8n: Credentials → Add Credential → OpenAI and paste the API key
4. Select the credential in the "Generate Personalized Message" node

Step 5: Configure Google Sheets
1. Create a new Google Spreadsheet
2. Create two sheets:
   - Sheet 1: "Add New Company" with columns: companyName | jobTitle | jobUrl | location | salary | source | postedDate
   - Sheet 2: "Leads" with columns: First Name | Last Name | Title | Company | LinkedIn URL | Country | Industry | Date Added | Source | Personalized Message
3. Copy the Sheet ID from the URL
4. In n8n: Credentials → Add Credential → Google Sheets OAuth2
5. Update both Google Sheets nodes with your Sheet ID

Step 6: Test and Activate
1. Manual Test: Click the "Execute Workflow" button
2. Verify Each Node: Check outputs step by step
3. Review Data: Confirm data appears in Google Sheets
4. Activate: Toggle the workflow to "Active"

---

Error Handling

Common Issues

| Issue | Cause | Solution |
|-------|-------|----------|
| "Invalid character: " | Empty/malformed company name | Check Parse Indeed Jobs output |
| "Node does not have credentials" | Credential not linked | Open node → Select credential |
| Empty Parse Results | Indeed HTML structure changed | Check Scrape.do raw output |
| Apollo Rate Limit (429) | Too many requests | Add 5-10s Wait node between calls |
| OpenAI Timeout | Too many tokens | Reduce batch size or max_tokens |
| "Your request is invalid" | Malformed JSON body | Verify expression syntax in HTTP nodes |

Troubleshooting Steps

1. Verify Credentials: Test each credential individually
2. Check Node Outputs: Use "Execute Node" for debugging
3. Monitor API Usage: Check the Apollo and OpenAI dashboards
4. Review Logs: Check n8n execution history for details
5. Test with a Sample: Use a known company name to verify Apollo

Recommended Error Handling Additions

For production use, consider adding:
- An IF node after Apollo Org Search to handle empty results (see the sketch below)
- An Error Workflow trigger for notifications
- Wait nodes between API calls for rate limiting
- Retry logic for transient failures
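A minimal sketch of the first suggestion: a Code-node guard that drops companies Apollo could not match instead of failing downstream nodes. The `organizations` response field is an assumption about Apollo's search API; verify it against your actual response:

```javascript
// Sketch of the suggested guard after "Apollo Org Search": skip unmatched
// companies. The `organizations` field name is an assumption.
const orgs = $json.organizations ?? [];
if (orgs.length === 0) {
  return []; // emit nothing for unmatched companies
}
return [{ json: { ...$json, apollo_org_id: orgs[0].id } }];
```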
---

Performance Specifications

| Metric | Value |
|--------|-------|
| Execution Time | 2-5 minutes per scheduled run |
| Jobs Discovered | ~25 per Indeed page |
| Leads Generated | 1-3 per company (based on title matches) |
| Message Quality | Professional, contextual, <300 chars |
| Data Freshness | Real-time from Indeed + Apollo |
| Storage Format | Google Sheets (unlimited rows) |

---

API Reference

Scrape.do API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| https://api.scrape.do | GET | Direct URL scraping |

Documentation: scrape.do/documentation

Apollo.io API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /v1/organizations/search | POST | Company lookup |
| /v1/mixed_people/search | POST | People search |

Documentation: apolloio.github.io/apollo-api-docs

OpenAI API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /v1/chat/completions | POST | Message generation |

Documentation: platform.openai.com
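To make the Scrape.do reference concrete, a sketch of the request as configured in Step 2, built only from the query parameters in the "Scrape.do Indeed API" table above (the Indeed search URL is an example, and the `data` output field is an assumption):

```javascript
// Sketch of the Scrape.do call using the documented query parameters.
const params = new URLSearchParams({
  token: 'YOUR_SCRAPEDO_TOKEN',
  url: 'https://www.indeed.com/jobs?q=data+engineer&l=Remote&fromage=7', // example search
  super: 'true',
  geoCode: 'us',
  render: 'true',
  device: 'mobile',
  output: 'markdown',
});
const res = await fetch(`https://api.scrape.do?${params.toString()}`);
return [{ json: { data: await res.text() } }];
```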

By Onur

Track GitHub trending repositories with ScrapeOps & Google Sheets

Overview

This n8n template tracks GitHub Trending repositories (daily/weekly/monthly), parses the trending page into structured data (rank, repo name, stars, language, etc.), and stores the results in Google Sheets with automatic deduping. It's designed for teams who want a simple "trending feed" for engineering research, developer tooling discovery, and weekly reporting.

Who is this for?
- Developers, PMs, DevRel, and tooling teams who want a lightweight trend radar
- Anyone building a curated list of fast-rising open source projects
- Teams who want Sheets-based tracking without manual copy/paste

What problems it solves
- Automatically collects GitHub Trending data on a schedule
- Prevents duplicate rows using a stable dedupe_key
- Updates existing rows when values change (rank/stars/score)

How it works
1. A schedule triggers the workflow.
2. Inputs define the trending window (daily, weekly, or monthly) and optional languages.
3. ScrapeOps fetches the GitHub Trending HTML reliably.
4. The workflow parses repositories and ranks from the HTML.
5. Cleaned rows are written to Google Sheets using Append or Update Row, matching on dedupe_key (see the sketch after this section).

Setup steps (~5–10 minutes)

1) ScrapeOps
- Register & get an API key: https://scrapeops.io/app/register/n8n
- Read the n8n overview: https://scrapeops.io/docs/n8n/overview/
- (Optional) Learn ScrapeOps Proxy API features: https://scrapeops.io/docs/n8n/proxy-api/

2) Google Sheets
- Duplicate this sheet/create a Sheet and add a trending_raw tab.
- Add the columns used by the workflow (e.g. captured_at, since, source_url, rank_on_page, full_name, repo_url, stars_total, forks_total, stars_in_period, score, dedupe_key).
- In the Google Sheets node, choose Append or Update Row and set Column to match on = dedupe_key.

3) Customize
- Change since to daily/weekly/monthly in the Inputs node.
- Add languages via languages_csv (example: any,python,go,rust).
- Adjust the delay if needed.

Pre-conditions
- ScrapeOps account + API key configured in n8n
- Google Sheets credentials connected in n8n
- A Sheet tab named trending_raw with matching columns

Disclaimer
This template uses ScrapeOps as a community node. You are responsible for complying with GitHub's Terms of Service, robots directives, and applicable laws in your jurisdiction. Scraping targets can change at any time; you may need to update wait times and parsing logic accordingly. Use responsibly for legitimate business purposes.
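As a minimal sketch of the dedupe step referenced above, assuming the parsed rows carry the since and full_name fields from the column list (the key format itself is an assumption; any stable combination of window plus repository works), a Code node before the Sheets write could build the key like this:

```javascript
// Sketch of building a stable dedupe_key before "Append or Update Row".
// Key format is an assumption; any stable window + repo combination works.
return $input.all().map((item) => {
  const { since, full_name } = item.json; // e.g. 'weekly', 'vercel/next.js'
  const dedupe_key = `${since}:${full_name}`.toLowerCase();
  return { json: { ...item.json, dedupe_key } };
});
```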

By Ian Kerins