Onur

Hello, I'm Onur. I've been working as a freelance software developer for about four years, and I also develop my own projects. For some time I have been building my skills in AI and AI workflows and offering related services, both low-code and full-code. If you have any questions, don't hesitate to contact me.

Total Views: 62,134
Templates: 9

Templates by Onur

Automated AI content creation & Instagram publishing from Google Sheets

Automated AI Content Creation & Instagram Publishing from Google Sheets

This n8n workflow automates the creation and publishing of social media content directly to Instagram, using ideas stored in a Google Sheet. It leverages AI (Google Gemini and Replicate Flux) to generate concepts, image prompts, captions, and the final image, turning your content plan into reality with minimal manual intervention.

Think of this as the execution engine for your content strategy. It assumes you have a separate process (manual entry, another workflow, or a different tool) for populating the Google Sheet with initial post ideas (including Topic, Audience, Voice, and Platform). This workflow takes those ideas and handles the rest, from AI generation to final publication.

What does this workflow do?

This workflow streamlines the content execution process by:

- Automatically fetching unprocessed content ideas from a designated Google Sheet based on a schedule.
- Using Google Gemini to generate a platform-specific content concept (specifically for a 'Single Image' format).
- Generating two distinct AI image prompt options based on the concept using Gemini.
- Writing an engaging, platform-tailored caption (including hashtags) using Gemini, based on the first prompt option.
- Generating a visual image from the first prompt option via the Replicate API (using the Flux model).
- Publishing the generated image and caption directly to a connected Instagram Business account.
- Updating the status in the Google Sheet to mark the idea as completed, preventing reprocessing.

Who is this for?

- Social Media Managers & Agencies: Automate the execution of your content calendar stored in Google Sheets.
- Marketing Teams: Streamline content production from planned ideas and ensure consistent posting schedules.
- Content Creators & Solopreneurs: Save significant time by automating the generation and publishing process based on your pre-defined ideas.
- Anyone using Google Sheets to plan social media content and wanting to automate the creative generation and posting steps with AI.

Benefits

- Full Automation: From fetching planned ideas to Instagram publishing, automate the entire content execution pipeline.
- AI-Powered Generation: Leverage Google Gemini for creative concepts, prompts, and captions, and Replicate for image generation based on your initial topic.
- Content Calendar Execution: Directly turn your Google Sheet plan into published posts.
- Time Savings: Drastically reduce the manual effort involved in creating visuals and text for each planned post.
- Consistency: Maintain a regular posting schedule by automatically processing your queue of ideas.
- Platform-Specific Content: AI prompts are designed to tailor concepts, prompts, and captions for the platform specified in your sheet (e.g., Instagram or LinkedIn).

How it Works

1. Scheduled Trigger: The workflow starts automatically based on the schedule you set (e.g., every hour, daily).
2. Fetch Idea: Reads the next row from your Google Sheet where the 'Status' column indicates it's pending (e.g., '0'). It only fetches one idea per run.
3. Prepare Inputs: Extracts Topic, Audience, Voice, and Platform from the sheet data.
4. AI Concept Generation (Gemini): Creates a single content concept suitable for a 'Single Image' post on the target platform.
5. AI Prompt Generation (Gemini): Develops two detailed, distinct image prompt options based on the concept.
6. AI Caption Generation (Gemini): Writes a caption tailored to the platform, using the first image prompt and other context.
7. Image Generation (Replicate): Sends the first prompt to the Replicate API (Flux model) to generate the image.
8. Prepare for Instagram: Formats the generated image URL and caption.
9. Publish to Instagram: Uses the Facebook Graph API in three steps: creates a media container by uploading the image URL and caption, waits for Instagram to process the container, then publishes the processed container to your feed (see the sketch after the Setup section).
10. Update Sheet: Changes the 'Status' in the Google Sheet for the processed row (e.g., to '1') to mark it as complete.

n8n Nodes Used

- Schedule Trigger
- Google Sheets (read & update operations)
- Set (multiple instances for data preparation)
- Langchain Chain - LLM (multiple instances for Gemini calls)
- Langchain Chat Model - Google Gemini (multiple instances)
- Langchain Output Parser - Structured (multiple instances)
- HTTP Request (for the Replicate API call)
- Wait
- Facebook Graph API (multiple instances for the Instagram publishing steps)

Prerequisites

- Active n8n instance (Cloud or Self-Hosted).
- Google Account with access to Google Sheets.
- Google Sheets API Credentials (OAuth2) configured in n8n.
- A Google Sheet structured with columns like Topic, Audience, Voice, Platform, Status (or similar). Ensure your 'pending' and 'completed' statuses are defined (e.g., '0' and '1').
- Google Cloud Project with the Vertex AI API enabled.
- Google Gemini API Credentials configured in n8n (usually via Google Vertex AI credentials).
- Replicate Account and API Token, with Replicate API Credentials (Header Auth) configured in n8n.
- Facebook Developer Account and an Instagram Business Account connected to a Facebook Page.
- Facebook App with the necessary permissions: instagram_basic, instagram_content_publish, pages_read_engagement, pages_show_list.
- Facebook Graph API Credentials (OAuth2) configured in n8n with the required permissions.

Setup

1. Import the workflow JSON into your n8n instance.
2. Configure Schedule Trigger: Set the desired frequency (e.g., every 30 minutes, every 4 hours) for checking new ideas in the sheet.
3. Configure Google Sheets Nodes: Select your Google Sheets OAuth2 credentials for both Google Sheets nodes. In '1. Get Next Post Idea...', enter your Spreadsheet ID and Sheet Name and verify the Status filter matches your 'pending' value (e.g., 0). In '7. Update Post Status...', enter the same Spreadsheet ID and Sheet Name, and ensure the Matching Columns (e.g., Topic) and the Status value to update match your 'completed' value (e.g., 1).
4. Configure Google Gemini Nodes: Select your configured Google Vertex AI / Gemini credentials in all Google Gemini Chat Model nodes.
5. Configure Replicate Node ('4. Generate Image...'): Select your Replicate Header Auth credentials. The workflow uses black-forest-labs/flux-1.1-pro-ultra by default; you can change this if needed.
6. Configure Facebook Graph API Nodes (6a, 6c): Select your Facebook Graph API OAuth2 credentials. Crucially, update the Instagram Account ID parameter in both Facebook Graph API nodes (6a and 6c). The template uses a placeholder (17841473009917118); replace this with your actual Instagram Business Account ID.
7. Adjust Wait Node (6b): The default wait time is usually sufficient, but if you encounter errors during publishing (especially with larger images or, in the future, videos), you may need to increase the wait duration.
8. Activate the workflow.
9. Populate your Google Sheet: Ensure you have rows with your content ideas and the correct 'pending' status (e.g., '0'). The workflow will pick them up on its next scheduled run.
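The three-step publish in step 9 maps onto two Graph API calls with a wait in between. A minimal JavaScript sketch of the same sequence, assuming Graph API v19.0, a placeholder IG user ID, and an access token carrying the permissions listed in the prerequisites:

```javascript
// Sketch of the container flow performed by the Facebook Graph API nodes (6a-6c).
const GRAPH = 'https://graph.facebook.com/v19.0';

async function publishToInstagram(igUserId, accessToken, imageUrl, caption) {
  // 6a. Create a media container from the generated image URL and caption.
  const createParams = new URLSearchParams({ image_url: imageUrl, caption, access_token: accessToken });
  const container = await fetch(`${GRAPH}/${igUserId}/media?${createParams}`, { method: 'POST' })
    .then((r) => r.json());

  // 6b. Give Instagram time to process the container (the Wait node; increase if needed).
  await new Promise((resolve) => setTimeout(resolve, 30_000));

  // 6c. Publish the processed container to the feed.
  const publishParams = new URLSearchParams({ creation_id: container.id, access_token: accessToken });
  return fetch(`${GRAPH}/${igUserId}/media_publish?${publishParams}`, { method: 'POST' })
    .then((r) => r.json());
}
```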
This workflow transforms your Google Sheet content plan into a fully automated AI-powered Instagram publishing engine. Start automating your social media presence today!

By Onur · 44,902 views

Turn BBC News articles into podcasts using Hugging Face and Google Gemini

Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini

Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

Who is this for?

This template is perfect for:

- Content creators who want to quickly produce podcasts from current events.
- Students looking for an efficient way to create audio content for projects or assignments.
- Individuals interested in generating their own podcasts without technical expertise.

Setup Information

1. Install n8n: If you haven't already, download and install n8n from n8n.io.
2. Import the Workflow: Copy the JSON code for this workflow and import it into your n8n instance.
3. Configure Credentials: Set up your Gemini API credentials in the workflow's LLM nodes, then obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
4. Customize (Optional): Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences, or modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

Prerequisites

- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

What problem does it solve?

This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

What are the benefits?

- Time-saving: Create podcasts in minutes, not hours.
- Easy to use: No coding or technical skills required.
- Customizable: Adapt the workflow to your specific needs and preferences.
- Cost-effective: Leverage free or low-cost services like Gemini and Hugging Face.

How does it work?

1. The workflow fetches news articles from the BBC website.
2. It filters articles based on their suitability for a podcast.
3. It extracts the full content of the selected articles.
4. It uses the Gemini LLM to create a podcast script.
5. It converts the script to speech using Hugging Face's text-to-speech model (see the sketch below).
6. The final podcast audio is ready for use.

Nodes in the Workflow

- Fetch BBC News Page: Retrieves the main BBC News page.
- News Classifier: Categorizes news articles using the Gemini LLM.
- Fetch BBC News Detail: Extracts detailed content from suitable articles.
- Basic Podcast LLM Chain: Generates a podcast script using the Gemini LLM.
- HTTP Request: Converts the script to speech using Hugging Face.

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

Additional Tips

- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
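The text-to-speech step is a single HTTP call to the Hugging Face Inference API. A minimal sketch, assuming an illustrative MMS TTS model (substitute whichever model you configured in the HTTP Request node) and a Hugging Face access token:

```javascript
// Convert a podcast script to audio via the Hugging Face Inference API.
// The model name below is an example; the output container format depends on the model.
import { writeFile } from 'node:fs/promises';

async function scriptToSpeech(script, hfToken) {
  const res = await fetch('https://api-inference.huggingface.co/models/facebook/mms-tts-eng', {
    method: 'POST',
    headers: { Authorization: `Bearer ${hfToken}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ inputs: script }),
  });
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  // The API responds with raw audio bytes; save them as the podcast file.
  await writeFile('podcast-episode.wav', Buffer.from(await res.arrayBuffer()));
}
```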

By Onur · 5,241 views

Automated daily customer win-back campaign with AI offers

Proactively retain customers predicted to churn with this automated n8n workflow. Running daily, it identifies high-risk customers from your Google Sheet, uses Google Gemini to generate personalized win-back offers based on their churn score and preferences, sends these offers via Gmail, and logs all actions for tracking.

What does this workflow do?

This workflow automates the critical process of customer retention by:

- Running automatically every day on a schedule you define.
- Fetching customer data from a designated Google Sheet containing metrics like predicted churn scores and preferred categories.
- Filtering to identify customers with a high churn risk (score > 0.7) who haven't recently received a specific campaign (based on the created_campaign_date field; you might need to adjust this logic).
- Using Google Gemini AI to dynamically generate one of three types of win-back offers, personalized based on the customer's specific churn score and preferred product categories:
  - Informational (score 0.7-0.8): Highlights new items in preferred categories.
  - Bonus Points (score 0.8-0.9): Offers points for purchases in a target category (e.g., Books).
  - Discount Percentage (score 0.9-1.0): Offers a percentage discount in a target category (e.g., Books).
- Sending the personalized offer directly to the customer via Gmail.
- Logging each sent offer, or the absence of eligible customers for the day, in a separate 'SYSTEM_LOG' Google Sheet for monitoring and analysis.

Who is this for?

- CRM Managers & Retention Specialists: Automate personalized outreach to at-risk customers.
- Marketing Teams: Implement data-driven retention campaigns with minimal manual effort.
- E-commerce Businesses & Subscription Services: Proactively reduce churn and increase customer lifetime value.
- Anyone using customer data (especially churn prediction scores) who wants to automate personalized retention efforts via email.

Benefits

- Automated Retention: Set it up once, and it runs daily to engage at-risk customers automatically.
- AI-Powered Personalization: Go beyond generic offers; tailor messages based on churn risk and customer preferences using Gemini.
- Proactive Churn Reduction: Intervene before customers leave by addressing high churn scores with relevant offers.
- Scalability: Handle personalized outreach for many customers without manual intervention.
- Improved Customer Loyalty: Show customers you value them with relevant, timely offers.
- Action Logging: Keep track of which customers received offers and when the workflow ran.

How it Works

1. Daily Trigger: The workflow starts automatically based on the schedule set (e.g., daily at 9 AM).
2. Fetch Data: Reads all customer data from your 'Customer Data' Google Sheet.
3. Filter Customers: Selects customers where predicted_churn_score > 0.7 AND created_campaign_date is empty (verify this condition fits your needs; see the sketch at the end of this description).
4. Check for Eligibility: Determines if any customers passed the filter.
5. If eligible customers are found:
   - Loop: Processes each eligible customer one by one.
   - Generate Offer (Gemini): Sends the customer's predicted_churn_score and preferred_categories to Gemini. Gemini analyzes these and the defined rules to create the appropriate offer type, value, title, and detailed message, returning it as structured JSON.
   - Log Sent Offer: Records action_taken = SENT_WINBACK_OFFER, the timestamp, and customer_id in the 'SYSTEM_LOG' sheet.
   - Send Email: Uses the Gmail node to send an email to the customer's usermail address with the generated offer_title as the subject and offer_details as the body.
6. If no eligible customers are found:
   - Set Status: Creates a record indicating system_log = NOT_FOUND.
   - Log Status: Records this 'NOT_FOUND' status and the current timestamp in the 'SYSTEM_LOG' sheet.

n8n Nodes Used

- Schedule Trigger
- Google Sheets (x3: read customers, log sent offer, log not found)
- Filter
- If
- SplitInBatches (used for looping)
- Langchain Chain - LLM (Gemini offer generation)
- Langchain Chat Model - Google Gemini
- Langchain Output Parser - Structured
- Set (prepare the 'Not Found' log)
- Gmail (send offer email)

Prerequisites

- Active n8n instance (Cloud or Self-Hosted).
- Google Account with access to Google Sheets and Gmail.
- Google Sheets API Credentials (OAuth2) configured in n8n.
- Two Google Sheets:
  - 'Customer Data' sheet: must contain columns like customer_id, predicted_churn_score (numeric), preferred_categories (string, e.g., ["Books", "Electronics"]), usermail (string), and potentially created_campaign_date (date/string).
  - 'SYSTEM_LOG' sheet: should have columns like system_log (string), date (string/timestamp), and customer_id (string, optional for 'NOT_FOUND' logs).
- Google Cloud Project with the Vertex AI API enabled.
- Google Gemini API Credentials configured in n8n (usually via Google Vertex AI credentials).
- Gmail API Credentials (OAuth2) configured in n8n with permission to send emails.

Setup

1. Import the workflow JSON into your n8n instance.
2. Configure Schedule Trigger: Set the desired daily run time (e.g., Hours set to 9).
3. Configure Google Sheets Nodes: Select your Google Sheets OAuth2 credentials for all three Google Sheets nodes. In 'Fetch Customer Data...', enter your 'Customer Data' Spreadsheet ID and Sheet Name. In '5b. Log Sent Offer...' and '3b. Log Not Found...', enter your 'SYSTEM_LOG' Spreadsheet ID and Sheet Name and verify the column mapping.
4. Configure Filter Node ('2. Filter High Churn Risk...'): Crucially, review the second condition: {{ $json.created_campaign_date.isEmpty() }}. Ensure this field and logic correctly identify customers who should receive the offer based on your campaign strategy. Modify or remove it if necessary.
5. Configure Google Gemini Nodes: Select your configured Google Vertex AI / Gemini credentials in the Google Gemini Chat Model node. Review the prompt in the '5a. Generate Win-Back Offer...' node to ensure the offer logic matches your business rules (especially category names like "Books").
6. Configure Gmail Node ('5c. Send Win-Back Offer...'): Select your Gmail OAuth2 credentials.
7. Activate the workflow.
8. Ensure your 'Customer Data' and 'SYSTEM_LOG' Google Sheets are correctly set up and populated. The workflow will run automatically at the next scheduled time.

This workflow provides a powerful, automated way to engage customers showing signs of churn, using personalized AI-driven offers to encourage them to stay. Adapt the filtering and offer logic to perfectly match your business needs!
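The filter and offer-tier rules above reduce to a few comparisons. A minimal sketch, assuming the sheet column names used in the filter expression:

```javascript
// Eligibility check plus the three score-based offer tiers described in the template.
function selectOfferType(customer) {
  const score = Number(customer.predicted_churn_score);
  const eligible = score > 0.7 && !customer.created_campaign_date;
  if (!eligible) return null;               // skipped by the Filter node
  if (score <= 0.8) return 'INFORMATIONAL'; // highlight new items in preferred categories
  if (score <= 0.9) return 'BONUS_POINTS';  // points for purchases in a target category
  return 'DISCOUNT_PERCENTAGE';             // percentage discount in a target category
}

// Example: a 0.85 score with no prior campaign maps to the bonus-points tier.
console.log(selectOfferType({ predicted_churn_score: 0.85, created_campaign_date: '' })); // 'BONUS_POINTS'
```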

By Onur · 1,082 views

LinkedIn content factory with OpenAI research & Replicate branded images

Template Description:

> Never run out of high-quality LinkedIn content again. This workflow is a complete content factory that takes a simple topic from a Google Sheet, uses AI to research a trending angle, writes a full post, generates a unique and on-brand image, and publishes it directly to your LinkedIn profile.

This template is designed for brands and creators who want to maintain a consistent, high-quality social media presence with minimal effort. The core feature is its ability to generate visuals that adhere to a specific, customizable brand style guide.

---

πŸš€ What does this workflow do?

- Pulls content ideas from a Google Sheet acting as your content calendar.
- Uses an AI Researcher (OpenAI + SerpAPI) to find the most recent and engaging news or trends related to your topic.
- Employs an AI Writer to draft a complete, professional LinkedIn post with a catchy title, engaging text, and relevant hashtags.
- Generates a unique, on-brand image for every post using Replicate, based on a customizable style guide (colors, composition, mood) defined within the workflow.
- Publishes the post with its image directly to your LinkedIn profile.
- Updates the status in your Google Sheet to "done" to avoid duplicate posts.

🎯 Who is this for?

- Marketing Teams: Automate your content calendar and ensure brand consistency across all visuals.
- Social Media Managers: Save hours of research, writing, and design work.
- Solopreneurs & Founders: Maintain an active, professional LinkedIn presence without a dedicated content team.
- Content Creators: Scale your content production and focus on strategy instead of execution.

✨ Benefits

- End-to-End Automation: From a single keyword to a published post, the entire process is automated.
- Brand Consistency: The AI image generator follows a strict, customizable style guide, ensuring all your visuals feel like they belong to your brand.
- Always Relevant Content: The AI research step ensures your posts are based on current trends and news, increasing engagement.
- Massive Time Savings: Automates research, copywriting, and graphic design in one seamless flow.
- Content Calendar Integration: Easily manage your content pipeline using a simple Google Sheet.

βš™οΈ How it Works

1. Get Topic: The workflow fetches the next "Pending" topic from your Google Sheet.
2. AI Research: An AI Agent uses SerpAPI to research the topic and identify a viral angle.
3. AI Writing: A second AI Agent takes the research and writes the full LinkedIn post.
4. Generate Image Prompt: A Code node constructs a detailed prompt, merging the post's content with your defined brand style guide.
5. Generate Image: The prompt is sent to Replicate. The workflow waits and checks periodically until the image is ready.
6. Publish: The generated text and image are published to your LinkedIn account.
7. Update Status: The workflow archives the image to Google Drive and updates the topic's status in your Google Sheet to "done".

πŸ“‹ n8n Nodes Used

- Google Sheets
- Langchain Agent (with OpenAI & SerpAPI)
- Code
- HTTP Request
- Wait / If
- LinkedIn
- Google Drive

πŸ”‘ Prerequisites

- An active n8n instance.
- Google Account with Sheets & Drive access (OAuth2 credentials).
- OpenAI Account & API Key.
- SerpAPI Account & API Key (for the research tool).
- Replicate Account & API Token.
- LinkedIn Account (OAuth2 credentials).
- A Google Sheet with "Topic" and "Status" columns.

πŸ› οΈ Setup

1. Import the workflow into your n8n instance.
2. Configure All Credentials: Go through the workflow and connect your credentials for Google, OpenAI, SerpAPI, Replicate, and LinkedIn in their respective nodes.
3. Link Your Google Sheet: In the '1. Get Pending Topic...' node, select your spreadsheet and sheet. Do the same for the final '8. ...Update Status' node.
4. Customize Your Brand Style (Highly Recommended): In the '4. Generate Branded Image Prompt' (Code) node, edit the fixedImageStyleDetails variable. Change the RAL color codes and descriptive words to match your brand's visual identity (see the sketch below).
5. Populate Your Content Calendar: Add topics to your Google Sheet and set their status to "Pending".
6. Activate the workflow!
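For orientation, a hypothetical sketch of what the '4. Generate Branded Image Prompt' Code node might look like; the style string and field names below are illustrative placeholders, not the template's exact code:

```javascript
// n8n Code node: merge the AI-written post with a fixed brand style guide.
// Edit the style details (RAL codes, composition, mood) to match your brand.
const fixedImageStyleDetails =
  'minimalist flat illustration, brand colors RAL 5015 (sky blue) on RAL 9003 (signal white), ' +
  'generous negative space, calm professional mood';

const post = $input.first().json; // output of the AI Writer step (assumed shape)
return [{
  json: {
    imagePrompt: `Illustrate the core idea of this LinkedIn post: "${post.title}". Style: ${fixedImageStyleDetails}`,
  },
}];
```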

By Onur · 1,042 views

Extract Amazon product data with Scrape.do, GPT-4 & Google Sheets

Amazon Product Scraper with Scrape.do & AI Enrichment

> This workflow is a fully automated Amazon product data extraction engine. It reads product URLs from a Google Sheet, uses Scrape.do to reliably fetch each product page's HTML without getting blocked, and then applies an AI-powered extraction process to capture key product details such as name, price, rating, review count, and description. All structured results are neatly stored back into a Google Sheet for easy access and analysis.

This template is designed for consistency and scalability, making it ideal for marketers, analysts, and e-commerce professionals who need clean product data at scale.

---

πŸš€ What does this workflow do?

- Reads Input URLs: Pulls a list of Amazon product URLs from a Google Sheet.
- Scrapes HTML Reliably: Uses Scrape.do to bypass Amazon's anti-bot measures, ensuring the page HTML is always retrieved successfully.
- Cleans & Pre-processes HTML: Strips scripts, styles, and unnecessary markup, isolating only relevant sections like title, price, ratings, and feature bullets.
- AI-Powered Data Extraction: A LangChain/OpenRouter GPT-4 node verifies and enriches key fields: product name, price, rating, reviews, and description.
- Stores Structured Results: Appends all extracted and verified product data to a results tab in Google Sheets.
- Batch & Loop Control: Handles multiple URLs efficiently with Split In Batches to process as many products as you need.

🎯 Who is this for?

- E-commerce Sellers & Dropshippers: Track competitor prices, ratings, and key product features automatically.
- Marketing & SEO Teams: Collect product descriptions and reviews to optimize campaigns and content.
- Analysts & Data Teams: Build accurate product databases without manual copy-paste work.

✨ Benefits

- High Success Rate: Scrape.do handles proxy rotation and CAPTCHA challenges automatically, outperforming traditional scrapers.
- AI Validation: LLM verification ensures data accuracy and fills in gaps when HTML elements vary.
- Full Automation: Runs on demand or on a schedule to keep product datasets fresh.
- Clean Output: Results are neatly organized in Google Sheets, ready for reporting or integration with other tools.

βš™οΈ How it Works

1. Manual or Scheduled Trigger: Start the workflow manually or via a cron schedule.
2. Input Source: Fetch URLs from a Google Sheet (TRACK_SHEET_GID).
3. Scrape with Scrape.do: Retrieve full HTML from each Amazon product page using your SCRAPEDO_TOKEN (see the sketch below).
4. Clean & Pre-Extract: Strip irrelevant code and use regex to pre-extract key fields.
5. AI Extraction & Verification: The LangChain GPT-4 model refines and validates product name, description, price, rating, and reviews.
6. Save Results: Append enriched product data to the results sheet (RESULTS_SHEET_GID).

πŸ“‹ n8n Nodes Used

- Manual Trigger / Schedule Trigger
- Google Sheets (read & append)
- Split In Batches
- HTTP Request (Scrape.do)
- Code (clean & pre-extract HTML)
- LangChain LLM (OpenRouter GPT-4)
- Structured Output Parser

πŸ”‘ Prerequisites

- Active n8n instance.
- Scrape.do API token (bypasses Amazon anti-bot measures).
- Google Sheet with TRACK_SHEET_GID (tab containing product URLs) and RESULTS_SHEET_GID (tab for results).
- Google Sheets OAuth2 credentials shared with your service account.
- OpenRouter / OpenAI API credentials for the GPT-4 model.

πŸ› οΈ Setup

1. Import the workflow into your n8n instance.
2. Set Workflow Variables: SCRAPEDO_TOKEN (your Scrape.do API key), WEB_SHEET_ID (Google Sheet ID), TRACK_SHEET_GID (sheet/tab name for input URLs), RESULTS_SHEET_GID (sheet/tab name for results).
3. Configure Credentials for Google Sheets and OpenRouter.
4. Map Columns in the "add results" node to match your Google Sheet (e.g., name, price, rating, reviews, description).
5. Run or Schedule: Start manually or configure a schedule for continuous data extraction.

---

This Amazon Product Scraper delivers fast, reliable, and AI-enriched product data, keeping your e-commerce analytics, pricing strategies, and market research accurate and fully automated.
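For reference, a minimal sketch of the Scrape.do call the HTTP Request node makes for each product URL; the helper name is hypothetical, and the query parameters follow Scrape.do's token/url scheme:

```javascript
// Fetch a product page's raw HTML through Scrape.do.
async function fetchAmazonHtml(productUrl, scrapedoToken) {
  const api = new URL('https://api.scrape.do/');
  api.searchParams.set('token', scrapedoToken);
  api.searchParams.set('url', productUrl); // URL-encoded automatically by URLSearchParams
  const res = await fetch(api);
  if (!res.ok) throw new Error(`Scrape.do returned ${res.status}`);
  return res.text(); // raw HTML handed to the clean & pre-extract Code node
}
```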

By Onur · 675 views

Automate applicant tracking with GPT-4.1 CV parsing, Google Sheets and Gmail alerts

Template Description:

> Stop manually reading every CV and copy-pasting data into a spreadsheet. This workflow acts as an AI recruiting assistant, automating your entire initial screening process. It captures applications from a public form, uses AI to read and understand PDF CVs, structures the candidate data, saves it to Google Sheets, and notifies all parties.

This template is designed to save HR professionals and small business owners countless hours, ensuring no applicant is missed and all data is consistently structured and stored.

---

πŸš€ What does this workflow do?

- Provides a public web form for candidates to submit their name, email, experience, and PDF CV.
- Automatically reads the text content from the uploaded PDF CV.
- Uses an AI Agent (OpenAI) to intelligently parse the CV text, extracting key data like contact info, work experience, education, skills, and more.
- Writes a concise summary of the CV, perfect for quick screening by HR.
- Checks for duplicate applications based on the candidate's email address.
- Saves all structured applicant data into a new row in a Google Sheet, creating a powerful candidate database.
- Sends an automated confirmation email to the applicant.
- Sends a new-application alert with the CV summary to the recruiter.

🎯 Who is this for?

- HR Departments & Recruiters: Streamline your hiring pipeline and build a structured candidate database.
- Small Business Owners: Manage job applications professionally without dedicated HR software.
- Hiring Managers: Quickly get a summarized overview of each candidate without reading the full CV initially.

✨ Benefits

- Massive Time Savings: Drastically reduces the time spent on manual CV screening and data entry.
- Structured Candidate Data: Turns every CV into a consistently formatted row in a spreadsheet, making it easy to compare candidates.
- Never Miss an Applicant: Every submission is logged, and you're instantly notified.
- Improved Candidate Experience: Applicants receive an immediate confirmation that their submission was successful.
- AI-Powered Summaries: Get a quick, AI-generated summary of each CV delivered to your inbox.

βš™οΈ How it Works

1. Form Submission: A candidate fills out the n8n form and uploads their CV.
2. PDF Extraction: The workflow extracts the raw text from the PDF file.
3. AI Analysis: The text is sent to OpenAI with a prompt to structure all key information (experience, skills, etc.) into a JSON format.
4. Duplicate Check: The workflow checks your Google Sheet to see if the applicant's email already exists. If so, it stops (see the sketch below).
5. Save to Database: If the applicant is new, their structured data is saved as a new row in Google Sheets.
6. Send Notifications: Two emails are sent simultaneously: a confirmation to the applicant and a notification with the CV summary to the recruiter.

πŸ“‹ n8n Nodes Used

- Form Trigger
- Extract From File
- OpenAI
- Code (or JSON Parser)
- Google Sheets
- If
- Gmail

πŸ”‘ Prerequisites

- An active n8n instance.
- OpenAI Account & API Key.
- Google Account with access to Google Sheets and Gmail (OAuth2 credentials).
- A Google Sheet prepared with columns to store the applicant data (e.g., name, email, experience, skills, cv_summary, etc.).

πŸ› οΈ Setup

1. Import the workflow into your n8n instance.
2. Configure Credentials: Connect your credentials for OpenAI and Google (for Sheets & Gmail) in their respective nodes.
3. Customize the Form: In the '1. Applicant Submits Form' node, you can add or remove fields as needed.
4. Activate the workflow. Once active, copy the Production URL from the Form Trigger node and share it to receive applications.
5. Set Your Email: In the '8b. Send Notification...' (Gmail) node, change the "To" address to your own email address to receive alerts.
6. Link Your Google Sheet: In the '5. Check for Duplicate...' and '7. Save Applicant Data...' nodes, select your spreadsheet and sheet.
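The duplicate check in step 4 reduces to a case-insensitive email comparison against the rows already in the sheet. A minimal sketch with illustrative field names:

```javascript
// Return true when the submitted email already exists among the sheet rows.
function isDuplicateApplicant(formEmail, sheetRows) {
  const email = String(formEmail).trim().toLowerCase();
  return sheetRows.some((row) => String(row.email ?? '').trim().toLowerCase() === email);
}

// Example: the If node stops the workflow when this evaluates to true.
console.log(isDuplicateApplicant('jane@example.com', [{ email: 'JANE@example.com ' }])); // true
```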

By Onur · 228 views

Extract Zillow property data to Google Sheets with Scrape.do

🏠 Extract Zillow Property Data to Google Sheets with Scrape.do

This template requires a self-hosted n8n instance to run.

A complete n8n automation that extracts property listing data from Zillow URLs using the Scrape.do web scraping API, parses key property information, and saves structured results into Google Sheets for real estate analysis, market research, and property tracking.

πŸ“‹ Overview

This workflow provides a lightweight real estate data extraction solution that pulls property details from Zillow listings and organizes them into a structured spreadsheet. Ideal for real estate professionals, investors, market analysts, and property managers who need automated property data collection without manual effort.

Who is this for?

- Real estate investors tracking properties
- Market analysts conducting property research
- Real estate agents monitoring listings
- Property managers organizing data
- Data analysts building real estate databases

What problem does this workflow solve?

- Eliminates manual copy-paste from Zillow
- Processes multiple property URLs in bulk
- Extracts structured data (price, address, Zestimate, etc.)
- Automates saving results into Google Sheets
- Ensures repeatable & consistent data collection

βš™οΈ What this workflow does

1. Manual Trigger β†’ Starts the workflow manually
2. Read Zillow URLs from Google Sheets β†’ Reads property URLs from a Google Sheet
3. Scrape Zillow URL via Scrape.do β†’ Fetches full HTML from Zillow (bypasses PerimeterX protection)
4. Parse Zillow Data β†’ Extracts structured property information from the HTML (see the sketch below)
5. Write Results to Google Sheets β†’ Saves parsed data into a results sheet

πŸ“Š Output Data Points

| Field | Description | Example |
|-------|-------------|---------|
| URL | Original Zillow listing URL | https://www.zillow.com/homedetails/... |
| Price | Property listing price | $300,000 |
| Address | Street address | 8926 Silver City |
| City | City name | San Antonio |
| State | State abbreviation | TX |
| Days on Zillow | How long listed | 5 |
| Zestimate | Zillow's estimated value | $297,800 |
| Scraped At | Timestamp of extraction | 2025-01-29T12:00:00.000Z |

βš™οΈ Setup

Prerequisites

- n8n instance (self-hosted)
- Google account with Sheets access
- Scrape.do account with API token (1,000 free credits/month)

Google Sheet Structure

This workflow uses one Google Sheet with two tabs.

Input Tab: "Sheet1"

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| URLs | URL | Zillow listing URL | https://www.zillow.com/homedetails/123... |

Output Tab: "Results"

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| URL | URL | Original listing URL | https://www.zillow.com/homedetails/... |
| Price | Text | Property price | $300,000 |
| Address | Text | Street address | 8926 Silver City |
| City | Text | City name | San Antonio |
| State | Text | State code | TX |
| Days on Zillow | Number | Days listed | 5 |
| Zestimate | Text | Estimated value | $297,800 |
| Scraped At | Timestamp | When scraped | 2025-01-29T12:00:00.000Z |

πŸ›  Step-by-Step Setup

1. Import Workflow: Copy the JSON β†’ n8n β†’ Workflows β†’ + Add β†’ Import from JSON
2. Configure Scrape.do API: Sign up at the Scrape.do Dashboard and get your API token. In the HTTP Request node, replace YOUR_SCRAPEDO_TOKEN with your actual token. The workflow uses super=true for premium residential proxies (10 credits per request).
3. Configure Google Sheets: Create a new Google Sheet with two tabs, "Sheet1" (input) and "Results" (output). In Sheet1, add the header "URLs" in cell A1 and add Zillow URLs starting from A2. Set up Google Sheets OAuth2 credentials in n8n, then replace YOUR_SPREADSHEET_ID with your actual Google Sheet ID and YOUR_GOOGLE_SHEETS_CREDENTIAL_ID with your credential ID.
4. Run & Test: Add 1-2 test Zillow URLs in Sheet1, click "Execute workflow", and check the results in the Results tab.

🧰 How to Customize

- Add more fields: Extend the parsing logic in the "Parse Zillow Data" node to capture additional data (bedrooms, bathrooms, square footage).
- Filtering: Add conditions to skip certain properties or price ranges.
- Rate Limiting: Insert a Wait node between requests if processing many URLs.
- Error Handling: Add error branches to handle failed scrapes gracefully.
- Scheduling: Replace the Manual Trigger with a Schedule Trigger for automated daily/weekly runs.

πŸ“Š Use Cases

- Investment Analysis: Track property prices and Zestimates over time
- Market Research: Analyze listing trends in specific neighborhoods
- Portfolio Management: Monitor properties for sale in target areas
- Competitive Analysis: Compare similar properties across locations
- Lead Generation: Build databases of properties matching specific criteria

πŸ“ˆ Performance & Limits

- Single Property: ~5-10 seconds per URL
- Batch of 10: 1-2 minutes typical
- Large Sets (50+): 5-10 minutes depending on Scrape.do credits
- API Calls: 1 Scrape.do request per URL (10 credits with super=true)
- Reliability: 95%+ success rate with premium proxies

🧩 Troubleshooting

| Problem | Solution |
|---------|----------|
| API error 400 | Check your Scrape.do token and credits |
| URL showing "undefined" | Verify the Google Sheet column name is "URLs" (capital U) |
| No data parsed | Check whether Zillow changed their HTML structure |
| Permission denied | Re-authenticate Google Sheets OAuth2 in n8n |
| 50,000-character error | Verify the Parse Zillow Data code is extracting fields, not returning raw HTML |
| Price shows HTML/CSS | Update the price extraction regex in the Parse Zillow Data node |

🀝 Support & Community

- Scrape.do Documentation
- Scrape.do Dashboard
- Scrape.do Zillow Scraping Guide
- n8n Forum
- n8n Docs

🎯 Final Notes

This workflow provides a repeatable foundation for extracting Zillow property data with Scrape.do and saving it to Google Sheets. You can extend it with:

- Historical tracking (append timestamps)
- Price change alerts (compare with previous scrapes)
- Multi-platform scraping (Redfin, Realtor.com)
- Integration with CRM or reporting dashboards

Important: Scrape.do handles all anti-bot bypassing (PerimeterX, CAPTCHAs) automatically with rotating residential proxies, so you only pay for successful requests. Always use the super=true parameter for Zillow to ensure high success rates.
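A simplified sketch of the "Parse Zillow Data" approach; the regex patterns are illustrative and will need updating whenever Zillow changes its markup, and the real node extracts more fields (address, city, state, days on Zillow):

```javascript
// Pull a few fields out of the raw page HTML with regexes.
function parseZillowHtml(html, sourceUrl) {
  const pick = (re) => (html.match(re) ?? [])[1] ?? '';
  return {
    url: sourceUrl,
    price: pick(/"price"\s*:\s*"(\$[\d,]+)"/),       // hypothetical embedded-JSON pattern
    zestimate: pick(/"zestimate"\s*:\s*(\d+)/),      // hypothetical embedded-JSON pattern
    scrapedAt: new Date().toISOString(),             // the 'Scraped At' column
  };
}
```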

By Onur · 218 views

SERP competitor research with Scrape.do API & Google Sheets

πŸ” Extract Competitor SERP Rankings from Google Search to Sheets with Scrape.do This template requires a self-hosted n8n instance to run. A complete n8n automation that extracts competitor data from Google search results for specific keywords and target countries using Scrape.do SERP API, and saves structured results into Google Sheets for SEO, competitive analysis, and market research. --- πŸ“‹ Overview This workflow provides a lightweight competitor analysis solution that identifies ranking websites for chosen keywords across different countries. Ideal for SEO specialists, content strategists, and digital marketers who need structured SERP insights without manual effort. Who is this for? SEO professionals tracking keyword competitors Digital marketers conducting market analysis Content strategists planning based on SERP insights Business analysts researching competitor positioning Agencies automating SEO reporting What problem does this workflow solve? Eliminates manual SERP scraping Processes multiple keywords across countries Extracts structured data (position, title, URL, description) Automates saving results into Google Sheets Ensures repeatable & consistent methodology --- βš™οΈ What this workflow does Manual Trigger β†’ Starts the workflow manually Get Keywords from Sheet β†’ Reads keywords + target countries from a Google Sheet URL Encode Keywords β†’ Converts keywords into URL-safe format Process Keywords in Batches β†’ Handles multiple keywords sequentially to avoid rate limits Fetch Google Search Results β†’ Calls Scrape.do SERP API to retrieve raw HTML of Google SERPs Extract Competitor Data from HTML β†’ Parses HTML into structured competitor data (top 10 results) Append Results to Sheet β†’ Writes structured SERP results into a Google Sheet --- πŸ“Š Output Data Points | Field | Description | Example | |--------------------|------------------------------------------|-------------------------------------------| | Keyword | Original search term | digital marketing services | | Target Country | 2-letter ISO code of target region | US | | position | Ranking position in search results | 1 | | websiteTitle | Page title from SERP result | Digital Marketing Software & Tools | | websiteUrl | Extracted website URL | https://www.hubspot.com/marketing | | websiteDescription | Snippet/description from search results | Grow your business with HubSpot’s tools… | --- βš™οΈ Setup Prerequisites n8n instance (self-hosted) Google account with Sheets access Scrape.do account with SERP API token Google Sheet Structure This workflow uses one Google Sheet with two tabs: Input Tab: "Keywords" | Column | Type | Description | Example | |----------|------|-------------|---------| | Keyword | Text | Search query | digital marketing | | Target Country | Text | 2-letter ISO code | US | Output Tab: "Results" | Column | Type | Description | Example | |--------------------|-------|-------------|---------| | Keyword | Text | Original search term | digital marketing | | position | Number| SERP ranking | 1 | | websiteTitle | Text | Title of the page | Digital Marketing Software & Tools | | websiteUrl | URL | Website/page URL | https://www.hubspot.com/marketing | | websiteDescription | Text | Snippet text | Grow your business with HubSpot’s tools | --- πŸ›  Step-by-Step Setup Import Workflow: Copy the JSON β†’ n8n β†’ Workflows β†’ + Add β†’ Import from JSON Configure Scrape.do API: Endpoint: https://api.scrape.do/ Parameter: token=YOURSCRAPEDOTOKEN Add render=true for full HTML rendering Configure Google 
Sheets: Create a sheet with two tabs: Keywords (input), Results (output) Set up Google Sheets OAuth2 credentials in n8n Replace placeholders: YOURGOOGLESHEETID and YOURGOOGLESHEETSCREDENTIAL_ID Run & Test: Add test data in Keywords tab Execute workflow β†’ Check results in Results tab --- 🧰 How to Customize Add more fields: Extend HTML parsing logic in the β€œExtract Competitor Data” node to capture extra data (e.g., domain, sitelinks). Filtering: Exclude domains or results with custom rules. Batch Size: Adjust β€œProcess Keywords in Batches” for speed vs. rate-limits. Rate Limiting: Insert a Wait node (e.g., 10–30 seconds) if API rate limits apply. Multi-Sheet Output: Save per-country or per-keyword results into separate tabs. --- πŸ“Š Use Cases SEO Competitor Analysis: Identify top-ranking sites for target keywords Market Research: See how SERPs differ by region Content Strategy: Analyze titles & descriptions of competitor pages Agency Reporting: Automate competitor SERP snapshots for clients --- πŸ“ˆ Performance & Limits Single Keyword: ~10–20 seconds (depends on Scrape.do response) Batch of 10: 3–5 minutes typical Large Sets (50+): 20–40 minutes depending on API credits & batching API Calls: 1 Scrape.do request per keyword Reliability: 95%+ extraction success, 98%+ data accuracy --- 🧩 Troubleshooting API error β†’ Check YOURSCRAPEDOTOKEN and API credits No keywords loaded β†’ Verify Google Sheet ID & tab name = Keywords Permission denied β†’ Re-authenticate Google Sheets OAuth2 in n8n Empty results β†’ Check parsing logic and verify search term validity Workflow stops early β†’ Ensure batching loop (SplitInBatches) is properly connected --- 🀝 Support & Community n8n Forum: https://community.n8n.io n8n Docs: https://docs.n8n.io Scrape.do Dashboard: https://dashboard.scrape.do --- 🎯 Final Notes This workflow provides a repeatable foundation for extracting competitor SERP rankings with Scrape.do and saving them to Google Sheets. You can extend it with filtering, richer parsing, or integration with reporting dashboards to create a fully automated SEO intelligence pipeline.
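A minimal sketch of the encode-and-fetch steps, assuming Google's gl parameter for country targeting and Scrape.do's token/url/render parameters; the helper name is hypothetical:

```javascript
// Build a country-targeted Google search URL and route it through Scrape.do.
function buildSerpRequest(keyword, countryCode, scrapedoToken) {
  const googleUrl =
    `https://www.google.com/search?q=${encodeURIComponent(keyword)}&gl=${countryCode.toLowerCase()}`;
  const api = new URL('https://api.scrape.do/');
  api.searchParams.set('token', scrapedoToken);
  api.searchParams.set('url', googleUrl);
  api.searchParams.set('render', 'true'); // full HTML rendering, as in the setup notes
  return api.toString();
}

// Example: the HTTP Request node would GET this URL for each keyword row.
console.log(buildSerpRequest('digital marketing services', 'US', 'YOUR_SCRAPEDO_TOKEN'));
```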

By Onur · 210 views

Job post to sales lead pipeline with Scrape.do, Apollo.io & OpenAI

Lead Sourcing by Job Posts for Outreach with Scrape.do API, OpenAI & Google Sheets

Overview

This n8n workflow automates the complete lead generation process by scraping job postings from Indeed, enriching company data via Apollo.io, identifying decision-makers, and generating personalized LinkedIn outreach messages using OpenAI. It integrates with Scrape.do for reliable web scraping, Apollo.io for B2B data enrichment, OpenAI for AI-powered personalization, and Google Sheets for centralized data storage.

Perfect for: sales teams, recruiters, business development professionals, and marketing agencies looking to automate their outbound prospecting pipeline.

---

Workflow Components

⏰ Schedule Trigger

| Property | Value |
|----------|-------|
| Type | Schedule Trigger |
| Purpose | Automatically initiates the workflow on a recurring schedule |
| Frequency | Weekly (every Monday) |
| Time | 00:00 UTC |

Function: Ensures consistent, hands-off lead generation by running the pipeline automatically without manual intervention.

---

πŸ” Scrape.do Indeed API

| Property | Value |
|----------|-------|
| Type | HTTP Request (GET) |
| Purpose | Scrapes job listings from Indeed via the Scrape.do proxy API |
| Endpoint | https://api.scrape.do |
| Output Format | Markdown |

Request Parameters:

| Parameter | Value | Description |
|-----------|-------|-------------|
| token | API Token | Scrape.do authentication |
| url | Indeed Search URL | Target job search page |
| super | true | Uses residential proxies |
| geoCode | us | US-based content |
| render | true | JavaScript rendering enabled |
| device | mobile | Mobile viewport for cleaner HTML |
| output | markdown | Lightweight text output |

Function: Fetches Indeed job listings with anti-bot bypass, returning clean markdown for easy parsing.

---

πŸ“‹ Parse Indeed Jobs

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Extracts structured job data from markdown |
| Mode | Run once for all items |

Extracted Fields:

| Field | Description | Example |
|-------|-------------|---------|
| jobTitle | Position title | "Senior Data Engineer" |
| jobUrl | Indeed job link | "https://indeed.com/viewjob?jk=abc123" |
| jobId | Indeed job identifier | "abc123" |
| companyName | Hiring company | "Acme Corporation" |
| location | City, State | "San Francisco, CA" |
| salary | Pay range | "$120,000 - $150,000" |
| jobType | Employment type | "Full-time" |
| source | Data source | "Indeed" |
| dateFound | Scrape date | "2025-01-15" |

Function: Parses the markdown using regex patterns, filters invalid entries, and deduplicates by company name (a simplified sketch appears at the end of this description).

---

πŸ“Š Add New Company (Google Sheets)

| Property | Value |
|----------|-------|
| Type | Google Sheets Node |
| Purpose | Stores parsed job postings for tracking |
| Operation | Append rows |
| Target Sheet | "Add New Company" |

Function: Creates a historical record of all discovered job postings and companies for pipeline tracking.
---

🏒 Apollo Organization Search

| Property | Value |
|----------|-------|
| Type | HTTP Request (POST) |
| Purpose | Enriches company data via the Apollo.io API |
| Endpoint | https://api.apollo.io/v1/organizations/search |
| Authentication | HTTP Header Auth (x-api-key) |

Request Body:

```json
{
  "q_organization_name": "Company Name",
  "page": 1,
  "per_page": 1
}
```

Response Fields:

| Field | Description |
|-------|-------------|
| id | Apollo organization ID |
| name | Official company name |
| website_url | Company website |
| linkedin_url | LinkedIn company page |
| industry | Business sector |
| estimated_num_employees | Company size |
| founded_year | Year established |
| city, state, country | Location details |
| short_description | Company overview |

Function: Retrieves comprehensive company intelligence including LinkedIn profiles, industry classification, and employee count.

---

πŸ“€ Extract Apollo Org Data

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Parses the Apollo response and merges it with the original data |
| Mode | Run once for each item |

Function: Extracts relevant fields from the Apollo API response and combines them with the job posting data for downstream processing.

---

πŸ‘₯ Apollo People Search

| Property | Value |
|----------|-------|
| Type | HTTP Request (POST) |
| Purpose | Finds decision-makers at target companies |
| Endpoint | https://api.apollo.io/v1/mixed_people/search |
| Authentication | HTTP Header Auth (x-api-key) |

Request Body:

```json
{
  "organization_ids": ["apollo_org_id"],
  "person_titles": [
    "CTO",
    "Chief Technology Officer",
    "VP Engineering",
    "Head of Engineering",
    "Engineering Manager",
    "Technical Director",
    "CEO",
    "Founder"
  ],
  "page": 1,
  "per_page": 3
}
```

Response Fields:

| Field | Description |
|-------|-------------|
| first_name | Contact first name |
| last_name | Contact last name |
| title | Job title |
| email | Email address |
| linkedin_url | LinkedIn profile URL |
| phone_number | Direct phone |

Function: Identifies key stakeholders and decision-makers based on configurable title filters.

---

πŸ“ Format Leads

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Structures lead data for outreach |
| Mode | Run once for all items |

Function: Combines person data with company context, creating comprehensive lead profiles ready for personalization.

---

πŸ€– Generate Personalized Message (OpenAI)

| Property | Value |
|----------|-------|
| Type | OpenAI Node |
| Purpose | Creates custom LinkedIn connection messages |
| Model | gpt-4o-mini |
| Max Tokens | 150 |
| Temperature | 0.7 |

System Prompt:

You are a professional outreach specialist. Write personalized LinkedIn connection request messages. Keep messages under 300 characters. Be friendly, professional, and mention a specific reason for connecting based on their role and company.

User Prompt Variables:

| Variable | Source |
|----------|--------|
| Name | $json.fullName |
| Title | $json.title |
| Company | $json.companyName |
| Industry | $json.industry |
| Job Context | $json.jobTitle |

Function: Generates unique, contextual outreach messages that reference specific hiring activity and company details.

---

πŸ”— Merge Lead + Message

| Property | Value |
|----------|-------|
| Type | Code Node (JavaScript) |
| Purpose | Combines lead data with the generated message |
| Mode | Run once for each item |

Function: Merges the OpenAI response with the lead profile, creating the final enriched record.
---

πŸ’Ύ Save Leads to Sheet

| Property | Value |
|----------|-------|
| Type | Google Sheets Node |
| Purpose | Stores final lead data with personalized messages |
| Operation | Append rows |
| Target Sheet | "Leads" |

Data Mapping:

| Column | Data |
|--------|------|
| First Name | Lead's first name |
| Last Name | Lead's last name |
| Title | Job title |
| Company | Company name |
| LinkedIn URL | Profile link |
| Country | Location |
| Industry | Business sector |
| Date Added | Timestamp |
| Source | "Indeed + Apollo" |
| Personalized Message | AI-generated outreach text |

Function: Creates an actionable lead database ready for outreach campaigns.

---

Workflow Flow

⏰ Schedule Trigger
β†’ πŸ” Scrape.do Indeed API (fetches job listings with JS rendering)
β†’ πŸ“‹ Parse Indeed Jobs (extracts company names, job details)
β†’ πŸ“Š Add New Company (saves to Google Sheets, Companies)
β†’ 🏒 Apollo Org Search (enriches company data)
β†’ πŸ“€ Extract Apollo Org Data (parses the API response)
β†’ πŸ‘₯ Apollo People Search (finds decision-makers)
β†’ πŸ“ Format Leads (structures lead profiles)
β†’ πŸ€– Generate Personalized Message (AI creates custom outreach)
β†’ πŸ”— Merge Lead + Message (combines all data)
β†’ πŸ’Ύ Save Leads to Sheet (final storage, Leads)

---

Configuration Requirements

API Keys & Credentials

| Credential | Purpose | Where to Get |
|------------|---------|--------------|
| Scrape.do API Token | Web scraping with anti-bot bypass | scrape.do/dashboard |
| Apollo.io API Key | B2B data enrichment | apollo.io/settings/integrations |
| OpenAI API Key | AI message generation | platform.openai.com |
| Google Sheets OAuth2 | Data storage | n8n Credentials Setup |

n8n Credential Setup

| Credential Type | Configuration |
|-----------------|---------------|
| HTTP Header Auth (Apollo) | Header: x-api-key, Value: your Apollo API key |
| OpenAI API | API Key: your OpenAI API key |
| Google Sheets OAuth2 | Complete the OAuth flow with Google |

---

Key Features

πŸ” Intelligent Job Scraping

- Anti-Bot Bypass: Residential proxy rotation via Scrape.do
- JavaScript Rendering: Full headless browser for dynamic content
- Mobile Optimization: Cleaner HTML with a mobile viewport
- Markdown Output: Lightweight, easy-to-parse format

🏒 B2B Data Enrichment

- Company Intelligence: Industry, size, location, LinkedIn
- Decision-Maker Discovery: Title-based filtering
- Contact Information: Email, phone, LinkedIn profiles
- Real-Time Data: Fresh information from Apollo.io

πŸ€– AI-Powered Personalization

- Contextual Messages: Reference specific hiring activity
- Character Limit: Optimized for LinkedIn (300 chars)
- Variable Temperature: Balanced creativity and consistency
- Role-Specific: Tailored to the recipient's title and company

πŸ“Š Automated Data Management

- Dual Sheet Storage: Companies + Leads separation
- Timestamp Tracking: Historical records
- Deduplication: Prevents duplicate entries
- Ready for Export: CSV-compatible format

---

Use Cases

🎯 Sales Prospecting

- Identify companies actively hiring in your target market
- Find decision-makers at companies investing in growth
- Generate personalized cold outreach at scale
- Track the pipeline from discovery to contact

πŸ‘₯ Recruiting & Talent Acquisition

- Monitor competitor hiring patterns
- Identify companies building specific teams
- Connect with hiring managers directly
- Build talent pipeline relationships

πŸ“ˆ Market Intelligence

- Track industry hiring trends
- Monitor competitor expansion signals
- Identify emerging market opportunities
- Benchmark salary ranges by role

🀝 Partnership Development

- Find companies investing in complementary areas
- Identify potential integration partners
- Connect with technical leadership
- Build a strategic relationship pipeline

---

Technical Notes

| Specification | Value |
|---------------|-------|
| Processing Time | 2-5 minutes per run (depending on job count) |
| Jobs per Run | ~25 unique companies |
| API Calls per Run | 1 Scrape.do + ~25 Apollo Org + ~25 Apollo People + ~75 OpenAI |
| Data Accuracy | 90%+ for company matching |
| Success Rate | 99%+ with proper error handling |

Rate Limits to Consider

| Service | Free Tier Limit | Recommendation |
|---------|-----------------|----------------|
| Scrape.do | 1,000 credits/month | ~40 runs/month |
| Apollo.io | 100 requests/day | Add Wait nodes if needed |
| OpenAI | Based on usage | Monitor costs (~$0.01-0.05/run) |
| Google Sheets | 300 requests/minute | No issues expected |

---

Setup Instructions

Step 1: Import Workflow

1. Copy the JSON workflow configuration
2. In n8n: Workflows β†’ Import from JSON
3. Paste the configuration and save

Step 2: Configure Scrape.do

1. Sign up at scrape.do
2. Navigate to Dashboard β†’ API Token and copy your token (it is embedded in the URL query parameter, already configured)
3. To customize the search, change the url parameter in the "Scrape.do Indeed API" node: q=data+engineer (search term), l=Remote (location), fromage=7 (last 7 days)

Step 3: Configure Apollo.io

1. Sign up at apollo.io
2. Go to Settings β†’ Integrations β†’ API Keys and create a new API key
3. In n8n: Credentials β†’ Add Credential β†’ Header Auth (Name: x-api-key, Value: your Apollo API key)
4. Select this credential in both Apollo HTTP nodes

Step 4: Configure OpenAI

1. Go to platform.openai.com and create a new API key
2. In n8n: Credentials β†’ Add Credential β†’ OpenAI, and paste the API key
3. Select the credential in the "Generate Personalized Message" node

Step 5: Configure Google Sheets

1. Create a new Google Spreadsheet
2. Create two sheets: "Add New Company" (columns: companyName | jobTitle | jobUrl | location | salary | source | postedDate) and "Leads" (columns: First Name | Last Name | Title | Company | LinkedIn URL | Country | Industry | Date Added | Source | Personalized Message)
3. Copy the Sheet ID from the URL
4. In n8n: Credentials β†’ Add Credential β†’ Google Sheets OAuth2
5. Update both Google Sheets nodes with your Sheet ID

Step 6: Test and Activate

1. Manual Test: Click the "Execute Workflow" button
2. Verify Each Node: Check the outputs step by step
3. Review Data: Confirm the data appears in Google Sheets
4. Activate: Toggle the workflow to "Active"

---

Error Handling

Common Issues

| Issue | Cause | Solution |
|-------|-------|----------|
| "Invalid character: " | Empty/malformed company name | Check the Parse Indeed Jobs output |
| "Node does not have credentials" | Credential not linked | Open the node β†’ select the credential |
| Empty parse results | Indeed HTML structure changed | Check the raw Scrape.do output |
| Apollo rate limit (429) | Too many requests | Add a 5-10s Wait node between calls |
| OpenAI timeout | Too many tokens | Reduce batch size or max_tokens |
| "Your request is invalid" | Malformed JSON body | Verify expression syntax in the HTTP nodes |

Troubleshooting Steps

1. Verify Credentials: Test each credential individually
2. Check Node Outputs: Use "Execute Node" for debugging
3. Monitor API Usage: Check the Apollo and OpenAI dashboards
4. Review Logs: Check the n8n execution history for details
5. Test with a Sample: Use a known company name to verify Apollo

Recommended Error Handling Additions

For production use, consider adding:

- An If node after Apollo Org Search to handle empty results
- An Error Workflow trigger for notifications
- Wait nodes between API calls for rate limiting
- Retry logic for transient failures

---

Performance Specifications

| Metric | Value |
|--------|-------|
| Execution Time | 2-5 minutes per scheduled run |
| Jobs Discovered | ~25 per Indeed page |
| Leads Generated | 1-3 per company (based on title matches) |
| Message Quality | Professional, contextual, <300 chars |
| Data Freshness | Real-time from Indeed + Apollo |
| Storage Format | Google Sheets (unlimited rows) |

---

API Reference

Scrape.do API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| https://api.scrape.do | GET | Direct URL scraping |

Documentation: scrape.do/documentation

Apollo.io API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /v1/organizations/search | POST | Company lookup |
| /v1/mixed_people/search | POST | People search |

Documentation: apolloio.github.io/apollo-api-docs

OpenAI API

| Endpoint | Method | Purpose |
|----------|--------|---------|
| /v1/chat/completions | POST | Message generation |

Documentation: platform.openai.com
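To make the parsing step concrete, a simplified sketch of the "Parse Indeed Jobs" logic; the link pattern is illustrative, and the real node also extracts company, location, salary, and job type and deduplicates by company name:

```javascript
// Pull job links out of the markdown returned by Scrape.do and deduplicate.
function parseIndeedMarkdown(markdown) {
  const jobs = [];
  const seen = new Set();
  // Matches markdown links like: [Senior Data Engineer](https://www.indeed.com/viewjob?jk=abc123)
  const linkRe = /\[([^\]]+)\]\((https?:\/\/[^)]*viewjob\?jk=(\w+)[^)]*)\)/g;
  for (const [, jobTitle, jobUrl, jobId] of markdown.matchAll(linkRe)) {
    if (seen.has(jobId)) continue; // simplified: dedupe by job ID here
    seen.add(jobId);
    jobs.push({
      jobTitle,
      jobUrl,
      jobId,
      source: 'Indeed',
      dateFound: new Date().toISOString().slice(0, 10),
    });
  }
  return jobs;
}
```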

By Onur · 37 views