
Scrape LinkedIn jobs with Gemini AI and store in Google Sheets using RSS

Abdullah Alshiekh
1589 views
2/3/2026
Official Page

What Problem Does it Solve

This workflow automates the process of finding and collecting job postings from LinkedIn, eliminating the need for manual job searching. It's designed to save time and ensure you don't miss out on new opportunities by automatically populating a spreadsheet with key job details.

Key Features

  • Automated Data Collection: The workflow pulls job posts from a LinkedIn search via an RSS feed.

  • Intelligent Data Extraction: It scrapes the full job description and uses AI to summarize the key benefits and job responsibilities into a concise format.

  • Centralized Database: All collected and processed information is automatically saved to a Google Sheet, providing a single source of truth for your job search.

How It Works

The workflow starts when manually triggered. It reads the job posts from a given RSS feed, processing each one individually. For each job, it fetches the full webpage content to extract structured data. This data is then cleaned and passed to an AI model, which generates a brief summary of the job and its benefits. Finally, a new row is either added or updated in a Google Sheet with all the collected details, including the job title, company name, and AI-generated summary.
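The loop described above can be sketched in plain JavaScript. This is an illustration of the data flow only, not the actual n8n node graph: `fetchPage`, `summarize`, and `upsertRow` are hypothetical stand-ins for the HTTP Request, AI, and Google Sheets nodes.

```javascript
// Sketch of the per-job loop: fetch page -> clean HTML -> summarize -> write row.
// The three injected functions stand in for n8n nodes; their shapes are assumptions.
function cleanHtml(html) {
  // Strip tags and collapse whitespace, as the workflow does before calling the AI.
  return html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
}

function runOnce(feedItems, { fetchPage, summarize, upsertRow }) {
  for (const item of feedItems) {
    const description = cleanHtml(fetchPage(item.link)); // full job page content
    const ai = summarize(description);                   // e.g. { summary: "..." }
    upsertRow({ title: item.title, link: item.link, ...ai });
  }
}

// Example run with mock dependencies:
const rows = [];
runOnce(
  [{ title: "Data Analyst", link: "https://example.com/job/2" }],
  {
    fetchPage: () => "<p>Analyze data. Benefits: remote work.</p>",
    summarize: (text) => ({ summary: text.slice(0, 200) }),
    upsertRow: (row) => rows.push(row),
  }
);
console.log(rows);
```

In the real workflow each of these steps is a separate node, so errors in one job can be retried without rerunning the whole batch.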

Configuration & Customization

This workflow is highly customizable to fit your specific needs.

  • RSS Feed: To get started, you'll need to provide the RSS feed URL for your desired LinkedIn job search. We can help you set this up.

  • AI Model: The workflow uses Google Gemini by default, but it can be adjusted to work with other AI platforms.

  • Data Destination: The output is configured to a Google Sheet, but it can easily be changed to a different platform like Notion or a CRM.

  • AI Prompting: The AI's instructions are customizable, so you can tailor the output to extract different information or match a specific tone.

If you need any help, Get In Touch.

Scrape LinkedIn Jobs with Gemini AI and Store in Google Sheets Using RSS

This n8n workflow automates the process of scraping LinkedIn job listings, enriching them with AI-generated summaries and sentiment analysis using Google Gemini, and then storing the processed data in a Google Sheet. It's designed to help you keep track of relevant job opportunities efficiently.

What it does

  1. Triggers Manually: The workflow starts when manually executed.
  2. Reads RSS Feed: It fetches the latest job postings from a specified RSS feed (e.g., a LinkedIn job search RSS feed).
  3. Loops Over Items: Each job posting from the RSS feed is processed individually in a loop.
  4. Extracts HTML Content: For each job, it makes an HTTP request to the job's URL to fetch its full HTML content.
  5. Parses HTML: It then extracts specific details from the HTML, such as the job description, using CSS selectors.
  6. Enriches with AI (Google Gemini):
    • It sends the extracted job description to a Google Gemini Chat Model.
    • An AI Agent is used to generate a concise summary of the job description and perform sentiment analysis (e.g., "positive," "negative," or "neutral").
    • A Structured Output Parser ensures the AI's response is formatted correctly (e.g., as JSON with summary and sentiment fields).
  7. Formats Data: A Code node transforms and combines the original RSS feed data with the AI-generated summary and sentiment into a structured format suitable for Google Sheets.
  8. Appends to Google Sheets: Finally, it appends the processed job data (including title, link, summary, and sentiment) as a new row in a specified Google Sheet.
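The formatting step (7) can be sketched as a small merge function. The field names `summary` and `sentiment` follow the structured output described above; the exact input shapes of your own RSS and AI nodes may differ, and `creator` as the company field is an assumption about the feed.

```javascript
// Hedged sketch of the Code node in step 7: merge an RSS item with the
// AI agent's parsed output into one flat row for Google Sheets.
function buildSheetRow(rssItem, aiOutput) {
  return {
    title: rssItem.title ?? "",
    link: rssItem.link ?? "",
    company: rssItem.creator ?? "",        // assumption: feeds often expose the poster here
    summary: aiOutput.summary ?? "",
    sentiment: aiOutput.sentiment ?? "neutral",
    collectedAt: new Date().toISOString(), // handy for deduplication later
  };
}

// Example usage with mock data:
const row = buildSheetRow(
  { title: "Backend Engineer", link: "https://example.com/job/1", creator: "Acme" },
  { summary: "Remote backend role with Go and Postgres.", sentiment: "positive" }
);
console.log(row);
```

Flattening to one object per job keeps the Google Sheets node's column mapping simple: each key becomes one column.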

Prerequisites/Requirements

  • n8n Instance: A running n8n instance.
  • Google Sheets Account: A Google account with access to Google Sheets. You'll need to create a new spreadsheet or specify an existing one for the job data.
  • Google Gemini API Key: An API key for Google Gemini (configured as a credential in n8n).
  • LinkedIn Job Search RSS Feed: A URL for a LinkedIn job search RSS feed. You can generate this by performing a search on LinkedIn and looking for an RSS icon or a way to subscribe to the search results.

Setup/Usage

  1. Import the Workflow:
    • Download the provided JSON workflow.
    • In your n8n instance, go to "Workflows" and click "New".
    • Click the three dots in the top right corner and select "Import from JSON".
    • Paste the JSON content or upload the file.
  2. Configure Credentials:
    • Google Sheets: Set up a Google Sheets credential. This typically involves OAuth2 authentication to grant n8n access to your Google Sheets.
    • Google Gemini Chat Model: Configure a Google Gemini credential using your API key.
  3. Update Node Parameters:
    • RSS Read (Node 37): Update the "URL" parameter with your LinkedIn job search RSS feed URL.
    • HTTP Request (Node 19): Ensure the URL is correctly mapped to {{ $json.link }} to fetch the content of each job link.
    • HTML (Node 842):
      • Adjust the "CSS Selector" for extracting the job description if LinkedIn's HTML structure changes. You might need to inspect a LinkedIn job page to find the correct selector for the main job description content.
    • AI Agent (Node 1119): Review the prompt for the AI Agent to ensure it generates the desired summary and sentiment.
    • Google Sheets (Node 18):
      • Specify the "Spreadsheet ID" of your target Google Sheet.
      • Specify the "Sheet Name" where the data should be appended.
      • Ensure the "Values" are correctly mapped from the Code node's output (e.g., {{ $json.title }}, {{ $json.link }}, {{ $json.summary }}, {{ $json.sentiment }}).
  4. Execute the Workflow:
    • Click "Execute Workflow" in the "Manual Trigger" node to run the workflow.
    • For continuous monitoring, consider replacing the "Manual Trigger" with a "Cron" node to run at scheduled intervals.
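If LinkedIn changes its markup, it helps to understand what the HTML node's CSS selector is doing. Below is a minimal stand-in sketch; the real workflow uses n8n's HTML node with a selector, and the class name `description__text` is an assumption about LinkedIn's markup that can change at any time.

```javascript
// Illustration only: extract the job-description block from raw HTML.
// A regex stands in for the HTML node's CSS-selector extraction.
function extractDescription(html) {
  const match = html.match(
    /<div[^>]*class="[^"]*description__text[^"]*"[^>]*>([\s\S]*?)<\/div>/i
  );
  if (!match) return null;
  // Strip remaining tags and collapse whitespace.
  return match[1].replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
}

const sample =
  '<div class="show-more-less-html__markup description__text">' +
  "<p>We are hiring a <b>backend engineer</b>.</p></div>";
console.log(extractDescription(sample));
```

When the selector stops matching, inspect a live job page in your browser's dev tools and update the selector in the HTML node rather than the downstream code.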

Related Templates

Automate Dutch Public Procurement Data Collection with TenderNed

TenderNed Public Procurement

What This Workflow Does

This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:

  • Fetches the latest tender publications from the TenderNed API
  • Retrieves detailed information in both XML and JSON formats for each tender
  • Parses and extracts key information like organization names, titles, descriptions, and reference numbers
  • Filters results based on your custom criteria
  • Stores the data in a database for easy querying and analysis

Setup Instructions

This template comes with sticky notes providing step-by-step instructions in Dutch and various query options you can customize.

Prerequisites

  • TenderNed API Access: Register at TenderNed for API credentials

Configuration Steps

  1. Set up TenderNed credentials: Add HTTP Basic Auth credentials with your TenderNed API username and password, then apply them to the three HTTP Request nodes: "Tenderned Publicaties", "Haal XML Details", and "Haal JSON Details".
  2. Customize filters: Modify the "Filter op ..." node to match your specific requirements (e.g., specific organizations, contract values, regions).

How It Works

  1. Trigger: The workflow can be triggered manually for testing or automatically on a daily schedule.
  2. Fetch Publications: Makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request).
  3. Process & Split: Extracts the tender array from the response and splits it into individual items for processing.
  4. Fetch Details: For each tender, the workflow makes two parallel API calls: the XML endpoint retrieves the complete tender documentation, and the JSON endpoint fetches metadata including reference numbers and keywords.
  5. Parse & Merge: Parses the XML data and merges it with the JSON metadata and batch information into a single data structure.
  6. Extract Fields: Maps the raw API data to clean, structured fields: publication ID and date, organization name, tender title and description, and reference numbers (kenmerk, TED number).
  7. Filter: Applies your custom filter criteria to focus on relevant tenders only.
  8. Store: Inserts the processed data into your database for storage and future analysis.

Customization Tips

  • Modify API parameters: In the "Tenderned Publicaties" node you can adjust offset (starting position for pagination) and size (number of results per request, max 100), or add query parameters for date ranges, status filters, etc.
  • Add more fields: Extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as contract value estimates, deadline dates, CPV codes (procurement classification), or contact information.
  • Integrate notifications: Add a Slack, Email, or Discord node after the filter to get notified about new matching tenders.
  • Incremental updates: Fetch only new tenders by storing the last execution timestamp, adding date filters to the API query, and only processing publications newer than the last run.

Troubleshooting

No data returned? Verify that your TenderNed API credentials are correct and that your filter is set up properly.

Need help setting this up, or interested in a complete tender analysis solution? Get in touch: 🔗 LinkedIn – Wessel Bulte

By Wessel Bulte
247

AI multi-agent executive team for entrepreneurs with Gemini, Perplexity and WhatsApp

This workflow is an AI-powered multi-agent system built for startup founders and small business owners who want to automate decision-making, accountability, research, and communication, all through WhatsApp. The "virtual executive team" is designed to help small teams work smarter. It sends you market analysis and marketing and sales tips, monitors what your competitors are doing using Perplexity (the Research agent) to help you stay ahead and make better decisions, and when you are feeling stuck with your start-up, the Accountability Director is creative enough to break the barrier.

🎯 Core Features

  • 🧑‍💼 President (Super Agent): Acts as the main controller that coordinates all sub-agents. Routes messages, assigns tasks, and ensures workflow synchronization between the AI Directors.
  • 📊 Sales & Marketing Director: Uses SerpAPI to search for market opportunities, leads, and trends. Suggests marketing campaigns, keywords, or outreach ideas. Can analyze current engagement metrics to adjust content strategy.
  • 🕵️‍♀️ Business Research Director: Powered by Perplexity AI for competitive and market analysis. Monitors competitor moves, social media engagement, and product changes. Provides concise insights to help the founder adapt and stay ahead.
  • ⏰ Accountability Director: Keeps the founder and executive team on track. Sends motivational nudges, task reminders, and progress reports. Promotes consistency and discipline, key traits for early-stage success.
  • 🗓️ Executive Secretary: Handles scheduling, email drafting, and reminders. Connects with Google Calendar, Gmail, and Sheets through OAuth. Automates follow-ups, meeting summaries, and notifications directly via WhatsApp.

💬 WhatsApp as the Main Interface

Interact naturally with your AI team through the WhatsApp Business API. All responses, updates, and summaries are delivered to your chat. Ideal for founders who want to manage operations on the go.

⚙️ How It Works

  1. Trigger: The workflow starts from a WhatsApp Trigger node (via a Meta Developer Account).
  2. Routing: The President agent analyzes the incoming message and determines which Director should handle it.
  3. Processing: Marketing or sales queries go to the Sales & Marketing Director; research questions are handled by the Business Research Director; accountability tasks are assigned to the Accountability Director; scheduling or communication requests are managed by the Secretary.
  4. Collaboration: Each sub-agent returns results to the President, who summarizes and sends the reply back via WhatsApp.
  5. Memory: Context is maintained between sessions, ensuring personalized and coherent communication.

🧩 Integrations Required

  • Gemini API: general intelligence and task reasoning
  • Supabase: RAG and Postgres persistent memory
  • Perplexity API: business and competitor analysis
  • SerpAPI: market research and opportunity scouting
  • Google OAuth: to connect Sheets, Calendar, and Gmail
  • WhatsApp Business API: message triggers and responses

🚀 Benefits

  • Acts like a team of tireless employees available 24/7.
  • Saves time by automating research, reminders, and communication.
  • Enhances accountability and strategy consistency for founders.
  • Keeps operations centralized in a simple WhatsApp interface.

🧰 Setup Steps

  1. Create API credentials for WhatsApp (via a Meta Developer Account); Gemini, Perplexity, and SerpAPI; and Google OAuth (Sheets, Calendar, Gmail).
  2. Create a Supabase account at supabase.
  3. Add the credentials in the corresponding n8n nodes.
  4. Customize the system prompts for each Director based on your startup's needs.
  5. Activate and start interacting with your virtual executive team on WhatsApp.

Use Case

If you are a small organisation or start-up that cannot afford to hire a marketing department, a research department, and a secretarial office, this workflow is for you.

💡 Need Customization?

Want to tailor it for your startup or integrate with CRM tools like Notion or HubSpot? You can easily extend the workflow or contact the creator for personalized support. Consider adjusting the system prompts to suit your business.

By Shadrack
331

Automated YouTube video uploads with 12h interval scheduling in JST

This workflow automates a batch upload of multiple videos to YouTube, spacing each upload 12 hours apart in Japan Standard Time (UTC+9) and automatically adding them to a playlist.

⚙️ Workflow Logic

  1. Manual Trigger: Starts the workflow manually.
  2. List Video Files: Uses a shell command to find all .mp4 files under the specified directory (/opt/downloads/单词卡/A1-A2).
  3. Sort and Generate Items: Sorts videos by day number (dayXX) extracted from filenames and assigns a sequential order value.
  4. Calculate Publish Schedule (+12h Interval): Computes the next rounded JST hour plus a configurable buffer (default 30 min), staggers each video's scheduled time by order × 12 hours, and converts JST back to UTC for YouTube's publishAt field.
  5. Split in Batches (1 per video): Iterates over each video item.
  6. Read Video File: Loads the corresponding video from disk.
  7. Upload to YouTube (Scheduled): Uploads the video privately with the computed publishAtUtc.
  8. Add to Playlist: Adds the newly uploaded video to the target playlist.

🕒 Highlights

  • Timezone-safe: Pure UTC ↔ JST conversion avoids double-offset errors.
  • Sequential scheduling: Ensures each upload is 12 hours apart to prevent clustering.
  • Customizable: Change SPANHOURS, BUFFERMIN, or directory paths easily.
  • Retry-ready: Each upload and playlist step has retry logic to handle transient errors.

💡 Typical Use Cases

  • Multi-part educational video series (e.g., A1–A2 English learning).
  • Regular content release cadence without manual scheduling.
  • Automated YouTube publishing pipelines for pre-produced content.

---
Author: Zane
Category: Automation / YouTube / Scheduler
Timezone: JST (UTC+09:00)
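The schedule computation this template describes can be sketched as follows. SPAN_HOURS and BUFFER_MIN mirror the configurable values the template mentions; the exact node code may differ, and the rounding behavior shown here is an assumption.

```javascript
// Sketch of the "+12h interval" publish-time computation in JST.
// JST is a fixed UTC+9 offset with no DST, so plain millisecond arithmetic is safe.
const JST_OFFSET_MS = 9 * 60 * 60 * 1000;
const SPAN_HOURS = 12;  // gap between consecutive uploads
const BUFFER_MIN = 30;  // safety margin before the first slot

function publishAtUtc(order, now = new Date()) {
  // Shift to JST, then round up to the next full hour after the buffer.
  const jst = new Date(now.getTime() + JST_OFFSET_MS + BUFFER_MIN * 60 * 1000);
  jst.setUTCMinutes(0, 0, 0);
  jst.setUTCHours(jst.getUTCHours() + 1);
  // Stagger by order × SPAN_HOURS, then convert JST back to UTC for publishAt.
  const slot = jst.getTime() + order * SPAN_HOURS * 60 * 60 * 1000;
  return new Date(slot - JST_OFFSET_MS).toISOString();
}

// e.g. for now = 2026-01-01T00:00:00Z, order 0 lands at 10:00 JST = 01:00 UTC
console.log(publishAtUtc(0, new Date("2026-01-01T00:00:00Z")));
```

Converting back to UTC only once, at the very end, is what keeps the workflow "timezone-safe": no intermediate value is ever double-offset.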

By Zane
226