
Automate research paper collection with Bright Data & n8n

Yaron Been
2/3/2026

Description

This workflow automatically collects and organizes research papers from academic databases and journals into Google Sheets. It helps researchers and students save time by eliminating manual searches across multiple academic sources and centralizing research materials.

Overview

This workflow automatically scrapes research papers from academic databases and journals, then organizes them in Google Sheets. It uses Bright Data to access academic sources and extracts key information like titles, authors, abstracts, and citations.

Tools Used

  • n8n: The automation platform that orchestrates the workflow.
  • Bright Data: For scraping academic websites and research databases without getting blocked.
  • Google Sheets: For organizing and storing research paper data.

How to Install

  1. Import the Workflow: Download the .json file and import it into your n8n instance.
  2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
  3. Connect Google Sheets: Authenticate your Google account.
  4. Customize: Specify research topics, journals, or authors to track.

Use Cases

  • Academic Researchers: Stay updated on new papers in your field.
  • Students: Collect research for literature reviews and dissertations.
  • Research Teams: Collaborate on literature databases.


n8n Workflow: Automate Research Paper Collection with Bright Data

This n8n workflow automates the process of collecting research paper data from a specified URL, extracting key information, and storing it in a Google Sheet. It's designed to streamline data acquisition for research purposes by leveraging web scraping.

What it does

This workflow performs the following steps:

  1. Manual Trigger: Initiates the workflow manually when "Execute workflow" is clicked.
  2. HTTP Request: Fetches the content of a specified URL (likely a research paper listing or search results page).
  3. HTML Parsing: Extracts structured data from the HTML returned by the HTTP Request node, using CSS selectors that target elements such as paper titles, links, and other relevant details.
  4. Code (JavaScript): Runs custom JavaScript to further process or transform the extracted data, for example cleaning it, reformatting it, or applying additional logic before storage (a sketch follows this list).
  5. Edit Fields (Set): Modifies, adds, or removes fields from the incoming data. This is used to standardize the data structure before writing to Google Sheets.
  6. Google Sheets: Appends the processed research paper data as new rows to a designated Google Sheet.
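
For illustration, here is a minimal sketch of the kind of cleanup the Code node might perform, written for n8n's "Run Once for All Items" mode. The field names (title, link, authors) and the base URL are assumptions; match them to whatever your HTML node actually outputs.

    // Hedged sketch of a possible Code-node body (n8n "Run Once for All Items" mode).
    // Field names and the base URL are assumptions -- adapt them to your HTML node's output.
    const baseUrl = 'https://example-journal.org'; // hypothetical source site

    return $input.all().map((item) => {
      const raw = item.json;
      return {
        json: {
          // Collapse whitespace and trim the title
          title: (raw.title || '').replace(/\s+/g, ' ').trim(),
          // Resolve relative links against the source site
          url: raw.link && raw.link.startsWith('http')
            ? raw.link
            : new URL(raw.link || '', baseUrl).href,
          // Normalize the author list to a single comma-separated string
          authors: Array.isArray(raw.authors) ? raw.authors.join(', ') : (raw.authors || ''),
          scrapedAt: new Date().toISOString(),
        },
      };
    });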

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance (cloud or self-hosted).
  • Google Account: A Google account with access to Google Sheets.
  • Google Sheets Credential: An n8n credential configured for Google Sheets (OAuth 2.0 or Service Account).
  • Target URL: The URL of the website containing the research papers you wish to scrape.
  • Bright Data (Optional, Implied by Directory Name): The workflow JSON does not reference Bright Data explicitly, but the directory name "5221-automate-research-paper-collection-with-bright-data--n8n" suggests it is meant to act as a proxy or web-scraping service for the HTTP Request node, making scraping more robust. To use it, create a Bright Data account and route the HTTP Request node through its proxy network (see the sketch after this list).
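
If you do route requests through a proxy service like Bright Data, the equivalent standalone request looks roughly like the sketch below (inside n8n you would instead set the proxy option on the HTTP Request node). The proxy host, port, and credential format here are placeholders rather than verified values; take the real ones from your Bright Data dashboard.

    // Minimal Node.js sketch of routing a request through an HTTP proxy such as Bright Data.
    // The proxy URL below is a placeholder -- copy the exact host, port, and credentials
    // from your own Bright Data zone settings.
    import { fetch, ProxyAgent } from 'undici';

    const proxyUrl = 'http://YOUR_PROXY_USER:YOUR_PROXY_PASSWORD@proxy.example.com:33335'; // hypothetical

    const res = await fetch('https://example-journal.org/search?q=machine+learning', {
      dispatcher: new ProxyAgent(proxyUrl), // tunnel the request through the proxy
    });
    console.log(res.status, (await res.text()).slice(0, 200));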

Setup/Usage

  1. Import the Workflow: Download the provided JSON and import it into your n8n instance.
  2. Configure Credentials:
    • Set up your Google Sheets credential in n8n if you haven't already.
  3. Configure Nodes:
    • HTTP Request: Update the URL parameter to the specific research paper page you want to scrape. You may also need to configure headers or authentication if the target website requires it. If using Bright Data, configure proxy settings here.
    • HTML: Review and adjust the CSS selectors within this node to accurately extract the desired data (e.g., paper titles, author names, publication dates, links) from the target website's HTML structure; a standalone extraction sketch follows this list.
    • Code: If you need specific data transformations, modify the JavaScript code within this node.
    • Edit Fields (Set): Ensure the fields are set correctly to match the columns in your Google Sheet.
    • Google Sheets:
      • Select your Google Sheets credential.
      • Specify the Spreadsheet ID of your target Google Sheet.
      • Specify the Sheet Name where the data should be appended.
      • Ensure the Operation is set to "Append Row" or a similar action.
  4. Create Google Sheet: Create a new Google Sheet with appropriate column headers (e.g., "Title", "URL", "Authors", "Date") that match the data fields being output by the "Edit Fields (Set)" node.
  5. Execute Workflow: Click "Execute Workflow" on the "Manual Trigger" node to run the workflow and start collecting data.
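
As a reference for tuning the selectors in step 3, this standalone sketch performs the same kind of extraction as the HTML node, written with the cheerio library. Every selector and field name here is an invented example; inspect the target page's markup and substitute its real structure.

    // Standalone sketch of the extraction the HTML node performs, using cheerio.
    // All selectors (.result-item, h3 a, .authors, .pub-date) are invented examples.
    import * as cheerio from 'cheerio';

    function extractPapers(html) {
      const $ = cheerio.load(html);
      return $('.result-item') // hypothetical container for one search result
        .map((_, el) => ({
          title: $(el).find('h3 a').text().trim(),
          url: $(el).find('h3 a').attr('href'),
          authors: $(el).find('.authors').text().trim(),
          date: $(el).find('.pub-date').text().trim(),
        }))
        .get();
    }

The objects it returns line up with the sheet columns suggested in step 4 ("Title", "URL", "Authors", "Date").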

Related Templates

  • Track competitor SEO keywords with Decodo + GPT-4.1-mini + Google Sheets (by Ranjan Dailata)
  • Generate song lyrics and music from text prompts using OpenAI and Fal.ai Minimax (by Daniel Nkencho)
  • Automate Dutch Public Procurement Data Collection with TenderNed (by Wessel Bulte)