Create an RSS feed based on a website's content
This workflow parses content from a website (for this example, Baserow's release page) and creates an RSS feed based on the extracted data.
Prerequisites
- Some familiarity with HTML and CSS selectors
Nodes
- Webhook node triggers the workflow whenever the feed URL is requested (for example, by an RSS reader checking for new Baserow releases).
- Set nodes set the required URLs and links for the RSS feed.
- HTTP Request node fetches data from a specified website page.
- HTML Extract nodes extract the posts and their fields (such as date, title, description, and link) from the website.
- Item Lists node splits the extracted posts so each post becomes a separate item.
- Date & Time node converts the date of the post to a different format.
- Function Item node builds an RSS item entry for each post (see the sketch after this list).
- Function node assembles the complete RSS XML returned as the feed response.
- Respond to Webhook node returns the RSS feed in response to the Webhook node.
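As a rough illustration, the Function Item step might look like the snippet below. This is a sketch, not the template's exact code; the field names (title, link, date, description) are assumed from the extracted fields listed above.

```javascript
// Function Item node: runs once per post (the node exposes the current
// item as `item` and expects it to be returned).
// Field names are assumptions based on the fields HTML Extract pulls out.
const esc = (s) => String(s)
  .replace(/&/g, '&amp;')
  .replace(/</g, '&lt;')
  .replace(/>/g, '&gt;');

item.rssItem = `<item>
  <title>${esc(item.title)}</title>
  <link>${esc(item.link)}</link>
  <pubDate>${item.date}</pubDate>
  <description>${esc(item.description)}</description>
</item>`;

return item;
```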
Create an RSS Feed from Website Content
This n8n workflow allows you to generate a custom RSS feed by scraping content from a specified website. It's ideal for tracking updates on websites that don't offer their own RSS feeds, or for consolidating specific information into a single feed.
What it does
This workflow automates the following steps:
- Triggers on Demand: The workflow runs whenever its webhook URL is requested; it can also be triggered manually while testing.
- Fetches Website Content: It makes an HTTP request to a specified URL to retrieve the HTML content of the webpage.
- Extracts Relevant Data: Using HTML Extract, it parses the fetched HTML to pull out specific elements like titles, links, and descriptions based on CSS selectors.
- Formats Data: It processes the extracted data, potentially cleaning it up or transforming it into a structured format suitable for an RSS feed. This includes converting dates into the format RSS readers expect (see the sketch after this list).
- Generates RSS Feed XML: It dynamically constructs an RSS feed XML string from the processed data.
- Responds with RSS Feed: The generated RSS feed XML is then returned as the response to a webhook, making it accessible via a URL.
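RSS readers expect pubDate values in RFC 822 format, which is what the Date & Time node handles in this workflow. In plain JavaScript the conversion looks roughly like this (the raw value is a made-up example):

```javascript
// Convert a scraped date string to the RFC 822 format RSS expects.
const raw = 'December 1, 2021';            // hypothetical extracted value
const pubDate = new Date(raw).toUTCString();
// e.g. "Wed, 01 Dec 2021 00:00:00 GMT" (exact value depends on server timezone)
```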
Prerequisites/Requirements
- n8n Instance: A running instance of n8n.
- Target Website: The URL of the website you want to scrape for content.
- CSS Selectors: Knowledge of CSS selectors to identify the specific elements (titles, links, descriptions) on the target website that you wish to include in your RSS feed (the snippet after this list shows a quick way to test selectors).
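One quick way to find working selectors is to test candidates in your browser's DevTools console before touching the workflow; the selector below is purely illustrative:

```javascript
// Paste into the DevTools console on the target page to test a selector
// before configuring the HTML Extract node. The selector is hypothetical.
document.querySelectorAll('article .release-title')
  .forEach((el) => console.log(el.textContent.trim()));
```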
Setup/Usage
- Import the Workflow: Import the provided JSON into your n8n instance.
- Configure the HTTP Request node: Set the URL to the website you want to scrape.
- Configure the HTML Extract node: Define the CSS selectors that extract the desired data (e.g., article titles, links, summaries). You will likely need to inspect the target website's HTML to find the correct selectors.
- Configure the Function and Function Item nodes: These nodes contain the JavaScript that processes the extracted data and formats it for the RSS feed. You may need to adjust this code to match the structure of the data extracted by the HTML Extract node and the RSS feed structure you want (a sketch of this code follows the list).
- Activate the Workflow: Once configured, activate the workflow.
- Access the RSS Feed: The Webhook node provides a unique URL. When you access this URL, the workflow executes and returns the generated RSS feed XML. You can then use this URL in any RSS reader.
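As a companion to the Function Item sketch above, here is a minimal, hypothetical version of the Function node that wraps the collected items in the RSS envelope; the channel title, link, and description are placeholders to replace with your own:

```javascript
// Function node: runs once over all items and builds the final feed.
// Channel metadata below is placeholder text, not the template's values.
const itemsXml = items.map((i) => i.json.rssItem).join('\n');

const rss = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Scraped Feed</title>
    <link>https://example.com</link>
    <description>Feed generated with n8n</description>
${itemsXml}
  </channel>
</rss>`;

return [{ json: { rss } }];
```

In the Respond to Webhook node, return the rss field as the response body and set a Content-Type response header such as application/rss+xml (or text/xml) so feed readers parse the response correctly.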
Note: Be mindful of the website's robots.txt and terms of service when scraping content. Excessive or unauthorized scraping can lead to your IP being blocked.
Related Templates
Create an offline DIGIPIN microservice API for precise location mapping in India
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
What is DIGIPIN? DIGIPIN (Digital Pincode) is a 10-character alphanumeric code introduced by India Post. It maps any 3x3 meter square in India to a unique digital address. This helps precisely locate homes, shops, or landmarks, especially in areas where physical addresses are inconsistent or missing.
What this workflow does: It creates a fully offline DIGIPIN microservice using only JavaScript; no external APIs are used. You get two HTTP endpoints:
- GET /generate-digipin?lat={latitude}&lon={longitude} → returns a DIGIPIN
- GET /decode-digipin?digipin={code} → returns the latitude and longitude
You can plug this into any system to convert GPS coordinates to a DIGIPIN or convert a DIGIPIN back to coordinates.
How it works: An HTTP Webhook node receives the request, a JS Function node either encodes or decodes based on the input, and the result is returned as a JSON response. All the logic is handled inside the workflow; no API keys, no external calls.
Why use this: It is fast and lightweight, easily extendable (you can connect it to forms, CRMs, apps, or spreadsheets), and ideal for field agents, address validation, logistics, or rural operations.
Get email notifications for newly uploaded Google Drive files
This workflow sends out email notifications when a new file has been uploaded to Google Drive. The workflow uses two nodes:
- Google Drive Trigger: triggers the workflow whenever a new file has been uploaded to a given folder
- Send Email: sends out the email using data from the previous Google Drive Trigger node
Automate testimonials in Strapi with n8n
This is the workflow powering the n8n demo shown at StrapiConf 2022. The workflow searches matching Tweets every 30 minutes using the Interval node and listens to Form submissions using the Webhook node. Sentiment analysis is handled by Google using the Google Cloud Natural Language node before the result is stored in Strapi using the Strapi node. (These were originally two separate workflows that have been combined into one to simplify sharing.)