Add articles to a Notion list by accessing a Discord slash command
This workflow allows you to add articles to a Notion reading list by issuing a Discord slash command. It extracts the title and description from the submitted URL and creates a new entry in your Notion list.
Prerequisites
- A running n8n instance.
- A Notion account, an n8n Notion credential with access to your workspace, and a reading list database similar to this template.
- A Discord account with a server where you can set up a slash command, and a Discord application (or bot) configured to send slash command payloads to the n8n webhook URL.
Nodes
- Webhook node triggers the workflow whenever the Discord slash command is issued.
- IF node checks the interaction type Discord sends. Type 1 is Discord's verification ping, so the node returns true (continue processing) only when the type is not 1; see the payload sketch after this list.
- HTTP Request node fetches the HTML of the submitted webpage.
- HTML Extract node extracts the title from the HTML, which the next node uses.
- Notion node adds the link to your Notion reading list.
- Set nodes set the reply values for Discord and register the Interactions Endpoint URL.
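For context, Discord sends a type 1 PING to the Interactions Endpoint URL when you register it and expects a pong back; real slash command invocations arrive as type 2. A minimal sketch of that branching logic in TypeScript (the field names follow Discord's interaction payload; the handler itself is illustrative, not part of the workflow):

```typescript
// Minimal sketch of the IF node's branching: distinguish Discord's
// verification ping (type 1) from a real slash command (type 2).
interface DiscordInteraction {
  type: number; // 1 = PING (endpoint verification), 2 = APPLICATION_COMMAND
  data?: {
    name: string;                                // e.g. "addarticle"
    options?: { name: string; value: string }[]; // slash command arguments
  };
}

function handleInteraction(interaction: DiscordInteraction) {
  if (interaction.type === 1) {
    // Verification ping: answer with a PONG so Discord accepts the endpoint.
    return { type: 1 };
  }
  // Real command: pull the article URL out of the first option and continue.
  const url = interaction.data?.options?.[0]?.value;
  return { url };
}
```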
What it does
- Listens for a Webhook Trigger: The workflow is activated by an incoming webhook, typically from a Discord slash command integration.
- Extracts URL from Request: It expects the Discord slash command to pass a URL as part of its payload.
- Fetches Article Content: It makes an HTTP request to the provided URL to retrieve the article's HTML content.
- Extracts Title and Description: It uses HTML extraction to pull the article's title and meta description from the fetched HTML (see the sketch after this list).
- Formats Data: It prepares the extracted data into a structured format suitable for Notion.
- Adds to Notion Database: Finally, it creates a new page (article entry) in your designated Notion database with the extracted title, description, and the original URL.
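To make the extraction step concrete, here is a rough TypeScript equivalent of what the HTML Extract node does, sketched with the cheerio library (an assumption for illustration; the node is configured with the same CSS selectors):

```typescript
import * as cheerio from "cheerio";

// Rough equivalent of the HTML Extract node: pull the <title> text and
// the meta description out of the fetched page with CSS selectors.
function extractArticleInfo(html: string) {
  const $ = cheerio.load(html);
  const title = $("title").first().text().trim();
  const description =
    $('meta[name="description"]').attr("content")?.trim() ?? "";
  return { title, description };
}
```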
Setup/Usage
- Import the Workflow: Import the provided JSON into your n8n instance.
- Configure Webhook:
- The "Webhook" node (ID: 47) is your trigger. Copy its test or production URL.
- In Discord, create a slash command (e.g., /addarticle) and set your application's Interactions Endpoint URL to this webhook URL so Discord POSTs the command payload to n8n. The command should include an option for the article URL, which arrives in the webhook payload (e.g., {{$json.data.options[0].value}} if the URL is the first option); a registration sketch follows below.
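If you register the command through Discord's HTTP API rather than a library, the request looks roughly like this (a sketch: YOUR_APP_ID and YOUR_BOT_TOKEN are placeholders, and the command and option names are assumptions matching the example above):

```typescript
// Sketch: register the /addarticle slash command via Discord's HTTP API.
// YOUR_APP_ID and YOUR_BOT_TOKEN are placeholders.
const response = await fetch(
  "https://discord.com/api/v10/applications/YOUR_APP_ID/commands",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bot YOUR_BOT_TOKEN",
    },
    body: JSON.stringify({
      name: "addarticle",
      description: "Add an article to the Notion reading list",
      options: [
        {
          type: 3, // STRING option
          name: "url",
          description: "Link to the article",
          required: true,
        },
      ],
    }),
  }
);
```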
- Configure Notion Credential:
- Open the "Notion" node (ID: 487).
- Select or create a new Notion API credential. Ensure it has access to the Notion database you intend to use.
- Specify the Database ID of your Notion article list.
- Map the properties in the Notion node to the incoming data (e.g., title to the Notion title property, description to a text property, url to a URL property); a payload sketch follows below.
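Under the hood, the Notion node issues a page-creation request shaped roughly like the sketch below (based on Notion's public API; the property names Name, Description, and URL are assumptions and must match your own database schema):

```typescript
// Sketch: the page-creation payload the Notion node ultimately sends.
// DATABASE_ID and the property names must match your own database.
const notionPayload = {
  parent: { database_id: "DATABASE_ID" },
  properties: {
    Name: {
      title: [{ text: { content: "Extracted article title" } }],
    },
    Description: {
      rich_text: [{ text: { content: "Extracted meta description" } }],
    },
    URL: {
      url: "https://example.com/article",
    },
  },
};

// Sent as: POST https://api.notion.com/v1/pages
// with headers Authorization: Bearer <integration token>,
// Notion-Version: 2022-06-28, Content-Type: application/json.
```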
- Activate the Workflow: Save and activate the workflow in n8n.
- Test: Go to your Discord server and use your configured slash command, providing an article URL. The workflow should execute, and a new entry should appear in your Notion database.
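For reference, the reply the Set nodes hand back to Discord is a standard interaction response; a minimal sketch (the message text is an assumption):

```typescript
// Sketch: the reply payload the Set nodes return to Discord.
// Type 4 = CHANNEL_MESSAGE_WITH_SOURCE (respond immediately with a message).
const discordReply = {
  type: 4,
  data: {
    content: "Article added to your Notion reading list!",
  },
};
```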