
Filter cybersecurity news for your tech stack (OpenAI + Pinecone RAG)

Will Carlson
1947 views
2/3/2026

What it does:

Collects cybersecurity news from trusted RSS feeds and uses Retrieval-Augmented Generation (RAG) with OpenAI and Pinecone to filter for content that is directly relevant to your organization’s tech stack. “Relevant” means the AI looks for news items that mention your specific tools, vendors, frameworks, cloud platforms, programming languages, operating systems, or security solutions — as described in your .txt scope documents. By embedding these documents in a vector store, the system understands the environment you operate in and can prioritize news that could affect your security posture, compliance, or operational stability. Once filtered, summaries of the most important items are sent to your work email every day.
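
For reference, a scope document can be as simple as a plain-text inventory. The example below is purely illustrative; every product and vendor named is a placeholder for whatever actually runs in your environment.

```text
# scope.txt (illustrative placeholders only)
Cloud platforms: AWS (EC2, S3, Lambda, EKS), some workloads on Azure
Operating systems: Ubuntu 22.04, Windows Server 2019
Languages and frameworks: Python (Django), TypeScript (Node.js, React)
Security tooling: CrowdStrike Falcon, Okta SSO, Cloudflare WAF
Key vendors and SaaS: GitHub, Atlassian (Jira, Confluence), Snowflake
Compliance context: SOC 2, GDPR
```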

How it works

  • Pulls in news from multiple cybersecurity-focused RSS feeds: The workflow automatically collects articles from trusted, high-signal security news sources. These feeds cover threat intelligence, vulnerability disclosures, vendor advisories, and industry updates.
  • Filters articles for recency and direct connection to your documented tech stack: Using the publish date, it removes stale or outdated content. Then, leveraging your .txt scope documents stored in Pinecone, it checks each article for references to your technologies, vendors, platforms, or security tools.
  • Uses OpenAI to generate and review concise summaries: For each relevant article, OpenAI creates a short, clear summary of the key points. The AI also evaluates whether the article provides actionable or critical information before passing it through.
  • Indexes your scope in the Pinecone Vector Store (free) for context-aware filtering: Your scope documents are embedded into a vector store so the AI can “remember” your environment. This context ensures the filtering process understands indirect or non-obvious connections to your tech stack (a standalone sketch of this retrieval step follows this list).
  • Aggregates and sends only the most critical items to your work email: The system compiles the highest-priority news items into one daily digest, so you can review key developments without wading through irrelevant stories.
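
The filtering step described above is a standard RAG similarity check. The sketch below, written with the OpenAI and Pinecone Python clients rather than in n8n, shows the basic idea under a few assumptions: the index name cyber-news, the text-embedding-3-small model (1536 dimensions), and the score threshold are placeholders, not values taken from the workflow.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("cyber-news")                  # hypothetical index name (dimension 1536)

def embed(text: str) -> list[float]:
    """Embed text with a 1536-dimension OpenAI model, matching the index dimension."""
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def article_seems_relevant(article_text: str, threshold: float = 0.4) -> bool:
    """Compare an article against the scope documents stored in the 'scope' namespace."""
    result = index.query(
        vector=embed(article_text),
        top_k=3,
        namespace="scope",
        include_metadata=True,
    )
    best_score = max((match.score for match in result.matches), default=0.0)
    return best_score >= threshold              # the threshold is a tunable assumption
```

In the actual workflow this comparison is handled by the AI Agent and Pinecone Vector Store nodes; the threshold-style decision here is only one way to express the same idea.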

What you need to do:

  1. Set up your OpenAI and Pinecone credentials in the workflow
  2. Create and configure a Pinecone index (dimension 1536 recommended)
    1. Pinecone is free to set up.
    2. Set up Pinecone with a single free index.
    3. Use a namespace like scope.
    4. Make sure the same embedding model is used everywhere the workflow writes to or queries Pinecone. (A standalone setup sketch follows this list.)
  3. Submit .txt scope documents listing your technologies, vendors, platforms, frameworks, and security products.
    1. .txt does not need to be structured.
    2. Add as much detail as possible.
  4. Update AI prompts to accurately describe your company’s environment and priorities.
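
As a rough companion to steps 2 and 3, the sketch below shows one way to create the recommended 1536-dimension index and load .txt scope documents into the scope namespace with the Pinecone and OpenAI Python clients. The index name, serverless cloud/region, and file names are assumptions for illustration; the points that matter are the dimension, the namespace, and using the same embedding model for indexing and querying.

```python
from openai import OpenAI
from pinecone import Pinecone, ServerlessSpec

openai_client = OpenAI()
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")

index_name = "cyber-news"                       # hypothetical; match whatever the workflow uses

# One free index, dimension 1536 to match OpenAI's text-embedding-3-small
# (or text-embedding-ada-002). Cloud/region are assumptions; check your Pinecone project.
if index_name not in pc.list_indexes().names():
    pc.create_index(
        name=index_name,
        dimension=1536,
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )

index = pc.Index(index_name)

# Embed each scope document with the same model used at query time,
# then store it in the "scope" namespace.
scope_files = ["scope_infrastructure.txt", "scope_security_tools.txt"]   # hypothetical file names
for path in scope_files:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    emb = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    index.upsert(
        vectors=[{"id": path, "values": emb.data[0].embedding, "metadata": {"source": path}}],
        namespace="scope",
    )
```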

Filter Cybersecurity News for Your Tech Stack (OpenAI & Pinecone RAG)

This n8n workflow helps you stay updated on cybersecurity news relevant to your specific tech stack. It leverages AI (OpenAI) for intelligent filtering and a vector database (Pinecone) for efficient retrieval-augmented generation (RAG).

What it does

This workflow automates the process of:

  1. Triggering: It can be triggered either by a scheduled interval (e.g., daily) or manually via a form submission.
  2. Fetching News: Reads the latest articles from an RSS feed (likely a cybersecurity news source, though the specific URL is not in the JSON).
  3. Processing Articles: Iterates through each news article.
  4. Loading Documents: Prepares the article content for AI processing using a default data loader.
  5. Generating Embeddings: Converts the article content into numerical representations (embeddings) using OpenAI's embedding model.
  6. Storing in Vector Database: Stores these embeddings in a Pinecone vector store, making them searchable by similarity.
  7. AI-Powered Filtering (RAG): Uses an OpenAI Chat Model and a Simple Memory to act as an AI Agent. This agent likely queries the Pinecone vector store with your defined tech stack to retrieve relevant articles (a condensed pipeline sketch follows this list).
  8. Conditional Routing: Based on the AI Agent's output, it determines if an article is relevant.
  9. Sending Notifications: If an article is deemed relevant, it sends an email notification via Gmail.
  10. Merging Results: Combines the processed items.
  11. Aggregating Data: Gathers and structures the final output.
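
To see the whole flow in one place, here is a condensed Python sketch of the same pipeline outside n8n, reusing the article_seems_relevant helper from the earlier filtering sketch. The feed URL, freshness window, and chat model are placeholders, and the real workflow delivers the digest through the Gmail node rather than printing it.

```python
import time

import feedparser                                # pip install feedparser
from openai import OpenAI

openai_client = OpenAI()
FEED_URLS = ["https://example.com/security-news.xml"]   # placeholder feed URLs
MAX_AGE_SECONDS = 24 * 60 * 60                           # keep items from the last day only

def recent_entries(url: str):
    """Yield feed entries published within the freshness window."""
    for entry in feedparser.parse(url).entries:
        published = entry.get("published_parsed")
        if published and time.time() - time.mktime(published) <= MAX_AGE_SECONDS:
            yield entry

def summarize(title: str, body: str) -> str:
    """Ask the chat model for a short, action-oriented summary of one article."""
    resp = openai_client.chat.completions.create(
        model="gpt-4o-mini",                     # assumed model; set in the OpenAI node in n8n
        messages=[
            {"role": "system", "content": "Summarize cybersecurity news in two or three "
                                          "sentences, focusing on impact and required actions."},
            {"role": "user", "content": f"{title}\n\n{body}"},
        ],
    )
    return resp.choices[0].message.content

digest = []
for url in FEED_URLS:
    for entry in recent_entries(url):
        text = f"{entry.title}\n{entry.get('summary', '')}"
        if article_seems_relevant(text):          # defined in the earlier filtering sketch
            digest.append(f"- {entry.title}\n  {summarize(entry.title, entry.get('summary', ''))}")

print("\n\n".join(digest) if digest else "No relevant items today.")
```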

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance.
  • OpenAI API Key: For the OpenAI Chat Model and Embeddings OpenAI nodes.
  • Pinecone API Key & Environment: For the Pinecone Vector Store node.
  • Gmail Account: Configured as a credential in n8n for sending email notifications.
  • RSS Feed URL: The URL of the cybersecurity news RSS feed you wish to monitor.
  • Defined Tech Stack: Your specific tech stack details will need to be configured within the AI Agent node's prompt or tools to enable relevant filtering.

Setup/Usage

  1. Import the Workflow: Download the JSON file and import it into your n8n instance.
  2. Configure Credentials:
    • Set up your OpenAI API Key credential.
    • Set up your Pinecone API Key and Environment credential.
    • Set up your Gmail credential.
  3. Configure RSS Feed: In the "RSS Read" node, update the URL to your desired cybersecurity news RSS feed.
  4. Customize AI Agent:
    • In the "AI Agent" node, you will need to define the prompt that guides the AI on what constitutes "relevant" news based on your tech stack. This is where you specify your technologies (e.g., "Kubernetes vulnerabilities," "AWS security updates," "Python library exploits"). An illustrative example prompt appears after this list.
    • Ensure the "Pinecone Vector Store" and "Embeddings OpenAI" nodes are correctly linked as tools/components for the AI Agent.
    • The "Simple Memory" node will help the AI Agent maintain context during its processing.
  5. Configure Gmail Notification: In the "Gmail" node, specify the recipient email address, subject, and body for the notifications. You can use expressions to include details from the filtered news articles.
  6. Choose Trigger:
    • Scheduled Trigger: If you want the workflow to run automatically, configure the "Schedule Trigger" node with your desired interval (e.g., every 24 hours).
    • Form Trigger: If you prefer to trigger it manually or via an external system, use the "n8n Form Trigger" node.
  7. Activate the Workflow: Once configured, activate the workflow in n8n.
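
As a starting point for step 4, here is an illustrative AI Agent prompt. Every technology named is a placeholder; replace the inventory with the contents of your own scope documents and adjust the relevance rules to your priorities.

```text
You are a security news analyst for our organization.

Our environment (placeholders -- replace with your own stack):
- Cloud: AWS (EC2, S3, EKS) and Azure AD for identity
- Languages and frameworks: Python (Django), TypeScript (React)
- Security tooling: CrowdStrike Falcon, Okta, Cloudflare WAF

Mark an article as RELEVANT only if it describes a vulnerability, breach,
advisory, or major change affecting one of these technologies or vendors,
or a directly comparable product. Otherwise mark it NOT RELEVANT.
For relevant articles, write a two to three sentence summary focused on
impact and recommended action.
```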

Related Templates

AI-powered code review with linting, red-marked corrections in Google Sheets & Slack

Advanced Code Review Automation (AI + Lint + Slack)

Who’s it for: Software engineers, QA teams, and tech leads who want to automate intelligent code reviews with both AI-driven suggestions and rule-based linting — all managed in Google Sheets with instant Slack summaries.

How it works: This workflow performs a two-layer review system:
  • Lint Check: Runs a lightweight static analysis to find common issues (e.g., use of var, console.log, unbalanced braces).
  • AI Review: Sends valid code to Gemini AI, which provides human-like review feedback with severity classification (Critical, Major, Minor) and visual highlights (red/orange tags).
  • Formatter: Combines lint and AI results, calculating an overall score (0–10).
  • Aggregator: Summarizes results for quick comparison.
  • Google Sheets Writer: Appends results to your review log.
  • Slack Notification: Posts a concise summary (e.g., number of issues and average score) to your team’s channel.

How to set up: Connect Google Sheets and Slack credentials in n8n. Replace placeholders (<YOURSPREADSHEETID>, <YOURSHEETGIDORNAME>, <YOURSLACKCHANNEL_ID>). Adjust the AI review prompt or lint rules as needed. Activate the workflow — reviews will start automatically whenever new code is added to the sheet.

Requirements: Google Sheets and Slack integrations enabled; a configured AI node (Gemini, OpenAI, or compatible); proper permissions to write to your target Google Sheet.

How to customize: Add more linting rules (naming conventions, spacing, forbidden APIs); extend the AI prompt for project-specific guidelines; customize the Slack message formatting; export analytics to a dashboard (e.g., Notion or Data Studio).

Why it’s valuable: This workflow brings realistic, team-oriented AI-assisted code review to n8n — combining the speed of automated linting with the nuance of human-style feedback. It saves time, improves code quality, and keeps your team’s review history transparent and centralized.

By higashiyama
90

Generate Weather-Based Date Itineraries with Google Places, OpenRouter AI, and Slack

🧩 What this template does: This workflow builds a 120-minute local date course around your starting point by querying Google Places for nearby spots, selecting the top candidates, fetching real-time weather data, letting an AI generate a matching emoji, and drafting a friendly itinerary summary with an LLM in both English and Japanese. It then posts the full bilingual plan with a walking route link and weather emoji to Slack.

👥 Who it’s for: Makers and teams who want a plug-and-play bilingual local itinerary generator with weather awareness — no custom code required.

⚙️ How it works:
  1. Trigger – Manual (or schedule/webhook).
  2. Discovery – Google Places nearby search within a configurable radius.
  3. Selection – Rank by rating and pick the top 3.
  4. Weather – Fetch current weather (via OpenWeatherMap).
  5. Emoji – Use an AI model to match the weather with an emoji 🌤️.
  6. Planning – An LLM writes the itinerary in Markdown (JP + EN).
  7. Route – Compose a Google Maps walking route URL.
  8. Share – Post the bilingual itinerary, route link, and weather emoji to Slack.

🧰 Requirements: n8n (Cloud or self-hosted); Google Maps Platform (Places API); OpenWeatherMap API key; Slack Bot (chat:write); LLM provider (e.g., OpenRouter or DeepL for translation).

🚀 Setup (quick): Open Set → Fields: Config and fill in coords/radius/time limit. Connect credentials for Google, OpenWeatherMap, Slack, and your LLM. Test the workflow and confirm the bilingual plan + weather emoji appear in Slack.

🛠 Customize: Adjust ranking filters (type, min rating). Modify translation settings (target language or tone). Change output layout (side-by-side vs separated). Tune emoji logic or travel mode. Add error handling, retries, or logging for production use.

By noda
52

AI-powered document search with Oracle and ONNX embeddings for recruiting

How it works:
  1. Create a user for doing Hybrid Search.
  2. Clear existing data, if present.
  3. Add documents into the table.
  4. Create a hybrid index.
  5. Run semantic search on the Documents table for "prioritize teamwork and leadership experience".
  6. Run hybrid search for the text input in the Chat interface on the Documents table.

Setup steps:
  1. Download the ONNX model allMiniLML12v2augmented.zip.
  2. Extract the ZIP file on the database server into a directory, for example /opt/oracle/onnx. After extraction, the folder contents should look like:

      bash-4.4$ pwd
      /opt/oracle/onnx
      bash-4.4$ ls
      allMiniLML12_v2.onnx

  3. Connect as SYSDBA and create the DBA user:

      -- Create DBA user
      CREATE USER app_admin IDENTIFIED BY "StrongPassword123"
        DEFAULT TABLESPACE users
        TEMPORARY TABLESPACE temp
        QUOTA UNLIMITED ON users;

      -- Grant privileges
      GRANT DBA TO app_admin;
      GRANT CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE TO app_admin;

  4. Create n8n Oracle DB credentials: hybridsearchuser (for hybrid search operations) and dbadocuser (for DBA setup: user and tablespace creation).
  5. Run the workflow: click the manual trigger to see pure semantic search results, then enter search text in the Chat interface to see results for vector and keyword search.

Note: The workflow currently creates the hybrid search user, docuser, with the password visible in plain text inside the n8n Execute SQL node. For better security, consider performing the user creation manually outside n8n. Oracle 23ai or 26ai Database has to be used.

Reference: Hybrid Search End-End Example

By sudarshan
211