Identify competitor content gaps across ChatGPT, Perplexity & Gemini with SE Ranking

By Eugene
38 views
2/3/2026
Official Page

Who is this for

  • Marketing teams tracking AI SEO performance
  • Content strategists planning editorial calendars
  • SEO teams doing competitive intelligence

What this workflow does

Identify content opportunities by analyzing where competitors outrank you in AI search and traditional SEO.

What you'll get

  • AI visibility gaps across ChatGPT, Perplexity, and Gemini
  • Keyword gaps with search volume and difficulty
  • Competitor backlink authority metrics
  • Prioritized opportunities with HIGH/MEDIUM/LOW scoring
  • Actionable recommendations for each gap

How it works

  1. Fetches AI search visibility for your domain and competitor
  2. Compares metrics across ChatGPT, Perplexity, and Gemini
  3. Extracts competitor's top-performing prompts and keywords
  4. Analyzes competitor backlink authority
  5. Calculates opportunity scores and prioritizes gaps
  6. Exports ranked opportunities to Google Sheets
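
The opportunity scoring in step 5 happens inside one of the template's Code nodes. The exact formula ships with the workflow; the sketch below is only a hypothetical illustration of how search volume, keyword difficulty, and the AI-visibility gap could be weighted into a HIGH/MEDIUM/LOW priority. The field names (searchVolume, difficulty, aiGap) and thresholds are assumptions, not the template's actual schema.

  // Hypothetical scoring helper: weights an opportunity by volume,
  // keyword difficulty, and how far the competitor leads you in AI visibility.
  function scoreOpportunity(gap) {
    const volumeScore = Math.min(gap.searchVolume / 1000, 10);   // cap volume contribution at 10
    const easeScore = (100 - gap.difficulty) / 10;               // easier keywords score higher
    const aiGapScore = gap.aiGap * 2;                            // competitor AI citations minus yours
    const total = volumeScore + easeScore + aiGapScore;

    let priority = 'LOW';
    if (total >= 20) priority = 'HIGH';
    else if (total >= 12) priority = 'MEDIUM';

    return { ...gap, score: Math.round(total), priority };
  }

  // Example: a keyword the competitor is cited for in AI answers but you are not.
  console.log(scoreOpportunity({
    keyword: 'ai seo audit',
    searchVolume: 2400,
    difficulty: 35,
    aiGap: 3, // competitor appears in 3 more AI answers than you
  }));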

Requirements

  • Self-hosted n8n instance
  • SE Ranking community node installed
  • SE Ranking API token (Get one here)
  • Google Sheets account (optional)

Setup

  1. Install the SE Ranking community node
  2. Add your SE Ranking API credentials
  3. Update the Configuration node with your domain and competitor
  4. Connect Google Sheets for export (optional)
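
The Configuration node mentioned in step 3 is where the domain, competitor, and region live. If it is implemented as a Code node, the values it passes downstream might look roughly like the sketch below; the key names (yourDomain, competitorDomain, source) are placeholders, not necessarily the template's exact fields.

  // Hypothetical configuration values consumed by the rest of the workflow.
  // Paste into an n8n Code node (Run Once for All Items) or mirror in a Set node.
  const config = {
    yourDomain: 'example.com',          // the domain you are tracking
    competitorDomain: 'competitor.com', // the domain you compare against
    source: 'us',                       // SE Ranking region code: us, uk, de, fr, ...
  };

  return [{ json: config }];            // emit one item carrying the configuration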

Customization

  • Change source for different regions (us, uk, de, fr, etc.)
  • Adjust volume/difficulty thresholds in Code nodes
  • Modify priority scoring weights
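
The thresholds and weights above live in the template's Code nodes, so adjusting them is a one-line change. As a rough illustration only (the real variable and field names will differ), a volume/difficulty filter inside a Code node could look like this:

  // Hypothetical filter: keep only gaps worth pursuing.
  const MIN_VOLUME = 500;      // ignore keywords below this monthly search volume
  const MAX_DIFFICULTY = 60;   // ignore keywords that are too hard to rank for

  return $input.all().filter(item =>
    item.json.searchVolume >= MIN_VOLUME &&
    item.json.difficulty <= MAX_DIFFICULTY
  );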

n8n Workflow: Basic Data Transformation and Merge Example

This n8n workflow demonstrates a fundamental pattern for processing and combining data. It starts with a manual trigger, performs a simple data transformation, and then uses a merge operation, although the merge itself isn't fully connected in this specific JSON.

What it does

This workflow showcases the following steps:

  1. Manual Trigger: The workflow is initiated manually, allowing for on-demand execution.
  2. Google Sheets (Placeholder): A Google Sheets node is present, indicating an intention to interact with Google Sheets data. In this definition, it's not configured to perform any specific read or write operation but serves as an example of a potential data source or destination.
  3. Edit Fields (Set): This node is used to modify or add fields to the incoming data. It's a versatile tool for data manipulation.
  4. Code: A Code node is included, allowing for custom JavaScript logic to be executed. This is useful for complex data transformations, API calls, or conditional logic not covered by standard nodes.
  5. Wait: A Wait node is present, which can be used to pause the workflow for a specified duration, useful for rate limiting or waiting for external processes.
  6. Merge: A Merge node is included, designed to combine data from multiple incoming branches into a single stream. While present, it's not fully connected to other nodes in this JSON, suggesting it's part of a larger, incomplete flow or an example of where merging would occur.
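
Since the Code node in this example is meant as a placeholder for your own logic, here is a minimal sketch of the kind of JavaScript you might drop into it. The field names (firstName, lastName) are illustrative placeholders, not part of the template.

  // Minimal Code-node sketch: add computed fields to every incoming item.
  const items = $input.all();

  return items.map(item => ({
    json: {
      ...item.json,
      processedAt: new Date().toISOString(),  // stamp each item with the processing time
      fullName: `${item.json.firstName ?? ''} ${item.json.lastName ?? ''}`.trim(),
    },
  }));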

Prerequisites/Requirements

  • n8n Instance: You need a running n8n instance to import and execute this workflow.
  • Google Sheets Account: If the Google Sheets node were to be fully configured, a Google Sheets account and credentials would be required.

Setup/Usage

  1. Import the Workflow:
    • Copy the provided JSON code.
    • In your n8n instance, click on "Workflows" in the left sidebar.
    • Click "New" or the "+" icon to create a new workflow.
    • Click the three dots menu (...) in the top right and select "Import from JSON".
    • Paste the copied JSON and click "Import".
  2. Configure Nodes (if necessary):
    • The "Google Sheets" node is currently a placeholder. If you intend to use it, you would need to configure its operation (e.g., "Read Sheet", "Append Row") and provide Google Sheets credentials.
    • The "Edit Fields (Set)" and "Code" nodes are ready for custom logic to be added based on your specific data transformation needs.
    • The "Wait" node can be configured for a specific delay.
  3. Execute the Workflow:
    • Click the "Execute Workflow" button in the n8n editor (usually in the top right) to run the workflow manually. You can also activate the workflow to run on a schedule if a trigger node were configured.

Related Templates

Automate Dutch Public Procurement Data Collection with TenderNed

TenderNed Public Procurement

What this workflow does

This workflow automates the collection of public procurement data from TenderNed (the official Dutch tender platform). It:

  • Fetches the latest tender publications from the TenderNed API
  • Retrieves detailed information in both XML and JSON formats for each tender
  • Parses and extracts key information such as organization names, titles, descriptions, and reference numbers
  • Filters results based on your custom criteria
  • Stores the data in a database for easy querying and analysis

Setup instructions

This template comes with sticky notes providing step-by-step instructions in Dutch and various query options you can customize.

Prerequisites

  • TenderNed API access: register at TenderNed for API credentials

Configuration steps

  1. Set up TenderNed credentials: add HTTP Basic Auth credentials with your TenderNed API username and password, then apply them to the three HTTP Request nodes: "Tenderned Publicaties", "Haal XML Details", and "Haal JSON Details"
  2. Customize filters: modify the "Filter op ..." node to match your specific requirements (for example, specific organizations, contract values, or regions)

How it works

  1. Trigger: the workflow can be triggered manually for testing or automatically on a daily schedule
  2. Fetch publications: makes an API call to TenderNed to retrieve a list of recent publications (up to 100 per request)
  3. Process and split: extracts the tender array from the response and splits it into individual items for processing
  4. Fetch details: for each tender, the workflow makes two parallel API calls; the XML endpoint retrieves the complete tender documentation, and the JSON endpoint fetches metadata including reference numbers and keywords
  5. Parse and merge: parses the XML data and merges it with the JSON metadata and batch information into a single data structure
  6. Extract fields: maps the raw API data to clean, structured fields, including publication ID and date, organization name, tender title and description, and reference numbers (kenmerk, TED number)
  7. Filter: applies your custom filter criteria to focus on relevant tenders only
  8. Store: inserts the processed data into your database for storage and future analysis

Customization tips

  • Modify API parameters: in the "Tenderned Publicaties" node you can adjust offset (starting position for pagination) and size (number of results per request, max 100), and add query parameters for date ranges, status filters, etc.
  • Add more fields: extend the "Splits Alle Velden" node to extract additional fields from the XML/JSON data, such as contract value estimates, deadline dates, CPV codes (procurement classification), and contact information
  • Integrate notifications: add a Slack, Email, or Discord node after the filter to get notified about new matching tenders
  • Incremental updates: modify the workflow to only fetch new tenders by storing the last execution timestamp, adding date filters to the API query, and only processing publications newer than the last run

Troubleshooting

No data returned? Verify that your TenderNed API credentials are correct and that your filter is set up properly.

Need help setting this up or interested in a complete tender analysis solution? Get in touch: 🔗 LinkedIn – Wessel Bulte
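
If you prefer to filter in a Code node rather than the template's "Filter op ..." node, a sketch could look like the following. The field names (organizationName, title, description) are assumptions based on the extracted fields described above, not the template's exact schema.

  // Hypothetical Code-node filter: keep tenders from specific organisations
  // or whose title/description mentions keywords you care about.
  const WANTED_ORGS = ['Gemeente Amsterdam', 'Rijkswaterstaat'];
  const KEYWORDS = ['software', 'ict'];

  return $input.all().filter(item => {
    const org = item.json.organizationName ?? '';
    const text = `${item.json.title ?? ''} ${item.json.description ?? ''}`.toLowerCase();
    return WANTED_ORGS.includes(org) || KEYWORDS.some(k => text.includes(k));
  });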

By Wessel Bulte
247

AI-powered code review with linting, red-marked corrections in Google Sheets & Slack

Advanced Code Review Automation (AI + Lint + Slack)

Who's it for

Software engineers, QA teams, and tech leads who want to automate intelligent code reviews with both AI-driven suggestions and rule-based linting, all managed in Google Sheets with instant Slack summaries.

How it works

This workflow performs a two-layer review:

  1. Lint check: runs a lightweight static analysis to find common issues (e.g., use of var, console.log, unbalanced braces)
  2. AI review: sends valid code to Gemini AI, which provides human-like review feedback with severity classification (Critical, Major, Minor) and visual highlights (red/orange tags)
  3. Formatter: combines lint and AI results, calculating an overall score (0–10)
  4. Aggregator: summarizes results for quick comparison
  5. Google Sheets writer: appends results to your review log
  6. Slack notification: posts a concise summary (e.g., number of issues and average score) to your team's channel

How to set up

  1. Connect Google Sheets and Slack credentials in n8n
  2. Replace placeholders (<YOURSPREADSHEETID>, <YOURSHEETGIDORNAME>, <YOURSLACKCHANNEL_ID>)
  3. Adjust the AI review prompt or lint rules as needed
  4. Activate the workflow; reviews start automatically whenever new code is added to the sheet

Requirements

  • Google Sheets and Slack integrations enabled
  • A configured AI node (Gemini, OpenAI, or compatible)
  • Proper permissions to write to your target Google Sheet

How to customize

  • Add more linting rules (naming conventions, spacing, forbidden APIs)
  • Extend the AI prompt for project-specific guidelines
  • Customize the Slack message formatting
  • Export analytics to a dashboard (e.g., Notion or Data Studio)

Why it's valuable

This workflow brings realistic, team-oriented AI-assisted code review to n8n, combining the speed of automated linting with the nuance of human-style feedback. It saves time, improves code quality, and keeps your team's review history transparent and centralized.
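
The lint layer described above is simple to reproduce. The standalone sketch below illustrates checks like the ones listed (var, console.log, unbalanced braces) and a 0–10 score; it is an approximation for illustration, not the template's actual Code node.

  // Rough lint sketch: flag a few common issues and derive a 0-10 score.
  function lint(code) {
    const issues = [];
    if (/\bvar\s+/.test(code)) issues.push({ rule: 'no-var', severity: 'Minor' });
    if (/console\.log\(/.test(code)) issues.push({ rule: 'no-console', severity: 'Minor' });

    const open = (code.match(/\{/g) || []).length;
    const close = (code.match(/\}/g) || []).length;
    if (open !== close) issues.push({ rule: 'unbalanced-braces', severity: 'Major' });

    const penalty = issues.reduce((p, i) => p + (i.severity === 'Major' ? 3 : 1), 0);
    return { issues, score: Math.max(0, 10 - penalty) };
  }

  console.log(lint('var x = 1; { console.log(x);')); // 2 minor + 1 major issue -> score 5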

By higashiyama
90

🎓 How to transform unstructured email data into structured format with AI agent

This workflow automates the process of extracting structured, usable information from unstructured email messages across multiple platforms. It connects directly to Gmail, Outlook, and IMAP accounts, retrieves incoming emails, and sends their content to an AI-powered parsing agent built on OpenAI GPT models. The AI agent analyzes each email, identifies relevant details, and returns a clean JSON structure containing key fields:

  • From: sender's email address
  • To: recipient's email address
  • Subject: email subject line
  • Summary: short AI-generated summary of the email body

The extracted information is then automatically inserted into an n8n Data Table, creating a structured database of email metadata and summaries ready for indexing, reporting, or integration with other tools.

Key benefits

  • Full automation: eliminates manual reading and data entry from incoming emails
  • Multi-source integration: handles data from different email providers seamlessly
  • AI-driven accuracy: uses advanced language models to interpret complex or unformatted content
  • Structured storage: creates a standardized, query-ready dataset from previously unstructured text
  • Time efficiency: processes emails in real time, improving productivity and response speed
  • Scalability: easily extendable to handle additional sources or extract more data fields

How it works

  1. Email triggering: the workflow is initiated by one of three email triggers (Gmail, Microsoft Outlook, or a generic IMAP account), which constantly monitor for new incoming emails
  2. AI-powered parsing and structuring: when a new email is detected, its raw, unstructured content is passed to a central "Parsing Agent", which uses a specified OpenAI language model to analyze the email text
  3. Data extraction and standardization: following a predefined system prompt, the AI agent extracts key information such as the sender, recipient, subject, and a generated summary, then forces the output into a strict JSON structure using a "Structured Output Parser" node, ensuring data consistency
  4. Data storage: the clean, structured data (the from, to, subject, and summarize fields) is inserted as a new row into a specified n8n Data Table, creating a searchable and reportable database of email information

Set up steps

  1. Prepare the Data Table: create a new Data Table within n8n and define string columns named From, To, Subject, and Summary
  2. Configure email credentials: set up the credential connections for the email services you wish to use (Gmail OAuth2, Microsoft Outlook OAuth2, and/or IMAP) and ensure the accounts have the necessary permissions to read emails
  3. Configure AI model credentials: set up the OpenAI API credential with a valid API key; the model can be changed in the respective nodes if needed
  4. Connect the nodes: the workflow canvas is already correctly wired; visually confirm that the email triggers are connected to the "Parsing Agent", which is connected to the "Insert row" (Data Table) node, and that the "OpenAI Chat Model" and "Structured Output Parser" are connected to the "Parsing Agent" as its AI model and output parser, respectively
  5. Activate the workflow: save the workflow and toggle the "Active" switch to ON; the triggers will begin polling for new emails according to their schedule (e.g., every minute), and the automation will start processing incoming messages

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
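
For reference, the JSON shape the Structured Output Parser enforces would look roughly like the sketch below, using the lower-case field names from step 4 that are then mapped to the From/To/Subject/Summary columns. The field names and sample values are illustrative only, not the template's exact schema.

  // Illustrative shape of one parsed email as it reaches the "Insert row" node.
  const parsedEmail = {
    from: 'jane.doe@example.com',
    to: 'support@yourcompany.com',
    subject: 'Invoice #1042 overdue',
    summarize: 'Customer asks for an updated invoice and a two-week payment extension.',
  };

  // Each such object becomes one row in the n8n Data Table
  // (From, To, Subject, Summary columns).
  console.log(JSON.stringify(parsedEmail, null, 2));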

By Davide
1616