AI-Powered Multi-Stage Web Search and Research Suite

By Mind-Front

Description:

The closest description of this workflow is a cheaper, modular version of the Perplexity online API, powered by LLM models that outperform Perplexity's Llama-based model. It provides a seamless way to conduct detailed web searches, extract data, and generate insightful reports based on real-time information. A webhook-based flow accepts any search question and returns results produced through multi-level web search analysis and domain-specific agent emulation, delivering an unbiased expert report. The flow is ideal for market research, competitive analysis, or any scenario where actionable, structured insights are needed.

A more complete, step-by-step guide is provided within the workflow, ensuring you have all the details to set up and customize each component. This tool is designed to function similarly to Perplexity by performing semantic search, reranking, and follow-up queries. However, it offers a key advantage: complete customization at every stage. You can modify any part of the process, from query refinement to data extraction, tailoring the workflow to your specific needs.

Key Features:

  • AI-Powered Query Generation and Expert Emulation: Uses Google Gemini to transform user queries into expert-level searches, providing accurate and context-aware results.
  • Dual-Stage Semantic Search with Intelligent Reranking: Performs an initial search, reranks results, and refines the query based on findings to conduct a second, more targeted search.
  • Top-Result Data Extraction: Extracts content from the top three results of each search, capturing relevant insights from six total sources.
  • Customizable API Options: Pre-configured with free APIs (Google Gemini, DuckDuckGo, and Article Extraction APIs) but easily adaptable to other APIs if preferred.
  • Automated, Insightful Reporting: Synthesizes data into a cohesive report, providing expert-level insights tailored to the user’s query.

Instructions for API Setup:

This workflow is designed to work with free-tier APIs, offering a cost-effective way to retrieve high-quality data. Here’s how to set up each API, with detailed instructions included in the workflow:

  1. Google Gemini API (for Query Generation and Analysis):

    • Visit Google AI Studio and log in.
    • Create a free API key under "Get API Key" → "Create API Key in New Project." The free tier includes up to 15 requests per minute, 1 million tokens per minute, and up to 1,500 requests per day.
  2. Brave Search API (for Web Search): To obtain the free web search API tier from Brave, follow these steps (a quick test request is shown after this list):

    • Visit api.search.brave.com
    • Create an account
    • Subscribe to the free plan (no charge)
    • Navigate to the API Keys section
    • Generate an API key. For the subscription type, choose "Free".
  3. Article Extraction API (for Content Extraction):

    • Register on RapidAPI.com and subscribe to the Article Extraction API.
    • The free plan allows up to 300 extractions per month. Enter your API key in each of the 6 extraction nodes for content retrieval.
    • Alternative: The workflow includes full instructions for replacing this step with alternative API keys, along with suggestions such as the Scraper Tech API.
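
Quick sanity checks (optional): once the keys are created, you can verify them from the command line before wiring them into the workflow. These are minimal sketches against the publicly documented endpoints; the query text, model name, and placeholder keys are illustrative and may differ from what the workflow nodes actually use.

    # Brave Search: expects the key in the X-Subscription-Token header
    curl -s "https://api.search.brave.com/res/v1/web/search?q=test+query" \
      -H "Accept: application/json" \
      -H "X-Subscription-Token: YOUR_BRAVE_API_KEY"

    # Google Gemini: generateContent call with the key passed as a query parameter
    curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=YOUR_GEMINI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"contents": [{"parts": [{"text": "Say hello"}]}]}'

A JSON response from each confirms the keys are active before you add them as credentials in n8n.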

Additional Tip: To use other APIs, you can generate a cURL request in RapidAPI's playground and then paste it into the HTTP Request node in n8n. This approach streamlines integration by automatically filling in headers and request details.
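
For reference, a cURL command exported from RapidAPI's playground generally follows the pattern below. The host and path here are placeholders (they depend on the specific API you subscribe to), but the two x-rapidapi-* headers are RapidAPI's standard authentication scheme:

    # Hypothetical article-extraction call; replace host and path with the values RapidAPI shows you
    curl -s "https://example-article-extractor.p.rapidapi.com/extract?url=https://example.com/post" \
      -H "x-rapidapi-key: YOUR_RAPIDAPI_KEY" \
      -H "x-rapidapi-host: example-article-extractor.p.rapidapi.com"

In n8n, the HTTP Request node's "Import cURL" option accepts such a command and pre-fills the URL and headers for you.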

Why Choose This Workflow?

The Intelligent Online Web Researcher offers an all-in-one solution for complex, customizable online research. Unlike other tools that provide automated semantic search, this workflow is fully modifiable, allowing you to tailor each step, from the initial query and reranking to data extraction and reporting. With built-in instructions and a structure that’s easy to adapt, it’s ideal for commercial applications that require real-time, high-quality insights.

Tags: Online Research, Web Search, Market Analysis, Web Search Automation, Data Extraction, Semantic Search, API Integration, Competitive Intelligence, Business Intelligence, Real-Time Reporting, Web Scrape, Data Crawler, Perplexity

AI-Powered Multi-Stage Web Search and Research Suite

This n8n workflow provides a robust, AI-powered multi-stage web search and research suite. It leverages a webhook trigger to initiate complex research tasks, dynamically fetches information using HTTP requests, processes data with a Large Language Model (LLM) chain, and structures the output for clear, actionable results. The workflow is designed to handle multiple research queries in batches, incorporating delays to manage API rate limits or ensure data availability.

What it does

This workflow automates the following steps:

  1. Receives Research Requests: It starts by listening for incoming data via a webhook, which typically contains a list of research queries (an example payload is shown after this list).
  2. Batches Queries: It splits the incoming research queries into manageable batches, allowing for processing multiple items sequentially or with controlled concurrency.
  3. Performs Web Searches (Placeholder): For each item in a batch, it makes an HTTP request. This node is intended to interact with a web search API (e.g., Google Search API, SERP API) to fetch raw search results based on the query.
  4. Introduces Delays: After each batch of web searches, it pauses for a configurable duration. This is crucial for respecting API rate limits and preventing service overload.
  5. Processes with LLM: The raw search results are then fed into a Basic LLM Chain powered by a Google Gemini Chat Model. This chain is responsible for analyzing the search results, extracting relevant information, synthesizing insights, and performing the core "research" aspect of the workflow.
  6. Structures Output: An Auto-fixing Output Parser and a Structured Output Parser are used to ensure the LLM's output is consistently formatted into a predefined structure (e.g., JSON), even if the LLM's initial response is slightly malformed. This ensures downstream systems can reliably consume the research findings.
  7. Responds to Trigger: Finally, the workflow responds to the initial webhook trigger with the structured, AI-processed research results.
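
As an illustration of step 1, the webhook body could look like the following. The queries field name is an assumption for this example; use whatever structure your external system sends, as long as the downstream nodes reference it consistently:

    {
      "queries": [
        "Current state of the EU AI Act and its impact on SaaS vendors",
        "Top competitors in the self-hosted workflow automation market"
      ]
    }

Step 2 then splits this array into batches, and step 3 issues one web search request per query.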

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance (self-hosted or cloud).
  • Webhook Integration: An external system or application configured to send POST requests to the n8n webhook URL.
  • Google Gemini API Key: Credentials for the Google Gemini Chat Model to power the LLM chain. This requires configuring a credential in n8n.
  • Web Search API (External): While the workflow includes an HTTP Request node for web search, you will need to configure it to connect to your preferred web search API (e.g., Google Custom Search API, SerpApi, etc.) and provide the necessary API keys/credentials.

Setup/Usage

  1. Import the Workflow:
    • Copy the provided JSON code.
    • In your n8n instance, go to "Workflows".
    • Click "New" or "Import from JSON" and paste the workflow JSON.
  2. Configure Credentials:
    • Locate the Google Gemini Chat Model node.
    • Click on the "Credential" field and select an existing Google Gemini credential or create a new one, providing your API key.
    • Locate the HTTP Request node (intended for web search). Configure its URL, headers, and body to interact with your chosen web search API. You may need to add a new credential if your API requires authentication.
  3. Activate the Webhook:
    • The Webhook node will display a unique URL once the workflow is saved and activated.
    • Configure your external system to send POST requests to this URL with your research queries (e.g., an array of strings or objects containing a query field).
  4. Customize Workflow Logic (Optional):
    • HTTP Request (Web Search): Adjust the HTTP Request node to match the specific API endpoint and parameters of your chosen web search engine. You'll likely need to extract the search query from the incoming webhook data.
    • Loop Over Items: Modify the "Batch Size" and "Delay" in the Loop Over Items (Split in Batches) and Wait nodes to optimize performance and respect API rate limits.
    • Basic LLM Chain: Customize the prompt in the Basic LLM Chain to guide the Google Gemini model on how to process the search results and what kind of output to generate.
    • Structured Output Parser: If your desired output structure changes, update the schema in the Structured Output Parser node.
  5. Activate the Workflow: Toggle the workflow to "Active" in the n8n editor.
  6. Test: Send a test request to the webhook URL from your external system to ensure the workflow executes as expected.
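
A minimal command-line test might look like this, assuming the hypothetical queries payload shown earlier; replace the URL with the production webhook URL displayed on your Webhook node:

    curl -X POST "https://your-n8n-instance.example.com/webhook/research" \
      -H "Content-Type: application/json" \
      -d '{"queries": ["Top competitors in the self-hosted workflow automation market"]}'

Inside the workflow, the incoming query can then be referenced with an expression such as {{ $json.body.queries[0] }}, adjusted to match the actual payload structure you send.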
