
AI-Powered Local Event Finder with Multi-Tool Search

JezJez
2/3/2026

Summary

This n8n workflow implements an AI-powered "Local Event Finder" agent. It takes user criteria (such as event type, city, date, and interests), uses a suite of search tools (Brave Web Search, Brave Local Search, Google Gemini Search) and a web scraper (Jina AI) to find relevant events, and returns formatted details. The entire agent is exposed as a single, easy-to-use MCP (Model Context Protocol) tool, making it simple to integrate into other workflows or applications.

This template cleverly combines the MCP server endpoint and the AI agent logic into a single n8n workflow file for ease of import and management.
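To give a sense of what the agent works from, here is a purely illustrative example of a request a client might send. The field names are hypothetical — the agent ultimately works from a natural-language query, and the real parameter schema is whatever your MCP Trigger and agent configuration define.

```python
# Hypothetical example of the kind of criteria a client might send to the
# local_event_finder tool. Field names are illustrative only -- the agent
# accepts free-form natural-language queries rather than a fixed schema.
example_query = {
    "query": "live jazz or blues concerts",
    "city": "Austin, TX",
    "date_range": "this weekend",
    "interests": ["small venues", "free or cheap tickets"],
}

# A natural-language equivalent the agent can handle directly:
example_prompt = (
    "Find live jazz or blues concerts in Austin, TX this weekend, "
    "preferably at small venues with free or cheap tickets."
)
```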

Key Features

  • Intelligent Multi-Tool Search: Dynamically utilizes web search, precise local search, and advanced Gemini semantic search to find events.
  • Detailed Information via Web Scraping: Employs Jina AI to extract comprehensive details directly from event web pages (a minimal sketch of an equivalent call appears after this list).
  • Simplified MCP Tool Exposure: Makes the complex event-finding logic available as a single, callable tool for other MCP-compatible clients (e.g., Roo Code, Cline, other n8n workflows).
  • Customizable AI Behavior: The core AI agent's behavior, tool usage strategy, and output formatting can be tailored by modifying its System Prompt.
  • Modular Design: Uses distinct nodes for LLM, memory, and each external tool, allowing for easier modification or extension.
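To make the scraping step concrete, here is a minimal sketch of the kind of call the Jina tool performs, using Jina's Reader endpoint (r.jina.ai). The API key and target URL are placeholders, and the n8n Jina AI Tool node may set different options internally.

```python
import requests

JINA_API_KEY = "your-jina-api-key"  # placeholder
event_page = "https://example.com/events/jazz-night"  # placeholder URL

# Jina's Reader API returns a cleaned-up, LLM-friendly version of a page:
# prefix the target URL with https://r.jina.ai/ and authenticate with a Bearer token.
resp = requests.get(
    f"https://r.jina.ai/{event_page}",
    headers={"Authorization": f"Bearer {JINA_API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text[:500])  # extracted page content as markdown-like text
```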

Benefits

  • Simplifies Client-Side Integration: Offloads the complexity of event searching and data extraction from client applications.
  • Provides Richer Event Data: Goes beyond simple search links to extract and format key event details.
  • Flexible & Adaptable: Can be adjusted to various event search needs and can incorporate new tools or data sources.
  • Efficient Processing: Leverages specialized tools for different aspects of the search process.

Nodes Used

  • MCP Trigger
  • Tool Workflow
  • Execute Workflow Trigger
  • AI Agent
  • Google Gemini Chat Model (ChatGoogleGenerativeAI)
  • Simple Memory (Window Buffer Memory)
  • MCP Client (for Brave Search tools via Smithery)
  • Google Gemini Search Tool
  • Jina AI Tool

Prerequisites

  • An active n8n instance.
  • Google AI API Key: For the Gemini LLM (Google Gemini Chat Model node) and the Google Gemini Search Tool. Ensure your key is enabled for these services.
  • Jina AI API Key: For the jina_ai_web_page_scraper node. A free tier is often available.
  • Access to a Brave Search MCP Provider (Optional but Recommended):
    • This template uses MCP Client nodes configured for Brave Search via a provider like Smithery. You'll need an account/API key for your chosen Brave Search MCP provider to configure the smithery brave search credential.
    • Alternatively, you could adapt these to call Brave Search API directly if you manage your own access, or replace them with other search tools.
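If you take the direct-API route instead of an MCP provider, the call looks roughly like the sketch below. It assumes you have your own Brave Search API subscription token; only the basic parameters are shown.

```python
import requests

BRAVE_API_KEY = "your-brave-search-api-token"  # placeholder

# Minimal direct call to the Brave Web Search API -- an alternative to routing
# Brave Search through an MCP provider such as Smithery.
resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    params={"q": "live music events Berlin this weekend", "count": 10},
    headers={
        "Accept": "application/json",
        "X-Subscription-Token": BRAVE_API_KEY,
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("web", {}).get("results", []):
    print(item.get("title"), "-", item.get("url"))
```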

Setup Instructions

  1. Import Workflow: Download the JSON file for this template and import it into your n8n instance.
  2. Configure Credentials:
    • Google Gemini LLM:
      • Locate the Google Gemini Chat Model node.
      • Select or create a "Google Gemini API" credential (named Google Gemini Context7 in the template) using your Google AI API Key.
    • Google Gemini Search Tool:
      • Locate the google_gemini_event_search node.
      • Select or create a "Gemini API" credential (named Gemini Credentials account in the template) using your Google AI API Key (ensure it's enabled for Search/Vertex AI).
    • Jina AI Web Scraper:
      • Locate the jina_ai_web_page_scraper node.
      • Select or create a "Jina AI API" credential (named Jina AI account in the template) using your Jina AI API Key.
    • Brave Search (via MCP):
      • You'll need an MCP Client HTTP API credential to connect to your Brave Search MCP provider (e.g., Smithery).
      • Create a new "MCP Client HTTP API" credential in n8n. Name it, for example, smithery brave search.
      • Configure it with the Base URL and any required authentication (e.g., API key in headers) for your Brave Search MCP provider.
      • Locate the brave_web_search and brave_local_search MCP Client nodes in the workflow.
      • Assign the smithery brave search (or your named credential) to both of these nodes.
  3. Activate Workflow: Ensure the workflow is active.
  4. Note MCP Trigger Path:
    • Locate the local_event_finder (MCP Trigger) node.
    • The Path field (e.g., 0ca88864-ec0a-4c27-a7ec-e28c5a900697) combined with your n8n webhook base URL forms the endpoint for client calls.
    • Example Endpoint: YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER
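As a sketch of how a client could call the exposed tool, the snippet below uses the official MCP Python SDK (the mcp package) over an SSE connection. The endpoint URL, tool name, and argument key are assumptions based on the node names above; take the exact URL (and any required authentication) from the local_event_finder MCP Trigger node in your instance.

```python
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

# Placeholder endpoint -- copy the real URL from the local_event_finder
# (MCP Trigger) node in your n8n instance.
MCP_ENDPOINT = "https://your-n8n-instance/webhooks/0ca88864-ec0a-4c27-a7ec-e28c5a900697"

async def main() -> None:
    async with sse_client(MCP_ENDPOINT) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Tool name and argument key are assumptions; adjust them to match
            # whatever list_tools() actually reports for your workflow.
            result = await session.call_tool(
                "local_event_finder",
                {"query": "Family-friendly events in Seattle next Saturday"},
            )
            print(result)

asyncio.run(main())
```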

Customization

  • AI Behavior: Modify the "System Message" parameter within the event_finder_agent node to change the AI's persona, its strategy for using tools, or the desired output format.
  • LLM Model: Swap the Google Gemini Chat Model node with another compatible LLM node (e.g., OpenAI Chat Model) if desired. You'll need to adjust credentials and potentially the system prompt.
  • Tools: Add, remove, or replace tool nodes (e.g., use a different search provider, add a weather API tool) and update the event_finder_agent's system prompt and tool configuration accordingly.
  • Scraping Depth: Use the jina_ai_web_page_scraper sparingly, since scraping large pages can cause timeouts. The system prompt already guides the LLM on this, but you can adjust its usage instructions.

AI-Powered Local Event Finder with Multi-Tool Search

This n8n workflow leverages an AI agent and a multi-tool search capability to intelligently find local events based on user queries. It acts as an MCP server, responding to requests from other workflows or applications.

What it does

This workflow sets up an AI Agent that can understand natural language queries and use various tools to fulfill requests. Here's a breakdown of the steps:

  1. Listens for Requests: The workflow is triggered by an "MCP Server Trigger" or an "Execute Workflow Trigger", indicating it's designed to be called by another n8n workflow or an external application using the Model Context Protocol (MCP).
  2. Initializes AI Agent: An "AI Agent" node is set up to process the incoming user query. This agent is configured with a "Google Gemini Chat Model" as its language model and a "Simple Memory" to maintain conversational context.
  3. Provides a Search Tool: The AI Agent is equipped with a "Call n8n Workflow Tool". This tool allows the AI to execute other n8n workflows, effectively enabling it to perform complex searches or actions by delegating tasks to specialized sub-workflows. This is where the "multi-tool search" aspect comes into play, as the AI can decide which specific workflow to call based on the user's request.
  4. Processes and Responds: The AI Agent uses its language model and available tools to understand the user's intent, perform the necessary "searches" (by calling other workflows), and formulate a relevant response.
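The delegation idea in steps 3–4 can be pictured as a routing decision: the agent looks at the query and picks whichever tool (or sub-workflow) fits best. The sketch below is purely conceptual — in the actual workflow this decision is made by the LLM, guided by the system prompt, not by hand-written rules.

```python
def pick_tool(query: str) -> str:
    """Conceptual stand-in for the agent's tool-selection step."""
    q = query.lower()
    if q.startswith("http") or "details" in q:   # needs page-level extraction
        return "jina_ai_web_page_scraper"
    if "near me" in q or " in " in q:            # location-specific request
        return "brave_local_search"
    return "brave_web_search"                    # general discovery

print(pick_tool("Find jazz concerts in Berlin this weekend"))
# -> brave_local_search (the real agent weighs this far more flexibly)
```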

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running n8n instance to import and execute the workflow.
  • Google Gemini API Key: For the "Google Gemini Chat Model" to function, you'll need access to the Google Gemini API and a configured credential in n8n.
  • Other n8n Workflows (for tools): The "Call n8n Workflow Tool" implies that there are other n8n workflows that this AI agent will call upon to perform specific search functions (e.g., a workflow to search for events on Eventbrite, another for local news, etc.). These sub-workflows would need to be created and configured separately.
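If you build those sub-workflow tools yourself, it helps to agree on a small input/output contract up front. The shape below is only a suggestion — the real fields are whatever you define in each sub-workflow's Execute Workflow Trigger and its returned data.

```python
# Hypothetical contract for an "event search" sub-workflow called by the agent.
# Everything here is a suggestion; define the actual fields in your own sub-workflows.
sub_workflow_input = {
    "query": "tech meetups",
    "city": "Berlin",
    "date_from": "2026-02-09",
    "date_to": "2026-02-15",
}

sub_workflow_output = {
    "events": [
        {
            "name": "Berlin JS Meetup",
            "date": "2026-02-11",
            "venue": "c-base",
            "url": "https://example.com/events/berlin-js",
        }
    ],
}
```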

Setup/Usage

  1. Import the workflow: Download the JSON provided and import it into your n8n instance.
  2. Configure Credentials:
    • Set up your Google Gemini API Key as a credential in n8n and select it in the "Google Gemini Chat Model" node.
  3. Configure the Call n8n Workflow Tool:
    • In the "Call n8n Workflow Tool" node, specify the n8n workflow(s) that the AI agent should be able to call. You will need to provide the workflow ID(s) and potentially define the expected input/output for these sub-workflows.
  4. Activate the workflow: Ensure the workflow is active in n8n.
  5. Trigger the workflow: This workflow is designed to be triggered by other workflows or applications.
    • If using the "MCP Server Trigger", you would send requests to the n8n instance's MCP endpoint.
    • If using the "Execute Workflow Trigger", another n8n workflow would need to call this one using an "Execute Workflow" node.
  6. Test: Send a test query to the workflow (via MCP or another workflow) to see the AI agent in action. For example, "Find local tech meetups next week" or "What concerts are happening this weekend in Berlin?".
