
Google Cloud Natural Language Tool MCP Server

David Ashby
165 views
2/3/2026
Official Page

🛠️ Google Cloud Natural Language Tool MCP Server

Complete MCP server exposing all Google Cloud Natural Language Tool operations to AI agents. Zero configuration needed: the single available operation is pre-built.

⚡ Quick Setup

  1. Import this workflow into your n8n instance
  2. Activate the workflow to start your MCP server
  3. Copy the webhook URL from the MCP trigger node
  4. Connect AI agents using the MCP URL (see the example request after this list)
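For a quick connectivity check, here is a minimal TypeScript sketch that lists the server's tools over JSON-RPC. It assumes the MCP trigger accepts JSON-RPC 2.0 over a plain HTTP POST; if your n8n version exposes an SSE-only endpoint, use an MCP client library or a bridge such as mcp-remote instead of raw fetch. The URL is a placeholder.

```typescript
// Minimal connectivity check against the n8n MCP webhook (placeholder URL).
// Assumption: the endpoint accepts JSON-RPC 2.0 over plain HTTP POST.
const MCP_URL = "https://your-n8n-instance/mcp/your-webhook-path";

async function listTools(): Promise<void> {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }),
  });
  console.log(await res.text()); // expect a tools array including the sentiment tool
}

listTools().catch(console.error);
```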

🔧 How it Works

  • MCP Trigger: Serves as your server endpoint for AI agent requests
  • Tool Nodes: Pre-configured for every Google Cloud Natural Language Tool operation
  • AI Expressions: Automatically populate parameters via $fromAI() placeholders (illustrated below)
  • Native Integration: Uses the official n8n Google Cloud Natural Language Tool node with full error handling
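To illustrate the $fromAI() placeholders, here is a hypothetical fragment of a tool node's parameters written as a TypeScript object literal. The parameter keys are illustrative rather than copied from the exported workflow JSON, and the exact $fromAI() signature can vary by n8n version; only the placeholder pattern is the point.

```typescript
// Hypothetical tool-node parameter fragment (keys are illustrative).
// The $fromAI() placeholder lets the connected AI agent supply the value
// at call time instead of it being hard-coded in the workflow.
export const analyzeSentimentToolParams = {
  resource: "document",
  operation: "analyzeSentiment",
  // Assumed signature: $fromAI(key, description, type)
  content: "={{ $fromAI('text', 'The text whose sentiment should be analyzed', 'string') }}",
};
```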

📋 Available Operations (1 total)

Every possible Google Cloud Natural Language Tool operation is included:

🔧 Document (1 operation)

Analyze sentiment

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:

  • Resource IDs and identifiers
  • Search queries and filters
  • Content and data payloads
  • Configuration options

Response Format: Native Google Cloud Natural Language API responses with the full data structure (see the shape sketched below)
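For reference, the response roughly follows the shape below, expressed as a TypeScript interface. Field names follow the public Cloud Natural Language v1 documents:analyzeSentiment response; how the n8n tool node nests it inside the MCP tool result may differ slightly.

```typescript
// Rough shape of the native analyzeSentiment response, for reference only.
interface AnalyzeSentimentResponse {
  documentSentiment: {
    score: number;      // -1.0 (negative) to 1.0 (positive)
    magnitude: number;  // overall emotional strength, >= 0
  };
  language: string;     // e.g. "en", detected or supplied
  sentences: Array<{
    text: { content: string; beginOffset: number };
    sentiment: { score: number; magnitude: number };
  }>;
}
```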

Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

  • Claude Desktop: Add the MCP server URL to its configuration
  • Custom AI Apps: Use the MCP URL as a tool endpoint
  • Other n8n Workflows: Call MCP tools from any workflow
  • API Integration: Direct HTTP calls to MCP endpoints (see the sketch below)
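As an example of the direct-HTTP route, the sketch below invokes the sentiment tool with a tools/call request. The tool name "analyze_sentiment" is a guess; check the names returned by tools/list first, and note the same transport caveat as in the setup example above.

```typescript
// Hypothetical tools/call against the MCP webhook (placeholder URL and
// assumed tool name); assumes JSON-RPC over plain HTTP POST is accepted.
const MCP_URL = "https://your-n8n-instance/mcp/your-webhook-path";

async function analyzeSentiment(text: string): Promise<string> {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 2,
      method: "tools/call",
      params: { name: "analyze_sentiment", arguments: { text } },
    }),
  });
  return res.text(); // tool result with the native API response embedded
}

analyzeSentiment("The onboarding flow was delightful.").then(console.log);
```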

✨ Benefits

  • Complete Coverage: Every Google Cloud Natural Language Tool operation available
  • Zero Setup: No parameter mapping or configuration needed
  • AI-Ready: Built-in $fromAI() expressions for all parameters
  • Production Ready: Native n8n error handling and logging
  • Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.

Google Cloud Natural Language Tool - MCP Server

This n8n workflow acts as a Model Context Protocol (MCP) server, designed to integrate with Langchain-based applications. It serves as a trigger point for AI models that need to interact with external tools or services, specifically within the context of the Google Cloud Natural Language API (as hinted by the directory name, though not explicitly defined in the provided JSON).

What it does

This workflow primarily provides an entry point for Langchain applications.

  1. Listens for MCP Requests: The workflow starts with an "MCP Server Trigger" node, which listens for incoming requests following the Model Context Protocol. This means it's ready to receive instructions or data from a Langchain agent or application.
  2. Provides a Langchain Tool Endpoint: It exposes an endpoint that Langchain agents can call as a "tool." When an agent decides to use this tool, it sends a request to this MCP server.

Prerequisites/Requirements

  • n8n Instance: A running n8n instance to host this workflow.
  • Langchain Application: An external application built with Langchain that is configured to use this n8n workflow as a tool.
  • Google Cloud Natural Language API: (Implicit, based on directory name) While not configured in this basic JSON, the intended use case suggests that subsequent nodes in a complete workflow would interact with the Google Cloud Natural Language API. This would require:
    • A Google Cloud Project with the Natural Language API enabled.
    • Appropriate Google Cloud credentials (e.g., a Service Account key) configured in n8n.

Setup/Usage

  1. Import the Workflow:
    • Download the provided JSON content.
    • In your n8n instance, go to "Workflows" and click "New".
    • Click the "Import from JSON" button and paste the workflow JSON.
  2. Activate the Workflow: After importing, ensure the workflow is activated by toggling the "Active" switch in the top right corner.
  3. Configure Langchain Application:
    • In your Langchain application, configure a tool that points to the webhook URL exposed by the "MCP Server Trigger" node. You can find this URL by clicking on the "MCP Server Trigger" node and looking for the "Webhook URL" in its settings; a minimal tool wrapper sketch follows this list.
    • Your Langchain agent should be designed to call this tool when it needs to perform natural language processing tasks.
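As a rough illustration of step 3, the sketch below wraps the webhook URL as a LangChain.js tool. It assumes a recent @langchain/core package layout and reuses the hypothetical "analyze_sentiment" tool name and raw JSON-RPC POST from the earlier examples; in practice an MCP-aware client or adapter is usually preferable.

```typescript
// Hedged sketch: expose the n8n MCP webhook to a Langchain agent as a tool.
// Placeholder URL and tool name; transport caveats as noted earlier.
import { DynamicTool } from "@langchain/core/tools";

const MCP_URL = "https://your-n8n-instance/mcp/your-webhook-path";

export const sentimentTool = new DynamicTool({
  name: "analyze_sentiment",
  description: "Analyze the sentiment of a piece of text via the n8n MCP server.",
  func: async (input: string) => {
    const res = await fetch(MCP_URL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Accept: "application/json, text/event-stream",
      },
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: Date.now(),
        method: "tools/call",
        params: { name: "analyze_sentiment", arguments: { text: input } },
      }),
    });
    return res.text();
  },
});
```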

This workflow provides the foundational trigger for a Langchain-powered tool. To make it functional for natural language processing, you would extend it by adding nodes that interact with the Google Cloud Natural Language API (e.g., using the "Google Cloud Language" node) to process text and return results to the Langchain agent.
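For orientation, this is roughly the REST call such a node makes under the hood, sketched with fetch. It assumes an OAuth2 access token supplied via an environment variable (an API key passed as a query parameter also works for this endpoint); the n8n node normally handles authentication for you.

```typescript
// Sketch of the underlying Cloud Natural Language v1 call (not the n8n node
// itself). GCP_ACCESS_TOKEN is an assumed environment variable.
const token = process.env.GCP_ACCESS_TOKEN;

async function analyzeSentimentDirect(content: string) {
  const res = await fetch(
    "https://language.googleapis.com/v1/documents:analyzeSentiment",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        document: { type: "PLAIN_TEXT", content },
        encodingType: "UTF8",
      }),
    }
  );
  return res.json(); // { documentSentiment, language, sentences }
}
```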
