Complete Lyft API integration for AI agents with 16 operations using MCP
Complete MCP server exposing 16 Lyft API operations to AI agents.
⚡ Quick Setup
- Import this workflow into your n8n instance
- Add your Lyft API credentials
- Activate the workflow to start your MCP server
- Copy the webhook URL from the MCP trigger node
- Connect AI agents using the MCP URL
🔧 How it Works
This workflow converts the Lyft API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.lyft.com/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent
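For instance, an HTTP Request node for the ride-detail endpoint can let the agent fill in the ride ID at call time. A sketch (the argument names here are illustrative; check the exact $fromAI() signature against your n8n version):

```
GET https://api.lyft.com/v1/rides/{{ $fromAI('ride_id', 'ID of the ride to fetch', 'string') }}
```

When the AI agent invokes the tool, n8n resolves the $fromAI() placeholder from the agent's structured tool-call arguments before sending the request.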
📋 Available Operations (16 total)
🔧 Cost (1 endpoint)
• GET /cost: Retrieve Cost Estimate
🔧 Drivers (1 endpoint)
• GET /drivers: List Nearby Drivers
🔧 Eta (1 endpoint)
• GET /eta: Retrieve Pickup ETA
🔧 Profile (1 endpoint)
• GET /profile: Retrieve User Profile
🔧 Rides (7 endpoints)
• GET /rides: List the user's rides (ride history)
• POST /rides: Request a Lyft
• GET /rides/{id}: Get the ride detail for a given ride ID
• POST /rides/{id}/cancel: Cancel an ongoing ride request
• PUT /rides/{id}/destination: Update the destination of the ride
• PUT /rides/{id}/rating: Add the passenger's rating, feedback, and tip
• GET /rides/{id}/receipt: Get the receipt for a ride
🔧 Ridetypes (1 endpoint)
• GET /ridetypes: List Available Ride Types
🔧 Sandbox (4 endpoints)
• PUT /sandbox/primetime: Set Prime Time Percentage
• PUT /sandbox/rides/{id}: Propagate a sandbox ride through ride statuses
• PUT /sandbox/ridetypes: Preset the ride types available in the sandbox
• PUT /sandbox/ridetypes/{ride_type}: Set driver availability for a ride type
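As a concrete example of what an AI agent supplies for one of the operations above, here is a minimal sketch of the JSON body for POST /rides. The field names follow Lyft's public v1 documentation but should be treated as assumptions and verified against the live API; the helper function name is ours:

```python
# Hypothetical helper: build the request body an AI agent would supply
# for POST /rides. Field names ("ride_type", "origin", "destination",
# "lat", "lng") are assumptions based on Lyft's public v1 docs.
def build_ride_request(ride_type, origin, destination):
    """origin and destination are (lat, lng) tuples."""
    return {
        "ride_type": ride_type,
        "origin": {"lat": origin[0], "lng": origin[1]},
        "destination": {"lat": destination[0], "lng": destination[1]},
    }

payload = build_ride_request("lyft", (37.7763, -122.3918), (37.7972, -122.4533))
```

In this workflow the agent never builds this dictionary itself; the $fromAI() placeholders in the HTTP Request node assemble the equivalent body from the agent's tool-call arguments.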
🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Lyft API responses with full data structure
Error Handling: Built-in n8n HTTP request error management
💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints
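For Claude Desktop, which talks MCP over stdio, one common pattern is bridging to the n8n SSE URL with the community mcp-remote package. A sketch, assuming that bridge; the server name and URL below are placeholders you would replace with your own:

```json
{
  "mcpServers": {
    "lyft-n8n": {
      "command": "npx",
      "args": ["mcp-remote", "https://YOUR_N8N_URL/mcp/lyft/sse"]
    }
  }
}
```

Clients with native remote-MCP support (such as Cursor) can usually take the SSE URL directly without a bridge.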
✨ Benefits
• Zero Mapping: No manual parameter mapping or extra configuration needed
β’ AI-Ready: Built-in $fromAI() expressions for all parameters
β’ Production Ready: Native n8n HTTP request handling and logging
β’ Extensible: Easily modify or add custom logic
> Free for community use! Ready to deploy in under 2 minutes.
n8n MCP Server Trigger Workflow
This n8n workflow demonstrates a basic setup using the Model Context Protocol (MCP) Server Trigger. It serves as a foundational template for building AI agent integrations, allowing external AI agents to initiate workflows and interact with n8n.
What it does
This workflow contains a single node:
- MCP Server Trigger: This node acts as an entry point for AI agents or other external systems that communicate using the Model Context Protocol. When an MCP message is received, this node triggers the workflow.
Prerequisites/Requirements
- n8n Instance: An active n8n instance where this workflow can be imported and run.
- Model Context Protocol (MCP) Client: An external AI agent or system capable of sending requests formatted according to the Model Context Protocol to the n8n instance's MCP endpoint.
Setup/Usage
- Import the workflow: Download the provided JSON and import it into your n8n instance.
- Activate the workflow: After importing, ensure the workflow is activated to start listening for incoming MCP requests.
- Configure your MCP Client: Point your AI agent or MCP client to the URL of your n8n instance's MCP endpoint (typically YOUR_N8N_URL/webhook-test/mcp).
- Send a request: Your MCP client can now send requests to this endpoint, which will trigger the workflow.
This workflow is a starting point. You would typically extend it by adding more nodes after the MCP Server Trigger to process the incoming data, interact with other services (like Lyft, as hinted by the directory name, although not present in this specific JSON), and return responses to the AI agent.
Related Templates
Generate Weather-Based Date Itineraries with Google Places, OpenRouter AI, and Slack
🧩 What this template does
This workflow builds a 120-minute local date course around your starting point by querying Google Places for nearby spots, selecting the top candidates, fetching real-time weather data, letting an AI generate a matching emoji, and drafting a friendly itinerary summary with an LLM in both English and Japanese. It then posts the full bilingual plan with a walking route link and weather emoji to Slack.
👥 Who it's for
Makers and teams who want a plug-and-play bilingual local itinerary generator with weather awareness, no custom code required.
⚙️ How it works
1. Trigger: Manual (or schedule/webhook).
2. Discovery: Google Places nearby search within a configurable radius.
3. Selection: Rank by rating and pick the top 3.
4. Weather: Fetch current weather (via OpenWeatherMap).
5. Emoji: Use an AI model to match the weather with an emoji 🌤️.
6. Planning: An LLM writes the itinerary in Markdown (JP + EN).
7. Route: Compose a Google Maps walking route URL.
8. Share: Post the bilingual itinerary, route link, and weather emoji to Slack.
🧰 Requirements
- n8n (Cloud or self-hosted)
- Google Maps Platform (Places API)
- OpenWeatherMap API key
- Slack Bot (chat:write)
- LLM provider (e.g., OpenRouter or DeepL for translation)
Setup (quick)
1. Open the Set node "Fields: Config" and fill in coords/radius/time limit.
2. Connect credentials for Google, OpenWeatherMap, Slack, and your LLM.
3. Test the workflow and confirm the bilingual plan + weather emoji appear in Slack.
Customize
- Adjust ranking filters (type, min rating).
- Modify translation settings (target language or tone).
- Change output layout (side-by-side vs separated).
- Tune emoji logic or travel mode.
- Add error handling, retries, or logging for production use.
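The route step in this template composes a Google Maps walking-route URL. A minimal sketch of that composition in Python, using Google's public Maps URL scheme (api=1 directions links); the helper name and coordinate format are our assumptions, not taken from the workflow:

```python
from urllib.parse import urlencode

def walking_route_url(origin, waypoints, destination):
    """Compose a Google Maps directions URL for a walking route.

    origin, destination, and each waypoint are "lat,lng" strings;
    waypoints are joined with "|" per the Maps URL scheme.
    """
    params = {
        "api": "1",
        "origin": origin,
        "destination": destination,
        "travelmode": "walking",
    }
    if waypoints:
        params["waypoints"] = "|".join(waypoints)
    # urlencode percent-escapes the commas and pipes, which Maps accepts
    return "https://www.google.com/maps/dir/?" + urlencode(params)
```

In the workflow this would typically live in a Code node or an expression that feeds the Slack message.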
Translate documents to multiple languages with Google Drive and DeepL
Who's it for
This workflow is perfect for content creators, international teams, and businesses that need to translate documents into multiple languages automatically. Whether you're localizing documentation, translating marketing materials, or creating multilingual content, this workflow saves hours of manual work.
What it does
Automatically monitors a Google Drive folder for new documents (PDF, DOCX, TXT, or Markdown) and translates them into multiple languages using the DeepL API. Each translated document is saved with a language-specific filename (e.g., document_en.pdf, document_zh.pdf) in a designated folder. You receive an email notification when all translations are complete.
How it works
1. Monitors a Google Drive folder for new files
2. Detects the file format (PDF/DOCX/TXT/Markdown) and extracts the text
3. Translates the content into your chosen languages (default: English, Chinese, Korean, Spanish, French, German)
4. Saves translated files with language codes in the filename
5. Sends an email notification with a translation summary
6. Optional: Records translation history in a Notion database
Set up instructions
Requirements
- Google Drive account (for file storage)
- DeepL API key (free tier: 500,000 characters/month)
- Gmail account (for notifications)
- Notion account (optional, for tracking translation history)
Setup steps
1. Create Google Drive folders: create a "Source" folder for original files and a "Translated" folder for output, then copy the folder IDs from the URLs.
2. Get a DeepL API key: sign up at DeepL API and copy your API key.
3. Configure the workflow: open the "Configuration (Edit Here)" node (yellow node), replace the folder IDs with your own, set your notification email, and choose target languages.
4. Set up credentials: add Google Drive OAuth2, DeepL API, and Gmail OAuth2 credentials.
5. Activate the workflow and upload a test file!
Customization options
- Change target languages: edit the targetLanguages array in the Configuration node (supports 30+ languages)
- Adjust polling frequency: change the trigger from "every minute" to hourly or daily for batch processing
- Enable Notion tracking: set enableNotion to true and provide your database ID
- Add more file formats: extend the Switch node to handle additional file types
- Filter by file size: add conditions to skip files larger than a certain size
Supported languages
EN (English), ZH (Chinese), KO (Korean), JA (Japanese), ES (Spanish), FR (French), DE (German), IT (Italian), PT (Portuguese), RU (Russian), and 20+ more.
Performance
- Short files (1 page): ~30 seconds for 6 languages
- Medium files (10 pages): ~2 minutes for 6 languages
- Large files (100 pages): ~15 minutes for 6 languages
Technical Details
- Trigger: Google Drive folder monitoring (1-minute polling)
- Translation: DeepL API with automatic source-language detection
- Loop implementation: Split Out + Aggregate pattern for parallel translation
- Error handling: catches API failures and sends email alerts
- Storage: original file format preserved in translated outputs
Notes
- The DeepL free tier provides 500,000 characters/month (approximately 250 pages)
- For high-volume translation, consider upgrading to DeepL Pro
- The workflow creates new files instead of overwriting, preserving translation history
- Google Docs are automatically converted to the appropriate format before translation
What You'll Learn
This workflow demonstrates several n8n patterns:
- File format detection and routing (Switch node)
- Loop implementation with Split Out + Aggregate
- Binary data handling for file operations
- Conditional logic with IF nodes (optional features)
- Cross-node data references
- Error handling and user notifications
Perfect for learning automation best practices while solving a real business problem!
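The language-suffixed filename convention this template describes (document_en.pdf, document_zh.pdf) can be sketched as a small helper; the function name and the underscore separator are our assumptions about the template's naming scheme:

```python
import os

def localized_name(filename, lang):
    """Insert a lowercase language code before the extension,
    e.g. report.pdf + "EN" -> report_en.pdf."""
    stem, ext = os.path.splitext(filename)
    return f"{stem}_{lang.lower()}{ext}"
```

In n8n the equivalent logic would usually sit in a Code node or a filename expression on the Google Drive upload node.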
Auto-rename workflow nodes with AI (Gemini/Claude) for better readability (beta)
Rename Workflow Nodes with AI for Clarity
This workflow automates the tedious process of renaming nodes in your n8n workflows. Instead of manually editing each node, it uses an AI language model to analyze its function and assign a concise, descriptive new name. This ensures your workflows are clean, readable, and easy to maintain.
Who's it for?
This template is perfect for n8n developers and power users who build complex workflows. If you often find yourself struggling to understand the purpose of different nodes at a glance, or spend too much time manually renaming them for documentation, this tool will save you significant time and effort.
How it works / What it does
The workflow operates in a simple, automated sequence:
1. Configure Suffix: A "Set" node at the beginning allows you to easily define the suffix that will be appended to the new workflow's name (e.g., "- new node names").
2. Fetch Workflow: It then fetches the JSON data of a specified n8n workflow using its ID.
3. AI-Powered Renaming: The workflow's JSON is sent to an AI model (like Google Gemini or Anthropic Claude), which has been prompted to act as an n8n expert. The AI analyzes the type and parameters of each node to understand its function.
4. Generate New Names: Based on this analysis, the AI proposes new, meaningful names and returns them in a structured JSON format.
5. Update and Recreate: A Code node processes these suggestions, updates all node names, and correctly rebuilds the connections and expressions.
6. Create & Activate New Workflow: Finally, it creates a new workflow with the updated name, deactivates the original to avoid confusion, and activates the new version.
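The update step in this template rewrites node names in two places: the nodes list and the connections graph, which is keyed by node name. A minimal sketch of that rewrite in Python, assuming the standard n8n workflow JSON layout; this is a simplification of the template's Code node and deliberately skips node names embedded inside expressions:

```python
def rename_nodes(workflow, name_map):
    """Apply old->new node names to an n8n workflow dict.

    Updates both the "nodes" list and the "connections" graph,
    whose top-level keys and per-connection "node" fields both
    reference nodes by name.
    """
    for node in workflow.get("nodes", []):
        node["name"] = name_map.get(node["name"], node["name"])

    new_conns = {}
    for src, outputs in workflow.get("connections", {}).items():
        for branches in outputs.values():          # e.g. the "main" output
            for branch in branches:                # one list per output index
                for conn in branch:
                    conn["node"] = name_map.get(conn["node"], conn["node"])
        new_conns[name_map.get(src, src)] = outputs
    workflow["connections"] = new_conns
    return workflow
```

Renaming the connection keys and targets together is what keeps the rewritten workflow wired identically to the original.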