AI chatbot call center: Taxi service (Production-ready, part 3)
Workflow Name: Taxi Service
Template was created in n8n v1.90.2
Skill Level: High
Categories: n8n, Chatbot
Stack
- Execute Sub-workflow Trigger node
- Chat Trigger node
- Redis node
- Postgres node
- AI Agent node
- If node, Switch node, Code node, Edit Fields (Set)
Prerequisites
- Execute Sub-workflow Trigger: Taxi Service Workflow (or your own node)
- Sub-workflow: Taxi Service Provider (or your own node)
- Sub-workflow: Demo Call Back (or your own node)
Production Features
- Scaling design for n8n Queue mode in a production environment
- Service data loaded from an external database with a caching mechanism
- Optional Long-Term Memory design
- Find route distance using the Google Maps API
- Optional multi-language Wait Output example
- Error management
What this workflow does
This is an n8n Taxi Service workflow demo. It is the core workflow for the Taxi Service: it receives messages from the Call Center workflow, handles Q&A with the caller, and passes requests to each Taxi Service Provider workflow to process fare estimates.
How it works
- The Flow Trigger node waits for a message from the Call Center or another sub-workflow.
- When a message is received, it first checks for a matching service in the PostgreSQL database.
- If the service is missing or inactive, it outputs an error.
- Next, it always resets the Session Data in the cache, with channel_no set to taxi.
- Then it deletes the previous Route Data from the cache.
- An AI Agent is triggered to process the fare-estimation question and create the Route Data.
- The Google Maps Route API is used to calculate the distance.
- This repeats until the Route Data is created, then the request is passed to all Taxi Service Providers for an estimate.
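The cache-first service lookup described above can be sketched as follows. This is an illustrative sketch, not the template's actual node code: the `cache`/`db` interfaces, the `service:` key prefix, and the column names are assumptions; the `sys_service` table name comes from the template description.

```javascript
// Cache-first service lookup: try Redis-like cache, fall back to
// Postgres, write the result back to the cache for next time.
async function loadService(cache, db, serviceName) {
  // 1. Try the cache first (the "Service Cache" step)
  const cached = await cache.get(`service:${serviceName}`);
  if (cached) return JSON.parse(cached);

  // 2. Fall back to the database (the "Load Service Data" step)
  const rows = await db.query(
    "SELECT * FROM sys_service WHERE name = $1 AND active = true",
    [serviceName]
  );
  if (rows.length === 0) return null; // no service or inactive -> Error branch

  // 3. Save back to the cache (the "Save Service Cache" step)
  await cache.set(`service:${serviceName}`, JSON.stringify(rows[0]));
  return rows[0];
}
```

In the template this logic is spread across several nodes (Service Cache, Load Service Data, Save Service Cache) rather than a single function, but the control flow is the same.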
Set up instructions
- Pull and set up the required SQL from our GitHub repository.
- Create your Redis credentials; refer to the n8n integration documentation for more information.
- Select your credentials in Service Cache, Save Service Cache, Reset Session, Delete Route Data, Route Data, Update User Session and Create Route Data.
- Create your Postgres credentials; refer to the n8n integration documentation for more information.
- Select your credentials in Load Service Data, Postgres Chat Memory, Load User Memory and Save User Memory.
- Modify the AI Agent prompt to fit your needs.
- Set your Google Maps API key in Find Route Distance.
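For reference, the distance lookup that Find Route Distance performs can be sketched against Google's Routes API (`computeRoutes`). The endpoint, headers, and field mask below follow Google's public documentation; the addresses and the exact response handling are illustrative assumptions, not the template's actual node configuration.

```javascript
// Hedged sketch of a route-distance call using Google's Routes API.
// Requires a Maps Platform API key with the Routes API enabled.
async function findRouteDistance(apiKey, origin, destination) {
  const res = await fetch("https://routes.googleapis.com/directions/v2:computeRoutes", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Goog-Api-Key": apiKey,
      // The Routes API requires a field mask naming the response fields you want
      "X-Goog-FieldMask": "routes.distanceMeters,routes.duration",
    },
    body: JSON.stringify({
      origin: { address: origin },
      destination: { address: destination },
      travelMode: "DRIVE",
    }),
  });
  const data = await res.json();
  // Distance of the first (best) route in meters, or null if no route found
  return data.routes?.[0]?.distanceMeters ?? null;
}
```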
How to adjust it to your needs
- By default, this template uses the provider information in the sys_service table; you can change it for your own design.
- You can use any AI model for the AI Agent node.
- Learn how we use the prompt for Load/Save User Memory on demand.
- Included is our prompt for the taxi service. It is a flexible design that uses data from the Service node to customize the prompt, so you can duplicate this workflow for another service.
- Create different Taxi Providers to process the request and feed back the estimate.
n8n AI Chatbot for Call Center/Taxi Service (Part 3)
This n8n workflow, titled "4046-ai-chatbot-call-center-taxi-service-production-ready-part-3", is designed to power a production-ready AI chatbot, likely for call center or taxi service applications. It leverages Langchain agents and memory to create a conversational AI that can respond dynamically and maintain context.
What it does
This workflow acts as the core logic for an AI chatbot, processing incoming chat messages, utilizing AI agents for intelligent responses, and maintaining conversational memory.
- Triggers on Chat Message: Initiates the workflow whenever a new chat message is received, serving as the entry point for user interaction.
- Initial Data Preparation: Edits and prepares the incoming chat message data for processing by the AI agent.
- Conditional Logic for Workflow Execution: Uses an "If" node to determine whether to proceed with AI agent processing or execute a sub-workflow.
- Executes Sub-workflow (Conditional): If a specific condition is met, it calls another n8n workflow, allowing for modularity and specialized handling of certain requests.
- Handles Sub-workflow Trigger: Provides an entry point for other workflows to execute this workflow as a sub-process.
- AI Agent Processing: Routes the chat message to an AI Agent (Langchain Agent) for natural language understanding and response generation.
- Manages Chat Memory (Postgres): Utilizes a Postgres database to store and retrieve chat history, enabling the AI agent to maintain context throughout the conversation.
- Routes AI Agent Output: Employs a "Switch" node to direct the AI agent's output based on different conditions or response types.
- Generates AI Responses (xAI Grok): Uses the xAI Grok Chat Model as the underlying Language Model for the AI agent to generate human-like responses.
- Custom Code Logic: Includes a "Code" node for executing custom JavaScript logic, allowing for flexible data manipulation or advanced processing steps.
- Workflow Documentation: Includes a "Sticky Note" for internal documentation or comments within the workflow.
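As an illustration of the kind of logic the Code node and the initial data-preparation step might hold, here is a minimal sketch that normalises an incoming chat payload before it reaches the AI Agent. The field names (`chatInput`, `sessionId`) and the output shape are assumptions for illustration, not the template's actual schema.

```javascript
// Normalise the incoming chat payload for the AI Agent.
// Field names here are illustrative assumptions.
function prepareMessage(item) {
  const text = (item.chatInput ?? "").trim();
  return {
    sessionId: item.sessionId ?? "anonymous", // key for the Postgres chat memory
    channel_no: "taxi",                       // channel tag used by the session data
    message: text,
    isEmpty: text.length === 0,               // lets a downstream If node short-circuit
  };
}
```

In the real workflow the Edit Fields (Set) node can handle simple renames like this declaratively; a Code node is only needed when the transformation involves actual logic.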
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running n8n instance to import and execute the workflow.
- Postgres Database: Access to a Postgres database for storing chat memory. You'll need to configure the Postgres credentials within the Postgres Chat Memory node.
- xAI Grok API Key/Access: Credentials or API access for the xAI Grok Chat Model. This will be configured in the xAI Grok Chat Model node.
- Langchain Integration: Ensure your n8n instance has the Langchain nodes installed and configured.
- Related n8n Workflows: If the "Execute Workflow" node is active, you will need the sub-workflow it calls.
Setup/Usage
- Import the Workflow: Download the JSON provided and import it into your n8n instance.
- Configure Credentials:
  - Set up your Postgres credentials in the Postgres Chat Memory node.
  - Configure your xAI Grok Chat Model credentials in the respective node.
- Review Node Configurations:
  - Examine the Edit Fields node to understand how incoming data is transformed.
  - Adjust the conditions in the If node and Switch node according to your specific routing logic.
  - If using the Execute Workflow node, ensure the referenced sub-workflow exists and is correctly configured.
  - Customize the Code node for any specific business logic or data manipulation.
- Activate the Workflow: Once configured, activate the workflow to start processing chat messages.
This workflow provides a robust foundation for building an intelligent AI chatbot, capable of dynamic interactions and contextual understanding.