Creating an AI Slack bot with Google Gemini
This is an example of how you can build a Slack bot in a few easy steps.
Before you can start, you need to do a few things:
- Create a copy of this workflow
- Create a Slack bot
- Create a slash command in Slack and paste the webhook URL into the slash command's request URL
Note
Make sure the webhook is exposed over an https:// URL; do not use the default http://localhost:5678, since Slack cannot reach a local address.
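To confirm the webhook is reachable from outside, you can simulate the POST that Slack's slash command sends (a minimal sketch assuming the Python requests library; the URL and IDs below are placeholders):

```python
import requests

WEBHOOK_URL = "https://your-n8n-domain.example/webhook/slack-bot"  # placeholder URL

# Slack slash commands send an application/x-www-form-urlencoded POST with
# fields like these (values are placeholders).
payload = {
    "command": "/Bob",
    "text": "What is our refund policy?",
    "user_id": "U0000001",
    "channel_id": "C0000001",
    "response_url": "https://hooks.slack.com/commands/T000/123/abc",
}

response = requests.post(WEBHOOK_URL, data=payload, timeout=10)
print(response.status_code, response.text)
```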
Once data arrives at your webhook, it is passed to an AI Agent, which processes it based on the query sent to the bot.
To give the bot memory, set the Slack token in the memory node (as its session key). This way the agent can refer to earlier messages in the conversation history.
The final message is relayed back to Slack as a new message. Since Slack will not wait longer than 3000 ms for a slash-command response, the workflow posts a new message that references the input it received.
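For intuition, the delayed-reply pattern looks roughly like this outside n8n (a sketch assuming the requests library; the response_url value is a placeholder for the one Slack includes in each slash-command payload):

```python
import requests

def send_delayed_reply(response_url: str, answer: str) -> None:
    """Post the finished AI answer to Slack's response_url after the 3 s window."""
    # "in_channel" makes the reply visible to everyone; "ephemeral" shows it
    # only to the person who ran the slash command.
    requests.post(
        response_url,
        json={"response_type": "in_channel", "text": answer},
        timeout=10,
    )

# Example with placeholder values:
# send_delayed_reply("https://hooks.slack.com/commands/T000/123/abc", "Here is your answer.")
```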
You can extend this by adding tools or data sources so the agent is tailored to your company.
Usage
To use the Slack bot, go to Slack, invoke the slash command you set up (e.g. /Bob), and send your message.
This sends the message to your endpoint and returns the processed result as a new message.
If you would like help setting this up, feel free to reach out to zacharia@effibotics.com
n8n AI Slack Bot with Google Gemini
This n8n workflow demonstrates how to create an AI-powered Slack bot that responds to messages using Google Gemini. It listens for incoming Slack messages via a webhook, processes them with an AI agent, and then posts the AI's response back to Slack.
What it does
This workflow automates the following steps:
- Listens for Slack Messages: A webhook endpoint is exposed to receive incoming Slack messages.
- Initializes AI Agent: Sets up an AI agent powered by LangChain.
- Configures Simple Memory: Integrates a simple memory buffer to maintain conversational context.
- Connects to Google Gemini: Utilizes the Google Gemini Chat Model as the underlying large language model for the AI agent.
- Generates AI Response: The AI agent processes the incoming message and generates a relevant response.
- Posts Response to Slack: The AI's generated response is sent back to the Slack channel where the original message originated.
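Purely for intuition, the same pipeline could be sketched outside n8n like this (assumes the google-generativeai and slack-sdk Python packages; the model name, tokens, and in-memory history are illustrative placeholders, and the actual workflow does all of this with n8n nodes):

```python
import google.generativeai as genai
from slack_sdk import WebClient

genai.configure(api_key="YOUR_GEMINI_API_KEY")      # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")   # example model name
slack = WebClient(token="xoxb-your-bot-token")      # placeholder bot token

history: list[str] = []  # stands in for the "Simple Memory" node

def handle_message(channel_id: str, text: str) -> None:
    # Build a prompt from recent turns plus the new message (the AI Agent step).
    prompt = "\n".join(history[-10:] + [f"User: {text}"])
    reply = model.generate_content(prompt).text
    history.extend([f"User: {text}", f"Assistant: {reply}"])
    # Relay the generated answer back to the originating channel (the Slack step).
    slack.chat_postMessage(channel=channel_id, text=reply)
```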
Prerequisites/Requirements
To use this workflow, you will need:
- n8n Instance: A running n8n instance.
- Slack Account: A Slack workspace where you want to deploy the bot.
- Slack App: A Slack app configured with a webhook to send messages to n8n.
- Google Gemini API Key: Access to the Google Gemini API.
- LangChain Nodes: The @n8n/n8n-nodes-langchain package installed in your n8n instance.
Setup/Usage
- Import the Workflow:
- Copy the provided JSON code.
- In your n8n instance, click "Workflows" in the left sidebar.
- Click "New" and then "Import from JSON".
- Paste the JSON and click "Import".
- Configure Credentials:
- Slack: You will need to create a Slack credential to allow n8n to post messages.
- In the "Slack" node, select or create a new Slack API credential.
- Ensure your Slack app has the necessary permissions (e.g., chat:write, channels:read).
- Google Gemini: You will need to create a Google Gemini credential.
- In the "Google Gemini Chat Model" node, select or create a new Google Gemini API credential.
- Configure Webhook:
- In the "Webhook" node, set the "Webhook URL" to "POST" and "Path" to a unique identifier (e.g.,
/slack-bot). - Activate the workflow.
- Copy the generated webhook URL.
- In your Slack app settings, configure an "Event Subscription" or "Slash Command" to send messages to this n8n webhook URL.
- In the "Webhook" node, set the "Webhook URL" to "POST" and "Path" to a unique identifier (e.g.,
- Customize AI Agent (Optional):
- The "AI Agent" node can be customized with different agent types, tools, and prompts depending on your needs.
- The "Simple Memory" node stores recent conversational turns. You can adjust the
kvalue to change how many previous messages it remembers.
- Activate the Workflow: Ensure the workflow is active to start listening for Slack messages.
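For intuition, the window controlled by k behaves roughly like the sketch below (plain Python, not n8n code; the value of k is an example):

```python
from collections import deque

k = 5                         # example window size
memory = deque(maxlen=k * 2)  # keeps the last k user/assistant pairs

def remember(role: str, text: str) -> None:
    memory.append(f"{role}: {text}")

def context() -> str:
    # Older turns fall off the end automatically, which bounds prompt size.
    return "\n".join(memory)

remember("User", "Hi Bob!")
remember("Assistant", "Hello! How can I help?")
print(context())
```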
Once configured, your Slack bot will be ready to respond to messages in the channels it's invited to, powered by Google Gemini.
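If the bot stays silent, a quick credential check outside n8n can help narrow things down (a sketch assuming the slack-sdk Python package; the token and channel are placeholders):

```python
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token="xoxb-your-bot-token")  # placeholder bot token

try:
    # auth_test confirms the token is valid and which bot user it belongs to.
    identity = client.auth_test()
    print("Connected as", identity["user"], "in team", identity["team"])
    # A test post also exercises the chat:write scope on a channel the bot is in.
    client.chat_postMessage(channel="#general", text="Hello from the n8n Gemini bot!")
except SlackApiError as err:
    print("Slack API error:", err.response["error"])
```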