531 templates found

RAG chatbot for company documents using Google Drive and Gemini

This workflow implements a Retrieval-Augmented Generation (RAG) chatbot that answers employee questions based on company documents stored in Google Drive. It automatically indexes new or updated documents in a Pinecone vector database, allowing the chatbot to provide accurate and up-to-date information. The workflow uses Google's Gemini AI for both embeddings and response generation.

How it works

The workflow uses two Google Drive Trigger nodes: one for detecting new files added to a specified Google Drive folder, and another for detecting file updates in that same folder.

Automated indexing, when a new or updated document is detected:
1. The Google Drive node downloads the file.
2. The Default Data Loader node loads the document content.
3. The Recursive Character Text Splitter node breaks the document into smaller text chunks.
4. The Embeddings Google Gemini node generates embeddings for each text chunk using the text-embedding-004 model.
5. The Pinecone Vector Store node indexes the text chunks and their embeddings in a specified Pinecone index.

Chat flow:
1. The Chat Trigger node receives user questions through a chat interface.
2. The user's question is passed to an AI Agent node.
3. The AI Agent node uses a Vector Store Tool node, linked to a Pinecone Vector Store node in query mode, to retrieve relevant text chunks from Pinecone based on the user's question.
4. The AI Agent sends the retrieved information and the user's question to the Google Gemini Chat Model (gemini-pro).
5. The Google Gemini Chat Model generates a comprehensive and informative answer based on the retrieved documents.
6. A Window Buffer Memory node connected to the AI Agent provides short-term memory, allowing for more natural and context-aware conversations.

Set up steps
1. Google Cloud project and Vertex AI API: create a Google Cloud project and enable the Vertex AI API for it.
2. Google AI API key: obtain a Google AI API key from Google AI Studio.
3. Pinecone account: create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named company-files in your Pinecone project.
4. Google Drive: create a dedicated folder in your Google Drive where company documents will be stored.
5. Credentials in n8n: configure credentials in your n8n environment for Google Drive OAuth2, Google Gemini (PaLM) API (using your Google AI API key), and Pinecone API (using your Pinecone API key).
6. Import the workflow: import this workflow into your n8n instance.
7. Configure the workflow: update both Google Drive Trigger nodes to watch the specific folder you created in your Google Drive, and configure the Pinecone Vector Store nodes to use your company-files index.
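To make the chunking step concrete, here is a minimal sketch of the recursive character splitting the text-splitter node performs. The chunk size and separator hierarchy are assumptions for illustration; the real n8n node also supports chunk overlap, and the resulting chunks would then be embedded with text-embedding-004 and upserted into Pinecone.

```python
def recursive_split(text, chunk_size=400, separators=("\n\n", "\n", " ")):
    """Split text into chunks of at most chunk_size characters,
    preferring to break at the coarsest separator that is present."""
    if len(text) <= chunk_size:
        return [text] if text else []
    for sep in separators:
        parts = text.split(sep)
        if len(parts) == 1:
            continue  # separator not present; try a finer one
        chunks, current = [], ""
        for part in parts:
            candidate = part if not current else current + sep + part
            if len(candidate) <= chunk_size:
                current = candidate  # keep merging small pieces
            else:
                if current:
                    chunks.append(current)
                if len(part) > chunk_size:
                    # a single piece can still be too long: recurse on it
                    chunks.extend(recursive_split(part, chunk_size, separators))
                    current = ""
                else:
                    current = part
        if current:
            chunks.append(current)
        return chunks
    # no separator found at all: hard-split by size
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Each returned chunk stays under the size limit while breaking at paragraph, line, or word boundaries where possible, which keeps embeddings semantically coherent.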

By Mihai Farcas
245944

Generate & auto-post AI videos to social media with Veo3 and Blotato

Automate video creation with Veo3 and auto-post to Instagram, TikTok, and more via Blotato.

Who is this for?
This template is ideal for content creators, social media managers, YouTubers, and digital marketers who want to generate high-quality videos daily using AI and distribute them effortlessly across multiple platforms. It's perfect for anyone who wants to scale short-form content creation without video editing tools.

What problem is this workflow solving?
Creating and distributing consistent video content requires generating ideas, writing scripts and prompts, rendering videos, and manually posting to platforms. This workflow automates all of that: it transforms one prompt into a professional AI-generated video and publishes it automatically, saving time and increasing reach.

What this workflow does
1. Triggers daily to generate a new idea with OpenAI (or your custom prompt).
2. Creates a video prompt formatted specifically for Google Veo3.
3. Generates a cinematic video using the Veo3 API.
4. Logs the video data into a Google Sheet.
5. Retrieves the final video URL once Veo3 finishes rendering.
6. Uploads the video to Blotato for publishing.
7. Auto-posts the video to Instagram, TikTok, YouTube, Facebook, LinkedIn, Threads, Twitter (X), Pinterest, and Bluesky.

Setup
1. Add your OpenAI API key to the GPT-4.1 nodes.
2. Connect your Veo3 API credentials in the video generation node.
3. Link your Google Sheets account and use a sheet with columns: Prompt, Video URL, Status.
4. Connect your Blotato API key and set your platform IDs in the Assign Social Media IDs node.
5. Adjust the Schedule Trigger to your desired posting frequency.

How to customize this workflow to your needs
- Edit the AI prompt to align with your niche (fitness, finance, education, etc.).
- Add your own branding overlays using JSON2Video or similar tools.
- Change platform selection by enabling/disabling specific HTTP Request nodes.
- Add a Telegram step to preview the video before auto-posting.
- Track performance by adding metrics columns in Google Sheets.

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube

By Dr. Firas
206149

AI-powered WhatsApp chatbot for text, voice, images, and PDF with RAG

Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI via WhatsApp messaging.

What these workflows do

Workflow 1: Document ingestion and indexing
1. Manually triggered to import product documentation from Google Docs.
2. Automatically splits large documents into chunks for efficient searching.
3. Generates vector embeddings for each chunk using OpenAI embeddings.
4. Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-powered query and response via WhatsApp
1. Listens for incoming WhatsApp user messages, supporting various types:
   - Text messages: plain text queries from users.
   - Audio messages: voice notes transcribed into text for processing.
   - Image messages: photos or screenshots analyzed to provide contextual answers.
   - Document messages: PDFs, spreadsheets, or other files parsed for relevant content.
2. Converts incoming queries to vector embeddings and performs a similarity search on the MongoDB vector store.
3. Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers.
4. Maintains conversation context across multiple turns using a memory buffer node.
5. Routes different message types to appropriate processing nodes to maximize answer quality.

Setup

Setting up vector embeddings
1. Authenticate Google Docs and connect your Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat
1. Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending.
2. Connect the MongoDB collection containing embedded product documentation to the MongoDB Vector Search node used for similarity queries.
3. Set up the system prompt in the Knowledge Base Agent node to reflect your company's tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search index example:

    {
      "mappings": {
        "dynamic": false,
        "fields": {
          "_id": { "type": "string" },
          "text": { "type": "string" },
          "embedding": {
            "type": "knnVector",
            "dimensions": 1536,
            "similarity": "cosine"
          },
          "source": { "type": "string" },
          "doc_id": { "type": "string" }
        }
      }
    }
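For reference, once an index with the knnVector mapping above exists, the similarity query the chat workflow runs can be sketched as a MongoDB aggregation pipeline. This is an illustrative pymongo-style sketch rather than the node's internals: the $search/knnBeta stage matches the knnVector index type shown above, and query_vector would be the 1536-dimension OpenAI embedding of the user's message.

```python
def build_semantic_search_pipeline(query_vector, k=5, index_name="data_index"):
    """Build an Atlas Search aggregation pipeline that returns the k
    document chunks whose embeddings are closest to the query embedding."""
    return [
        {
            "$search": {
                "index": index_name,
                "knnBeta": {
                    "vector": query_vector,
                    "path": "embedding",  # must match the knnVector field in the index
                    "k": k,
                },
            }
        },
        # keep only the fields the agent needs, plus the relevance score
        {
            "$project": {
                "text": 1,
                "source": 1,
                "doc_id": 1,
                "score": {"$meta": "searchScore"},
            }
        },
    ]
```

With pymongo, you would run this as `collection.aggregate(build_semantic_search_pipeline(embedding))` and pass the returned chunks to the agent as context.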

By NovaNode
157516

Create & upload AI-generated ASMR YouTube Shorts with Seedance, Fal AI, and GPT-4

Who is this for?
Content creators, YouTube automation enthusiasts, and AI hobbyists looking to autonomously generate and publish unique, satisfying ASMR-style YouTube Shorts without manual effort.

What problem does this solve?
This workflow solves the creative bottleneck and time-consuming nature of daily content creation. It fully automates the entire production pipeline, from brainstorming trendy ideas to publishing a finished video, turning your n8n instance into a 24/7 content factory.

What this workflow does
1. Two-stage AI ideation and planning: an initial AI agent brainstorms a short, viral ASMR concept based on current trends. A second "Planning" AI agent then expands this concept into a detailed, structured production plan, complete with a viral-optimized caption, hashtags, and descriptions for the environment and sound.
2. Multi-modal asset generation:
   - Video: feeds detailed scene prompts to the ByteDance Seedance text-to-video model (via Wavespeed AI) to generate high-quality video clips.
   - Audio: simultaneously calls the Fal AI text-to-audio model to create custom, soothing ASMR sound effects that match the video's theme.
   - Assembly: automatically sequences the video clips and sound into a single, cohesive final video file using an FFMPEG API call.
3. Closed-loop publishing and logging:
   - Logging: initially logs the new idea to a Google Sheet with a status of "In Progress".
   - Publishing: automatically uploads the final, assembled video directly to your YouTube channel, setting the title and description from the AI's plan.
   - Updating: finds the original row in the Google Sheet and updates its status to "Done", adding a direct link to the newly published YouTube video.
   - Notifications: sends real-time alerts to Telegram and/or Gmail with the video title and link, confirming the successful publication.

Setup
Credentials: you will need to create credentials in your n8n instance for the following services:
- OpenAI API
- Wavespeed AI API (for Seedance)
- Fal AI API
- Google OAuth credential (enable YouTube Data API v3 and Google Sheets API in your Google Cloud project)
- Telegram bot credential
- (Optional) Gmail OAuth credential

Configuration: this is an advanced workflow; the initial setup should take approximately 15-20 minutes.
1. Google Sheet: create a Google Sheet with these columns: idea, caption, productionstatus, youtubeurl. Add the Sheet ID to the Google Sheets nodes in the workflow.
2. Node configuration: in the Telegram Notification node, enter your own chat ID; in the Gmail Notification node, update the recipient email address.
3. Activate: once configured, save and set the workflow to "Active" to let it run on its schedule.

How to customize
- Creative direction: to change the style or theme of the videos (e.g., from kinetic sand to soap cutting), simply edit the systemMessage in the "2. Enrich Idea into Plan" and "Prompts AI Agent" nodes.
- Initial ideas: to influence the AI's starting concepts, modify the prompt in the "1. Generate Trendy Idea" node.
- Video and sound: to change the video duration or sound style, adjust the parameters in the "Create Clips" and "Create Sounds" nodes.
- Notifications: add or remove notification channels (such as Slack or Discord) after the "Upload to YouTube" node.
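The assembly step boils down to a single FFmpeg invocation. Below is a hedged sketch of one way to build it, using FFmpeg's concat demuxer with hypothetical file names; the actual FFMPEG API call in the workflow may use different flags or re-encode the clips.

```python
def build_ffmpeg_concat_command(clip_paths, audio_path, output_path):
    """Build an ffmpeg command that concatenates video clips via the
    concat demuxer's list file and muxes in a single audio track."""
    list_file = "clips.txt"
    with open(list_file, "w") as f:
        for path in clip_paths:
            f.write(f"file '{path}'\n")  # one clip per line, demuxer syntax
    return [
        "ffmpeg",
        "-f", "concat", "-safe", "0", "-i", list_file,  # video clip sequence
        "-i", audio_path,                               # ASMR soundtrack
        "-map", "0:v", "-map", "1:a",                   # video from clips, audio from soundtrack
        "-c:v", "copy",                                 # no re-encode if codecs match
        "-shortest",                                    # stop at the shorter stream
        output_path,
    ]
```

You would then run the command with `subprocess.run(cmd, check=True)`; stream copy (`-c:v copy`) only works when all clips share the same codec and resolution, which is typically true when they come from the same generation model.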

By Bilel Aroua
145319

Learn n8n basics in 3 easy steps ✨

New to n8n? This simple tutorial is the perfect way to get started. In just a few minutes, you'll build your first automation that runs on a schedule, fetches fresh data from the internet, and delivers it straight to your inbox.

What you'll do
1. Run the workflow to grab a random inspirational quote.
2. See how data flows through each node as it moves from an API call to processing results.
3. Connect Gmail and send the quote directly to your email.

What you'll learn
- How to trigger workflows manually and on a schedule ⏰
- How to connect to external APIs and fetch data 🌐
- How to use the Set node to structure and map data 🔧
- How to connect Gmail to send the data 📩

Why it matters
This workflow shows you the n8n basics step by step - no code required. By the end, you'll know how to build, test, and share automations that run on their own, giving you the confidence to explore more advanced use cases. 🚀
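To make the Set node's role concrete, here is a small sketch of the kind of field mapping it performs, restructuring a raw API response into just the fields the email needs. The `content` and `author` field names are an assumption about the quote API's response shape.

```python
def map_quote_fields(api_response):
    """Pick and rename the fields the email template needs,
    the same kind of restructuring the Set node does."""
    return {
        "quote": api_response.get("content", ""),
        "author": api_response.get("author", "Unknown"),
    }
```

In n8n itself you would configure this visually in the Set node rather than writing code; the point is only that each node receives structured data and passes on a reshaped version.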

By Miha
142162

Clone viral TikToks with AI avatars & auto-post to 9 platforms using Perplexity & Blotato

Who is this for?
This workflow is perfect for:
- Content creators looking to repurpose viral content
- Social media managers who want to scale short-form content across multiple platforms
- Entrepreneurs and marketers aiming to save time and boost visibility with AI-powered automation

What problem is this workflow solving?
Reproducing viral video formats with your own branding and pushing them to multiple platforms is time-consuming and hard to scale. This workflow solves that by:
- Cloning a viral TikTok video's structure
- Generating a new version with your avatar
- Rewriting the script, caption, and overlay text
- Auto-posting it to 9 social media platforms, without manual uploads

What this workflow does
From a simple Telegram message with a TikTok link, the workflow:
1. Downloads a TikTok video and extracts its thumbnail, audio, and caption
2. Transcribes the audio and saves the original text into Google Sheets
3. Uses Perplexity AI to suggest a new content idea in the same niche
4. Rewrites the script, caption, and overlay using GPT-4o
5. Generates a new video with your avatar using Captions.ai
6. Adds subtitles and overlay text with JSON2Video
7. Saves metadata to Google Sheets for tracking
8. Sends the final video to Telegram for preview
9. Auto-publishes the video to Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, X (Twitter), Pinterest, and Bluesky via Blotato

Setup
1. Connect your Telegram bot to the trigger node.
2. Add your OpenAI, Perplexity, Cloudinary, Captions.ai, and Blotato API keys.
3. Make sure your Google Sheet is ready with the appropriate columns.
4. Replace the default avatar name in the Captions.ai node with yours.
5. Fill in your social media account IDs in the "Assign Platform IDs" node.
6. Test by sending a TikTok URL to your Telegram bot.

How to customize this workflow to your needs
- Change avatar output style: adjust resolution, voice, or avatar ID.
- Refine script structure: tweak GPT instructions for different tone/format.
- Swap Perplexity with ChatGPT or Claude if needed.
- Filter by platform: disable any Blotato nodes you don't need.
- Add approval step: insert a Telegram confirmation node before publishing.
- Adjust subtitle style or overlay text font in JSON2Video.

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube

By Dr. Firas
122602

AI-powered short-form video generator with OpenAI, Flux, Kling, and ElevenLabs

Who is this for?
Content creators, digital marketers, and social media managers who want to automate the creation of short-form videos for platforms like TikTok, YouTube Shorts, and Instagram Reels without extensive video editing skills.

What problem does this workflow solve?
Creating engaging short-form videos consistently is time-consuming and requires multiple tools and skills. This workflow automates the entire process from ideation to publishing, significantly reducing the manual effort needed while maintaining content quality.

What this workflow does
This all-in-one solution transforms ideas into fully produced short-form videos through a 5-step process:
1. Generate video captions from ideas stored in a Google Sheet
2. Create AI-generated images using Flux and the OpenAI API
3. Convert images to videos using Kling's API
4. Add voice-overs to your content with ElevenLabs
5. Complete the video production with Creatomate by adding templates, transitions, and combining all elements

The workflow handles everything from sourcing content ideas to rendering the final video, and even notifies you on Discord when videos are ready.

Setup (est. time: 20-30 minutes)
Before getting started, you'll need:
- An n8n installation (tested on version 1.81.4)
- OpenAI API key (free trial credits available)
- PiAPI (free trial credits available)
- ElevenLabs (free account)
- Creatomate API key (free trial credits available)
- Google Sheets API enabled in Google Cloud Console
- Google Drive API enabled in Google Cloud Console
- OAuth 2.0 Client ID and Client Secret from your Google Cloud Console credentials

How to customize this workflow to your needs
- Adjust the Google Sheet structure to include additional data like video length, duration, style, etc.
- Modify the prompt templates for each AI service to match your brand voice and content style
- Update the Creatomate template to reflect your visual branding
- Configure notification preferences in Discord to manage your workflow

This workflow combines multiple AI technologies to create a seamless content production pipeline, saving you hours of work per video and allowing you to focus on strategy rather than production.

By Cameron Wills
109344

⚡AI-powered YouTube video summarization & analysis

-- Disclaimer: This workflow uses a community node and therefore only works for self-hosted n8n users --

Transform YouTube videos into comprehensive summaries and structured analysis instantly. This n8n workflow automatically extracts, processes, and analyzes video transcripts to deliver clear, organized insights without watching the entire video.

Time-saving features
🚀 Instant processing: simply provide a YouTube URL and receive a structured summary within seconds, eliminating the need to watch lengthy videos. Perfect for research, learning, or content analysis.
🤖 AI-powered analysis: leverages GPT-4o-mini to analyze video transcripts, organizing key concepts and insights into a clear, hierarchical structure with main topics and essential points.

Smart processing pipeline
📝 Automated transcript extraction:
- Supports public YouTube videos
- Handles multiple URL formats
- Extracts complete video transcripts automatically
🧠 Intelligent content organization:
- Breaks down content into main topics
- Highlights key concepts and terminology
- Maintains technical accuracy while improving clarity
- Structures information logically with markdown formatting

Perfect for
📚 Researchers and students: quick comprehension of educational content and lectures without watching entire videos.
💼 Business professionals: efficient analysis of industry talks, presentations, and training materials.
🎯 Content creators: rapid research and competitive analysis of video content in your niche.

Technical implementation
🔄 Workflow components:
- Webhook endpoint for URL submission
- YouTube API integration for video details
- Transcript extraction system
- GPT-4 powered analysis engine
- Telegram notification system (optional)

Transform your video content consumption with an intelligent system that delivers structured, comprehensive summaries while saving hours of viewing time.
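"Handles multiple URL formats" usually comes down to pulling the 11-character video ID out of whatever link shape the user submits. Here is a sketch of that extraction; the community node may implement it differently.

```python
import re

# Match the 11-character video ID in the common YouTube URL shapes:
# watch?v=..., youtu.be/..., /shorts/..., /embed/...
_YT_ID = re.compile(
    r"(?:youtu\.be/|youtube\.com/(?:watch\?v=|shorts/|embed/))([A-Za-z0-9_-]{11})"
)

def extract_video_id(url):
    """Return the YouTube video ID, or None if the URL doesn't match."""
    m = _YT_ID.search(url)
    return m.group(1) if m else None
```

Normalizing every submitted link to a bare video ID first means the transcript and video-details lookups only ever have to deal with one input shape.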

By Joseph LePage
106903

Generate SQL queries from schema only - AI-powered

This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent.

This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

🚀 Setup
To get started with this workflow, you'll need to set up a free MySQL server and import your database (check Steps 1 and 2 in this tutorial). Of course, you can switch MySQL to another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections.

Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we get greater processing speed and efficiency.

🗣️ Chat with your data
1. Start a chat: send a message in the chat window.
2. The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
3. The LangChain AI Agent receives the schema and your input and begins to work.
4. The AI Agent generates SQL queries and brief comments based solely on the schema and the user's message.
5. An IF node checks whether the AI Agent has generated a query:
   - Yes: the AI Agent passes the SQL query to the next MySQL node for execution.
   - No: you get a direct answer from the Agent without further action.
6. The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand.
7. Once formatted, you get both the Agent's answer and the query result in the chat window.

🌟 Example queries
Try these sample queries to see the schema-driven AI Agent in action:
- Would you please list me all customers from Germany?
- What are the music genres in the database?
- What tables are available in the database?
- Please describe the relationships between tables. (In this example, the AI Agent does not need to create an SQL query.)

And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️

💭 The AI Agent memory node does not store the actual data, as SQL queries run outside the agent. It contains the database schema, user questions, and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
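One way to produce a schema-only file like the one the workflow stores locally is mysqldump's --no-data flag, which exports CREATE TABLE statements without any rows. The sketch below only builds such a command; the template itself may fetch the schema differently, and the host/user values are placeholders.

```python
def build_schema_dump_command(host, user, database):
    """mysqldump invocation that exports CREATE TABLE statements only:
    no INSERTs, so the schema file never contains actual data."""
    return [
        "mysqldump",
        "--no-data",        # skip all row data
        "--skip-comments",  # keep the file small and stable
        "-h", host,
        "-u", user,
        "-p",               # prompt for the password interactively
        database,
    ]
```

You could run it with `subprocess.run(cmd, stdout=open("schema.sql", "w"), check=True)` and hand schema.sql to the agent, which is exactly the confidentiality boundary this workflow relies on: structure in, data out of reach.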

By Yulia
52198

🤖 AI content generation for auto service 🚘 automate your social media📲!

Who is this for? 🚘
This workflow is designed for auto service / car repair businesses looking to streamline their social media marketing with AI tools and automation via n8n. Whether you're a small garage owner, a car repair shop, an automotive specialist, or an auto mechanic, this tool helps you maintain an active online presence without spending hours creating content. 💪🏼

Though this template is set up for daily auto service content uploads, the underlying logic is universal. You can easily adapt it for any niche by editing prompts, adding nodes, and creating or uploading a variety of content to any platform. You can use any LLM and generative AI of your choice. Personally, I prefer the smooth and effective results from ChatGPT 4.1 combined with GPT Image 1, but you can generate videos instead of images for your posts as well 😉

What problem is this workflow solving? 🤦‍♂️
Many auto service businesses struggle with consistently posting engaging content due to time constraints or lack of marketing expertise. 💎 This workflow solves that by automating the content creation and posting process, ensuring your social media channels stay fresh and active, ultimately attracting more customers.

What this workflow does:
- Generates daily social media posts tailored specifically to the auto service niche using AI.
- Allows easy customization of post and image prompts.
- Integrates research links through the Tavily internet search tool for relevant content.
- Supports starting posts based on reference article links via Google Sheets.
- Automatically publishes posts to your connected social media platforms.
- Enables scheduled or trigger-based posting for maximum flexibility.

How it works
Easy, actually ☺️ AI creates daily social media content made just for auto service. You can simply edit prompts for both posts and images, and set up news or article research links via the Tavily internet search tool. You can also start the workflow with a reference article link through Google Sheets.

🎯 Consistently posting relevant and timely niche content helps attract new customers, all while leveraging AI and n8n tools to keep the process seamless and cost-effective.

Set up steps
I kept it quick and simple for you ✨
1. If you're happy with the current LLM and image model configurations, just connect your OpenAI API credentials to enable AI content generation.
2. Link your social media accounts (Facebook, Telegram, X, etc.) for autoposting.
3. Optionally connect Google Sheets if you want to trigger posts based on sheet updates with reference links.
4. Customize prompts as needed to match your brand voice, style, and marketing tasks.
5. Choose between: 1) scheduled automatic generation and posting at the times social algorithms favor; 2) a Google Sheets trigger with a reference link; 3) manual start.

How to customize this workflow to your needs
- Switch AI models and edit prompts to better reflect your specific services or promotions.
- Add or modify research links in Tavily to keep your posts timely and relevant.
- Adjust posting schedules to match peak engagement times for your audience.
- Expand or reduce the number of integrated social platforms depending on your marketing strategy.
- Use Google Sheets to batch upload ideas or curate specific content topics.

After adjusting a few settings, activate the workflow and let it run. 🤖 The system will then automatically publish your content across your selected social platforms, saving you time and effort.

📌 You'll find more detailed tips and additional AI models for customizing the generation process inside the workflow via sticky notes.
💬 Need help? For additional guidance, feel free to message me; here's my profile in the n8n community for direct contact 👈 click!

By N8ner
46034

Generate Text-to-Speech Using Elevenlabs via API

🎉 Do you want to master AI automation so you can save time and build cool stuff? I've created a welcoming Skool community for non-technical yet resourceful learners. 👉🏻 Join the AI Atelier 👈🏻

This workflow provides an API endpoint to generate speech from text using Elevenlabs.io, a popular text-to-speech service.

Step 1: Configure custom credentials in n8n
To set up your credentials in n8n, create a new custom authentication entry with the following JSON structure:

    {
      "headers": {
        "xi-api-key": "your-elevenlabs-api-key"
      }
    }

Replace "your-elevenlabs-api-key" with your actual Elevenlabs API key.

Step 2: Send a POST request to the webhook
Send a POST request to the workflow's webhook endpoint with these two parameters:
- voice_id: the ID of the Elevenlabs voice you want to use.
- text: the text you want to convert to speech.

This workflow has been a significant time-saver in my video production tasks. I hope it proves just as useful to you! Happy automating!

The n8Ninja
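Putting Step 2 together, here is a minimal sketch of the request a client would send to the workflow; the webhook URL and voice ID are placeholders you would replace with your own.

```python
def build_tts_request(webhook_url, voice_id, text):
    """Assemble the POST request for the workflow's webhook endpoint,
    carrying the two parameters the workflow expects."""
    return {
        "url": webhook_url,
        "json": {
            "voice_id": voice_id,  # Elevenlabs voice to synthesize with
            "text": text,          # text to convert to speech
        },
    }
```

With the requests library you would then send it as `requests.post(req["url"], json=req["json"])` and receive the generated audio back from the workflow.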

By Emmanuel Bernard - n8n Expert Lausanne
39682

IT ops AI SlackBot workflow - chat with your knowledge base

Video demo: click here to see a video of this workflow in action.

Summary
The "IT Department Q&A Workflow" is designed to streamline and automate the process of handling IT-related inquiries from employees through Slack. When an employee sends a direct message (DM) to the IT department's Slack channel, the workflow is triggered. The initial step involves the "Receive DMs" node, which listens for new messages. Upon receiving a message, the workflow verifies the webhook by responding to Slack's challenge request, ensuring that the communication channel is active and secure.

Once the webhook is verified, the workflow checks whether the message sender is a bot using the "Check if Bot" node. If the sender is identified as a bot, the workflow terminates to avoid unnecessary actions. If the sender is a human, the workflow sends an acknowledgment message back to the user, confirming that their query is being processed. This is achieved through the "Send Initial Message" node, which posts a simple message like "On it!" to the user's Slack channel.

The core functionality of the workflow is powered by the "AI Agent" node, which utilizes the OpenAI GPT-4 model to interpret and respond to the user's query. This AI-driven node processes the text of the received message, generating an appropriate response based on the context and information available. To maintain conversation context, the "Window Buffer Memory" node stores the last five messages from each user, ensuring that the AI agent can provide coherent and contextually relevant answers. Additionally, the workflow includes a custom Knowledge Base (KB) tool (see that tool template here) that integrates with the AI agent, allowing it to search the company's internal KB for relevant information.

After generating the response, the workflow cleans up the initial acknowledgment message using the "Delete Initial Message" node to keep the conversation thread clean. Finally, the generated response is sent back to the user via the "Send Message" node, providing them with the information or assistance they requested. This workflow effectively automates the IT support process, reducing response times and improving efficiency.

To quickly deploy the Knowledge Ninja app in Slack, use the app manifest below, and don't forget to replace the two sample URLs:

    {
      "display_information": {
        "name": "Knowledge Ninja",
        "description": "IT Department Q&A Workflow",
        "background_color": "005e5e"
      },
      "features": {
        "bot_user": {
          "display_name": "IT Ops AI SlackBot Workflow",
          "always_online": true
        }
      },
      "oauth_config": {
        "redirect_urls": [
          "Replace everything inside the double quotes with your Slack OAuth redirect URL, for example: https://n8n.domain.com/rest/oauth2-credential/callback"
        ],
        "scopes": {
          "user": ["search:read"],
          "bot": [
            "chat:write",
            "chat:write.customize",
            "groups:history",
            "groups:read",
            "groups:write",
            "groups:write.invites",
            "groups:write.topic",
            "im:history",
            "im:read",
            "im:write",
            "mpim:history",
            "mpim:read",
            "mpim:write",
            "mpim:write.topic",
            "usergroups:read",
            "usergroups:write",
            "users:write",
            "channels:history"
          ]
        }
      },
      "settings": {
        "event_subscriptions": {
          "request_url": "Replace everything inside the double quotes with your workflow webhook URL, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
          "bot_events": ["message.im"]
        },
        "org_deploy_enabled": false,
        "socket_mode_enabled": false,
        "token_rotation_enabled": false
      }
    }
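The challenge verification and bot check described above can be sketched as a single handler. Echoing the challenge value for a url_verification request is standard Slack Events API behavior; the exact fields the "Check if Bot" node inspects are an assumption here.

```python
def handle_slack_event(body):
    """Sketch of the webhook's first two checks: answer Slack's
    url_verification challenge, and drop messages sent by bots."""
    if body.get("type") == "url_verification":
        # Slack expects the challenge value echoed back to verify the endpoint
        return {"challenge": body["challenge"]}
    event = body.get("event", {})
    if event.get("bot_id") or event.get("subtype") == "bot_message":
        return None  # ignore bot messages to avoid reply loops
    # a human DM: hand the text on to the acknowledgment + AI Agent steps
    return {"user": event.get("user"), "text": event.get("text")}
```

Dropping bot-authored events before replying is what prevents the workflow from answering its own "On it!" acknowledgment in an infinite loop.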

By Angel Menendez
39013