Templates by Yulia
Generate SQL queries from schema only - AI-powered
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are executed outside the AI Agent node, and the results are never passed back to the agent.

This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. It makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

🚀 Setup

To get started with this workflow, you'll need to set up a free MySQL server and import your database (check Steps 1 and 2 in this tutorial). Of course, you can switch MySQL for another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections.

Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to a remote db4free database and fetch the schema every time. As a result, we achieve greater processing speed and efficiency.

🗣️ Chat with your data

- Start a chat: send a message in the chat window.
- The workflow loads the locally saved MySQL database schema, without the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
- The LangChain AI Agent receives the schema and your input and begins to work. It generates SQL queries and brief comments based solely on the schema and the user's message.
- An IF node checks whether the AI Agent has generated a query. When:
  - Yes: the AI Agent passes the SQL query to the next MySQL node for execution.
  - No: you get a direct answer from the Agent without further action.
- The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand. Once formatted, you get both the Agent answer and the query result in the chat window.

🌟 Example queries

Try these sample queries to see the schema-driven AI Agent in action:

- Would you please list me all customers from Germany?
- What are the music genres in the database?
- What tables are available in the database?
- Please describe the relationships between tables. (For this one, the AI Agent does not need to create an SQL query.)

And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️

💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
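As an illustration of the pattern, here is a minimal standalone sketch (using the mysql2 Node.js driver; connection details are placeholders) of what happens outside the agent: a query generated purely from the Chinook schema is executed separately, and the rows go to the chat window only, never back into the agent's memory.

```typescript
import mysql from "mysql2/promise";

// The kind of query the agent might generate for
// "Would you please list me all customers from Germany?"
// based solely on the Chinook schema (Customer table, Country column).
const generatedSql = `
  SELECT FirstName, LastName, City
  FROM Customer
  WHERE Country = 'Germany';
`;

async function main() {
  // Placeholder credentials - replace with your own server details.
  const conn = await mysql.createConnection({
    host: "db4free.net",
    user: "your_user",
    password: "your_password",
    database: "chinook",
  });

  // Executed outside the agent: results are shown to the user,
  // never fed back into the agent's memory.
  const [rows] = await conn.query(generatedSql);
  console.log(rows);
  await conn.end();
}

main().catch(console.error);
```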
Telegram AI bot assistant: ready-made template for voice & text messages
Free template for voice & text messages with short-term memory

This n8n workflow template is a blueprint for an AI Telegram bot that processes both voice and text messages. It is ready to use with minimal setup. The bot remembers the last several messages (10 by default), understands commands and provides responses in HTML. You can easily swap GPT-4 and Whisper for other language and speech-to-text models to suit your needs.

Core Features
- Text: send or forward messages
- Voice: transcription via Whisper
- Extend this template by adding LangChain tools

Requirements
- Telegram Bot API
- OpenAI API (for GPT-4 and Whisper)

💡 New to Telegram bots? Check our step-by-step guide on creating your first bot and setting up OpenAI access.

Use Cases
- Personal AI assistant
- Customer support automation
- Knowledge base interface
- Integration hub for services that you use:
  - Connect to any API via HTTP Request Tool
  - Trigger other n8n workflows with Workflow Tool
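To show the core step of the voice branch, here is a minimal standalone sketch (assuming Node.js 18+ with built-in fetch and FormData) of what the workflow does through n8n nodes: download a Telegram voice file and transcribe it with Whisper. The bot token, API key and file ID are placeholders.

```typescript
// Minimal sketch of the voice branch: fetch the Telegram voice file,
// then send it to OpenAI's Whisper transcription endpoint.
const BOT_TOKEN = "your-telegram-bot-token"; // placeholder
const OPENAI_KEY = "your-openai-api-key";    // placeholder

async function transcribeVoice(fileId: string): Promise<string> {
  // 1. Resolve the file path via the Telegram Bot API.
  const meta = await fetch(
    `https://api.telegram.org/bot${BOT_TOKEN}/getFile?file_id=${fileId}`
  ).then((r) => r.json());

  // 2. Download the OGG voice recording.
  const audio = await fetch(
    `https://api.telegram.org/file/bot${BOT_TOKEN}/${meta.result.file_path}`
  ).then((r) => r.arrayBuffer());

  // 3. Send it to Whisper for transcription.
  const form = new FormData();
  form.append("file", new Blob([audio]), "voice.ogg");
  form.append("model", "whisper-1");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}` },
    body: form,
  }).then((r) => r.json());

  return res.text; // the transcribed message, ready for the chat model
}
```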
Qualify new leads in Google Sheets via OpenAI's GPT-4
This n8n workflow was developed to evaluate and categorize incoming leads based on certain criteria. The workflow is triggered by adding a new row in a Google Sheets document.

The workflow uses the OpenAI node to process the lead information. The system query contains detailed qualification rules and the response format; the user message contains the data for the individual lead.

The JSON response from the OpenAI node is then processed by the Edit Fields node to extract the response. This response is merged with the original lead data by the Merge node. Finally, the Google Sheets node updates the original lead entry in the Google Sheets document with the qualification result ("qualified" or "not qualified") in a separate column. This allows for easy tracking and sorting of the qualified leads.
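The actual rules ship inside the template's system query; as a purely hypothetical illustration of the contract between the prompt and the downstream nodes, it might look like this:

```typescript
// Hypothetical example of the qualification contract used with the OpenAI node.
// The real rules live in the template's system query - adapt to your criteria.
const systemPrompt = `
You qualify inbound leads. Example rules:
- Company size is at least 50 employees
- Role is a decision maker (C-level, VP, Head of)
- Region is EU or North America
Reply ONLY with JSON: {"result": "qualified"} or {"result": "not qualified"}.
`;

// The user message carries the data from the newly added Google Sheets row.
const userMessage = JSON.stringify({
  name: "Jane Doe",
  role: "VP of Operations",
  companySize: 120,
  region: "EU",
});

// Downstream, the Edit Fields node extracts `result` from the model's JSON
// reply so the Merge node can attach it to the original lead row.
```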
WhatsApp Starter Workflow
This n8n workflow is designed for working with the WhatsApp Business platform. It allows you to send custom replies via WhatsApp in response to incoming user messages.

💡 Take a look at the step-by-step tutorial on how to create a WhatsApp bot.

The workflow consists of two parts:

The first Verify webhook sends back the verification challenge string. You will need this part during the setup process on the Meta for Developers portal:
- Select your App
- Go to WhatsApp Configuration
- Click on the Edit button in the Webhook section
- Enter your production webhook URL and provide a Verify token (can be any text string)
- Remember to activate the n8n workflow!
- Finally, press "Verify and save"

Once the webhook is verified, the Respond webhook receives various POST requests from Meta regarding WhatsApp messages (user messages and status notifications). The workflow checks whether the incoming JSON contains a user message. If this is the case, it sends the text message back to the user. This is a custom message, not a WhatsApp Business template.
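For context, here is a minimal sketch (plain Node.js 18+, no framework) of what the Verify webhook does: Meta sends a GET request with hub.mode, hub.verify_token and hub.challenge query parameters and expects the raw challenge string echoed back when the token matches. The token value is a placeholder.

```typescript
import http from "node:http";

// Minimal sketch of the webhook verification handshake.
const VERIFY_TOKEN = "my-secret-token"; // any string; must match the Meta portal

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const mode = url.searchParams.get("hub.mode");
  const token = url.searchParams.get("hub.verify_token");
  const challenge = url.searchParams.get("hub.challenge");

  if (mode === "subscribe" && token === VERIFY_TOKEN && challenge) {
    res.writeHead(200).end(challenge); // verification succeeds
  } else {
    res.writeHead(403).end();          // token mismatch
  }
}).listen(3000);
```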
OpenAI Assistant workflow: upload file, create an Assistant, chat with it!
This is an end-to-end workflow for creating a simple OpenAI Assistant. The whole process is done with n8n nodes and does not require any programming experience. The workflow is divided into three main steps, plus a fourth with ideas for expanding the Assistant:

Step 1: Get a Google Drive File and Upload to OpenAI
- The workflow starts by retrieving a file from Google Drive using the "Get File" node. The example file used is a Music Festival document.
- The retrieved file is then uploaded to OpenAI using the "Upload File to OpenAI" node.
- Run this section only once. The file is stored persistently on the OpenAI side.

Step 2: Set Up a New Assistant
- A new assistant is created using the "Create new Assistant" node. The assistant is given a name, description, and system prompt. The uploaded file from Step 1 is attached as a knowledge source for the assistant.
- As with Step 1, run this section only once.

Step 3: Chat with the Assistant
- The "Chat Trigger" node initiates the conversation with the assistant.
- The "OpenAI Assistant" node handles the conversation, using the assistant created in Step 2.

Step 4: Expand the Assistant
This step provides resources for ideas on how to expand the Assistant's capabilities:
- Create a WhatsApp bot
- Create a simple Telegram bot
- Create a Telegram AI bot (YouTube video)

By following this workflow, users can create their own AI-powered assistants using OpenAI's API and integrate them with various platforms like WhatsApp and Telegram.
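For readers curious what Step 1 does under the hood, here is a minimal standalone sketch (Node.js 18+; key and file name are placeholders): uploading a local file to OpenAI with purpose "assistants" so it can be attached as a knowledge source.

```typescript
import { readFile } from "node:fs/promises";

const OPENAI_KEY = "your-openai-api-key"; // placeholder

// Upload a file to OpenAI; the returned file id is what the
// "Create new Assistant" step attaches as a knowledge source.
async function uploadAssistantFile(path: string): Promise<string> {
  const form = new FormData();
  form.append("purpose", "assistants");
  form.append("file", new Blob([await readFile(path)]), "music-festival.pdf");

  const file = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_KEY}` },
    body: form,
  }).then((r) => r.json());

  return file.id;
}

uploadAssistantFile("./music-festival.pdf").then(console.log);
```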
Get CSV from URL and convert to Excel
This workflow demonstrates the conversion of a CSV file to Excel format.

First, an example CSV file is downloaded via a direct link. The source file is taken from the European Open Data Portal: https://data.europa.eu/data/datasets/veranstaltungsplaetze-potsdam-potsdam?locale=en

The binary data is then imported via the Spreadsheet File node and converted to Excel format.

N.B. As of n8n version 1.23.0, the Spreadsheet File node has been redesigned and is now called the Convert to File node. Learn more on the release notes page: https://docs.n8n.io/release-notes/n8n1230
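Outside n8n, the same conversion can be done in a few lines; here is a sketch using the SheetJS "xlsx" package on Node.js 18+. The CSV URL is a placeholder, standing in for the direct download link from the dataset above.

```typescript
import * as XLSX from "xlsx";

// Download a CSV file and write it back out as an XLSX workbook.
async function csvToExcel(csvUrl: string, outPath: string) {
  const csv = await fetch(csvUrl).then((r) => r.text());

  // SheetJS parses the CSV text into a workbook with a single sheet...
  const workbook = XLSX.read(csv, { type: "string" });

  // ...and writes it back out in Excel format.
  XLSX.writeFile(workbook, outPath);
}

csvToExcel("https://example.org/veranstaltungsplaetze.csv", "venues.xlsx")
  .catch(console.error);
```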
Summarize Google Sheets form feedback via OpenAI's GPT-4
This n8n workflow was developed to collect and summarize event feedback that was collected via a Google Form and saved in a Google Sheets document.

The workflow is triggered manually by clicking the "Test workflow" button. The Google Sheets node retrieves the responses from the feedback form. The Aggregate node then combines all responses for each question into arrays and prepares the data for analysis.

The OpenAI node processes the aggregated feedback data. The system prompt instructs the model to analyze the responses and generate a summary report that includes the overall sentiment regarding the event and constructive suggestions for improvement.

The Markdown node converts the summary report, which is in Markdown format, into HTML. Finally, the Gmail node sends an HTML-formatted email to the specified email address.
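To make the aggregation step concrete, here is a small sketch of the same transformation in TypeScript: one row per respondent becomes one array of answers per question. The question texts are hypothetical; your form's column headers will differ.

```typescript
type Response = Record<string, string>;

// Each object is one Google Sheets row (one respondent).
const responses: Response[] = [
  { "How was the event?": "Great!", "What can we improve?": "More seating" },
  { "How was the event?": "Good",   "What can we improve?": "Longer breaks" },
];

// Group all answers under their question, as the Aggregate node does.
const aggregated: Record<string, string[]> = {};
for (const row of responses) {
  for (const [question, answer] of Object.entries(row)) {
    (aggregated[question] ??= []).push(answer);
  }
}

// Ready to pass to the model in a single prompt:
// { "How was the event?": ["Great!", "Good"], "What can we improve?": [...] }
console.log(aggregated);
```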
Extract personal data with self-hosted LLM Mistral NeMo
This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.

📖 For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

🔑 Key Features

Local LLM
- Connect Ollama to run the Mistral NeMo LLM locally
- Provide a foundation for compliant data processing, keeping sensitive information on-premises

Data extraction
- Convert unstructured text to a consistent JSON format
- Adjust the JSON schema to meet your specific data extraction needs

Error handling
- Implement auto-fixing for LLM outputs
- Include error output for further processing

⚙️ Setup and configuration

Prerequisites
- n8n AI Starter Kit installed

Configuration steps
1. Add the Basic LLM Chain node with system prompts.
2. Set up the Ollama Chat Model with optimized parameters.
3. Define the JSON schema in the Structured Output Parser node.

🔍 Further resources
- Run LLMs locally with n8n
- Video tutorial on using local AI with n8n

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!
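As a minimal sketch of the extraction step against a local Ollama server (default port 11434, assuming the mistral-nemo model has been pulled), the request could look like this; the JSON schema in the prompt is an example to adjust to your needs:

```typescript
const systemPrompt = `
Extract personal data from the user's message. Reply ONLY with JSON matching:
{ "name": string | null, "email": string | null, "city": string | null }
`;

async function extract(text: string) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-nemo",
      format: "json", // ask Ollama to constrain the output to valid JSON
      stream: false,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: text },
      ],
    }),
  }).then((r) => r.json());

  // In the workflow, the Structured Output Parser validates this output,
  // and the auto-fixing mechanism retries if the JSON does not match.
  return JSON.parse(res.message.content);
}

extract("Hi, I'm Max from Berlin, reach me at max@example.com")
  .then(console.log);
```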
Discover hidden website API endpoints using regex and AI
💡 What it is for

This workflow helps to automatically discover undocumented API endpoints by analysing JavaScript files from the website's HTML code.

When building automation for platforms without public APIs, we face a significant technical barrier. In a perfect world, every service would offer well-documented APIs with clear endpoints and authentication methods. But the reality is different. Before we resort to complex web scraping, let's analyse the architecture of the platform and check whether it makes internal API calls. We will examine JavaScript files embedded in the HTML source code to find and extract potential API endpoints.

⚙️ Key Features

To discover hidden API endpoints, we can apply two major approaches:

- Predefined regex extraction: manually insert a fixed regex with the necessary conditions to extract endpoints. Unlike the LLM, which creates a custom regex for each JS file, we provide a generic expression to capture all URL strings. We do not want to accidentally miss important API endpoints.
- AI-supported extraction: ask LLMs to examine the structure of the JavaScript code. The first model will capture potential API endpoints and create a detailed description of each identified endpoint with methods and query parameters. The second LLM, connected to the AI Agent, will generate a regex for each JS file individually based on the output of the first model.

In addition to pure endpoint extraction, we supplement our analysis with:

- AI regex validation: the AI Agent calls a validation tool to iteratively improve its regex based on the reference data.
- Results comparison: side-by-side analysis of API endpoints extracted with the predefined regex against the AI-supported results.

✅ Requirements

- OpenRouter API access: for AI-powered analysis (Gemini + Claude models by default)
- Minimal setup: simply configure the target URL and run
- Platforms: JS files must be accessible and contain standard embedded API endpoint patterns (/api/, /v1/, etc.)

💪 Use Cases

- 📚 API documentation: create complete endpoint descriptions for internal APIs
- 🚀 Automation & integration projects: find the APIs you need when official documentation is missing
- 🛠 Web scraping projects: discover data access patterns
- 🔍 Security research: map attack surfaces and explore unprotected endpoints

🎉 Extracted the endpoints, what now?

To execute API requests, we often need additional information such as query parameters or JSON body data:

- One way to find out exactly how a request is made on the platform is to open the Network tab in the DevTools console while interacting with the platform. Look for anything that resembles API requests and review the request/response headers, payload and query parameters.
- Alternatively, you can also check the JS file and the page source code for the required values.

✨ Inspiration

As a guitarist who also builds workflows, I wanted to automate communication with the booking platform I use in my music project. While trying to connect to the platform from n8n, I ran into a challenge: no public APIs. Fortunately, I found out that the platform I work with was built as a modern web app with client-side JavaScript that contained information about the API structure. This led me to the topic of hidden API endpoints and eventually to this workflow. It is part of my music booking project, which I presented at the n8n Community Meetup in Berlin on 22 May 2025.
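To illustrate the predefined-regex approach, here is a minimal standalone sketch (Node.js 18+). The pattern and the JS file URL are illustrative placeholders; the template ships its own generic expression, which you should tune for your target site.

```typescript
const JS_URL = "https://example.com/static/app.js"; // placeholder

// Capture quoted URL path strings that look like API endpoints.
const endpointPattern = /["'`](\/(?:api|v\d+)\/[A-Za-z0-9/_\-{}.:]*)["'`]/g;

async function extractEndpoints(jsUrl: string): Promise<string[]> {
  const source = await fetch(jsUrl).then((r) => r.text());
  const found = new Set<string>(); // deduplicate repeated endpoints
  for (const match of source.matchAll(endpointPattern)) {
    found.add(match[1]); // group 1 is the path without the quotes
  }
  return [...found].sort();
}

extractEndpoints(JS_URL).then((endpoints) => {
  // e.g. [ "/api/bookings", "/api/users/{id}", "/v1/events" ]
  console.log(endpoints);
});
```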
Build a multi-agent system with n8n, Qdrant, Gmail & OpenAI
This template presents a multi-agent system in which a coordinating agent manages specialized sub-agents: an AI agent for RAG and document summarization, and an email agent. Each agent operates in its own domain, working collaboratively under the management of the primary agent. In addition to the two sub-agents, the coordinator agent queries the latest news by calling the HTTP Request Tool.

💡 This template is an extended version of the initial workflow on how to Build a RAG Agent with n8n, Qdrant & OpenAI. The RAG sub-agent can use the same Qdrant collection. You can import this example collection (n8n-rag-2437367325990310-2025-11-04-10-41-54.snapshot) of 3 documents into the free Qdrant cloud or a self-hosted account, rather than creating it from scratch.

🔗 Example files for RAG

The template uses the following example files in the Google Docs format:
- German Data Protection law: Bundesdatenschutzgesetz (BDSG)
- Computer Security Incident Handling Guide (NIST.SP.800-61r2)
- Berkshire Hathaway letter to shareholders from 2024

🚀 How to Get Started

1. Copy or import the template to your n8n instance.
2. Create your Google Drive credentials via the Google Cloud Console and add them to the "Get Document" node. A detailed walk-through can be found in the n8n docs.
3. Create your Gmail credentials via the Google Cloud Console and add them to the Gmail nodes.
4. Create a Qdrant API key and add it to the "Search Documents" node credentials. The API key will be displayed after you have logged into Qdrant and created a Cluster.
5. Create or activate your OpenAI API key.
6. Create or activate your OpenRouter API key.
7. Create or activate your News API key.

💬 Chat with the main Agent to query document data, search the latest news or perform email actions

1️⃣ Ask the agent about specific information, facts, quotes, or details that are stored in the uploaded documents. E.g. What should be documented during incident response?
2️⃣ Ask the agent about recent news and current information from web sources. E.g. What does BDSG say about data breaches and are there any recent cases?
3️⃣ Ask the agent to summarize a document or information related to the documents and email it to you. E.g. I need a short summary of the Berkshire Hathaway letter, please send it to my email [user@example.com].
4️⃣ Ask the agent to update you on your recent emails. E.g. I'd like to know the content of the latest email from [username].
5️⃣ Ask the agent to create a draft of an email. E.g. Please create an email draft of the [document] summary.

🌟 Adapt this template for your own use case

- Enterprise workflows: Google Docs processing with automated communications
- Research teams: document analysis with automatic report distribution
- Customer success: intelligent document search with follow-up email automation
- Content operations: document summarization with stakeholder notifications
- Compliance workflows: policy queries with automated alert systems

⚠️ The current multi-agent architecture comes with certain trade-offs: the sequential nature of agent hand-offs can increase latency compared to single calls, and the full conversation history is not shared between all sub-agents.

💻 📞 Get in touch if you want to customise this workflow or have any questions.
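As a hedged sketch of the coordinator's news lookup (assuming the News API above refers to newsapi.org; swap in your provider and its parameters if not), the HTTP request could look like this:

```typescript
const NEWS_API_KEY = "your-news-api-key"; // placeholder

// Fetch the latest headlines for a query, as the coordinator agent does
// via the HTTP Request Tool.
async function latestNews(query: string): Promise<string[]> {
  const url = new URL("https://newsapi.org/v2/everything");
  url.searchParams.set("q", query);
  url.searchParams.set("sortBy", "publishedAt");
  url.searchParams.set("pageSize", "5");
  url.searchParams.set("apiKey", NEWS_API_KEY);

  const { articles } = await fetch(url).then((r) => r.json());

  // The coordinator passes these headlines back into the chat alongside
  // whatever the RAG sub-agent found in the documents.
  return articles.map(
    (a: { title: string; url: string }) => `${a.title} - ${a.url}`
  );
}

latestNews("data breach BDSG").then(console.log);
```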
Build a RAG agent with n8n, Qdrant & OpenAI
This template helps you create an intelligent document assistant that can answer questions from uploaded files. It shows a complete single-vector RAG (Retrieval-Augmented Generation) system that automatically processes documents, lets you chat with them in natural language and provides accurate, source-cited responses.

The workflow consists of two parts: the data loading pipeline and the RAG AI Agent that answers your questions based on the uploaded documents. To test this workflow, you can use the following example files in a shared Google Drive folder.

💡 Find more information on creating RAG AI agents in n8n on the official page.

🔗 Example files

The template uses the following example files in the Google Docs format:
- German Data Protection law: Bundesdatenschutzgesetz (BDSG)
- Computer Security Incident Handling Guide (NIST.SP.800-61r2)
- Berkshire Hathaway letter to shareholders from 2024

🚀 How to get started

1. Copy or import the template to your n8n instance.
2. Create your Google Drive credentials via the Google Cloud Console and add them to the trigger node "Detect New Files". A detailed walk-through can be found in the n8n docs.
3. Create a Qdrant API key and add it to the "Insert into Vector Store" node credentials. The API key will be displayed after you have logged into Qdrant and created a Cluster.
4. Create or activate your OpenAI API key.

1️⃣ Import your data and store it in a vector database

✅ Upload files to Google Drive. IMPORTANT: This template supports files in Google Docs format. New files will be downloaded in HTML format and converted to Markdown. This preserves the overall document structure and improves the quality of responses.
- Open the shared Google Drive folder
- Create a new folder on your Google Drive
- Activate the workflow
- Copy the files from the shared folder to your new folder
- The webhook will catch the added files and you will see the execution in your "Executions" tab

Note: If the webhook doesn't see the files you copied, try adding them to your Google Drive folder from the opened shared files via the Move to feature.

✅ Chunk, embed, and store your data with a connected OpenAI embedding model and Qdrant vector store. A Qdrant collection – vector storage for your data – will be created automatically after the n8n webhook has caught your data from Google Drive. You can name your collection in the "Insert into Vector Store" node.

2️⃣ Add retrieval capabilities and chat with your data

✅ Select the database with the imported data in the "Search Documents" sub-node of the AI Agent.
✅ Start a chat with your agent via the chat interface: it will retrieve data from the vector store and provide a response.

❓ You can ask the following questions based on the example files to test this workflow:
- What are the main steps in incident handling?
- What does Warren Buffett say about mistakes at Berkshire?
- What are the requirements for processing personal data?
- Do any documents mention data breach notification?

🌟 Adapt the workflow to your own use case

- Knowledge management: query company docs, policies, and procedures
- Research assistance: search through academic papers and reports
- Customer support: build agents that reference product documentation
- Legal/compliance: query contracts, regulations, and legal documents
- Personal productivity: chat with your notes, articles, and saved content

The workflow automatically detects new files, processes them into searchable vector chunks, and maintains conversation context. Just drop files in your Google Drive folder and start asking questions.
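For a feel of what the retrieval step does under the hood, here is a minimal standalone sketch (Node.js 18+): embed the question with OpenAI, then run a similarity search against the Qdrant collection. The URLs, keys, collection name ("n8n-rag") and embedding model are placeholders or assumptions; the template's nodes may be configured with different values.

```typescript
const OPENAI_KEY = "your-openai-api-key";                // placeholder
const QDRANT_URL = "https://your-cluster.qdrant.io:6333"; // placeholder
const QDRANT_KEY = "your-qdrant-api-key";                // placeholder

async function searchDocuments(question: string) {
  // 1. Turn the question into a vector with an OpenAI embedding model.
  const emb = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: question }),
  }).then((r) => r.json());

  // 2. Retrieve the closest document chunks from the Qdrant collection.
  const results = await fetch(
    `${QDRANT_URL}/collections/n8n-rag/points/search`,
    {
      method: "POST",
      headers: { "api-key": QDRANT_KEY, "Content-Type": "application/json" },
      body: JSON.stringify({
        vector: emb.data[0].embedding,
        limit: 4,            // top matching chunks
        with_payload: true,  // include the stored text for the agent
      }),
    }
  ).then((r) => r.json());

  return results.result; // the chunks the agent cites in its answer
}

searchDocuments("What are the main steps in incident handling?")
  .then(console.log);
```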
💻 📞 Get in touch with me if you want to customise this workflow or have any questions.