
Templates by Guillaume Duvernay

Create a speech-to-text API with OpenAI GPT-4o-mini Transcribe

Description

This template provides a simple and powerful backend for adding speech-to-text capabilities to any application. It creates a dedicated webhook that receives an audio file, transcribes it using OpenAI's gpt-4o-mini-transcribe model, and returns the clean text. To help you get started immediately, you'll find a complete, ready-to-use HTML code example right inside the workflow in a sticky note. This code creates a functional recording interface you can use for testing or as a foundation for your own design.

Who is this for?

- Developers: Quickly add a transcription feature to your application by calling this webhook from your existing frontend or backend code.
- No-code/low-code builders: Embed a functional audio recorder and transcription service into your projects by using the example code found inside the workflow.
- API enthusiasts: A lean, practical example of how to use n8n to wrap a service like OpenAI into your own secure and scalable API endpoint.

What problem does this solve?

- Provides a ready-made API: Instantly gives you a secure webhook to handle audio file uploads and transcription processing without any server setup.
- Decouples the frontend from the backend: Your application only needs to know about one simple webhook URL, so you can change the backend logic in n8n without touching your app's code.
- Offers a clear implementation pattern: The included example code is a working demonstration of how to send an audio file from a browser and handle the response, a pattern you can replicate in any framework.

How it works

This solution works by defining a clear API contract between your application (the client) and the n8n workflow (the backend).

The client-side technique:
- Your application's interface records or selects an audio file.
- It makes a POST request to the n8n webhook URL, sending the audio file as multipart/form-data.
- It waits for the response from the webhook, parses the JSON body, and extracts the value of the Transcript key.

You can see this exact pattern in action in the example code provided in the workflow's sticky note.

The n8n workflow (backend):
- The Webhook node catches the incoming POST request and grabs the audio file.
- The HTTP Request node sends this file to the OpenAI API.
- The Set node isolates the transcript text from the API's response.
- The Respond to Webhook node sends a clean JSON object ({"Transcript": "your text here..."}) back to your application.

Setup

Configure the n8n workflow:
- In the Transcribe with OpenAI node, add your OpenAI API credentials.
- Activate the workflow to enable the endpoint.
- Click the "Copy" button on the Webhook node to get your unique Production Webhook URL.

Integrate with the frontend:
- Inside the workflow, find the sticky note labeled "Example Frontend Code Below" and copy the complete HTML from the note below it.
- ⚠️ Important: In the code you just copied, find the line const WEBHOOK_URL = 'YOUR WEBHOOK URL'; and replace the placeholder with the Production Webhook URL from n8n.
- Save the code as an HTML file and open it in your browser to test.

Taking it further

- Save transcripts: Add an Airtable or Google Sheets node to log every transcript that comes through the workflow.
- Error handling: Enhance the workflow to catch potential errors from the OpenAI API and respond with a clear error message.
- Analyze the transcript: Add a Language Model node after the transcription step to summarize the text, classify its sentiment, or extract key entities before sending the response.
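The client-side technique can be sketched in a few lines of browser JavaScript. This is a minimal illustration of the pattern, not the exact code from the workflow's sticky note; the webhook URL and the `audio` field name are placeholder assumptions you would adapt to your setup:

```javascript
// Placeholder: replace with your n8n Production Webhook URL.
const WEBHOOK_URL = 'https://your-n8n-instance/webhook/transcribe';

// Send a recorded audio Blob to the webhook as multipart/form-data
// and return the transcribed text.
async function transcribe(audioBlob) {
  const form = new FormData();
  form.append('audio', audioBlob, 'recording.webm');
  const response = await fetch(WEBHOOK_URL, { method: 'POST', body: form });
  return extractTranscript(await response.json());
}

// The workflow responds with {"Transcript": "..."}; pull out the text.
function extractTranscript(body) {
  if (typeof body.Transcript !== 'string') {
    throw new Error('Unexpected response shape from webhook');
  }
  return body.Transcript;
}
```

The same two steps (multipart POST, then reading the Transcript key) carry over directly to any backend language or framework.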

By Guillaume Duvernay
3972

Create multi-step reasoning AI agents with GPT-4 and reusable thinking tools

Unlock a new level of sophistication for your AI agents with this template. While the native n8n Think Tool is great for giving an agent an internal monologue, it's limited to one instance. This workflow provides a clever solution: it uses a sub-workflow to create multiple, custom thinking tools, each with its own specific purpose.

This template provides the foundation for building agents that can plan, act, and then reflect on their actions before proceeding. Instead of just reacting, your agent can now follow a structured, multi-step reasoning process that you design, leading to more reliable and powerful automations.

Who is this for?

- AI and automation developers: Anyone looking to build complex, multi-tool agents that require robust logic and planning capabilities.
- LangChain enthusiasts: Users familiar with advanced agent concepts like ReAct (Reason-Act) will find this a practical way to implement similar frameworks in n8n.
- Problem solvers: If your current agent struggles with complex tasks, giving it distinct steps for planning and reflection can dramatically improve its performance.

What problem does this solve?

- Bypasses the single Think Tool limit: The core of this template is a technique that lets you add as many distinct thinking steps to your agent as you need.
- Enables complex reasoning: You can design a structured thought process for your agent, such as "Plan the entire process," "Execute Step 1," and "Reflect on the result," making it behave more intelligently.
- Improves agent reliability and debugging: By forcing the agent to write down its thoughts at different stages, you can easily see its line of reasoning, making it less prone to errors and much easier to debug when things go wrong.
- Provides a blueprint for sophisticated AI: This is not just a simple tool; it's a foundational framework for building state-of-the-art AI agents that can handle more nuanced, multi-step tasks.

How it works

- The reusable "thinking space": The magic of this template is a simple sub-workflow that does nothing but receive text. This workflow acts as a reusable "scratchpad."
- Creating custom thinking tools: In the main workflow, we use the Tool (Workflow) node to call this "scratchpad" sub-workflow multiple times, giving each of these tools a unique name (e.g., Initial thoughts, Additional thoughts).
- The power of descriptions: The key is the description you give each of these tool nodes. It tells the agent when and how to use that specific thinking step. For example, the Initial thoughts tool is described as the place to create a plan at the start of a task.
- Orchestration via system prompt: The main AI Agent's system prompt acts as the conductor, instructing the agent on the overall process and telling it about its new thinking abilities (e.g., "Always start by using the Initial thoughts tool to make a plan...").
- A practical example: This template includes two thinking tools to demonstrate a "Plan and Reflect" cycle, but you can add many more to fit your needs.

Setup

- Add your own "action" tools: This template provides the thinking framework; to make it useful, you need to give the agent something to do. Add your own tools to the AI Agent, such as a web search tool, a database lookup, or an API call.
- Customize the thinking tools: Edit the descriptions of the existing Initial thoughts and Additional thoughts tools so they are relevant to the new action tools you've added, for example: "Plan which of the web search or database tools to use."
- Update the agent's brain: Modify the system prompt in the main AI Agent node. Tell it about the new action tools you've added and how it should use your customized thinking tools to complete its tasks.
- Connect your AI model: Select the OpenAI Chat Model node and add your credentials.

Taking it further

- Create more granular thinking steps: Add more thinking tools for different stages of a process, like a "Hypothesize a solution" tool, a "Verify assumptions" tool, or a "Final answer check" tool.
- Customize the thought process: You can change how the agent thinks by editing the prompt inside the fromAI('Thoughts', ...) field within each tool. You could ask for thoughts in a specific format, like bullet points or a JSON object.
- Change the workflow trigger: Swap the chat trigger for a Telegram, email, or Slack trigger, whatever fits your use case.
- Integrate with memory: For even more power, combine this framework with a long-term memory solution, allowing the agent to reflect on its thoughts from past conversations.
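The core trick (one scratchpad reused under several names and descriptions) can be sketched outside n8n in a few lines. The tool names mirror the template; the object shape is an illustrative stand-in for what the Tool (Workflow) nodes configure:

```javascript
// The sub-workflow does nothing but receive and return text:
// a shared "scratchpad" the agent writes its reasoning into.
function scratchpad(thoughts) {
  return thoughts;
}

// Each thinking tool wraps the same scratchpad with its own name and,
// crucially, its own description telling the agent when to use it.
const thinkingTools = [
  {
    name: 'Initial thoughts',
    description: 'Use this first to write a step-by-step plan for the task.',
    invoke: scratchpad,
  },
  {
    name: 'Additional thoughts',
    description: 'Use this after acting to reflect on results before proceeding.',
    invoke: scratchpad,
  },
];
```

Because every tool shares one trivial handler, adding a third or fourth thinking step is just another entry in the list with a new description, which is exactly how the template sidesteps the single Think Tool limit.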

By Guillaume Duvernay
2137

Build an advanced multi-query RAG system with Supabase and GPT-5

Go beyond basic Retrieval-Augmented Generation (RAG) with this advanced template. While a simple RAG setup can answer straightforward questions, it often fails when faced with complex queries and can be polluted by irrelevant information. This workflow introduces a sophisticated architecture that empowers your AI agent to think and act like a true research assistant.

By decoupling the agent from the knowledge base with a smart sub-workflow, this template enables multi-query decomposition, relevance-based filtering, and an intermediate reasoning step. The result is an AI agent that can handle complex questions, filter out noise, and synthesize high-quality, comprehensive answers based on your data in Supabase.

Who is this for?

- AI and automation developers: Anyone building sophisticated Q&A bots, internal knowledge base assistants, or complex research agents.
- n8n power users: Users looking to push the boundaries of AI agents in n8n by implementing production-ready, robust architectural patterns.
- Anyone building a RAG system: This provides a superior architectural pattern that overcomes the common limitations of basic RAG setups, leading to dramatically better performance.

What problem does this solve?

- Handles complex questions: A standard RAG agent sends one query and gets one set of results. This agent is designed to break down a complex question like "How does natural selection work at the molecular, organismal, and population levels?" into multiple, targeted sub-queries, ensuring all facets of the question are answered.
- Prevents low-quality answers: A simple RAG agent can be fed irrelevant information if the semantic search returns low-quality matches. This workflow includes a crucial relevance-filtering step that discards any data chunks falling below a set similarity score, ensuring the agent only reasons with high-quality context.
- Improves answer quality and coherence: A dedicated "Think" tool gives the agent a private scratchpad to synthesize the information it has gathered from multiple queries. This intermediate reasoning step allows it to connect the dots and structure a more comprehensive and logical final answer.
- Gives you more control and flexibility: Because a sub-workflow handles data retrieval, you can add any custom logic you need (like filtering, formatting, or even calling other APIs) without complicating the main agent's design.

How it works

This template consists of a main agent workflow and a smart sub-workflow that handles knowledge retrieval.

- Multi-query decomposition: When you ask the AI Agent a complex question, its system prompt instructs it to first break the question down into an array of multiple, simpler sub-queries.
- Decoupling with a sub-workflow: The agent doesn't have direct access to the vector store. Instead, it calls a "Query knowledge base" tool, which is a sub-workflow, sending the entire array of sub-queries in a single tool call.
- Iterative retrieval and filtering (in the sub-workflow): The sub-workflow loops through each sub-query and queries your Supabase Vector Store. It then checks the similarity score of the returned data chunks and uses a Filter node to discard any that are not highly relevant (the default keeps chunks with a score > 0.4).
- Intermediate reasoning step: The sub-workflow returns all the high-quality, filtered information to the main agent. The agent is then instructed to use its Think tool to review this information, synthesize the key points, and structure a plan for its final, comprehensive answer.

Setup

Connect your accounts:
- Supabase: In the sub-workflow ("RAG sub-workflow"), connect your Supabase account to the Supabase Vector Store node and select your table.
- OpenAI: Connect your OpenAI account in two places: the Embeddings OpenAI node (in the sub-workflow) and the OpenAI Chat Model node (in the main workflow).

Then:
- Customize the agent's purpose: In the main workflow, edit the AI Agent's system prompt. Change the context from a "biology course" to whatever your knowledge base is about.
- Adjust the relevance filter: In the sub-workflow, change the 0.4 threshold in the Filter node to be more or less strict about the quality of the information you want the agent to use.
- Activate the workflow and start asking complex questions!

Taking it further

- Integrate different vector stores: The logic is decoupled, so you can easily swap the Supabase Vector Store node in the sub-workflow for a Pinecone, Weaviate, or any other vector store node without changing the main agent's logic.
- Add more tools: Give the main agent other capabilities, like a web search or a way to interact with your tech stack. The agent can then decide whether to use its internal knowledge base, search the web, or both, to answer a question.
- Better prompting: Refine the agent's system prompt to increase its capacity to provide high-quality answers by leveraging the provided chunks even more effectively.
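The sub-workflow's relevance-filtering step is easy to picture in code. This is a minimal sketch, not the Filter node's actual configuration; the chunk shape (`score`, `text`) is an illustrative assumption, while the 0.4 default mirrors the template:

```javascript
// Keep only retrieved chunks whose similarity score clears the threshold,
// so the agent never reasons over weak matches.
const RELEVANCE_THRESHOLD = 0.4;

function filterRelevantChunks(chunks, threshold = RELEVANCE_THRESHOLD) {
  return chunks.filter((chunk) => chunk.score > threshold);
}
```

Raising the threshold trades recall for precision: fewer chunks survive, but each one is more likely to actually answer its sub-query.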

By Guillaume Duvernay
1654

Sales prospect research & outreach preparation with Apollo, Linkup AI, and LinkedIn

This template transforms your sales and outreach process by automating deep, personalized research on any contact. Go beyond simple data enrichment; this workflow acts as an AI research assistant. Starting with just a name and company, it finds the person's professional profile, analyzes it through the lens of your specific business offering, and returns actionable insights to prepare for the perfect outreach.

Stop spending hours manually researching prospects. With this template, you get a synthesized report in seconds, highlighting a contact's potential pain points and exactly how your solution can provide value, setting the stage for more meaningful and effective conversations.

Who is this for?

- Sales development & business development reps (SDRs/BDRs): Drastically cut down on research time and increase the quality and personalization of your outreach efforts.
- Account executives: Prepare for meetings with a deep, relevant understanding of a prospect's background and potential needs.
- Founders & solopreneurs: Handle your own sales and lead generation efficiently by automating the research phase.
- Marketing teams: Power your Account-Based Marketing (ABM) campaigns with tailored insights for key accounts.

What problem does this solve?

- Eliminates time-consuming manual research: Automates the entire process of finding a person, reading their profile, and connecting the dots back to your business.
- Prevents generic outreach: Provides you with specific, synthesized talking points, moving you beyond "I saw your profile on LinkedIn" to a message that shows you've done your homework.
- Solves "writer's block": Delivers a clear summary of a prospect's potential challenges and how you can help, making it much easier to start writing a compelling message.
- Creates actionable intelligence, not just data: Instead of returning a list of job titles and skills, it synthesizes that information into strategic summaries ready to be used.

How it works

- Input contact details: The workflow is triggered by a form where you enter the first name, last name, and company of the person you want to research.
- Find the person with Apollo: The workflow uses the Apollo.io API to find the contact's professional data, including their verified LinkedIn profile URL.
- Define your business context: This is the "smart" part. The workflow injects information you provide about your offering and the typical pain points your customers face.
- Analyze the profile with Linkup: Using the Linkup API, the workflow reads the person's public LinkedIn profile. Crucially, it analyzes the profile through the lens of your business context.
- Get synthesized insights: Linkup's AI returns three structured summaries: a general overview of the person, their potential pain points relative to your business, and a concise explanation of how your offering could bring them value.
- Consolidate results: The final node gathers all the enriched data and AI-generated summaries into a single, clean output, ready for your CRM or next action.

Setup

- Define your business context (critical step): This is the most important part. In the Define our business context node, fill in the two fields: "Area for which the prospect could experience pain points" (describe the general problems your customers face) and "My offering" (briefly describe your product or service). This context is what makes the AI analysis relevant to you.
- Connect your accounts: Apollo: add your Apollo API key to the Enrich contact with Apollo HTTP node. Linkup: add your Linkup API key to the Find Linkedin profile information with Linkup HTTP node. Their free plan offers €5 of credits, enough for ~1,000 runs.
- Activate the workflow: Toggle the workflow to "Active". You can now run it by filling out the form trigger!

Taking it further

- Automate CRM enrichment: Connect the final Consolidate results node to a HubSpot, Attio, or Salesforce node to automatically save these rich insights to your contact records.
- Generate AI-powered outreach: Add an OpenAI node after this workflow to take the synthesized insights and generate a first draft of a personalized outreach email or LinkedIn message.
- Process leads in bulk: Replace the Form Trigger with a Google Sheets or Airtable trigger to run this enrichment process for an entire list of new leads automatically.
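The consolidation step amounts to merging Apollo's contact data with Linkup's three summaries into one record. This sketch uses illustrative field names, not the template's exact schema:

```javascript
// Merge Apollo enrichment data and Linkup's AI summaries into a single,
// CRM-ready record. Field names are hypothetical placeholders.
function consolidateResults(apolloContact, linkupInsights) {
  return {
    firstName: apolloContact.firstName,
    lastName: apolloContact.lastName,
    company: apolloContact.company,
    linkedinUrl: apolloContact.linkedinUrl,
    overview: linkupInsights.overview,             // who they are
    painPoints: linkupInsights.painPoints,         // relative to your business
    valueProposition: linkupInsights.valueProposition, // how you can help
  };
}
```

A flat record like this maps cleanly onto a HubSpot, Attio, or Salesforce contact update in the "Taking it further" step.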

By Guillaume Duvernay
1086

AI-powered news monitoring with Linkup, Airtable, and Slack notifications

This template provides a fully automated system for monitoring news on any topic you choose. It leverages Linkup's AI-powered web search to find recent, relevant articles, extracts key information like the title, date, and summary, and then neatly organizes everything in an Airtable base.

Stop manually searching for updates and let this workflow deliver a curated news digest directly to your own database, complete with a Slack notification to let you know when it's done. This is the perfect solution for staying informed without the repetitive work.

Who is this for?

- Marketing & PR professionals: Keep a close eye on industry trends, competitor mentions, and brand sentiment.
- Analysts & researchers: Effortlessly gather source material and data points on specific research topics.
- Business owners & entrepreneurs: Stay updated on market shifts, new technologies, and potential opportunities without dedicating hours to reading.
- Anyone with a passion project: Easily follow developments in your favorite hobby, field of study, or area of interest.

What problem does this solve?

- Eliminates manual searching: Frees you from the daily or weekly grind of searching multiple news sites for relevant articles.
- Centralizes information: Consolidates all relevant news into a single, organized, and easily accessible Airtable database.
- Provides structured data: Instead of just a list of links, it extracts and formats key information (title, summary, URL, date) for each article, ready for review or analysis.
- Keeps you proactively informed: The automated Slack notification ensures you know exactly when new information is ready, closing the loop on your monitoring process.

How it works

- Schedule: The workflow runs automatically based on a schedule you set (the default is weekly).
- Define topics: In the Set news parameters node, you specify the topics you want to monitor and the time frame (e.g., news from the last 7 days).
- AI web search: The Query Linkup for news node sends your topics to Linkup's API. Linkup's AI searches the web for relevant news articles and returns a structured list containing each article's title, URL, summary, and publication date.
- Store in Airtable: The workflow loops through each article found and creates a new record for it in your Airtable base.
- Notify on Slack: Once all the news has been stored, a final notification is sent to a Slack channel of your choice, letting you know the process is complete and how many articles were found.

Setup

- Configure the trigger: Adjust the Schedule Trigger node to set the frequency and time you want the workflow to run.
- Set your topics: In the Set news parameters node, replace the example topics with your own keywords and define the news freshness you'd like.
- Connect your accounts: Linkup: add your Linkup API key in the Query Linkup for news node; Linkup's free plan includes €5 of credits monthly, enough for about 1,000 runs of this workflow. Airtable: in the Store one news node, select your Airtable account, then choose the Base and Table where you want to save the news. Slack: in the Notify in Slack node, select your Slack account and the channel where you want to receive notifications.
- Activate the workflow: Toggle the workflow to "Active", and your automated news monitoring system is live!

Taking it further

- Change your database: Don't use Airtable? Easily swap the Airtable node for a Notion, Google Sheets, or any other database node to store your news.
- Customize notifications: Replace the Slack node with a Discord, Telegram, or Email node to get alerts on your preferred platform.
- Add AI analysis: Insert an AI node after the Linkup search to perform sentiment analysis on the news summaries, categorize articles, or generate a high-level overview before saving them.
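The store-and-notify steps boil down to two small transformations. This is a hedged sketch; the article shape and the Airtable column names are illustrative assumptions, not the template's exact schema:

```javascript
// Map Linkup's structured article list to one row per Airtable record.
function toAirtableRecords(articles) {
  return articles.map((a) => ({
    Title: a.title,
    URL: a.url,
    Summary: a.summary,
    'Publication date': a.date,
  }));
}

// Build the closing Slack message with the article count.
function slackSummary(articles) {
  return `News monitoring complete: ${articles.length} article(s) stored.`;
}
```

Swapping Airtable for Notion or Google Sheets, as suggested under "Taking it further", only changes the record-mapping half; the notification half stays the same.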

By Guillaume Duvernay
826

AI DJ: Text-to-Spotify playlist generator with Linkup and GPT-4

Stop manually searching for songs and let an AI DJ do the work for you. This template provides a complete, end-to-end system that transforms any text prompt into a ready-to-play Spotify playlist. It combines the creative understanding of a powerful AI Agent with the real-time web knowledge of Linkup to curate perfect, up-to-the-minute playlists for any occasion.

The experience is seamless: simply describe the vibe you're looking for in a web form, and the workflow will automatically create the playlist in your Spotify account and redirect you straight to it. Whether you need "upbeat funk for a sunny afternoon" or "moody electronic tracks for late-night coding," your personal AI DJ is ready to deliver.

Who is this for?

- Music lovers: Create hyper-specific playlists for any mood, activity, or niche genre without the hassle of manual searching.
- DJs & event planners: Quickly generate themed playlists for parties, weddings, or corporate events based on a simple brief.
- Content creators: Easily create companion playlists for your podcasts, videos, or articles to share with your audience.
- n8n developers: A powerful example of how to build an AI agent that uses an external web-search tool to accomplish a creative task.

What problem does this solve?

- Creates up-to-date playlists: A standard AI doesn't know about music released yesterday. By using Linkup's live web search, this workflow can find and include the very latest tracks.
- Automates the entire creation process: It handles everything from understanding a vague prompt (like "songs that feel like a summer road trip") to creating a fully populated Spotify playlist.
- Saves time and effort: It completely eliminates the tedious task of searching for individual tracks, checking for relevance, and manually adding them to a playlist one by one.
- Provides a seamless user experience: The workflow begins with a simple form and ends by automatically opening the finished playlist in your browser. There are no intermediate steps for you to manage.

How it works

- Submit your playlist idea: You describe the playlist you want and the desired number of tracks in a simple, Spotify-themed web form.
- The AI DJ plans the search: An AI Agent (acting as your personal DJ) analyzes your request and intelligently formulates a specific query to find the best music.
- Web research with Linkup: The agent uses its Linkup web-search tool to find artists and tracks from across the web that match your request, returning a list of high-quality suggestions.
- The AI DJ curates the list: The agent reviews the search results and finalizes the tracklist and a creative name for your playlist.
- Build the playlist in Spotify: The workflow takes the agent's final list, creates a new public playlist in your Spotify account, then searches for each individual track to get its ID and adds them all.
- Instant redirection: As soon as the last track is added, the workflow automatically redirects your browser to the newly created playlist on Spotify, ready to be played.

Setup

- Connect your accounts. You will need to add your credentials for: Spotify (in the Spotify nodes); Linkup (in the Web query to find tracks HTTP Request Tool node; Linkup's free plan is very generous!); and your AI provider, e.g. OpenAI (in the OpenAI Chat Model node).
- Activate the workflow: Toggle the workflow to "Active."
- Use the form: Open the URL from the On form submission trigger and start creating playlists!

Taking it further

- Change the trigger: Instead of a form, trigger the playlist creation from a Telegram message, a Discord bot command, or even a webhook from another application.
- Create collaborative playlists: Set up a workflow where multiple people can submit song ideas, then have a final AI step consolidate all the requests into a single, cohesive prompt to generate the ultimate group playlist.
- Optimize for speed: The Web query to find tracks node is set to deep search mode for the highest-quality results. You can change this to standard mode for faster and cheaper (but potentially less thorough) playlist creation.
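The playlist-building step (search each suggested track for its Spotify ID, skip anything not found, then add the rest) can be sketched as a small pure function. The `searchTrack` callback is a stand-in for the Spotify search call n8n performs, injected here so the logic stays self-contained:

```javascript
// Resolve the AI DJ's suggested tracks to Spotify track IDs.
// searchTrack(query) is assumed to return a track ID string, or null
// when Spotify has no match; unmatched suggestions are dropped.
function resolveTrackIds(tracks, searchTrack) {
  return tracks
    .map((t) => searchTrack(`${t.artist} ${t.title}`))
    .filter((id) => id !== null);
}
```

Filtering out nulls before the add step is what keeps one obscure or misremembered suggestion from breaking the whole playlist run.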

By Guillaume Duvernay
711

Create fact-based articles from your knowledge sources with Super RAG and GPT-5

Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Super assistant, which you've connected to your own trusted knowledge sources like Notion, Google Drive, or PDFs, to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

Who is this for?

- Content marketers & SEO specialists: Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- Technical writers & subject matter experts: Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- Marketing agencies: Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

What problem does this solve?

- Reduces AI "hallucinations": By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- Ensures comprehensive topic coverage: The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- Automates source citation: The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- Scales expert content creation: It effectively mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

How it works

This workflow follows a sophisticated, multi-step process to ensure the highest-quality output:

- Decomposition: You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
- Fact-based research (RAG): The workflow loops through each of these sub-questions and queries your Super assistant. This assistant, which you have pre-configured and connected to your own knowledge sources (Notion pages, Google Drive folders, PDFs, etc.), finds the relevant information and source links for each point.
- Consolidation: All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
- Final article generation: This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article using only the provided information, and integrate the source links as hyperlinks where appropriate.

Implementing the template

- Set up your Super assistant (prerequisite): First, go to Super, create an assistant, connect it to your knowledge sources (Notion, Drive, etc.), and copy its Assistant ID and your API Token.
- Configure the workflow: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes (GPT 5 mini and GPT 5 chat). In the Query Super Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Super API Token for authentication (we recommend using a Bearer Token credential).
- Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

Taking it further

- Automate publishing: Connect the final Article result node to a Webflow or WordPress node to automatically create a draft post in your CMS.
- Generate content in bulk: Replace the Form Trigger with an Airtable or Google Sheets trigger to automatically generate a whole batch of articles from your content calendar.
- Customize the writing style: Tweak the system prompt in the final New content - Generate the AI output node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
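The consolidation step, compiling each sub-question and its retrieved answer (with source links) into one research brief for the final writer, can be sketched as a single formatting function. The plain-text layout is illustrative; the actual workflow's brief format may differ:

```javascript
// Compile question/answer pairs from the Super assistant into one
// research brief. Each pair carries its source URLs so the final AI
// writer can integrate them as hyperlinks.
function buildResearchBrief(qaPairs) {
  return qaPairs
    .map(
      (p, i) =>
        `## ${i + 1}. ${p.question}\n${p.answer}\nSources: ${p.sources.join(', ')}`
    )
    .join('\n\n');
}
```

Keeping the sources attached to each answer, rather than in one pooled list, is what lets the writer cite the right link next to the right claim.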

By Guillaume Duvernay
493

Build an intelligent Q&A bot with Lookio Knowledge Base and GPT

Build a powerful AI chatbot that provides precise answers from your own company's knowledge base. This template provides a smart AI agent that connects to Lookio, a platform where you can easily upload your documents (from Notion, Jira, Slack, etc.) to create a dedicated knowledge source. What makes this agent "smart" is its efficiency. It's configured to handle simple greetings and small talk on its own, only using its powerful (and paid) knowledge retrieval tool when a user asks a genuine question. This cost-saving logic makes it perfect for building production-ready internal helpdesks, customer support bots, or any application where you need accurate, source-based answers.

Who is this for?

- Customer support teams: Build internal bots that help agents find answers instantly from your support documentation and knowledge bases.
- Product & engineering teams: Create a chatbot that can answer technical questions based on your product documentation or internal wikis.
- HR departments: Deploy an internal assistant that can answer employee questions based on company handbooks, policies, and procedures.
- Any business with a knowledge base: Provide an interactive, conversational way for employees or customers to access information locked away in your documents.

What problem does this solve?

- Provides accurate, grounded answers: Ensures the AI agent's responses are based on your trusted, private documents, not the open internet, which prevents factual errors and "hallucinations."
- Makes your knowledge accessible: Transforms your static documents and knowledge bases into an interactive, 24/7 conversational resource.
- Optimizes for cost and efficiency: The agent is intelligent enough to handle simple small talk without making unnecessary API calls to your knowledge base, saving you credits and money.
- Simplifies RAG setup: Provides a ready-to-use template for a common RAG (Retrieval-Augmented Generation) pattern, with the complexities of document management and retrieval handled by the Lookio platform.

How it works

- First, build your knowledge base in Lookio: The process starts on the Lookio platform. You upload your documents (from Notion, Jira, PDFs, etc.) and create an "assistant," which becomes your secure, queryable knowledge base.
- A user asks a question: The n8n workflow begins when a user sends a message via the Chat Trigger.
- The agent makes a decision: The AI Knowledge Agent, guided by its system prompt, analyzes the user's message. If it's a simple greeting like "hi," it will respond directly. If it's a substantive question that requires specific knowledge, it decides to use its "Query knowledge base" tool.
- Query the Lookio knowledge base: The agent passes the user's question to the HTTP Request Tool. This tool securely calls the Lookio API with your specific Assistant ID and API key.
- Deliver the fact-based answer: Lookio searches your documents, synthesizes a precise answer, and sends it back to the workflow. The n8n agent then presents this answer to the user in the chat interface.

Architectural Approaches to RAG in n8n with Lookio

From a workflow perspective, integrating RAG natively in n8n involves orchestrating multiple nodes for data handling, embedding, and vector searches. This method provides high visibility and control over each step. An alternative architectural pattern is to use an external RAG service like Lookio, which consolidates these steps into a single HTTP Request node. This simplifies the workflow's structure by abstracting the multi-stage RAG process into one API endpoint.

Setup

- Set up your Lookio assistant (prerequisite): First, go to Lookio, sign up (you get 50 free credits), create an assistant with your documents, and from your settings, copy your API Key and Assistant ID.
- Configure the Lookio tool: In the Query knowledge base (HTTP Request Tool) node, replace the <your-assistant-id> placeholder with your actual Assistant ID and the <your-lookio-api-key> placeholder with your actual API Key.
- Connect your AI model: In the OpenAI Chat Model node, connect your AI provider credentials.
- Activate the workflow: Your smart knowledge base agent is now live and ready to chat!

Taking it further

- Adjust retrieval quality: In the Query knowledge base node, you can change the query_mode from flash (fastest) to deep for higher-quality but slightly slower answers, depending on your needs.
- Add more tools: Enhance your agent by giving it other tools, like a web search for when the internal knowledge base doesn't have an answer, or a calculator for performing computations.
- Deploy it anywhere: Swap the Chat Trigger for a Slack or Discord trigger to deploy your agent right where your team works.
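Under the hood, the "Query knowledge base" tool is a single authenticated POST request. The sketch below shows the general shape of that call; the endpoint URL and field names here are illustrative assumptions rather than Lookio's documented API, so copy the real values from the preconfigured node in the template.

```python
# Sketch of the HTTP call behind the "Query knowledge base" tool.
# NOTE: the endpoint URL and field names are illustrative assumptions;
# take the real values from the node shipped with the template.

LOOKIO_ENDPOINT = "https://api.lookio.app/query"  # hypothetical endpoint

def build_lookio_request(question: str, assistant_id: str, api_key: str,
                         query_mode: str = "flash"):
    """Assemble the JSON body and headers for one knowledge-base query."""
    body = {
        "assistant_id": assistant_id,  # replaces <your-assistant-id>
        "query": question,
        "query_mode": query_mode,      # "flash" (fastest) or "deep" (higher quality)
    }
    headers = {
        "Authorization": f"Bearer {api_key}",  # replaces <your-lookio-api-key>
        "Content-Type": "application/json",
    }
    return body, headers

# Example: the agent hands the user's question to the tool.
body, headers = build_lookio_request(
    "What is our refund policy?", "asst_demo", "sk_demo", query_mode="deep")
```

Sending the request is then one `requests.post(LOOKIO_ENDPOINT, json=body, headers=headers)` call; the agent only ever sees the synthesized answer in the response.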

By Guillaume Duvernay
378

Dynamic AI web researcher: From plain text to custom CSV with GPT-4 and Linkup

This template introduces a revolutionary approach to automated web research. Instead of a rigid workflow that can only find one type of information, this system uses a "thinker" and "doer" AI architecture. It dynamically interprets your plain-English research request, designs a custom spreadsheet (CSV) with the perfect columns for your goal, and then deploys a web-scraping AI to fill it out. It's like having an expert research assistant who not only finds the data you need but also builds the perfect container for it on the fly. Whether you're looking for sales leads, competitor data, or market trends, this workflow adapts to your request and delivers a perfectly structured, ready-to-use dataset every time.

Who is this for?

- Sales & marketing teams: Generate targeted lead lists, compile competitor analysis, or gather market intelligence with a simple text prompt.
- Researchers & analysts: Quickly gather and structure data from the web for any topic without needing to write custom scrapers.
- Entrepreneurs & business owners: Perform rapid market research to validate ideas, find suppliers, or identify opportunities.
- Anyone who needs structured data: Transform unstructured, natural language requests into clean, organized spreadsheets.

What problem does this solve?

- Eliminates rigid, single-purpose workflows: This workflow isn't hardcoded to find just one thing. It dynamically adapts its entire research plan and data structure based on your request.
- Automates the entire research process: It handles everything from understanding the goal and planning the research to executing the web search and structuring the final data.
- Bridges the gap between questions and data: It translates your high-level goal (e.g., "I need sales leads") into a concrete, structured spreadsheet with all the necessary columns (Company Name, Website, Key Contacts, etc.).
- Optimizes for cost and efficiency: It intelligently uses a combination of deep-dive and standard web searches from Linkup.so to gather high-quality initial results and then enrich them cost-effectively.

How it works (The "Thinker & Doer" Method)

The process is cleverly split into two main phases:

- The "Thinker" (AI Planner): You submit a research request via the built-in form (e.g., "Find 50 US-based fashion companies for a sales outreach campaign"). The first AI node acts as the "thinker." It analyzes your request and determines the optimal structure for your final spreadsheet. It dynamically generates a plan, which includes a discoveryQuery to find the initial list, an enrichmentQuery to get details for each item, and the JSON schemas that define the exact columns for your CSV.
- The "Doer" (AI Researcher): The rest of the workflow is the "doer," which executes the plan.
  - Discovery: It uses a powerful "deep search" with Linkup.so to execute the discoveryQuery and find the initial list of items (e.g., the 50 fashion companies).
  - Enrichment: It then loops through each item in the list. For each one, it performs a fast and cost-effective "standard search" with Linkup to execute the enrichmentQuery, filling in all the detailed columns defined by the "thinker."
  - Final output: The workflow consolidates all the enriched data and converts it into a final CSV file, ready for download or further processing.

Setup

- Connect your AI provider: In the OpenAI Chat Model node, add your AI provider's credentials.
- Connect your Linkup account: In the two Linkup (HTTP Request) nodes, add your Linkup API key (free account at linkup.so). We recommend creating a "Generic Credential" of type "Bearer Token" for this. Linkup offers €5 of free credits monthly, which is enough for 1k standard searches or 100 deep queries.
- Activate the workflow: Toggle the workflow to "Active." You can now use the form to submit your first research request!

Taking it further

- Add a custom dashboard: Replace the form trigger and final CSV output with a more polished user experience. For example, build a simple web app where users can submit requests and download their completed research files.
- Make it company-aware: Modify the "thinker" AI's prompt to include context about your company. This will allow it to generate research plans that are automatically tailored to finding leads or data relevant to your specific products and services.
- Add an AI summary layer: After the CSV is generated, add a final AI node to read the entire file and produce a high-level summary, such as "Here are the top 5 leads to contact first and why," turning the raw data into an instant, actionable report.
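To make the "thinker" phase concrete, here is a sketch of the kind of plan object the planner AI emits, together with a small sanity check such as the "doer" phase could run before spending search credits. The discoveryQuery and enrichmentQuery field names come from the template's description; the schema contents and the validator are invented for illustration.

```python
# A hypothetical plan for "Find 50 US-based fashion companies for a
# sales outreach campaign" -- illustrative values only.
example_plan = {
    "discoveryQuery": "List 50 US-based fashion companies suitable for B2B outreach",
    "enrichmentQuery": "For {company}, find its website, HQ city, and a key contact",
    "discoverySchema": {            # columns of the initial list
        "type": "array",
        "items": {"type": "object",
                  "properties": {"Company Name": {"type": "string"}}},
    },
    "enrichmentSchema": {           # extra columns filled in per row
        "type": "object",
        "properties": {"Website": {"type": "string"},
                       "Key Contact": {"type": "string"}},
    },
}

REQUIRED_KEYS = ("discoveryQuery", "enrichmentQuery",
                 "discoverySchema", "enrichmentSchema")

def validate_plan(plan: dict) -> bool:
    """Cheap structural check: every required key is a non-empty str/dict."""
    return all(isinstance(plan.get(k), (str, dict)) and plan.get(k)
               for k in REQUIRED_KEYS)
```

A check like this is worth running between the "thinker" and "doer" phases, since a malformed plan would otherwise fail only after the deep search has already been paid for.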

By Guillaume Duvernay
325

Create fact-based articles from knowledge sources with Lookio and OpenAI GPT

Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant—which you've connected to your own trusted knowledge base of uploaded documents—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

Who is this for?

- Content marketers & SEO specialists: Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- Technical writers & subject matter experts: Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- Marketing agencies: Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

What problem does this solve?

- Reduces AI "hallucinations": By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- Ensures comprehensive topic coverage: The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- Automates source citation: The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- Scales expert content creation: It effectively mimics the workflow of a human expert (outline, research, consolidate, write) but in an automated, scalable, and incredibly fast way.

How it works

This workflow follows a sophisticated, multi-step process to ensure the highest quality output:

- Decomposition: You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
- Fact-based research (RAG): The workflow loops through each of these sub-questions and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
- Consolidation: All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
- Final article generation: This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

Building your own RAG pipeline vs. using Lookio or alternative tools

Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API. This approach abstracts the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

Implementing the template

- Set up your Lookio assistant (prerequisite): Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base. First, sign up at Lookio; you'll get 50 free credits to get started. Upload the documents you want to use as your knowledge base, create a new assistant, and then generate an API key. Copy your Assistant ID and your API Key for the next step.
- Configure the workflow: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes. In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend using a Bearer Token credential).
- Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

Taking it further

- Automate publishing: Connect the final Article result node to a Webflow or WordPress node to automatically create a draft post in your CMS.
- Generate content in bulk: Replace the Form Trigger with an Airtable or Google Sheets trigger to automatically generate a whole batch of articles from your content calendar.
- Customize the writing style: Tweak the system prompt in the final New content - Generate the AI output node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
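The consolidation step of this research-first pattern is essentially string assembly: each sub-question's answer and its source links are stitched into one brief for the final writer. A minimal sketch of that step (the exact formatting inside the template may differ):

```python
def build_research_brief(qa_pairs):
    """Compile (question, answer, sources) tuples into one research brief.

    The final AI writer is instructed to use only this brief and to keep
    the source URLs so it can integrate them as hyperlinks.
    """
    sections = []
    for question, answer, sources in qa_pairs:
        links = ", ".join(sources) if sources else "no source found"
        sections.append(f"Q: {question}\nA: {answer}\nSources: {links}")
    return "\n\n".join(sections)

# Two illustrative sub-question results, as the Lookio loop might return them.
brief = build_research_brief([
    ("What is RAG?",
     "Retrieval-Augmented Generation grounds an LLM in retrieved documents.",
     ["https://example.com/rag-guide"]),
    ("Why cite sources?", "Links let readers verify claims.", []),
])
```

Keeping the "no source found" marker (an illustrative choice here) gives the writer an explicit signal to hedge or skip a point rather than invent a citation.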

By Guillaume Duvernay
294

Run bulk RAG queries from CSV with Lookio

This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named Query, and the workflow will loop through every row, call the Lookio API, and append a Response column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

Who is this for?

- Knowledge managers & technical writers: Produce draft answers to large question sets using your company docs.
- Sales & proposal teams: Auto-generate RFP answer drafts informed by internal docs.
- Support & operations teams: Bulk-enrich FAQs or support ticket templates with authoritative responses.
- Automation builders: Integrate Lookio-powered retrieval into bulk data pipelines.

What it does / What problem does this solve?

- Automates bulk queries: Eliminates the manual process of running many individual lookups.
- Ensures answers are grounded: Responses come from your uploaded documents via Lookio, reducing hallucinations.
- Produces ready-to-use output: Delivers an enriched CSV with a new Response column for downstream use.
- Simple UX: Users only need to upload a CSV with a Query column and download the resulting file.

How it works

- Form submission: The user uploads a CSV via the Form Trigger.
- Extract & validate: Extract all rows reads the CSV, and Aggregate rows checks for a Query column.
- Per-row loop: Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
- Call Lookio: Lookio API call posts each query to your assistant and returns the answer.
- Build output: Prepare output appends the Response values, and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.

Why use Lookio for high-quality RAG?

While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio is designed to address these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline—from processing various document formats to employing advanced retrieval techniques—allowing you to integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n, rather than managing the complexities of a RAG infrastructure.

How to set up

- Create a Lookio assistant: Sign up at https://www.lookio.app/, upload documents, and create an assistant.
- Get credentials: Copy your Lookio API Key and Assistant ID.
- Configure the workflow nodes: In the Lookio API call HTTP Request node, replace the apikey header value with your Lookio API Key and update assistantid with your Assistant ID (replace placeholders like <your-lookio-api-key> and <your-assistant-id>). Ensure the Form Trigger is enabled and accepts a .csv file.
- CSV format: Ensure the input CSV has a column named Query (case-sensitive as configured).
- Activate the workflow: Run a test upload and download the enriched CSV.

Requirements

- An n8n instance with the ability to host Forms and run workflows
- A Lookio account (API Key) and an Assistant ID

How to take it further

- Add rate limiting / retries: Insert error-handling and delay nodes to respect API limits for large batches.
- Improve the speed: You can drastically reduce the processing time by parallelizing the queries instead of running them one after the other in the loop. For that, use HTTP Request nodes that trigger a sub-workflow for each query.
- Store results: Add an Airtable or Google Sheets node to archive questions and responses for audit and reuse.
- Post-process answers: Add an LLM node to summarize or standardize responses, or to add confidence flags.
- Trigger variations: Replace the Form Trigger with a Google Drive or Airtable trigger to process CSVs automatically from a folder or table.
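The heart of this workflow, reading the CSV, requiring a Query column, and appending a Response per row, can be sketched in a few lines. Here the Lookio call is stubbed out as a pluggable ask function so the shape of the transformation is clear; the real template does the same thing with its Extract, Loop, and HTTP Request nodes:

```python
import csv
import io

def enrich_csv(csv_text: str, ask) -> str:
    """Return csv_text with a 'Response' column appended to every row.

    `ask` stands in for the Lookio API call: it takes a query string and
    returns the assistant's answer.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    fields = reader.fieldnames or []
    if "Query" not in fields:  # the column name is case-sensitive
        raise ValueError("CSV must contain a 'Query' column")
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields + ["Response"])
    writer.writeheader()
    for row in reader:
        row["Response"] = ask(row["Query"])  # one lookup per row
        writer.writerow(row)
    return out.getvalue()

# Usage with a fake assistant, just to show the input/output contract.
enriched = enrich_csv("Query\nWhat is our SLA?\n",
                      lambda q: f"[answer to: {q}]")
```

The sequential loop mirrors the template; the "Improve the speed" tip above corresponds to replacing the for loop with something like `concurrent.futures.ThreadPoolExecutor().map(ask, queries)` so the per-row lookups run in parallel.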

By Guillaume Duvernay
293

Create dual-source expert articles with internal knowledge and web research using Lookio, Linkup, and GPT-5

Create truly authoritative articles that blend your unique, internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility. Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

👥 Who is this for?

- Content Marketers & SEO Specialists: Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- Technical Writers & Subject Matter Experts: Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- Marketing Agencies: Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

---

💡 What problem does this solve?

- The Best of Both Worlds: Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- Minimizes AI "Hallucinations": Grounds the AI writer in two distinct sets of factual, source-based information—your internal documents and credible web pages—dramatically reducing the risk of invented facts.
- Maximizes Credibility: Automates the inclusion of source links from both your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- Ensures Comprehensive Coverage: The AI-powered "topic breakdown" ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- Fully Automates an Expert Workflow: Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

---

⚙️ How it works

This workflow orchestrates a sophisticated, multi-step "Plan, Dual-Research, Write" process:

- Plan (Decomposition): You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
- Dual Research (Knowledge Base + Web Search): The workflow loops through each sub-question and performs two research actions in parallel. It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents, and it queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
- Consolidate (Brief Creation): All the retrieved information—internal and external—is compiled into a single, comprehensive research brief for each sub-question.
- Write (Final Generation): The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.

---

🛠️ Setup

- Set up your Lookio assistant: Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend a Bearer Token credential).
- Connect your Linkup account: In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
- Connect your AI provider: Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
- Activate the workflow: Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

---

🚀 Taking it further

- Automate Publishing: Connect the final Article result node to a Webflow or WordPress node to automatically create draft posts in your CMS.
- Generate Content in Bulk: Replace the Form Trigger with an Airtable or Google Sheets trigger to generate a batch of articles from your content calendar.
- Customize the Writing Style: Tweak the system prompt in the final New content - Generate the AI output node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
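The dual-research step fans each sub-question out to both sources at once and merges the results into one entry for the brief. A sketch with the two services stubbed out as callables, since the template itself does this with parallel workflow branches and HTTP Request nodes rather than Python:

```python
from concurrent.futures import ThreadPoolExecutor

def dual_research(question, query_lookio, query_linkup):
    """Query the internal knowledge base and the live web in parallel,
    returning one merged research entry for the super-brief."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        internal = pool.submit(query_lookio, question)  # your documents
        external = pool.submit(query_linkup, question)  # live web search
        return {
            "question": question,
            "internal": internal.result(),  # answer + doc links from Lookio
            "external": external.result(),  # insights + URLs from Linkup
        }

# Usage with stand-in services, to show the merged shape.
entry = dual_research(
    "What changed in our pricing this year?",
    query_lookio=lambda q: "Internal: see pricing doc v3",
    query_linkup=lambda q: "Web: competitor pricing roundup",
)
```

Keeping the internal and external results in separate fields, as sketched here, is what lets the final writer (or a tweaked system prompt) prioritize one source type over the other.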

By Guillaume Duvernay
252