
Build a PDF-based RAG system with OpenAI, Pinecone and Cohere reranking

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow provides a complete, ready-to-use template for a Retrieval-Augmented Generation (RAG) system. It allows you to build a powerful AI chatbot that answers questions based on the content of PDF documents you provide, using a modern stack for optimal performance.

Good to know

- Costs: This workflow uses paid services (OpenAI, Pinecone, Cohere). Costs are incurred based on your usage; please review the pricing pages for each service to understand the potential expenses.
- Video tutorial (Bahasa Indonesia): For a step-by-step guide to how this workflow functions, you can watch the accompanying video tutorial: N8N Tutorial: Membangun Chatbot RAG dengan Pinecone, OpenAI, & Cohere

How it works

This workflow operates in two distinct stages:

1. Data ingestion & indexing: It begins when a .pdf file is uploaded via the n8n Form Trigger. The Default Data Loader node processes the PDF, and the Recursive Character Text Splitter breaks the content into smaller, manageable chunks. The Embeddings OpenAI node converts these text chunks into vector embeddings (numerical representations). Finally, the Pinecone Vector Store node upserts the embeddings into your specified Pinecone index, creating a searchable knowledge base.
2. Conversational AI agent: A user sends a message through the Chat Trigger. The AI Agent receives the message and uses its VectorDB tool to search the Pinecone index for relevant information. The Reranker Cohere node refines these search results, ensuring only the most relevant context is selected. The user's original question and the refined context are sent to the OpenAI Chat Model (gpt-4.1), which generates a helpful, context-aware answer. The Simple Memory node maintains conversation history, allowing for natural, multi-turn dialogues.
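The chunking step in the ingestion stage can be illustrated outside n8n. Below is a minimal Python sketch of a recursive character splitter in the spirit of the Recursive Character Text Splitter node; the chunk size and separator list are illustrative assumptions, not the node's exact defaults:

```python
def recursive_split(text, chunk_size=200, separators=("\n\n", "\n", " ", "")):
    """Split text into chunks of at most chunk_size characters, preferring
    the earliest separator that keeps pieces small enough (paragraphs first,
    then lines, then words, then raw characters)."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep = separators[0]
    rest = separators[1:] if len(separators) > 1 else separators
    parts = text.split(sep) if sep else list(text)
    chunks, current = [], ""
    for part in parts:
        candidate = current + sep + part if current else part
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            if len(part) > chunk_size:
                # A single piece is still too big: recurse with finer separators.
                chunks.extend(recursive_split(part, chunk_size, rest))
                current = ""
            else:
                current = part
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk is what the Embeddings OpenAI node would turn into a vector before the Pinecone upsert.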
How to use

Using this workflow is a two-step process:

1. Populate the knowledge base: First, you need to add documents. Trigger the workflow by using the Form Trigger and uploading a PDF file, then wait for the execution to complete. You can do this for multiple documents.
2. Start chatting: Once your data has been ingested, open the Chat Trigger's interface and start asking questions related to the content of your uploaded documents.

The Form Trigger is just an example. Feel free to replace it with other triggers, such as a node that watches a Google Drive or Dropbox folder for new files.

Requirements

To run this workflow, you will need active accounts and API keys for the following services:

- OpenAI account & API key: Powers text embedding and the final chat generation. Required for the Embeddings OpenAI and OpenAI Chat Model nodes.
- Pinecone account & API key: Used to store and retrieve your vector knowledge base. Required for the Pinecone Vector Store and VectorDB nodes. You also need to provide your Pinecone environment.
- Cohere account & API key: Improves the accuracy of your chatbot by re-ranking search results for relevance. Required for the Reranker Cohere node.

Customising this workflow

This template is a great starting point. Here are a few ways you can customise it:

- Change the AI personality: Edit the System Message in the AI Agent node to change the bot's behaviour, tone, or instructions.
- Use different models: You can easily swap the OpenAI model for another one (e.g., gpt-3.5-turbo for lower costs) in the OpenAI Chat Model node.
- Adjust retrieval: In the VectorDB tool node, modify the Top K parameter to retrieve more or fewer document chunks to use as context.
- Automate ingestion: Replace the manual Form Trigger with an automated one, such as a node that triggers whenever a new file is added to a specific cloud storage folder.
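What the Top K parameter controls can be shown with a toy in-memory version of the vector search; Pinecone performs the equivalent at scale over your stored embeddings. This is a sketch, not Pinecone's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, index, k=3):
    """index: list of (chunk_text, embedding) pairs, as upserted during
    ingestion. Returns the k chunks most similar to the query embedding;
    raising or lowering k is exactly what the Top K setting does."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]
```

In the full workflow, the Reranker Cohere node would then re-order these k candidates before they reach the chat model.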

By Aji Prakoso

Transform images with AI image editor using FLUX.1 Kontext

For who?

- Content creators
- YouTube automation
- Marketing teams

How it works

1 - Retrieve the base image, image description, and situation from Airtable
2 - Generate the image prompt
3 - Generate the image via Fal AI
4 - Verify that the image was generated
5 - Upload the image to Airtable

📺 YouTube video tutorial: https://www.youtube.com/watch?v=0SVj70-dA0Q

Setup

Input: The first part of the workflow can be replaced with anything else. As input you need a prompt and the base image URL (publicly available).

Output: In this workflow, the output is stored as an image on Airtable, but you can replace that with anything else. You have two options:

- Store the generated image somewhere: Keep everything as is and replace the last Airtable node with the third party you want to use.
- Use the image directly in n8n: In the "Generate Image" HTTP Request node, switch sync_mode to "true", remove all the following nodes, and add an "Extract from File" node (convert to Base64 string).

APIs: For the following third-party integrations, replace ==[YOURAPITOKEN]== with your API token or connect your account via client ID/secret to your n8n instance:

- Fal AI (FLUX KONTEXT MAX): https://fal.ai/models/fal-ai/flux-pro/kontext/max/apischema-input
- Airtable: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.airtable/?utmsource=n8napp&utmmedium=nodesettingsmodal-credentiallink&utm_campaign=n8n-nodes-base.airtable
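The "Generate Image" HTTP Request step above sends a JSON body to the Fal AI endpoint. A minimal sketch of assembling that body in Python follows; the field names (`prompt`, `image_url`, `sync_mode`) are assumptions based on the Kontext schema linked above, so verify them against the schema before use:

```python
def build_kontext_request(prompt, image_url, sync_mode=False):
    """Assemble the JSON body for the 'Generate Image' HTTP Request node.
    Field names are assumptions drawn from the Fal AI Kontext schema."""
    if not image_url.startswith(("http://", "https://")):
        # The base image must be publicly reachable, as noted in Setup.
        raise ValueError("image_url must be a publicly available URL")
    return {
        "prompt": prompt,
        "image_url": image_url,
        # sync_mode=True corresponds to the 'use the image directly in n8n'
        # option described above: the response carries the image inline
        # instead of a hosted URL.
        "sync_mode": sync_mode,
    }
```

With `sync_mode=False` you keep the polling/verification nodes; with `sync_mode=True` you would remove them and decode the inline result with an "Extract from File" node.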

By Nasser

Generate & publish AI videos with Sora 2, Veo 3.1, Gemini & Blotato

Overview

This workflow automatically generates short-form AI videos using both OpenAI Sora 2 Pro and Google Veo 3.1, enhances your idea with Google Gemini, and publishes content across multiple platforms through Blotato. It's perfect for creators, brands, UGC teams, and anyone building a high-frequency AI video pipeline. You can turn a single text idea into fully rendered videos, compare outputs from multiple AI models, and publish everywhere in one automated flow.

---

Good to know

- Generating Sora or Veo videos may incur API costs depending on your provider.
- Video rendering time varies by prompt complexity.
- Sora & Veo availability depends on region and account access.
- Blotato must be connected to your social accounts before publishing.
- The workflow includes toggles so you can turn Sora, Veo, or individual platforms on and off easily.

---

How it works

1. Your text idea enters through the Chat Trigger.
2. Google Gemini rewrites your idea into a detailed, high-quality video prompt.
3. The workflow splits into two branches:
   - Sora branch: Generates video via OpenAI Sora 2 Pro, downloads the MP4, and uploads/publishes to YouTube, TikTok, and Instagram.
   - Veo branch: Generates a video using Google Veo 3.1 (via Wavespeed), retrieves the output link, emails it to you, and optionally uploads it to Blotato for publishing.
4. A Config – Toggles node lets you enable or disable models and platforms.
5. Optional Google Sheets logging can store video history and metadata.

---

How to use

- Send a message to the Chat Trigger to start the workflow.
- Adjust the toggles to choose whether you want Sora, Veo, or both.
- Add or remove publishing platforms inside the Blotato nodes.
- Check your email for Veo results or monitor uploads on your social accounts.

Ideal for automation, batch content creation, and AI-powered video workflows.
---

Requirements

- Google Gemini API key (for prompt enhancement)
- OpenAI Sora 2 API key
- Wavespeed (Veo 3.1) API key
- Blotato account + connected YouTube/TikTok/Instagram channels
- Gmail OAuth2 (for sending video result emails)
- Google Sheets (optional logging)

---

Customizing this workflow

- Add a title/description generator for YouTube Shorts.
- Insert a thumbnail generator (image AI model).
- Extend logging with Sheets or a database.
- Add additional platforms supported by Blotato.
- Use different prompt strategies for cinematic, viral, or niche content styles.
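The branching behaviour of the Config – Toggles node can be sketched as a small routing function. The key names (`enable_sora`, `publish_tiktok`, etc.) are hypothetical illustrations of the toggle fields, not the node's actual parameter names:

```python
def plan_branches(config):
    """Given toggle values from a Config-style node, decide which branches
    run and where each one delivers its output. All key names here are
    illustrative assumptions about the workflow's toggles."""
    plan = []
    if config.get("enable_sora"):
        # Sora branch publishes to whichever platforms remain enabled.
        platforms = [p for p in ("youtube", "tiktok", "instagram")
                     if config.get(f"publish_{p}", True)]
        plan.append(("sora", platforms))
    if config.get("enable_veo"):
        # Veo branch always emails the result; Blotato upload is optional.
        steps = ["email"]
        if config.get("veo_to_blotato"):
            steps.append("blotato")
        plan.append(("veo", steps))
    return plan
```

Flipping a single boolean in the config is all it takes to run one model, both, or neither, which matches how the toggles are described above.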

By Amit Kumar