⚡ AI-powered YouTube video summarization & analysis
-- Disclaimer: This workflow uses a community node and therefore only works for self-hosted n8n users --

Transform YouTube videos into comprehensive summaries and structured analysis instantly. This n8n workflow automatically extracts, processes, and analyzes video transcripts to deliver clear, organized insights without watching the entire video.

Time-Saving Features

🚀 Instant Processing
Simply provide a YouTube URL and receive a structured summary within seconds, eliminating the need to watch lengthy videos. Perfect for research, learning, or content analysis.

🤖 AI-Powered Analysis
Leverages GPT-4o-mini to analyze video transcripts, organizing key concepts and insights into a clear, hierarchical structure with main topics and essential points.

Smart Processing Pipeline

📝 Automated Transcript Extraction
Supports public YouTube videos
Handles multiple URL formats (sketched below)
Extracts complete video transcripts automatically

🧠 Intelligent Content Organization
Breaks down content into main topics
Highlights key concepts and terminology
Maintains technical accuracy while improving clarity
Structures information logically with markdown formatting

Perfect For

📚 Researchers & Students
Quick comprehension of educational content and lectures without watching entire videos.

💼 Business Professionals
Efficient analysis of industry talks, presentations, and training materials.

🎯 Content Creators
Rapid research and competitive analysis of video content in your niche.

Technical Implementation

🔄 Workflow Components
Webhook endpoint for URL submission
YouTube API integration for video details
Transcript extraction system
GPT-4o-mini-powered analysis engine
Telegram notification system (optional)

Transform your video content consumption with an intelligent system that delivers structured, comprehensive summaries while saving hours of viewing time.
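The multi-format URL handling can be pictured as a small Code node that normalizes whatever the webhook receives into a bare video ID. A minimal sketch, assuming the webhook delivers the URL in a `url` field (the field name and accepted patterns are illustrative, not taken from the template):

```js
// Hypothetical Code node: normalize the submitted URL into a bare video ID.
// The webhook field name ("url") and the accepted patterns are assumptions.
const url = $input.first().json.body?.url ?? $input.first().json.url;

const patterns = [
  /youtube\.com\/watch\?v=([\w-]{11})/, // standard watch URL
  /youtu\.be\/([\w-]{11})/,             // short share URL
  /youtube\.com\/embed\/([\w-]{11})/,   // embedded player URL
];

let videoId = null;
for (const pattern of patterns) {
  const match = (url ?? '').match(pattern);
  if (match) { videoId = match[1]; break; }
}

if (!videoId) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return [{ json: { videoId } }];
```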
Supabase insertion & upsertion & retrieval
This is a demo workflow to showcase how to use Supabase to embed a document, retrieve information from the vector store via chat, and update the database.

Setup steps:
Set your credentials for Supabase
Set your credentials for an AI model of your choice
Set credentials for any service you want to use to upload documents
Please follow the guidelines in the workflow itself (Sticky Notes)

Feedback & Questions
If you have any questions or feedback about this workflow, feel free to get in touch at ria@n8n.io
Build custom workflows automatically with GPT-4o, RAG, and web search
🚀 What the “Agent Builder” template does

Need to turn a one-line chat request into a fully-wired n8n workflow template—complete with AI agents, RAG, and web-search super-powers—without lifting a finger? That’s exactly what Agent Builder automates:

Listens to any incoming chat message (via the Chat Trigger).
Spins up an AI architect that analyses the request, searches the web, reads n8n docs from a Pinecone vector store, and designs the smallest possible set of nodes.
Auto-generates a ready-to-import JSON template and hands it back as a downloadable file—plus all the supporting assets (embeddings, vector store etc.) so the next prompt is even smarter.

Think of it as your personal “workflow chef”: you shout the order, it shops for ingredients, cooks, plates, and serves the meal. All you do is eat.

---

🤗 Who will love this?

No-code builders / power users who don’t want to wrestle with AI node wiring.
Agencies & consultants delivering lots of bespoke automations.
Internal platform teams who need a “workflow self-service portal” for non-technical colleagues.

---

🧩 How it’s wired

| Sub-process | What happens inside | Key nodes |
| --- | --- | --- |
| Web Crawler (optional) | Firecrawl scrapes docs.n8n.io (or any URL you drop in) and streams raw markdown back. | Set URL → HTTP Request (Extract) → Wait & Retry |
| RAG Trainer | Splits the scraped docs, embeds them with OpenAI, and upserts vectors into Pinecone. | Recursive Text Splitter → Embeddings OpenAI → Train Pinecone |
| Agent Builder | The star of the show – orchestrates GPT-4o (via OpenRouter), SerpAPI web search, your Pinecone index, and a Structured Output Parser to produce → validate → prettify the final n8n template. | Chat Trigger → AI Agent → OpenAI (validator) → Code (extract) → Convert to JSON file |

Every arrow in the drawn workflow is pre-connected, so the generated template always passes n8n’s import check.

---

🛠️ Getting set up (5 quick creds)

| Service | Credential type |
| --- | --- |
| OpenAI / Azure OpenAI – embeddings & validation | OpenAI API |
| Pinecone – vector store | Pinecone API |
| OpenRouter – GPT-4o LLM | OpenRouter API Key |
| SerpAPI – web search | SerpAPI Key |
| Firecrawl (only if you plan to crawl) | Generic Header Auth → Authorization: Bearer YOUR_KEY |

Each node already expects those creds; just create them once, select them in the dropdown, and hit Activate.

---

🏃‍♀️ What a typical run looks like

User says: “Build me a workflow that monitors our support inbox, summarises new tickets with GPT and posts to Slack.”

Chat Trigger captures the message.
AI Agent: queries Pinecone for relevant n8n docs, fires a SerpAPI search for “n8n gmail trigger example”, sketches an architecture (Gmail Trigger → GPT Model → Slack).
The agent returns JSON ➜ OpenAI node double-checks field names, connections, type versions.
A tiny JS Code node slices the JSON out of the chat blob and saves it as template.json, ready for download (sketched at the end of this section).
You download, import, and… done.

---

✏️ Customising

Switch the LLM – plug in Claude 3, Gemini 1.5, or a local model; just swap the OpenRouter Chat Model node.
Point the RAG at your own docs – change the crawl URL or feed PDFs via the Default Data Loader.
Hard-code preferred nodes – edit the “User node preferences” in the system message so the agent always chooses Notion for databases, etc.

---

🥡 Take-away notes

It's a prototype – feel free to experiment with it and improve its capabilities. Have fun building!
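For the JSON-slicing step mentioned in the typical run, here is a minimal sketch of what such an extraction node might look like (the `output` field name and the fenced-block convention are assumptions about how the agent replies, not the template's actual code):

```js
// Hypothetical sketch of the "Code (extract)" node: pull the JSON template
// out of the agent's chat reply. Field name and formatting are assumptions.
const reply = $input.first().json.output ?? '';

// Prefer a fenced json code block; otherwise fall back to the outermost {...}.
const fenced = reply.match(/`{3}json\s*([\s\S]*?)`{3}/i);
const start = reply.indexOf('{');
const raw = fenced
  ? fenced[1]
  : (start >= 0 ? reply.slice(start, reply.lastIndexOf('}') + 1) : null);

if (!raw) throw new Error('No JSON template found in the agent reply');

// Parse to validate, then pass the object on to the Convert to JSON file step.
const template = JSON.parse(raw);

return [{ json: { fileName: 'template.json', template } }];
```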
Automate customer support issue resolution using AI text classifier
This n8n template is designed to assist customer support teams and expand their capacity by automating the resolution of long-lived and forgotten JIRA issues.

How it works
A Schedule Trigger runs daily to check for long-lived unresolved issues and imports them into the workflow.
Each issue is handled as a separate subworkflow by using an Execute Workflow node, which allows parallel processing.
A report is generated from the issue using its comment history (sketched at the end of this section), allowing the issue to be classified by AI to determine its state and progress.
If determined to be resolved, sentiment analysis is performed to track customer satisfaction. If negative, a Slack message is sent to escalate; otherwise the issue is closed automatically.
If no response has been initiated, an AI agent will attempt to search for and resolve the issue itself using similar resolved issues or the Notion database. If a solution is found, it is posted to the issue and the issue is closed.
If the issue is blocked and waiting for responses, a reminder message is added.

How to use
This template searches for JIRA issues which are older than 7 days and not in the "Done" status. Ensure some issues meet these criteria, otherwise adjust the search query to suit.
Works best if you frequently have long-lived issues that need resolving.
Ensure the Notion tool is configured so it does not read documents you didn't intend it to, i.e. private and/or internal documentation.

Requirements
JIRA for issue management
OpenAI for LLM
Slack for notifications

Customising this workflow
Why not try classifying issues as they are created? One use case is quality control, such as ensuring reporting criteria are adhered to, summarising and rephrasing issues for easier reading, or adjusting priority.
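The report-generation step could be pictured as a Code node that condenses each issue's comment history into plain text for the AI classifier. A rough sketch, assuming JIRA's standard REST field shape (the wiring itself is an assumption):

```js
// Hypothetical Code node: condense each issue's comment history into a
// plain-text report for the AI classifier. Field shapes mirror JIRA's
// REST API v2 (where comment bodies are strings); the wiring is assumed.
return $input.all().map(({ json: issue }) => {
  const comments = issue.fields?.comment?.comments ?? [];

  const history = comments
    .map(c => `${c.author?.displayName ?? 'Unknown'} (${c.created}): ${c.body}`)
    .join('\n');

  const report = [
    `Issue: ${issue.key} – ${issue.fields?.summary}`,
    `Status: ${issue.fields?.status?.name}`,
    `Comments (${comments.length}):`,
    history || '(no comments yet)',
  ].join('\n');

  return { json: { key: issue.key, report } };
});
```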
Intelligent AI digest for security, privacy, and compliance feeds
How it works
This workflow acts like your own personal AI assistant, automatically fetching and summarizing the most relevant Security, Privacy, and Compliance news from curated RSS feeds. It processes only the latest articles (past 24 hours), organizes them by category, summarizes key insights using AI, and delivers a clean HTML digest straight to your inbox—saving you time every day.

Key Highlights
Handles three independent tracks: Security, Privacy, and Compliance
Processes content from customizable RSS sources (add/remove easily)
Filters fresh articles, removes duplicates, and sorts by recency (sketched below)
Uses AI to summarize and format insights in a digestible format
Sends polished HTML digests via Gmail—one per category
Fully modular and extensible—adapt it to your needs

Personalization
You can easily tailor the workflow:
🎯 Customize feeds: Add or remove sources in the following Code nodes: Fetch Security RSS, Fetch Privacy Feeds, and Fetch Compliance Feeds
🔧 Modify logic: Adjust filters, sorting, formatting, or even AI prompts as needed
🧠 Bring your own LLM: Works with Gemini, but easily swappable for other LLM APIs

Setup Instructions
Requires Gmail and LLM (e.g., Gemini) credentials
Prebuilt with placeholders for RSS feeds and email output
Designed to be readable, maintainable, and fully adaptable
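The freshness filter might look like this inside one of the Code nodes. A sketch only: the item fields (`link`, `pubDate`) follow typical RSS output and are assumptions about this template:

```js
// Hypothetical filtering logic: keep only articles from the past 24 hours,
// drop duplicate links, and sort newest-first. Field names are assumptions.
const cutoff = Date.now() - 24 * 60 * 60 * 1000;
const seen = new Set();

const fresh = $input.all()
  .map(item => item.json)
  .filter(a => new Date(a.pubDate).getTime() >= cutoff) // fresh only
  .filter(a => !seen.has(a.link) && seen.add(a.link))   // dedupe by link
  .sort((a, b) => new Date(b.pubDate) - new Date(a.pubDate));

return fresh.map(a => ({ json: a }));
```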
Upsert huge documents in a vector store with Supabase and Notion
Purpose
This workflow adds the capability to build a RAG on living data. In this case Notion is used as a knowledge base. Whenever a page is updated, the embeddings get upserted in a Supabase Vector Store. It can also be fairly easily adapted to PGVector, Pinecone, or Qdrant (using a custom HTTP Request for the latter two).

Demo
[Watch the demo on YouTube](https://youtu.be/ELAxebGmspY)

How it works
A trigger checks every minute for changes in the Notion Database. The manual polling approach improves accuracy and prevents changes from being lost between cached polling intervals.
Afterwards every updated page is processed sequentially.
The Vector Database is searched using the Notion Page ID stored in the metadata of each embedding. If old entries exist, they are deleted.
All blocks of the Notion Database Page are retrieved and combined into a single string (sketched at the end of this section).
The content is embedded and split into chunks if necessary. Metadata, including the Notion Page ID, is added during storage for future reference.
A simple Question and Answer Chain enables users to ask questions about the embedded content through the integrated chat function.

Prerequisites
To set up a new Vector Store in Supabase, follow this guide.
Prepare a simple Database in Notion with each Database Page containing at least a title and some content in the blocks section. You can of course also connect this to an existing Database of your choice.

Setup
Select your credentials in the nodes which require them
If you are on an n8n cloud plan, switch to the native Notion Trigger by activating it and deactivating the Schedule Trigger along with its subsequent Notion node
Choose your Notion Database in the first node related to Notion
Adjust the chunk size and overlap in the Token Splitter to your preference
Activate the workflow

How to use
Populate your Notion Database with useful information and use the chat mode of this workflow to ask questions about it. Updates to a Notion Page should quickly reflect in future conversations.
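The block-combining step could be sketched as a Code node like the following, assuming the Notion node emits each block's text in a `content` field (that field name, and the presence of a page ID on the items, are assumptions):

```js
// Hypothetical Code node: flatten the retrieved Notion blocks into one
// string for embedding. The "content" field name is an assumption about
// how the Notion node emits block text here.
const blocks = $input.all().map(item => item.json);
const pageId = blocks[0]?.pageId ?? blocks[0]?.id;

const text = blocks
  .map(b => b.content ?? '')
  .filter(Boolean)
  .join('\n\n');

// Carry the page ID along so it can be written into the embedding
// metadata and used for the delete-then-upsert lookup.
return [{ json: { pageId, text } }];
```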
Automate Your RFP Process with OpenAI Assistants
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by utilising a company's knowledge base to generate question-and-answer pairs using Large Language Models.

How it works
A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
Our first AI agent scans and extracts all questions from the document into list form.
The supplier sets up an OpenAI assistant beforehand, loaded with company brand, marketing, and technical documents.
The workflow loops through each of the buyer's questions and poses these to the OpenAI assistant.
The assistant's answers are captured until all questions are satisfied, and are then exported into a new document for review.
A sales team member is then able to use this document to respond quickly to the RFP before their competitors.

Example Webhook Request

```bash
curl --location 'https://<n8nwebhookurl>' \
--form 'id="RFP001"' \
--form 'title="BlueChip Travel and StarBus Web Services"' \
--form 'reply_to="jim@example.com"' \
--form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'
```

Requirements
An OpenAI account to use AI services.

Customising the workflow
OpenAI assistants are only one approach to hosting a company knowledge base for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of cost and control over how the data is managed.
Save email attachments to Nextcloud
This workflow will take all emails you put into a certain folder, upload any attachments to Nextcloud, and mark the emails as read (configurable).

Attachments will be saved with automatically generated filenames combining the date, sender name, and original attachment filename, e.g.: 2021-01-01_From-Sender-Name_Filename-of-attachment.pdf (naming sketched below)

Instructions:
Allow lodash to be used in n8n (or rewrite the code...) by setting the environment variable NODE_FUNCTION_ALLOW_EXTERNAL=lodash
Import the workflow
Set credentials for the Email & Nextcloud nodes
Configure it to use the correct folder / custom filters
Activate

Custom filter examples:
Only unread emails: Custom Email Config = ["UNSEEN"]
Filter emails by 'to' address: Custom Email Config = [["TO", "example+invoices@posteo.de"]]
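The filename generation might look roughly like this in the Function node. Everything here (field names, the lodash helpers chosen) is illustrative rather than the workflow's actual code:

```js
// Hypothetical Function-node sketch of the filename logic. Field names and
// lodash helpers are illustrative, not the workflow's actual code.
const _ = require('lodash'); // needs NODE_FUNCTION_ALLOW_EXTERNAL=lodash

return items.map(item => {
  const date = new Date(item.json.date ?? Date.now()).toISOString().slice(0, 10);
  const sender = _.startCase(item.json.from ?? 'Unknown Sender').replace(/ /g, '-');
  const original = item.binary?.attachment_0?.fileName ?? 'attachment.pdf';

  // e.g. 2021-01-01_From-Sender-Name_Invoice.pdf
  item.json.fileName = `${date}_From-${sender}_${original}`;
  return item;
});
```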
Send email via Gmail on workflow error
Send an email via Gmail when a workflow error occurs. The email subject line will contain the workflow name; the message body will contain the following information (assembled as sketched below):
Workflow name
Error message
Last node executed
Execution URL
Stacktrace

Error workflows do not need to be activated in order to be used, but they do need to be selected in the Settings menu of whatever workflows you want to use them with.

To use this workflow, you'll need to:
Create and select credentials in the Gmail node
Choose the email recipient(s) in the Gmail node
Save and select the created workflow as the "Error Workflow" in the Settings menu of whatever workflows you want to email on error
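A sketch of how the message body can be assembled from the Error Trigger's output. These field paths follow n8n's documented error-data shape, but verify them against your n8n version:

```js
// Sketch of assembling the message body from the Error Trigger's output.
// Field paths follow n8n's documented error-data shape; verify before use.
const { workflow, execution } = $input.first().json;

const body = [
  `Workflow name: ${workflow.name}`,
  `Error message: ${execution.error.message}`,
  `Last node executed: ${execution.lastNodeExecuted}`,
  `Execution URL: ${execution.url}`,
  '',
  'Stacktrace:',
  execution.error.stack,
].join('\n');

return [{ json: { subject: `Error in workflow: ${workflow.name}`, body } }];
```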
Automatically save & organize Outlook email attachments in OneDrive folders
Outlook to OneDrive

This workflow automates the process of saving binary attachments from Outlook emails into newly created folders in OneDrive. It's ideal for users who regularly receive files and need them organized into separate folders without manual intervention.

Each folder is automatically named based on the email subject and the current timestamp, allowing all attachments from that email to be stored inside the corresponding folder. This is particularly useful for streamlining document workflows, improving file traceability, and reducing the time spent on repetitive tasks like organizing client submissions, invoices, or internal reports.

The configuration and setup of the workflow can be customized to meet the business or personal needs of the user. Its purpose is to automatically process binary attachments from Outlook emails and upload them to dynamically created folders in OneDrive.

Overview
Microsoft Outlook Trigger – Monitors your inbox for new emails.
Filter – Ensures only emails with binary attachments proceed.
Get Outlook Message – Retrieves the full email and downloads attachments.
Create Folder – Makes a new folder in OneDrive based on the email subject and time (naming sketched below).
Split Out – Extracts each binary attachment.
Merge – Combines folder and file data before upload.
Upload File OneDrive – Uploads each binary file into the new folder.

Need Help? Have Questions?
For consulting and support, or if you have questions, please feel free to connect with me on LinkedIn or via email.
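The folder-naming step might be sketched as follows; the `subject` field name and the sanitization rules are assumptions, not the workflow's exact logic:

```js
// Hypothetical sketch of the Create Folder naming step: derive a
// OneDrive-safe folder name from the email subject plus a timestamp.
const subject = ($input.first().json.subject ?? 'No Subject')
  .replace(/[\\/:*?"<>|#%]/g, '') // strip characters OneDrive rejects
  .trim()
  .slice(0, 80);                  // keep names reasonably short

const stamp = new Date().toISOString().replace(/[:.]/g, '-');

return [{ json: { folderName: `${subject} ${stamp}` } }];
```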
Summarise Slack channel activity for weekly reports with AI
This n8n template lets you summarize team member activity on Slack for the past week and generates a report.

For remote teams, chat is a crucial communication tool for ensuring work gets done, but with so many conversations happening at once and in multiple threads, ideas, information, and decisions usually live in the moment, get lost just as quickly, and are altogether forgotten by the weekend! Using this template, this doesn't have to be the case. Have AI crawl through last week's activity, summarize all threads, and generate a casual and snappy report to bring the team back into focus for the current week. A project manager's dream!

How it works
A scheduled trigger is set to run every Monday at 6am to gather all team channel messages within the last week.
Message threads are grouped by user (sketched at the end of this section) and data-mined for replies.
Combined, an AI analyses the raw messages to pull out interesting observations and highlights.
The summarized threads for each user are then combined and passed to another AI agent to generate a higher-level overview of their week. These are referred to as the individual reports.
Next, all individual reports are summarized together into a team weekly report. This allows understanding of group and similar activities.
Finally, the team weekly report is posted back to the channel. The timing is important as it should be the first message of the week, ready for the team to glance over coffee.

How to use
Works best per project and where most of the comms happen on a single channel. Avoid combining channels; instead duplicate this workflow for additional channels.
You may need to filter for specific team members if you want specific team updates.
Customise the report to suit your organisation, team, or the channel. You may prefer to be more formal if clients or external stakeholders are also present.

Requirements
Slack for chat platform
Gemini for LLM (or switch for other models)

Customising this workflow
If the Slack channel is busy enough already, consider posting the final report to email.
Pull in project metrics to include in your report. As extra context, it may be interesting to tie the messages to production performance.
Use an AI Agent to query a knowledge base or tickets relevant to the messages. This may be useful for attaching links or references to add context.
Channel not so busy, or way too busy for 1 week? Play with the scheduled trigger and set an interval which works for your team.
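The grouping step could be sketched as a Code node like this. The message fields (`user`, `text`, `thread_ts`, `ts`) follow Slack's API shape, but the wiring is an assumption about this template:

```js
// Hypothetical Code node: group last week's messages by user so each
// member's threads can be summarized separately.
const byUser = {};

for (const { json: msg } of $input.all()) {
  const user = msg.user ?? 'unknown';
  (byUser[user] ??= []).push({
    text: msg.text,
    thread: msg.thread_ts ?? msg.ts, // replies share their parent's thread_ts
  });
}

// Emit one item per user, ready for the per-user summarization step.
return Object.entries(byUser).map(([user, messages]) => ({
  json: { user, messageCount: messages.length, messages },
}));
```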
YouTube outlier detector (find trending content based on your competitors)
Video explanation

This n8n workflow helps you identify trending videos within your niche by detecting outlier videos that significantly outperform a channel's average views. It automates the process of monitoring competitor channels, saving time and streamlining content research.

Included in the Workflow
Automated Competitor Video Tracking – Monitors videos from specified competitor channels, fetching data directly from the YouTube API.
Outlier Detection Based on Channel Averages – Compares each video’s performance against the channel’s historical average to identify significant spikes in viewership.
Historical Video Data Management – Stores video statistics in a PostgreSQL database, allowing the workflow to only fetch new videos and optimize API usage.
Short Video Filtering – Automatically removes short videos based on duration thresholds.
Flexible Video Retrieval – Fetches up to 3 months of historical data on the first run and only new videos on subsequent runs.
PostgreSQL Database Integration – Includes SQL queries for database setup, video insertion, and performance analysis.
Configurable Outlier Threshold – Focuses on videos published within the last two weeks with view counts at least twice the channel's average (rule sketched below).
Data Output for Analysis – Outputs best-performing videos along with their engagement metrics, making it easier to identify trending topics.

Requirements
n8n installed on your machine or server
A valid YouTube Data API key
Access to a PostgreSQL database

This workflow is intended for educational and research purposes, helping content creators gain insights into what topics resonate with audiences without manual daily monitoring.
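The outlier rule might be sketched as follows; the input fields (`views`, `publishedAt`, `channelAvgViews`) are assumptions, since the template itself appears to derive averages via its PostgreSQL queries:

```js
// Hypothetical sketch of the outlier rule: keep videos from the last two
// weeks whose views are at least 2x the channel average.
const cutoff = Date.now() - 14 * 24 * 60 * 60 * 1000;

const outliers = $input.all()
  .map(item => item.json)
  .filter(v => new Date(v.publishedAt).getTime() >= cutoff)
  .filter(v => v.views >= 2 * v.channelAvgViews)
  .map(v => ({
    ...v,
    // How many multiples of the channel average this video reached.
    outlierScore: +(v.views / v.channelAvgViews).toFixed(2),
  }))
  .sort((a, b) => b.outlierScore - a.outlierScore);

return outliers.map(v => ({ json: v }));
```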