
Telegram AI chatbot

The workflow listens for messages from Telegram users and routes each message based on its content. A regular chat message gets a response generated with the OpenAI API and sent back to the user. A command to create an image triggers image generation through the OpenAI API, and the resulting image is sent to the user. An unsupported command receives an error message. Additional nodes throughout the workflow display notes and simulate typing actions.
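As a mental model, the branching above amounts to a small dispatch on the message text. Here is a minimal sketch in TypeScript; the /image command prefix and function names are assumptions for illustration, since the template implements this with n8n routing nodes rather than custom code:

```typescript
// Hypothetical sketch of the Telegram routing branch. The real template
// does this with n8n IF/Switch nodes; prefixes here are assumed.
type Action = "chat" | "image" | "unsupported";

function routeMessage(text: string): Action {
  if (!text.startsWith("/")) return "chat";      // plain text -> OpenAI chat reply
  if (text.startsWith("/image")) return "image"; // image command -> OpenAI image
  return "unsupported";                          // any other command -> error reply
}

// Example: routeMessage("/image a cat in space") === "image"
```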

By Eduard

🤖 Build an interactive AI agent with chat interface and multiple tools

How it works
This template is a complete, hands-on tutorial that lets you build and interact with your very first AI Agent. Think of an AI Agent as a standard AI chatbot with superpowers: the agent doesn't just talk; it can use tools to perform actions and find information in real time. This workflow is designed to show you exactly how that works.

- The Chat Interface (Chat Trigger): This is your window to the agent. It's a fully styled, public-facing chat window where you can have a conversation.
- The Brain (AI Agent Node): This is the core of the operation. It takes your message, understands your intent, and intelligently decides which "superpower" (or tool) it needs to use to answer your request. The agent's personality and instructions are defined in its extensive system prompt.
- The Tools (Tool Nodes): These are the agent's superpowers. A variety of useful and fun tools showcase its capabilities: get a random joke, search Wikipedia for a summary of any topic, calculate a future date, generate a secure password, calculate a monthly loan payment (sketched after this section), and fetch the latest articles from the n8n blog.
- The Memory (Memory Node): This gives the agent a short-term memory, allowing it to remember the last few messages in your conversation for better context.

When you send a message, the agent's brain analyzes it, picks the right tool for the job, executes it, and then formulates a helpful response based on the tool's output.

Set up steps
Setup time: ~3 minutes. This template is nearly ready to go out of the box; you just need to provide the AI's "brain."

1. Configure Credentials: This workflow requires an API key for an AI model. Make sure you have credentials set up in your n8n instance for either Google AI (Gemini) or OpenAI.
2. Choose Your AI Brain (LLM): By default, the workflow uses the Google Gemini node. If you have Google AI credentials, you're all set. If you prefer OpenAI, simply disable the Gemini node and enable the OpenAI node. You only need one active LLM node; make sure it is connected to the Agent parent node.
3. Explore the Tools: Take a moment to look at the different tool nodes connected to the Your First AI Agent node. This is where the agent gets its abilities. You can add, remove, or modify these to create your own custom agent.
4. Activate and Test: Activate the workflow, open the public URL for the Example Chat Window node (you can copy it from the node's panel), and start chatting. Try asking things like: "Tell me a joke." "What is n8n?" "Generate a 16-character password for me." "What are the latest posts on the n8n blog?" "What is the monthly payment for a $300,000 loan at 5% interest over 30 years?"
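The loan payment tool mentioned above boils down to the standard fixed-rate amortization formula. A minimal sketch of what such a tool might compute; the function name and rounding are illustrative, not taken from the template:

```typescript
// Standard amortization formula: M = P * r / (1 - (1 + r)^-n)
// where P = principal, r = monthly rate, n = number of monthly payments.
function monthlyPayment(principal: number, annualRatePct: number, years: number): number {
  const r = annualRatePct / 100 / 12; // monthly interest rate
  const n = years * 12;               // total number of payments
  if (r === 0) return principal / n;  // zero-interest edge case
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// The example question from the template: $300,000 at 5% over 30 years
console.log(monthlyPayment(300_000, 5, 30).toFixed(2)); // ≈ 1610.46
```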

By Lucas Peyrin

Scheduled YouTube transcription with de-duplication using Transcript.io and Supabase

Scheduled YouTube Transcription with Duplicate Prevention

Who's It For?
This template is for advanced users, content teams, and data analysts who need a robust, automated system for capturing YouTube transcripts. It's ideal for those who monitor multiple channels and want to ensure they only process and save each video's transcript once.

What It Does
This is an advanced, "set-it-and-forget-it" workflow that runs on a daily schedule to monitor YouTube channels for new content. It enhances the basic transcription process by connecting to your Supabase database to prevent duplicate entries. The workflow fetches all recent videos from the channels you track, filters out any that are too old, and then checks your database to see if a video's transcript has already been saved (this check is sketched after this section). Only brand-new videos are sent for transcription via the youtube-transcript.io API, with the final data (title, URL, full transcript, author) saved back to your Supabase table.

Requirements
- A Supabase account with a table to store video data. The table must have a source_url column to enable duplicate checking.
- An API key from youtube-transcript.io (offers a free tier).
- The Channel ID for each YouTube channel you want to track.

How to Set Up
1. Set Your Time Filter: In the "Max Days" node, set the number of days to look back for new videos (e.g., 7 for the last week).
2. Add Channel IDs: In the "Channels To Track" node, replace the example YouTube Channel IDs with the ones you want to monitor.
3. Configure API Credentials: Select the "Get Transcript from API" node. In the credentials tab, create a new "Header Auth" credential. Name it youtube-transcript-io, set the "Name" field to x-api-key, and paste your API key into the "Value" field.
4. Connect Your Supabase Account: This workflow uses Supabase in two places: "Check if URL Is In Database" and "Add to Content Queue Table". Configure your Supabase credentials in both nodes, select your target table in each, and ensure the columns are mapped correctly.
5. Adjust the Schedule: The "Schedule Trigger" node is set to run once a day. Click it to adjust the time and frequency to your needs.
6. Activate the Workflow: Save your changes and toggle the workflow to Active.
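The de-duplication step is an existence check against the source_url column. A minimal sketch using supabase-js, where the table name content_queue is an assumption inferred from the "Add to Content Queue Table" node name; substitute whatever your table actually defines:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Returns true if this video's transcript has already been saved.
// Table name "content_queue" is assumed, not taken from the template.
async function alreadyProcessed(videoUrl: string): Promise<boolean> {
  const { data, error } = await supabase
    .from("content_queue")
    .select("source_url")
    .eq("source_url", videoUrl)
    .limit(1);
  if (error) throw error;
  return (data?.length ?? 0) > 0;
}
```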

By automedia

Upload large files to Dropbox with chunking & web UI progress tracking

Dropbox Large File Upload System

How It Works
This workflow enables uploading large files (300MB+) to Dropbox through a web interface with real-time progress tracking. It bypasses Dropbox's 150MB single-request limit by breaking files into 8MB chunks and uploading them sequentially using Dropbox's upload session API.

Upload Flow:
1. User accesses page - visits /webhook/upload-page and sees an HTML form with a file picker and a folder path input
2. Selects file - chooses a file and clicks the "Upload to Dropbox" button
3. JavaScript initiates session - calls /webhook/start-session → Dropbox creates an upload session → returns a sessionId
4. Chunk upload loop - JavaScript splits the file into 8MB chunks and, for each chunk, calls /webhook/append-chunk with the sessionId, offset, and chunk binary data; Dropbox appends the chunk to the session; the progress bar updates (e.g., 25%, 50%, 75%). (This loop is sketched after this section.)
5. Finalize upload - after all chunks are uploaded, calls /webhook/finish-session with the final offset and target path
6. File committed - Dropbox commits all chunks into the complete file at the specified path (e.g., /Uploads/video.mp4)

Why chunking? The Dropbox API has a 150MB limit for single upload requests. The upload session API (upload_session/start, upload_session/append_v2, upload_session/finish) allows unlimited file sizes by chunking.

Technical Architecture:
- Four webhook endpoints handle the different stages (serve UI, start, append, finish)
- All chunk data is sent as multipart/form-data with binary blobs
- The Dropbox API requires cursor metadata (session_id, offset) in the Dropbox-API-Arg header
- autorename: true prevents file overwrites

Setup Steps
Time estimate: ~20-25 minutes (first time)
1. Create Dropbox app - in the Dropbox App Console: click "Create app", choose the "Scoped access" API, select the "Full Dropbox" access type, name your app (e.g., "n8n File Uploader"), enable files.content.write under the Permissions tab, then copy the App Key and App Secret.
2. Configure n8n OAuth2 credentials - in n8n: create a new "Dropbox OAuth2 API" credential, paste the App Key and App Secret, set the OAuth Redirect URL to your n8n instance (e.g., https://your-n8n.com/rest/oauth2-credential/callback), and complete the OAuth flow to get an access token.
3. Connect credentials to HTTP nodes - add your Dropbox OAuth2 credential to these three nodes: "Dropbox Start Session", "Dropbox Append Chunk", "Dropbox Finish Session".
4. Activate workflow - click the "Active" toggle to generate production webhook URLs.
5. Customize default folder (optional) - in the "Respond with HTML" node, find the line <input type="text" id="dropboxFolder" value="/Uploads/" ... and change /Uploads/ to your preferred default path.
6. Get upload page URL - copy the production webhook URL from the "Serve Upload Page" node (e.g., https://your-n8n.com/webhook/upload-page).
7. Test upload - visit the URL, select a small file first (~50MB), choose a folder path, and click Upload.

Important Notes

File Size Limits:
- Standard Dropbox API: 150MB max per request
- This workflow: unlimited (tested with 300MB+ files)
- Chunk size: 8MB (configurable via the CHUNK_SIZE variable in the HTML JavaScript)

Upload Behavior:
- Files with the same name are auto-renamed (e.g., video.mp4 → video (1).mp4) due to autorename: true
- Upload is synchronous - the browser must stay open until it completes
- If an upload fails mid-process, partial chunks remain in the Dropbox session (they expire after 24 hours)

Security Considerations:
- Webhook URLs are public - anyone with the URL can upload to your Dropbox
- Add authentication if needed (HTTP Basic Auth on the webhook nodes)
- Consider rate limiting for production use

Dropbox API Quotas:
- Free accounts: 2GB storage, 150GB bandwidth/day
- Plus accounts: 2TB storage, unlimited bandwidth
- Upload sessions expire after 4 hours of inactivity

Progress Tracking:
- A real-time progress bar shows the percentage (0-100%)
- Status messages: "Starting upload...", "✓ Upload complete!", "✗ Upload failed: [error]"
- The final response includes the file path, size, and Dropbox file ID

Troubleshooting:
- If chunks fail: check that the Dropbox OAuth token hasn't expired (refresh if needed)
- If the session is not found: ensure the sessionId is passed correctly between steps
- If finish fails: verify the target path exists and the app has write permissions
- If the page doesn't load: activate the workflow first to generate the webhook URLs

Performance:
- 8MB chunks = ~37 requests for a 300MB file
- Upload speed depends on your internet connection and Dropbox API rate limits
- Typical: 2-5 minutes for a 300MB file on a good connection

Pro tip: Test with a small file (10-20MB) first to verify credentials and flow, then try larger files. Monitor the n8n execution list to see each webhook call and troubleshoot any failures. For production, consider adding error handling and retry logic in the JavaScript.
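The chunk upload loop referenced in the upload flow is plain browser code against the workflow's webhooks. A minimal sketch, assuming the webhook paths named above and illustrative form field names (sessionId, offset, chunk, path); the template's actual field names may differ, and error handling and retries are omitted:

```typescript
const CHUNK_SIZE = 8 * 1024 * 1024; // 8MB, matching the workflow default

async function uploadToDropbox(file: File, folder: string, base: string): Promise<void> {
  // 1. Start an upload session via the workflow
  const start = await fetch(`${base}/webhook/start-session`, { method: "POST" });
  const { sessionId } = await start.json(); // response field name is an assumption

  // 2. Append the file chunk by chunk, tracking the byte offset
  let offset = 0;
  while (offset < file.size) {
    const chunk = file.slice(offset, offset + CHUNK_SIZE);
    const form = new FormData();
    form.append("sessionId", sessionId);
    form.append("offset", String(offset));
    form.append("chunk", chunk); // binary blob, sent as multipart/form-data
    await fetch(`${base}/webhook/append-chunk`, { method: "POST", body: form });
    offset += chunk.size;
    console.log(`Progress: ${Math.round((offset / file.size) * 100)}%`);
  }

  // 3. Finish the session, committing the file at the target path
  const finish = new FormData();
  finish.append("sessionId", sessionId);
  finish.append("offset", String(offset));
  finish.append("path", `${folder}${file.name}`);
  await fetch(`${base}/webhook/finish-session`, { method: "POST", body: finish });
}
```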

By Anthony