6 templates found

Telegram AI chatbot

The workflow starts by listening for messages from Telegram users. The message is then processed, and based on its content, different actions are taken. If it's a regular chat message, the workflow generates a response using the OpenAI API and sends it back to the user. If it's a command to create an image, the workflow generates an image using the OpenAI API and sends the image to the user. If the command is unsupported, an error message is sent. Throughout the workflow, there are additional nodes for displaying notes and simulating typing actions.
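The branching described above can be sketched in plain JavaScript. This is a hedged illustration, not the template's actual nodes: the `/image` command name and the message shape are assumptions, and in the real workflow a Switch node does this routing declaratively.

```javascript
// Minimal sketch of the routing logic (assumed message shape and command name).
function routeMessage(message) {
  const text = (message.text || '').trim();
  if (text.startsWith('/image')) {
    // e.g. "/image a cat in space" -> image-generation branch
    return { action: 'generate_image', prompt: text.slice('/image'.length).trim() };
  }
  if (text.startsWith('/')) {
    // any other slash command is unsupported -> error branch
    return { action: 'error', reply: `Unsupported command: ${text.split(' ')[0]}` };
  }
  // plain chat message -> chat-completion branch
  return { action: 'chat_reply', prompt: text };
}
```

Each branch then maps to an OpenAI node (chat completion or image generation) followed by a Telegram send node.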

By Eduard
214722

🛻 AI agent for logistics order processing with GPT-4o, Gmail and Google Sheet

Tags: Supply Chain, Logistics, AI Agents

Context
Hey! I'm Samir, a Supply Chain Data Scientist from Paris and the founder of LogiGreen Consulting. We design tools to help companies improve their logistics processes using data analytics, AI, and automation, to reduce costs and minimize environmental impacts. Let's use n8n to improve logistics operations! 📬 For business inquiries, you can add me on LinkedIn.

Who is this template for?
This workflow template is designed for logistics or manufacturing operations that receive orders by email. The example illustrates the challenge we want to tackle: using an AI Agent to parse the order information and load it into a Google Sheet. If you want to understand how I built this workflow, check my detailed tutorial: 🎥 Step-by-Step Tutorial (https://www.youtube.com/watch?v=kQ8dO_30SB0)

How does it work?
The workflow is connected to a Gmail Trigger that opens every email with "Inbound Order" in its subject. Each email is parsed by an AI Agent equipped with OpenAI's GPT to collect the order information, and the results are written to a Google Sheet. These order lines can then be transferred to warehouse teams to prepare order receiving.

What do I need to get started?
You'll need:
- Gmail and Google Drive accounts with API credentials to access them via n8n
- An OpenAI API key (GPT-4o) for the chat model
- A Google Sheet with these columns: PO_NUMBER, EXPECTED_DELIVERY_DATE, SKU_ID, QUANTITY

Next Steps
Follow the sticky notes in the workflow to configure each node and start using AI to support your logistics operations. 🚀 Curious how n8n can transform your logistics operations? 📬 Let's connect on LinkedIn.

Notes
An example email is included in the template so you can try it with your own mailbox. This workflow was built using n8n version 1.82.1.

Submitted: March 28, 2025
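Before appending the AI Agent's output to the sheet, a small validation step keeps malformed extractions out. The sketch below is an assumption about the agent's JSON shape (field names follow the sheet columns the template asks for), not the template's actual code:

```javascript
// Normalize one parsed order line from the AI Agent into a sheet row.
// Throws on missing or invalid fields so bad extractions never reach the sheet.
function toSheetRow(line) {
  const required = ['PO_NUMBER', 'EXPECTED_DELIVERY_DATE', 'SKU_ID', 'QUANTITY'];
  for (const field of required) {
    if (line[field] === undefined || line[field] === null || line[field] === '') {
      throw new Error(`Missing field: ${field}`);
    }
  }
  const quantity = Number(line.QUANTITY);
  if (!Number.isInteger(quantity) || quantity <= 0) {
    throw new Error(`Invalid quantity: ${line.QUANTITY}`);
  }
  return {
    PO_NUMBER: String(line.PO_NUMBER).trim(),
    EXPECTED_DELIVERY_DATE: String(line.EXPECTED_DELIVERY_DATE).trim(),
    SKU_ID: String(line.SKU_ID).trim(),
    QUANTITY: quantity,
  };
}
```

In n8n this logic would live in a Code node between the AI Agent and the Google Sheets node.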

By Samir Saci
10795

Process documents with OCR, analytics & Google Drive using PDF Vector

Overview
Organizations dealing with high-volume document processing face challenges in efficiently handling diverse document types while maintaining quality and tracking performance metrics. This enterprise-grade workflow provides a scalable solution for batch processing documents, including PDFs, scanned documents, and images (JPG, PNG), with comprehensive analytics, error handling, and quality assurance.

What You Can Do
- Process thousands of documents efficiently in parallel batches
- Monitor performance metrics and success rates in real time
- Handle diverse document formats with automatic format detection
- Generate comprehensive analytics dashboards and reports
- Implement automated quality assurance and error handling

Who It's For
Large organizations, document processing centers, digital transformation teams, enterprise IT departments, and businesses that need to process thousands of documents reliably with detailed performance tracking and analytics.

The Problem It Solves
High-volume document processing without proper monitoring leads to bottlenecks, quality issues, and inefficient resource usage. Organizations struggle to track processing success rates, identify problematic document types, and optimize their workflows. This template provides enterprise-grade batch processing with comprehensive analytics and automated quality assurance.

Setup Instructions:
1. Configure Google Drive credentials for document folder access
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials with appropriate rate limits
4. Set up batch processing parameters (batch size, retry logic)
5. Configure quality thresholds and validation rules
6. Set up analytics dashboard and reporting preferences
7. Configure error handling and notification systems

Key Features:
- Parallel batch processing for maximum throughput
- Support for mixed document formats (PDFs, Word docs, images)
- OCR processing for handwritten and scanned documents
- Comprehensive analytics dashboard with success rates and performance metrics
- Automatic document prioritization based on size and complexity
- Intelligent error handling with automatic retry logic
- Quality assurance checks and validation
- Real-time processing monitoring and alerts

Customization Options:
- Configure custom document categories and processing rules
- Set up specific extraction templates for different document types
- Implement automated workflows for documents that fail quality checks
- Configure credit usage optimization to minimize costs
- Set up custom analytics and reporting dashboards
- Add integration with existing document management systems
- Configure automated notifications for processing completion or errors

Implementation Details:
The workflow uses intelligent batching to process documents efficiently while monitoring performance metrics in real time. It automatically handles different document formats, applies OCR when needed, and provides detailed analytics to help organizations optimize their document processing operations. The system includes sophisticated error recovery and quality assurance mechanisms.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
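The batch-size and retry-logic parameters mentioned above can be pictured as a small driver loop. This is a hedged sketch, not the template's implementation: `processDoc` stands in for whatever the PDF Vector node does per document, and the parameter names are illustrative.

```javascript
// Sketch of parallel batch processing with per-document retry logic.
async function processInBatches(docs, processDoc, { batchSize = 10, maxRetries = 2 } = {}) {
  const stats = { succeeded: 0, failed: 0, retries: 0 };
  for (let i = 0; i < docs.length; i += batchSize) {
    const batch = docs.slice(i, i + batchSize);
    // process one batch in parallel
    await Promise.all(batch.map(async (doc) => {
      for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
          await processDoc(doc);
          stats.succeeded++;
          return;
        } catch (err) {
          if (attempt === maxRetries) { stats.failed++; return; }
          stats.retries++; // transient failure: try again
        }
      }
    }));
  }
  return stats; // feeds the analytics dashboard (success rate, retry count)
}
```

The returned `stats` object is the kind of per-run data the analytics dashboard in this template aggregates.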

By PDF Vector
889

Veo 3 ad script builder (GPT-4 + Google Docs integration)

🚀 Overview
This n8n automation workflow streamlines the creation of professional video ad scripts tailored for Veo 3 by turning basic user inputs into cinematic, consistent, and highly structured prompts. Whether you're a marketing agency, content creator, or small business, this workflow ensures high-quality AI video generation at scale, without needing a professional copywriter or creative director.

🛠️ How It Works

📝 Form Trigger
Captures initial inputs via a web form:
- Ad Category: dropdown (Before & After, Brand Awareness, UGC Style, Educational)
- Environment: text (short description of setting/location)
- Script: raw ad copy
- Spokesperson: basic character idea (e.g., "young woman, confident, skincare expert")

🧠 Build Persona Node
Uses GPT-4o-mini to expand the spokesperson input into a full character description covering age, race, clothing, hairstyle, and tone/demeanor. This ensures visual consistency in AI-generated video scenes.

🌆 Build Environment Node
Transforms the environment text into a detailed, cinematic setting, adding descriptive elements like lighting style, architecture or background, and atmosphere (e.g., soft morning glow, modern interiors).

✂️ Generate Copy Node
Breaks the full script into 10-second readable segments for smooth pacing. This helps tools like Veo 3 maintain flow, coherence, and readability, and outputs concise, camera-ready lines for each section.

📄 Google Docs Integration
Auto-updates a ready-to-use script template in Google Docs, replacing placeholders with content from the three AI nodes: "Cinematic shot: (Build Persona) in (Build Environment). They confidently address the camera: (Generate Copy). Professional lighting emphasizes credibility throughout." The final result is a polished video ad prompt, ready for Veo 3 or any AI video tool.

💸 Cost Efficiency Advantage
Using the Veo 3 API directly costs $0.75 per second, meaning a simple 5-second video costs $3.75 and a 6-clip project can run around $22.50. However, this workflow is optimized for Veo 3's Fast on Flow mode, which costs just $0.20 per clip.

🔍 Real Example:
- 6 clips = $1.20
- Savings: over 90%
- More room for testing, iteration, and scaling

This isn't just smart automation; it's financially strategic.

🔧 Setup Instructions

✅ Prerequisites
- Active n8n instance (Cloud or self-hosted)
- OpenAI API key (for GPT-4)
- Google Docs API credentials

⚙️ Configuration Steps
1. Import the workflow into your n8n instance
2. Add OpenAI credentials to the AI nodes
3. Set up Google Docs integration in n8n
4. Copy this template: 👉 Veo 3 Script Template
5. Update the Google Docs node with the URL of your copy
6. Test the flow by submitting the form

💼 Use Cases
- Marketing Agencies: quickly generate ad scripts for multiple campaigns
- Content Creators: scale up content production without creative burnout
- AI Video Producers: maintain detailed, consistent inputs for each render
- Small Business Owners: create pro-grade ads without outsourcing

🎯 Benefits
- ✨ Consistency: AI ensures visual and narrative alignment
- ⚡ Speed: go from rough idea to finished ad script in seconds
- 📈 Scalability: handle multiple clients and campaigns simultaneously
- 🎥 Quality Output: templates and pacing are optimized for real-world video use
- 💰 Cost Savings: save up to 90% vs API-only generation
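The 10-second segmentation done by the Generate Copy node can be approximated with a word-count heuristic. This sketch assumes roughly 150 spoken words per minute, i.e. about 25 words per 10-second segment; the actual node uses GPT to find natural break points rather than a fixed cut:

```javascript
// Split ad copy into ~10-second segments (~25 words at ~150 wpm, an assumed rate).
function splitIntoSegments(script, wordsPerSegment = 25) {
  const words = script.split(/\s+/).filter(Boolean);
  const segments = [];
  for (let i = 0; i < words.length; i += wordsPerSegment) {
    segments.push(words.slice(i, i + wordsPerSegment).join(' '));
  }
  return segments;
}
```

A 60-word script, for instance, would yield three segments of 25, 25, and 10 words, each short enough for one Veo 3 clip.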

By David Olusola
695

Build a GLPI knowledge base RAG pipeline with Google Gemini and PostgreSQL

Description
This workflow automates the creation of a Retrieval-Augmented Generation (RAG) pipeline using content from the GLPI knowledge base. It retrieves and processes FAQ articles directly via the GLPI API, cleans and vectorizes the content using pgvector in PostgreSQL, and prepares the data for use by LLM-powered AI agents.

What Problem Does This Solve?
Manually building a RAG pipeline from a GLPI knowledge base requires integrating multiple tools, cleaning data, and managing embeddings, tasks that are often complex and repetitive. This subworkflow simplifies the entire process by automating data retrieval, transformation, and vector storage, allowing you to focus on building intelligent support agents or chatbots powered by your internal documentation.

Features
- Connects to GLPI via API to fetch FAQ articles
- Cleans and normalizes content for better embedding quality
- Generates vector embeddings using Google Gemini (or another model)
- Stores embeddings in a PostgreSQL database with pgvector
- Fully modular: easily integrates with any RAG-ready LLM pipeline

Prerequisites
Before using this subworkflow, make sure you have:
- A GLPI instance installed on a Linux server with API access enabled
- A PostgreSQL database with the pgvector extension installed
- A Google Gemini API key (or an alternative embedding provider)
- An n8n instance (self-hosted or cloud)

Suggested Usage
This subworkflow is intended to be part of a larger AI pipeline. Attach it to a scheduled workflow (e.g., a daily sync) or run it in response to updates in your GLPI base. It is ideal for internal support bots, IT documentation assistants, and help desk AI agents that rely on up-to-date knowledge.
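The cleaning step before embedding can be as simple as stripping the HTML that GLPI stores in article bodies and collapsing whitespace. The sketch below is a naive regex-based assumption about that step; a production pipeline would more likely use a proper HTML parser:

```javascript
// Naive cleanup of a GLPI knowledge-base article body before embedding:
// strip tags, decode a few common entities, collapse whitespace.
function cleanArticle(html) {
  return html
    .replace(/<[^>]+>/g, ' ')   // drop HTML tags
    .replace(/&nbsp;/g, ' ')
    .replace(/&amp;/g, '&')
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim();
}
```

The cleaned text is what gets embedded and written to the pgvector column, so removing markup noise directly improves retrieval quality.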

By Thiago Vazzoler Loureiro
259

Random Process Scheduler

Turn predictable automations into human-like activity with random scheduling across easily customisable time slots. Perfect for content publishing with organic scheduling patterns, social media automation, API systems that need to avoid rate limiting, or any automation requiring randomised timing control across multiple periods. All times are configured in the local timezone with automatic UTC conversion for technical operations. Features easy slot management with chronological ordering, gap support, and 24-hour window scheduling.

How to use
1. Set schedulerExecutionTime in the Init node (match it to the Schedule Trigger cron)
2. Define time slots with start/end hours and probability values (see below)
3. Configure "Execute sub-process" with your own workflow ID
4. Configure SMTP credentials, your local time zone, and optionally project paths

Step by Step Configuration
Two variables to set in the Init node's "Custom Configuration" section.

When the scheduler runs daily:

```javascript
const schedulerExecutionTime = 1; // 1am (24h clock; must match Schedule Trigger node)
```

Your execution slots:

```javascript
const subprocessTimeSlots = {
  morning:   { start: 6,  end: 12, probability: 0.85 },
  afternoon: { start: 12, end: 18, probability: 0.85 },
  evening:   { start: 18, end: 24, probability: 0.5 }
};
```

That's it. Three slots, three probabilities. Add or remove slots, adjust times; it just works.
Real-world example with gaps:

```javascript
const subprocessTimeSlots = {
  night:     { start: 0,  end: 2,  probability: 0.1 },  // Optional late activity
  morning:   { start: 6,  end: 10, probability: 0.85 }, // Gap from 2-6am (no execution)
  noon:      { start: 12, end: 14, probability: 0.85 }, // Gap from 10am-12pm
  afternoon: { start: 16, end: 18, probability: 0.85 }, // Gap from 2-4pm
  evening:   { start: 20, end: 24, probability: 0.5 }   // Gap from 6-8pm
};
```

Configure "Execute sub-process" with your workflow ID. IMPORTANT: your sub-workflow MUST start with a Wait node configured with {{ $json.executions.relativeDelaySeconds }} as "Wait Amount".

Additional Configuration
- SMTP credential
- Time zone (Init node): const LOCAL_TIMEZONE = $env.GENERIC_TIMEZONE || 'Europe/Paris'; by default
- Project paths (Init node):

```javascript
// ⚠️ Your email here
const N8N_ADMIN_EMAIL = $env.N8N_ADMIN_EMAIL || 'yourmail@world.com';
// ⚠️ Your projects' ROOT folder on your mapped server here
const N8N_PROJECTS_DIR = $env.N8N_PROJECTS_DIR || '/files/n8n-projects-data';
// ⚠️ Your project's folder name here for logging
const PROJECT_FOLDER_NAME = "RPS";
```

Requirements
- n8n instance running in Docker with file system access
- Volume mapping for persistent storage (e.g., /local-files:/files)
- Deactivate the Read/Write nodes and email sends if you do not want the logging features (file system access is optional)
- SMTP server for notifications
- n8n API credentials for error handling
- Target sub-workflow with a Wait node to handle relativeDelaySeconds
How it works
- 24-Hour Window Algorithm: schedules executions from the current time until the next scheduler run, handling timezone conversions and cross-midnight scenarios
- Automatic Validation: checks for overlaps, invalid time ranges, and probability values on every run
- Chronological Processing: processes slots in order regardless of how you define them
- Inclusive/Exclusive Boundaries: a 6-12 slot runs from 6:00:00 to 11:59:59 (start inclusive, end exclusive)
- Chained Execution: each subprocess receives a relativeDelaySeconds value for sequential execution with calculated intervals
- Comprehensive Monitoring: tracks planned vs actual execution times with delay calculations and detailed logging

Configuration Rules
- No Overlap: slots cannot overlap (6-10 and 9-12 is invalid)
- Valid Ranges: start 0-23, end 0-24, start < end
- Valid Probability: 0 to 1 (0.85 = 85% chance)
- Gaps Allowed: skip hours completely (e.g., no execution 2-6am)
- Emoji Support: use "night", "morning", "noon", "evening" keywords in slot names for visual logs

Use Cases:
- Content publishing with organic scheduling patterns
- Social media automation with human-like posting intervals
- API systems avoiding rate limits through unpredictable timing
- Business hours automation with precise timing control
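The slot-drawing step can be pictured as follows. This is a simplified sketch under stated assumptions (uniform draw inside each slot, exclusive end boundary, injectable random source); the actual template also handles timezone conversion and chained relativeDelaySeconds values:

```javascript
// For each slot: run the probability check, then pick a uniform random
// time inside [start, end) expressed as a second of the day.
function drawExecutions(slots, rng = Math.random) {
  const executions = [];
  for (const [name, slot] of Object.entries(slots)) {
    if (rng() < slot.probability) {
      const startSec = slot.start * 3600;
      const endSec = slot.end * 3600; // end is exclusive
      const at = startSec + Math.floor(rng() * (endSec - startSec));
      executions.push({ slot: name, secondOfDay: at });
    }
  }
  // chronological order, as the chained-execution step expects
  return executions.sort((a, b) => a.secondOfDay - b.secondOfDay);
}
```

With probability 0.85, a slot fires on roughly 85% of days; passing a seeded `rng` makes the behaviour reproducible for testing.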

By Florent
31