15 templates found

Process large documents with OCR using SubworkflowAI and Gemini

Working with Large Documents in Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service that helps AI developers work with documents too large for context windows and runtime memory.

Prerequisites
You'll need a Subworkflow.ai API key to use the service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

How it Works
1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100MB. The upload triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use an "IF" node looping back on itself to achieve this in n8n.
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems APIs to retrieve whatever you need to complete your AI task. In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown, a well-known approach when parsing data tables or graphics is required.

How to use
Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, from 100MB+ up to 5,000 pages.

Customising the workflow
Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

Need Help?
Official API documentation: https://docs.subworkflow.ai/category/api-reference Join the discord: https://discord.gg/RCHeCPJnYw
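The upload-then-poll pattern described above can be sketched in Python. The job path and status names here are assumptions for illustration, not taken from the Subworkflow.ai docs; check the official API reference for the real endpoint shape.

```python
import time

def poll_until_done(get_job, job_id, interval=2.0, max_attempts=60):
    """Poll a job-status callable until the Extract job finishes.

    `get_job` stands in for an HTTP GET against the Jobs endpoint
    and must return a dict with a "status" key. The status values
    ("done", "failed") are illustrative, not from the docs.
    """
    for _ in range(max_attempts):
        job = get_job(job_id)
        if job["status"] in ("done", "failed"):
            return job
        time.sleep(interval)  # mirrors the IF-node loop with a Wait step in n8n
    raise TimeoutError(f"job {job_id} did not finish in time")
```

In n8n this loop is usually an IF node wired back through a Wait node rather than code, but the control flow is the same.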

By Jimleuk
8424

Create a table in Postgres and insert data

Companion workflow for Postgres node docs

By amudhan
4425

Send a message via a Lark Bot

What this workflow does
This workflow in n8n demonstrates how to send a message in Lark using a Lark bot. It begins with a manual trigger and then retrieves the necessary Lark token via a POST request. The token is used to authenticate and send a message to a specific chat using the Lark API. The Input node provides the required app_id, app_secret, chat_id, and message content. After obtaining the token, the message is sent with the Lark API's message/v4/send/ endpoint.

Who This Is For
This n8n workflow is ideal for organizations, teams, and developers who need to automate message sending within Lark, especially those managing notifications, alerts, or team reminders. It can help users reduce manual messaging tasks by leveraging a Lark bot to deliver messages at specific intervals or based on particular conditions, enhancing team communication and responsiveness.

Setup
1. Fill the Input node with your values.
2. Exchange the bearer token in the Send Message node with your token.

Author: Hiroshi
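As a sketch of the two HTTP calls the workflow makes, the request payloads can be built like this. Paths and field names follow Lark's Open API (the tenant access token endpoint and message/v4/send/), but verify them against the official Lark docs before relying on them.

```python
def build_token_request(app_id, app_secret):
    # JSON body for POST /open-apis/auth/v3/tenant_access_token/internal,
    # which returns a tenant_access_token for the bot
    return {"app_id": app_id, "app_secret": app_secret}

def build_message_request(token, chat_id, text):
    # Headers and JSON body for POST /open-apis/message/v4/send/
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {"chat_id": chat_id, "msg_type": "text", "content": {"text": text}}
    return headers, body
```

In the n8n workflow these two payloads correspond to the token-retrieval HTTP node and the Send Message node respectively.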

By Hiroshi
4048

Get email notifications for newly uploaded Google Drive files

This workflow sends out email notifications when a new file has been uploaded to Google Drive. The workflow uses two nodes:
Google Drive Trigger: triggers the workflow whenever a new file is uploaded to a given folder.
Send Email: sends out the email using data from the previous Google Drive Trigger node.

By Tom
2732

Search and compare flights with DeepSeek AI and Google Flights API

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
Takes departure city, destination, and travel dates from the user. Searches multiple airlines for flight options and compares price, duration, and stops. Suggests flexible travel dates for better deals. Tracks selected flights and sends real-time price alerts. Provides 24/7 AI-powered travel recommendations.

Set up steps
1. Add credentials for your chosen Chat Model (DeepSeek in this case) and SerpAPI (Google Flights).
2. In the AI Agent node, link: Chat Model → DeepSeek Chat Model node; Memory → Simple Memory node (for conversation context); Tool → Google_flights search in SerpApi node.
3. In the SerpApi node, set engine=google_flights and map input fields for departure, destination, and travel dates.
4. Test the workflow by providing a sample itinerary request in the Chat node's input.
5. Review the AI responses to ensure the agent searches, compares, and returns relevant flight options.
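The SerpApi call behind the tool node boils down to a set of query parameters. A minimal sketch of building them, assuming SerpApi's documented google_flights parameters (departure_id, arrival_id, outbound_date); double-check the parameter names against the SerpApi reference:

```python
def build_flight_search(api_key, departure_id, arrival_id,
                        outbound_date, return_date=None):
    # Query parameters for GET https://serpapi.com/search
    # using the Google Flights engine
    params = {
        "engine": "google_flights",
        "api_key": api_key,
        "departure_id": departure_id,    # airport/IATA code, e.g. "JFK"
        "arrival_id": arrival_id,        # e.g. "LHR"
        "outbound_date": outbound_date,  # YYYY-MM-DD
    }
    if return_date:                      # omit for one-way searches
        params["return_date"] = return_date
    return params
```

In the workflow, these fields are what the AI Agent maps from the user's chat input into the SerpApi node.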

By Fakhar Khan
1735

Build a WhatsApp assistant for text, audio & images using GPT-4o & Evolution API

Build an intelligent WhatsApp assistant that automatically responds to customer messages using AI. This template uses the Evolution API community node for WhatsApp integration and OpenAI for natural language processing, with built-in conversation memory powered by Redis to maintain context across messages.

> ⚠️ Self-hosted requirement: This workflow uses the Evolution API community node, which is only available on self-hosted n8n instances. It will not work on n8n Cloud.

What this workflow does
Receives incoming WhatsApp messages via the Evolution API webhook. Filters and processes text, audio, and image messages. Transcribes audio messages using OpenAI Whisper. Analyzes images using GPT-4 Vision. Generates contextual responses with conversation memory. Sends replies back through WhatsApp.

Who is this for?
Businesses wanting to automate customer support on WhatsApp. Teams needing 24/7 automated responses with AI. Developers building multimodal chat assistants. Companies looking to reduce response time on WhatsApp.

Setup instructions
1. Evolution API: install and configure Evolution API on your server. Create an instance and obtain your API key and instance name.
2. Redis: set up a Redis instance for conversation memory. You can use a local installation or a cloud service like Redis Cloud.
3. OpenAI: get your API key from platform.openai.com with access to GPT and Whisper models.
4. Webhook: configure your Evolution API instance to send webhooks to your n8n webhook URL.

Customization options
Modify the system prompt in the AI node to change the assistant's personality and responses. Adjust the Redis TTL to control how long conversation history is retained. Add additional message type handlers for documents, locations, or contacts. Integrate with your CRM or database to personalize responses.

Credentials required
Evolution API credentials (self-hosted), OpenAI API key, Redis connection.
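The filter-and-route step for incoming messages can be sketched as a small dispatcher. The "messageType" field name and its values are assumptions modeled on Evolution API webhook payloads, not confirmed from the template; verify them against your instance's actual webhook body.

```python
def route_message(message):
    """Pick a handler name based on an Evolution-API-style payload.

    The payload shape (a "messageType" field) is an assumption
    for illustration only.
    """
    kind = message.get("messageType")
    if kind == "conversation":
        return "handle_text"
    if kind == "audioMessage":
        return "handle_audio"   # e.g. transcribe with Whisper, then reply
    if kind == "imageMessage":
        return "handle_image"   # e.g. describe with a vision model
    return "ignore"             # documents, locations, etc. are unhandled here
```

In n8n this branching is typically a Switch node; adding a new message type handler means adding one more branch.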

By Antonio Gasso
1686

Automate web research & analysis with Oxylabs & GPT for comprehensive reports

Fully automate deep research from start to finish: scrape Google Search results, select relevant sources, scrape & analyze each source in parallel, and generate a comprehensive research report.

Who is this for?
This workflow is for anyone who needs to research topics quickly and thoroughly: content creators, marketers, product managers, researchers, journalists, students, or anyone seeking deep insights without spending hours browsing websites. If you find yourself opening dozens of browser tabs to piece together information, this template will automate that entire process and deliver comprehensive reports in minutes.

How it works
1. Submit your research questions through n8n's chat interface (include as much context as you need).
2. AI generates strategic search queries to explore different angles of your topic (customize the number of queries as needed).
3. Oxylabs scrapes Google Search results for each query (up to 50 results per query).
4. AI evaluates and selects the sources that are most relevant and authoritative.
5. Content extraction runs in parallel as Oxylabs scrapes each source and AI extracts key insights.
6. Summaries are collected in n8n's data table for final processing.
7. AI synthesizes everything into a comprehensive research report with actionable insights.

See the complete step-by-step tutorial on the n8n blog.

Requirements
Oxylabs AI Studio API key (get a free API key with 1000 credits). OpenAI API key (or use alternatives like Claude, Gemini, and local Ollama LLMs).

Setup
1. Install Oxylabs AI Studio as shown on this page.
2. Set your API keys: Oxylabs AI Studio and OpenAI.
3. Create a data table and select the table name in each data table node.
4. Create a sub-workflow: select the 3 nodes (Scrape content, Summarize content, Insert row), right-click, and select "Convert 3 nodes to sub-workflow".
5. Edit the sub-workflow settings for parallel execution: Mode: Run once for each item; Options → Add Option → disable "Wait For Sub-Workflow Completion".

Once you finish all these setup steps, you can run the workflow through n8n's chat interface. For example, send the following message:

I'm planning to build a wooden summer house and would appreciate guidance on the process. What are the key considerations I should keep in mind from planning through completion? I'm particularly interested in the recommended construction steps and which materials will ensure long-term durability and quality.

Customize this workflow for your needs
Feel free to modify the workflow to fit the scale and final output your project requires:
To reuse this workflow, clear the data table after the final analysis by adding a Data table node with the Delete row(s) action.
Scale up by processing more search queries, increasing results per query beyond 10, and selecting additional relevant URLs.
Enable JavaScript rendering in the Oxylabs AI Studio (Scraper) node to ensure all content is gathered.
Adjust the system prompts in the LLM nodes to fit your specific research goals.
Explore other AI Studio apps like Browser Agent for interactive browser control or Crawler for mapping entire websites.
Connect other nodes like Google Sheets, Notion, Airtable, or webhooks to route results where you need them.
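The parallel scrape-and-summarize sub-workflow corresponds to a fan-out pattern. A minimal Python sketch with stand-in `scrape` and `summarize` callables (the real steps are the Oxylabs scraper and an LLM node; this is an illustration of the control flow, not the template's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def summarize_sources(urls, scrape, summarize, max_workers=5):
    """Scrape and summarize each source in parallel.

    `scrape` and `summarize` stand in for the Oxylabs scraper call
    and the LLM summarization step inside the sub-workflow.
    """
    def process(url):
        return {"url": url, "summary": summarize(scrape(url))}

    # Fan out across sources; map preserves the input order
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process, urls))
```

Disabling "Wait For Sub-Workflow Completion" in n8n achieves the same effect: the parent fires off each source without blocking on the previous one.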

By Vytenis
947

Google Sheets and QuickBooks expenses automation template

Automatically Upload Expenses to QuickBooks from Google Sheets

What It Does
This n8n workflow template automates the process of uploading categorized expenses from Google Sheets into QuickBooks Online. It leverages your Google Sheets data to create expense entries in QuickBooks with minimal manual effort, streamlining the accounting process.

Prerequisites
QuickBooks Online credential: set up your QuickBooks Online connection in n8n for expense creation.
Google Sheets credential: set up your Google Sheets connection in n8n to read and write data.

How It Works
1. Refresh Google Sheets data: the workflow first refreshes the list of vendors and chart of accounts from your Google Sheets template.
2. Import bank transactions: open the provided Google Sheets template and copy-paste your transactions from your online banking CSV file.
3. Categorize transactions: quickly categorize the transactions in Google Sheets, or assign this task to a team member.
4. Run the workflow: once the transactions are categorized, run the workflow again, and each expense will be created automatically in QuickBooks Online.

Example Use Cases
Small business owners: automatically track and upload monthly expenses to QuickBooks Online without manually entering data.
Accountants: automate the transfer of bank transactions to QuickBooks, streamlining the financial process.
Bookkeepers: quickly categorize and upload business expenses to QuickBooks with minimal effort.

Setup Instructions
1. Connect your Google Sheets and QuickBooks credentials: in n8n, connect your Google Sheets and QuickBooks accounts, following the credential setup instructions for both services.
2. Set up the Google Sheets node: link the specific Google Sheet that contains your expense data. Make sure the sheet includes the correct columns for transactions, vendors, and accounts.
3. Set up the QuickBooks node: configure the QuickBooks Online node to create expense entries in QuickBooks from the data in your Google Sheets.
4. Set up the HTTP node for API calls: use the HTTP node to make custom API calls to QuickBooks.
5. Configure the QuickBooks Realm ID: obtain the QuickBooks Realm ID from your QuickBooks Online developer account to use for custom API calls. This ensures the workflow targets the correct QuickBooks instance.

How to Use
1. Import transactions: copy and paste your bank transactions from the CSV into the provided Google Sheets template.
2. Categorize transactions: manually categorize the transactions in the sheet, or delegate this task to another person to ensure they're correctly tagged (e.g., Utilities, Office Supplies, Travel).
3. Run the workflow: execute the workflow to automatically upload the categorized expenses into QuickBooks.
4. Verify in QuickBooks: after the workflow runs, log into QuickBooks Online to confirm the expenses have been created and categorized correctly.

Free Google Sheets Template
To get started quickly, download my free Google Sheets template, which includes pre-configured sheets for bank transactions, vendors, and chart of accounts. This template will make it easier for you to import and categorize your expenses before running the n8n workflow. Download the Free Google Sheets Template.

Customization Options
Category mapping: customize how categories in Google Sheets are mapped to QuickBooks expense types.
Additional API calls: add custom API calls if you need extra functionality, such as creating custom reports or syncing additional data.
Notifications: configure email or Slack notifications to alert you when the expenses have been successfully uploaded.

Why It's Useful
Time-saving: automatically upload and categorize expenses in QuickBooks without needing to enter them manually.
Error reduction: minimize human error by automating the process of uploading and categorizing transactions.
Efficiency: connects Google Sheets to QuickBooks, making it easy to manage expenses in one place without having to toggle between multiple apps.
Accuracy: syncs data between Google Sheets and QuickBooks in a structured, automated way for consistent and reliable financial reporting.
Flexibility: allows external users or lower-permission employees to categorize financial transactions without providing direct access to QuickBooks Online.
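For the custom API calls mentioned above, a QuickBooks Online expense is created by POSTing a Purchase entity to /v3/company/{realmId}/purchase. A sketch of building the request body, using the QBO Purchase field names; the account and vendor IDs are placeholders you would look up in your own company file, so verify the exact shape against the QBO API reference:

```python
def build_purchase(amount, payment_account_id, expense_account_id,
                   vendor_id=None):
    """Request body for creating a Purchase (expense) in QuickBooks Online.

    IDs are placeholders; QBO expects them as string "value" refs.
    """
    body = {
        "PaymentType": "Cash",
        # Bank or credit account the expense was paid from
        "AccountRef": {"value": payment_account_id},
        "Line": [{
            "Amount": amount,
            "DetailType": "AccountBasedExpenseLineDetail",
            "AccountBasedExpenseLineDetail": {
                # Expense category from the chart of accounts
                "AccountRef": {"value": expense_account_id}
            },
        }],
    }
    if vendor_id:
        # Optional payee reference
        body["EntityRef"] = {"value": vendor_id, "type": "Vendor"}
    return body
```

In the workflow, one such body would be built per categorized row in the Google Sheet before the HTTP node posts it.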

By Rosh Ragel
460

Track an event in Segment

No description available.

By tanaypant
451

Build comprehensive entity profiles with GPT-4, Wikipedia & vector DB for content

This n8n template demonstrates how to build an intelligent entity research system that automatically discovers, researches, and creates comprehensive profiles for business entities, concepts, and terms.

Use cases are many: try automating glossary creation for technical documentation, building standardized definition databases for compliance teams, researching industry terminology for content creation, or developing training materials with consistent entity explanations!

Good to know
Each entity research run typically costs $0.08-$0.34, depending on the complexity and sources required. The workflow includes smart duplicate detection to minimize unnecessary API calls. The workflow requires multiple AI services and a vector database, so setup time may be longer than for simpler templates. Entity definitions are stored locally in your Qdrant database and can be reused across multiple projects.

How it works
1. The workflow checks your existing knowledge base first to avoid duplicate research on entities you've already processed.
2. If the entity is new, an AI research agent intelligently combines your vector database, Wikipedia, and live web research to gather comprehensive information.
3. The system creates structured entity profiles with definitions, categories, examples, common misconceptions, and related entities, perfect for business documentation.
4. AI-powered validation ensures all entity profiles are complete, accurate, and suitable for business use before storage.
5. Each researched entity is stored in your Qdrant vector database, creating a growing knowledge base that improves research efficiency over time.
6. The workflow includes multiple stages of duplicate prevention to avoid unnecessary processing and API costs.

How to use
The manual trigger node is used as an example, but feel free to replace it with other triggers such as form submissions, content management systems, or automated content pipelines. You can research multiple related entities in sequence, and the system will automatically identify connections and relationships between them. Provide topic and audience context to get tailored explanations suitable for your specific business needs.

Requirements
OpenAI API account for o4-mini (entity research and validation)
Qdrant vector database instance (local or cloud)
Ollama with the nomic-embed-text model for embeddings
The "Automate Web Research with GPT-4, Claude & Apify for Content Analysis and Insights" workflow (for live web research capabilities)
Anthropic API account for Claude Sonnet 4 (used by the web research workflow)
Apify account for web scraping (used by the web research workflow)

Customizing this workflow
Entity research automation can be adapted for many specialized domains. Try focusing on specific industries like legal terminology (targeting official legal sources), medical concepts (emphasizing clinical accuracy), or financial terms (prioritizing regulatory definitions). You can also customize the validation criteria to match your organization's specific quality standards.
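The duplicate-detection step amounts to a nearest-neighbor check against stored embeddings. A dependency-free sketch using cosine similarity; in the template this lookup happens inside Qdrant, and the 0.92 threshold is an illustrative choice, not a value from the workflow:

```python
import math

def is_duplicate(query_vec, stored_vecs, threshold=0.92):
    """Return True if an existing embedding is close enough to the
    query that the entity research step can be skipped.

    `stored_vecs` stands in for embeddings already in the vector DB.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    return any(cos(query_vec, v) >= threshold for v in stored_vecs)
```

Tuning the threshold trades off missed duplicates (too high) against skipping genuinely new entities (too low).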

By Peter Zendzian
448

🛠️ Action Network tool MCP server 💪 all 23 operations

Need help? Want access to this workflow + many more paid workflows + live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing all Action Network Tool operations to AI agents. Zero configuration needed - all 23 operations pre-built.

⚡ Quick Setup
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every Action Network Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders
• Native Integration: uses the official n8n Action Network Tool node with full error handling

📋 Available Operations (23 total)
Every possible Action Network Tool operation is included:
👥 Attendance (3 operations): create an attendance, get an attendance, get many attendances
📅 Event (3 operations): create an event, get an event, get many events
👥 Person (4 operations): create a person, get a person, get many people, update a person
🔧 Person Tag (2 operations): add a person tag, remove a person tag
📝 Petition (4 operations): create a petition, get a petition, get many petitions, update a petition
📝 Signature (4 operations): create a signature, get a signature, get many signatures, update a signature
🏷️ Tag (3 operations): create a tag, get a tag, get many tags

🤖 AI Integration
Parameter handling: AI agents automatically provide values for resource IDs and identifiers, search queries and filters, content and data payloads, and configuration options.
Response format: native Action Network Tool API responses with full data structure.
Error handling: built-in n8n error management and retry logic.

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: add the MCP server URL to your configuration
• Custom AI apps: use the MCP URL as a tool endpoint
• Other n8n workflows: call MCP tools from any workflow
• API integration: direct HTTP calls to MCP endpoints

✨ Benefits
• Complete coverage: every Action Network Tool operation available
• Zero setup: no parameter mapping or configuration needed
• AI-ready: built-in $fromAI() expressions for all parameters
• Production-ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.

By David Ashby
201

Generate animal battle videos with Flux AI, Creatomate & multi-platform publishing

Author: Jadai Kongolo

Overview
This comprehensive n8n workflow automates the entire production pipeline for creating viral "versus" style battle videos. The system generates dramatic AI-powered fight scenes between animals (or any characters you choose), complete with photorealistic imagery, cinematic effects, and automatic multi-platform publishing. Perfect for content creators looking to generate engaging short-form content at scale without manual editing or design work.

Use Cases
Viral social media content: automatically produce trending "X vs Y" battle videos that perform exceptionally well on TikTok, Instagram Reels, and YouTube Shorts. These comparison-style videos consistently generate high engagement and shares.
Educational entertainment: create visually stunning educational content comparing animals, historical figures, sports teams, or any competitive matchups while maintaining viewer interest through dramatic AI-generated imagery.
Automated content pipeline: build a hands-free content factory that can produce multiple videos per day on schedule, complete with automatic posting to all major social platforms through integrated social media management tools.

👉 Check out the UGC version of this here.

How It Works
Stage 1 - Scene Generation: The workflow begins by fetching a main character from your Google Sheets database (filtered by "To Do" status). An AI agent powered by GPT-4.1-mini then generates eight unique opponents from your specified category, ensuring each comes from a different environment or background for maximum variety and interest.

Stage 2 - AI Image Creation: The system creates distinct types of images for each matchup.
Close-up portraits: generates fierce, intimidating close-up shots of both the main character and each opponent using Flux image generation through PiAPI. The AI creates hyper-realistic, photorealistic images showing each character roaring with detailed textures, dramatic lighting, and threatening expressions.
Battle aftermath scenes: a separate AI agent determines the realistic winner based on each character's strengths, then generates a dramatic full-body scene showing the victor standing dominantly over the defeated opponent. These images include visible battle scars, wounds, and cinematic composition that makes the outcome unmistakably clear.
The workflow includes intelligent polling mechanisms (90-second waits) to ensure all images are fully generated before proceeding, then aggregates and stores all image URLs in your Google Sheet for reference.

Stage 3 - Video Assembly: Using Creatomate's video rendering API, the workflow combines all generated images with background music and animated transitions into a polished final video. The template creates a fast-paced montage showing all eight battles with "VS" graphics and dynamic cuts timed to music beats.

Stage 4 - Multi-Platform Publishing: Once rendered, the video is automatically uploaded to Blotato's social media management platform and simultaneously published to Instagram Reels with optimized captions, TikTok with proper AI-generated content disclosure, and YouTube Shorts as unlisted for review. The workflow updates your Google Sheet with "Created" status and the final video URL for tracking and analytics.

Customization Options
Content themes: modify the Google Sheet to change from animals to any category: superheroes, historical warriors, vehicles, mythical creatures, sports teams, etc. Adjust the AI prompts in the "Scene Creator" node to control opponent selection criteria. Edit the "Image Prompt Generator" to customize the visual style (fantasy, sci-fi, realistic, cartoon, etc.).
Video production: change the video dimensions in the "Generate Close Ups" and "Generate Scene" nodes for different platform requirements. Replace the Creatomate template with your own design for different visual styles. Swap the background music by updating the music source URL in the "Render Video" node. Adjust the number of battles per video (currently 8 scenes).
Publishing settings: configure posting schedules via the Schedule Trigger node. Modify platform-specific settings (privacy levels, comments, duets) in the Instagram/TikTok/YouTube nodes. Add or remove social platforms by connecting additional Blotato API endpoints. Customize captions using data from your Google Sheet.
AI models: switch between different OpenRouter models for cost/quality tradeoffs. Use GPT-4.1 for complex winner determination and GPT-4.1-mini for faster scene generation. Experiment with different Flux models through PiAPI for various artistic styles.

Prerequisites
Google Sheets: connected Google account with access to the workflow template
OpenRouter API: for GPT-4.1 and GPT-4.1-mini access
PiAPI account: for Flux image generation (use the referral code for bonus credits)
Creatomate account: for video rendering with template access
Blotato account: for multi-platform social media publishing (use promo code "NATE30" for 30% off for 6 months)

🛠️ Setup Guide
1. Make a copy of the Google Sheet template and connect it to the five Google Sheets nodes in the workflow: Get Main Character, Add Close Ups, Add Winner, Get Elements, Update Video Status.
2. Connect your OpenRouter API key to the two OpenRouter nodes in the "Output Parser & Chat Models" section: GPT 4.1-mini and GPT 4.1.
3. Create a PiAPI account and connect your API key to: Generate Close Ups, Generate Scene, Get Close Ups, Get Winners.
4. Create a Creatomate account and connect your template ID and API key to the Render Video node. You can duplicate the same template shown in the video by using the source code linked in the same Skool post where you downloaded the workflow.
5. Connect your Blotato account and add your API key to enable auto-publishing: configure the Upload to Blotato node and add your account IDs to the Instagram, TikTok, and YouTube nodes.
6. Customize the Schedule Trigger node to set your desired posting frequency (daily, weekly, etc.).

The "Generate authentic, influencer-style UGC videos on autopilot" version of this AI video generator can be found here.

By Jadai Kongolo
188