7 templates found

Create, update, and get a document in Google Cloud Firestore

This example workflow shows how to create, update, and get a document in Google Cloud Firestore. The workflow uses the Set node to set the data, but you might receive data from a different source: add the node that receives the data before the Set node, set the values you want to insert into a document in the Set node, and update the Columns/Attributes fields in the Google Cloud Firestore node to match. A hypothetical sketch of the Set node's output follows.
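For illustration, here is roughly what the item handed to the Google Cloud Firestore node might look like, written as an n8n Code node body standing in for the Set node. The field names (name, email, signedUpAt) are hypothetical; whatever keys you set are what you map in the Firestore node's Columns/Attributes fields.

```typescript
// Hypothetical item shape for this workflow's Set node, expressed as an
// n8n Code node body. Each item's json keys become the Columns/Attributes
// you map in the Google Cloud Firestore node.
const items = [
  {
    json: {
      name: "Jane Doe",                      // example document field
      email: "jane@example.com",             // example document field
      signedUpAt: new Date().toISOString(),  // example document field
    },
  },
];
return items; // what the node emits downstream to the Firestore node
```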

By Harshil Agrawal
8710

Process large documents with OCR using SubworkflowAI and Gemini

Working with Large Documents in Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service that helps AI developers work with documents too large for context windows and runtime memory.

Prerequisites
You'll need a Subworkflow.ai API key to use the Subworkflow.ai service. Add the API key as a header auth credential. More details in the official docs: https://docs.subworkflow.ai/category/api-reference

How it Works
1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100MB. Once uploaded, this triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use an "IF" node looping back onto itself to achieve this in n8n.
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems APIs to retrieve whatever you need to complete your AI task (see the sketch after this description). In this example, all pages are retrieved and run through a multimodal LLM to parse into Markdown, a well-known process when parsing data tables or graphics is required.

How to Use
Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, from 100MB+ up to 5,000 pages.

Customising the Workflow
Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

Need Help?
Official API documentation: https://docs.subworkflow.ai/category/api-reference
Join the Discord: https://discord.gg/RCHeCPJnYw
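As a rough illustration of the upload, poll, retrieve flow described above, here is a minimal sketch. The base URL, endpoint paths, and response field names (job.id, job.status, job.datasetId) are assumptions, not confirmed API details; check the official docs before relying on them.

```typescript
// Hedged sketch of the Extract API flow: upload -> poll Jobs -> fetch DatasetItems.
// All paths and field names below are ASSUMPTIONS; verify against
// https://docs.subworkflow.ai/category/api-reference
const BASE = "https://api.subworkflow.ai/v1"; // assumed base URL
const headers = { "x-api-key": process.env.SUBWORKFLOW_API_KEY ?? "" }; // header auth credential

async function extractDocument(file: Blob): Promise<unknown[]> {
  // 1. Upload the document (up to 100MB) to start an Extract job.
  const form = new FormData();
  form.append("file", file);
  const job = await (await fetch(`${BASE}/extract`, { method: "POST", headers, body: form })).json();

  // 2. Keep polling the Jobs endpoint until the job finishes
  //    (the "IF" node looping back onto itself in the n8n workflow).
  let status = job.status;
  while (status !== "finished") {
    await new Promise((r) => setTimeout(r, 5000));
    status = (await (await fetch(`${BASE}/jobs/${job.id}`, { headers })).json()).status;
  }

  // 3. Retrieve the dataset items (pages) for the downstream multimodal LLM step.
  return (await fetch(`${BASE}/datasets/${job.datasetId}/items`, { headers })).json();
}
```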

By Jimleuk
8424

🤖 Instagram MCP AI agent – read, reply & manage comments with GPT-4o

🤖 Instagram AI Agent with MCP Server – Built for Smart Engagement and Automation

Hi! I'm Amanda 🥰 I build intelligent automations with n8n and Make. This powerful workflow was designed to help teams automatically handle Instagram interactions with AI. Using the Meta Graph API, LangChain, MCP Server, and GPT-4o, it allows your AI agent to search for posts, read captions, fetch comments, and even reply to or message followers, all through structured tools.

---

🔧 What the workflow does
- Searches for recent media using the Instagram ID and access token
- Reads and extracts captions or media URLs
- Fetches comments and specific replies from each post
- Replies to comments automatically with GPT-generated responses (see the sketch after this description)
- Sends direct messages to followers who commented
- Maps user input and session to keep memory context via LangChain
- Communicates via Server-Sent Events (SSE) using your MCP Server URL

---

🧰 Nodes & Tech Used
- LangChain Agent + Chat Model with GPT-4o
- Memory Buffer for session memory
- toolHttpRequest to search media, fetch comments, and send replies
- MCP Trigger and MCP Tool (custom SSE connection)
- Set node for input and variable assignment
- Webhook and JSON for the Instagram API structure

---

⚙️ Setup Instructions
1. Create your Instagram App in the Meta Developer Portal
2. Add your Instagram ID and Access Token in the Set node
3. Update the MCP Server Tool URL in the MCP Instagram node; use your n8n server URL (e.g. https://yourdomain.com/mcp/server/instagram/sse)
4. Trigger the workflow using the included LangChain Chat Trigger
5. Interact via text to ask the agent to: "Get latest posts", "Reply to comment X with this message", "Send DM to this user about..."

---

👥 Who this is for
- Social media teams managing multiple comments
- Brands automating engagement with followers
- Agencies creating smart, autonomous digital assistants
- Developers building conversational Instagram bots

---

✅ Requirements
- Meta Graph API access
- Instagram Business account
- n8n instance (Cloud or self-hosted)
- MCP Server configured (SSE endpoint enabled)
- OpenAI API key (for GPT-4o + LangChain)

---

🌍 Want to use this workflow?
❤️ Buy workflows: https://iloveflows.com
☁️ Try n8n Cloud: https://n8n.partnerlinks.io/amanda
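To make the structured tools concrete, here is a minimal sketch of the Graph API calls the toolHttpRequest nodes would wrap. The API version (v19.0), field selections, and env-var names are assumptions; only the general endpoint shapes (media listing, /comments, /replies) follow Meta's public Instagram Graph API docs.

```typescript
// Minimal sketch of the Graph API calls behind the agent's tools.
// Version, fields, and env-var names are assumptions; verify against Meta's docs.
const GRAPH = "https://graph.facebook.com/v19.0"; // assumed API version
const TOKEN = process.env.IG_ACCESS_TOKEN ?? "";  // from the Set node in the workflow
const IG_USER_ID = process.env.IG_USER_ID ?? "";

// Search recent media (id, caption, media URL) for the connected account.
async function recentMedia() {
  const url = `${GRAPH}/${IG_USER_ID}/media?fields=id,caption,media_url&access_token=${TOKEN}`;
  return (await fetch(url)).json();
}

// Fetch comments on a specific post.
async function comments(mediaId: string) {
  const url = `${GRAPH}/${mediaId}/comments?fields=id,text,username&access_token=${TOKEN}`;
  return (await fetch(url)).json();
}

// Reply to a specific comment with a GPT-generated message.
async function reply(commentId: string, message: string) {
  return (await fetch(`${GRAPH}/${commentId}/replies`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ message, access_token: TOKEN }),
  })).json();
}
```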

By Amanda Benks
5511

Sync blog posts from Notion to Webflow

Who is this for?
This template is for everyone who manages their blog entries in Notion and wants an easy way to transfer them to Webflow.

What this workflow does
This workflow syncs blog posts saved in a Notion database to Webflow once a day. It syncs Notion properties, rich text, and the cover image with your collection. It works with most elements: H1, H2, H3, normal text, bold text, italic text, links, quotes, bulleted lists, numbered lists, and images (under 4MB). A sketch of this block-to-HTML mapping follows this description.

Set up steps
1. Connect your accounts.
2. Add a "slug" field in Notion.
3. Add a "Sync to Webflow?" checkbox in Notion.
4. Run a test and map your collection data.

Whenever the workflow runs, all the checked posts are updated in the Webflow collection, whether they are new posts or existing ones.
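The core transformation is turning Notion blocks into the HTML that Webflow's rich-text field accepts. Here is an illustrative sketch under that assumption; the block shapes follow Notion's public API, but the helper itself is hypothetical, not the template's actual code.

```typescript
// Hypothetical sketch of the Notion-block -> HTML mapping a sync like this relies on.
// Block/rich-text shapes follow Notion's public API; the helpers are illustrative.
type RichText = { plain_text: string; annotations: { bold: boolean; italic: boolean }; href: string | null };

function renderText(rt: RichText[]): string {
  return rt.map((t) => {
    let s = t.plain_text;
    if (t.annotations.bold) s = `<strong>${s}</strong>`;
    if (t.annotations.italic) s = `<em>${s}</em>`;
    if (t.href) s = `<a href="${t.href}">${s}</a>`;
    return s;
  }).join("");
}

// Map the supported block types to HTML for Webflow's rich-text field.
function renderBlock(block: { type: string; [k: string]: any }): string {
  const rt: RichText[] = block[block.type]?.rich_text ?? [];
  switch (block.type) {
    case "heading_1": return `<h1>${renderText(rt)}</h1>`;
    case "heading_2": return `<h2>${renderText(rt)}</h2>`;
    case "heading_3": return `<h3>${renderText(rt)}</h3>`;
    case "quote": return `<blockquote>${renderText(rt)}</blockquote>`;
    case "bulleted_list_item": return `<li>${renderText(rt)}</li>`; // wrap runs in <ul>/<ol> upstream
    case "paragraph":
    default: return `<p>${renderText(rt)}</p>`;
  }
}
```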

By Giovanni Ruggieri
3189

End-to-end YouTube video automation with HeyGen, GPT-4 & Avatar videos

🎥 End-to-End YouTube Video Automation Workflow with n8n, HeyGen & AI

Automate the entire YouTube content creation pipeline – from video idea to AI-generated avatar video, upload, metadata generation, and publishing – with zero manual intervention!

---

Who is this for?
This template is perfect for:
- Affiliate marketers (e.g., PartnerStack, SaaS products)
- YouTube creators scaling video production
- Agencies managing client content
- Educators and coaches delivering automated video lessons
- Entrepreneurs running faceless YouTube channels

---

🧠 What problem does it solve?
Publishing consistent, high-quality video content is essential for audience growth and monetization. But manually creating each video – researching, writing, recording, uploading, and optimizing – is slow, repetitive, and unsustainable at scale. This workflow solves that by automating:
✅ Content sourcing from Google Sheets
✅ Script generation with AI
✅ Avatar video creation via HeyGen
✅ YouTube upload and metadata
✅ Final publishing and status update
All done without touching a single button. Schedule it weekly and watch videos go live while you sleep.

---

⚙️ What this workflow does
📥 Reads video ideas from a Google Sheet (e.g., a PartnerStack affiliate product)
🌐 Fetches product details from the web using an HTTP Request
🧠 Generates a promotional video transcript using an AI agent
🎙 Converts the script to an avatar video using the HeyGen API (see the sketch after this description)
⏳ Waits for the video to render and fetches the download URL
⬆️ Uploads the video to YouTube via the API
🧠 Generates the title, description, tags, and hashtags using AI
🔄 Updates the video metadata and changes visibility to Public
📊 Logs publication details back to Google Sheets
👤 Optional human-in-the-loop step before publishing

---

🛠 Setup
🔌 Connect the following integrations: Google Sheets (or Airtable), HeyGen API, YouTube Data API (OAuth 2.0), OpenAI / Gemini / Ollama
🧾 Add your video ideas to Google Sheets, including the product name, link, and a "To Do" status
📂 Import the n8n template and configure API credentials
🧠 Customize your AI prompt for tone, format, and industry
🕒 Schedule it to run weekly (1 video per week)

---

✍️ How to customize this workflow
- Swap Google Sheets with Airtable, Notion, or API feeds
- Modify AI prompts for different use cases: reviews, explainers, tutorials
- Use D-ID, Synthesia, or your preferred avatar platform
- Add analytics, thumbnails, or comment automation
- Localize content for multi-language channels
- Integrate with Slack, Discord, or Telegram for notifications

---

📌 Sticky Notes Included
📊 Get Partner Idea: pulls one item from the Google Sheet
🌐 Fetch Content: extracts product details via HTTP request
🧠 AI Script: generates the video transcript using GPT or Gemini
🎥 Video Generation: sends the script to HeyGen and waits for rendering
⬆️ Upload to YouTube: uploads the video file
🧠 Metadata Generator: creates an optimized title, tags, and description
🗓 Metadata Update: updates YouTube metadata and sets the video to Public
📋 Sheet Update: marks the video as published in the Google Sheet
🧑 Human Approval (Optional): pause and resume on manual review

---

🌐 Useful Links
🧠 Mastering n8n on Udemy
📘 n8n Learning Guidebook
🚀 Sign Up for n8n Cloud (Use Code: AMJID10)
🔧 SyncBricks Automation Blog
📺 YouTube Channel – SyncBricks

---

🔗 Why this workflow?
This advanced automation setup is ideal for users exploring:
- YouTube automation via n8n and APIs
- AI-powered content pipelines with OpenAI/Gemini
- Avatar video generation (HeyGen / D-ID / Synthesia)
- Workflow automation for affiliate marketing
- Full-stack video publishing using no-code tools
- Enterprise-grade publishing for brands and creators

Built with modularity, customization, and full control in mind – whether you're using n8n Cloud or a self-hosted instance.
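As a hedged illustration of the HeyGen generate-then-poll step (the "Video Generation" sticky note above), here is a minimal sketch. The endpoint paths and payload/response fields reflect HeyGen's public API as best recalled and should be treated as assumptions; verify against HeyGen's documentation before use.

```typescript
// Hedged sketch of the HeyGen render step: generate, poll status, return the
// download URL for the YouTube upload. Paths and field names are assumptions.
const HEYGEN = "https://api.heygen.com";
const KEY = { "X-Api-Key": process.env.HEYGEN_API_KEY ?? "" }; // assumed env var

async function renderAvatarVideo(script: string, avatarId: string): Promise<string> {
  // Kick off rendering from the AI-generated transcript.
  const gen = await (await fetch(`${HEYGEN}/v2/video/generate`, {
    method: "POST",
    headers: { ...KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      video_inputs: [{
        character: { type: "avatar", avatar_id: avatarId },
        voice: { type: "text", input_text: script },
      }],
      dimension: { width: 1280, height: 720 },
    }),
  })).json();

  // Poll until the render completes (the Wait node's job in the workflow),
  // then return the downloadable URL for the upload step.
  while (true) {
    const res = await (await fetch(
      `${HEYGEN}/v1/video_status.get?video_id=${gen.data.video_id}`,
      { headers: KEY },
    )).json();
    if (res.data.status === "completed") return res.data.video_url;
    if (res.data.status === "failed") throw new Error("HeyGen render failed");
    await new Promise((r) => setTimeout(r, 30_000)); // wait 30s between polls
  }
}
```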

By Amjid Ali
804

Auto-generate SEO meta descriptions & keywords for Magento 2 with GPT-4.1 & LangChain

This workflow intelligently regenerates meta descriptions and meta keywords for Magento 2 product pages using OpenAI and SEO best practices.

🔍 What It Does
- Accepts SKU input via a public form
- Fetches the product by SKU from your Magento 2 store
- Extracts the existing description, meta description, and keywords
- Uses a LangChain-powered AI Agent with OpenAI to analyze the current product content, generate a high-conversion meta description (150–160 characters), and generate 5–7 optimized SEO meta keywords (retaining existing ones and avoiding duplicates)
- Updates the product's metadata in Magento via the REST API (a sketch of these calls follows this description)

⚙️ Technical Highlights
- Magento 2 REST integration
- FormTrigger node for user input
- A role prompt so the AI knows your brand's tone
- JavaScript Code node for safe HTML parsing
- LangChain AI Agent with OpenAI (GPT-4.1 mini)
- Structured output parser to format the AI response
- Automatically pushes updated metadata back into Magento

🧪 Optional: SerpAPI (plug-and-play support included but disabled) can be enabled to bring real-time search trend data into the AI prompt.

✅ Ideal For
Magento 2 store owners, developers, and SEO teams who want to automate metadata updates and boost search performance without touching product pages manually.

How This AI Automation Workflow Helps Magento 2 Store Owners Stay Ahead

Automates SEO meta updates at scale. Manually updating meta descriptions and keywords across hundreds or thousands of products is time-consuming and error-prone. This AI-driven workflow automatically reads the current product descriptions, analyzes SEO opportunities, and generates optimized meta keywords and descriptions, all without manual copywriting. This saves significant time and labor while keeping your SEO metadata fresh.

Uses AI to align with the latest search trends. The workflow integrates with live search trend data (via SerpAPI or other tools) so that generated meta keywords and descriptions incorporate current popular search queries, trending phrases, and relevant semantic keywords. This helps your product pages rank better by tapping into what buyers are actively searching for right now.

Improves search engine rankings and click-through rates (CTR). Well-optimized meta descriptions that include power words, clear calls to action, and keyword clustering help improve Google's understanding of your pages and entice more clicks from search results. This drives more organic traffic and increases the chance visitors convert into customers.

Maintains brand authority and trustworthiness. The AI prompt is designed to follow Google's E-A-T guidelines (Expertise, Authoritativeness, Trustworthiness), ensuring your product metadata builds credibility. Consistent, accurate meta descriptions contribute to a professional brand image and help improve SEO reputation.

Keeps Magento store metadata consistent and up to date. Product content changes frequently; new features, specifications, or uses arise. This workflow allows easy regeneration of SEO metadata after product updates or on demand, so your Magento store's search snippets never become outdated or irrelevant.

Integrates simply with the Magento 2 API and your existing workflow. Since the automation fetches and updates product meta fields via Magento's REST API, it works seamlessly with your current Magento setup without heavy manual work or additional plugins. This smooth integration minimizes technical complexity.

Why This Matters in the Current Market
- SEO remains a top driver of online sales; with evolving algorithms and user search behaviors, staying current and relevant in metadata is essential for visibility.
- Voice search, mobile search, and semantic search make keyword strategy more complex; AI-powered semantic keyword clustering simplifies this.
- E-commerce competition is fierce; automated, data-driven SEO optimizations give you an edge over stores relying on static, outdated meta descriptions.
- Content automation is becoming a necessity; businesses using AI for SEO benefit from faster iteration and more consistent messaging, helping them capture shifting consumer trends.

Summary
This AI + automation + workflow approach enables Magento 2 store owners to continuously and efficiently optimize their product metadata, leveraging live market data and SEO best practices to boost organic rankings, attract more visitors, and increase conversions, all while reducing manual workload.
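For reference, here is a minimal sketch of the two Magento 2 REST calls the workflow performs: fetching the product by SKU and writing the regenerated metadata back. The meta_description and meta_keyword attribute codes are standard Magento custom attributes; the base URL and token env vars are placeholders.

```typescript
// Minimal sketch of the Magento 2 REST integration: GET product by SKU,
// then PUT the AI-generated metadata back as custom attributes.
// Base URL and token env vars are hypothetical placeholders.
const MAGENTO = process.env.MAGENTO_BASE_URL ?? "https://store.example.com";
const AUTH = { Authorization: `Bearer ${process.env.MAGENTO_TOKEN ?? ""}` };

// Fetch the product (including current description and meta fields) by SKU.
async function getProduct(sku: string) {
  return (await fetch(`${MAGENTO}/rest/V1/products/${encodeURIComponent(sku)}`, { headers: AUTH })).json();
}

// Push the AI-generated metadata back onto the product.
async function updateMeta(sku: string, metaDescription: string, metaKeywords: string) {
  const body = {
    product: {
      sku,
      custom_attributes: [
        { attribute_code: "meta_description", value: metaDescription },
        { attribute_code: "meta_keyword", value: metaKeywords },
      ],
    },
  };
  return (await fetch(`${MAGENTO}/rest/V1/products/${encodeURIComponent(sku)}`, {
    method: "PUT",
    headers: { ...AUTH, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  })).json();
}
```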

By Kanaka Kishore Kandregula
72

Upload large files to Dropbox with chunking & web UI progress tracking

Dropbox Large File Upload System

How It Works
This workflow enables uploading large files (300MB+) to Dropbox through a web interface with real-time progress tracking. It bypasses Dropbox's 150MB single-request limit by breaking files into 8MB chunks and uploading them sequentially using Dropbox's upload session API.

Upload Flow:
1. User accesses page – visits /webhook/upload-page and sees an HTML form with a file picker and folder path input
2. Selects file – chooses a file and clicks the "Upload to Dropbox" button
3. JavaScript initiates session – calls /webhook/start-session → Dropbox creates an upload session → returns a sessionId
4. Chunk upload loop – JavaScript splits the file into 8MB chunks and, for each chunk: calls /webhook/append-chunk with the sessionId, offset, and chunk binary data; Dropbox appends the chunk to the session; the progress bar updates (e.g., 25%, 50%, 75%)
5. Finalize upload – after all chunks are uploaded, calls /webhook/finish-session with the final offset and target path
6. File committed – Dropbox commits all chunks into the complete file at the specified path (e.g., /Uploads/video.mp4)

Why chunking? The Dropbox API has a 150MB limit for single upload requests. The upload session API (upload_session/start, append_v2, finish) allows unlimited file sizes by chunking. (A sketch of this session protocol follows this description.)

Technical Architecture:
- Four webhook endpoints handle different stages (serve UI, start, append, finish)
- All chunk data is sent as multipart/form-data with binary blobs
- The Dropbox API requires cursor metadata (session_id, offset) in the Dropbox-API-Arg header
- autorename: true prevents file overwrites

Setup Steps
Time estimate: ~20-25 minutes (first time)

1. Create a Dropbox app – go to the Dropbox App Console: click "Create app", choose the "Scoped access" API, select the "Full Dropbox" access type, name your app (e.g., "n8n File Uploader"), enable files.content.write under the Permissions tab, and copy the App Key and App Secret
2. Configure n8n OAuth2 credentials – in n8n: create a new "Dropbox OAuth2 API" credential, paste the App Key and App Secret, set the OAuth Redirect URL to your n8n instance (e.g., https://your-n8n.com/rest/oauth2-credential/callback), and complete the OAuth flow to get an access token
3. Connect credentials to HTTP nodes – add your Dropbox OAuth2 credential to these three nodes: "Dropbox Start Session", "Dropbox Append Chunk", "Dropbox Finish Session"
4. Activate the workflow – click the "Active" toggle to generate production webhook URLs
5. Customize the default folder (optional) – in the "Respond with HTML" node, find the line <input type="text" id="dropboxFolder" value="/Uploads/" ... and change /Uploads/ to your preferred default path
6. Get the upload page URL – copy the production webhook URL from the "Serve Upload Page" node (e.g., https://your-n8n.com/webhook/upload-page)
7. Test an upload – visit the URL, select a small file first (~50MB), choose a folder path, and click Upload

Important Notes

File Size Limits:
- Standard Dropbox API: 150MB max per request
- This workflow: unlimited (tested with 300MB+ files)
- Chunk size: 8MB (configurable via the CHUNK_SIZE variable in the HTML JavaScript)

Upload Behavior:
- Files with the same name are auto-renamed (e.g., video.mp4 → video (1).mp4) due to autorename: true
- The upload is synchronous – the browser must stay open until it completes
- If an upload fails mid-process, partial chunks remain in the Dropbox session (they expire after 24 hours)

Security Considerations:
- Webhook URLs are public – anyone with the URL can upload to your Dropbox
- Add authentication if needed (HTTP Basic Auth on the webhook nodes)
- Consider rate limiting for production use

Dropbox API Quotas:
- Free accounts: 2GB storage, 150GB bandwidth/day
- Plus accounts: 2TB storage, unlimited bandwidth
- Upload sessions expire after 4 hours of inactivity

Progress Tracking:
- A real-time progress bar shows the percentage (0-100%)
- Status messages: "Starting upload...", "✓ Upload complete!", "✗ Upload failed: [error]"
- The final response includes the file path, size, and Dropbox file ID

Troubleshooting:
- If chunks fail: check that the Dropbox OAuth token hasn't expired (refresh if needed)
- If the session is not found: ensure the sessionId is passed correctly between steps
- If finish fails: verify the target path exists and the app has write permissions
- If the page doesn't load: activate the workflow first to generate the webhook URLs

Performance:
- 8MB chunks = ~37 requests for a 300MB file
- Upload speed depends on your internet connection and Dropbox API rate limits
- Typical: 2-5 minutes for a 300MB file on a good connection

Pro tip: test with a small file (10-20MB) first to verify credentials and flow, then try larger files. Monitor the n8n execution list to see each webhook call and troubleshoot any failures. For production, consider adding error handling and retry logic in the JavaScript.
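To make the session protocol concrete, here is a minimal sketch of the start → append_v2 → finish sequence that the three webhook-backed HTTP nodes drive. The endpoints and Dropbox-API-Arg header follow Dropbox's documented upload session API; the access-token env var is a placeholder for the workflow's OAuth2 credential.

```typescript
// Sketch of the Dropbox upload-session protocol: start a session, append
// 8MB chunks with a cursor (session_id + offset), then finish with a commit.
const DBX = "https://content.dropboxapi.com/2/files";
const CHUNK_SIZE = 8 * 1024 * 1024; // 8MB, matching the workflow's default

async function dbxCall(endpoint: string, apiArg: object, body: Blob | null) {
  return (await fetch(`${DBX}/${endpoint}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DROPBOX_TOKEN ?? ""}`, // placeholder credential
      "Dropbox-API-Arg": JSON.stringify(apiArg), // cursor/commit metadata lives here
      "Content-Type": "application/octet-stream",
    },
    body,
  })).json();
}

async function uploadLargeFile(file: Blob, path: string) {
  // Start a session; Dropbox returns the session_id used by every later call.
  const { session_id } = await dbxCall("upload_session/start", { close: false }, null);

  // Append chunks sequentially, tracking the byte offset in the cursor.
  let offset = 0;
  while (offset + CHUNK_SIZE < file.size) {
    const chunk = file.slice(offset, offset + CHUNK_SIZE);
    await dbxCall("upload_session/append_v2", { cursor: { session_id, offset }, close: false }, chunk);
    offset += chunk.size;
  }

  // Finish with the last chunk and commit to the target path;
  // autorename: true matches the workflow's no-overwrite behavior.
  return dbxCall("upload_session/finish", {
    cursor: { session_id, offset },
    commit: { path, mode: "add", autorename: true },
  }, file.slice(offset));
}
```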

By Anthony
30