Anthony
I write code to save you headaches. I post workflows that I had a hard time with or found were really needed. If you pay for one of my workflows, congratulations... it probably took me 10x that amount in my personal time to figure the darn thing out.
Templates by Anthony
Upload files to Dropbox and generate direct download links
How It Works

This sub-workflow uploads files to Dropbox and returns a direct download link:

1. Upload file - Receives a file from the parent workflow and uploads it to Dropbox
2. Check for existing link - Queries the Dropbox API to see if a shared link already exists for this file
3. Create or reuse link - If no link exists, creates a new public shared link; otherwise reuses the existing one
4. Convert to direct link - Transforms Dropbox's standard sharing URL (dropbox.com) into a direct download URL (dl.dropboxusercontent.com)
5. Return URL - Outputs the final direct download link for use in other workflows

Important: File names must be unique, or you'll get links to old files with the same name.

Setup Steps

Time estimate: ~25-30 minutes (first time)

1. Create Dropbox app - Register at https://www.dropbox.com/developers/apps and get the App Key + App Secret. Grant "Files and folders" + "Collaboration" permissions
2. Configure OAuth2 credentials - Add Dropbox OAuth2 credentials in n8n (2 places: the "Upload a file" and "List Shared Links" nodes). Set the redirect URI to your n8n instance
3. Create data table - Make a table called "cred-Dropbox" with columns: id (value: 1) and token (your access token)
4. Set up token refresh - Deploy the companion "Dropbox Token Refresher" workflow (referenced but not included, as it's a paid workflow) to auto-refresh tokens
5. Customize upload path - Update the path in the "Upload a file" node (currently /Automate/N8N/host/)
6. Test with form - Use the included test workflow to verify everything works

Pro tip: Generate your first access token manually in the Dropbox app console to test uploads before setting up auto-refresh.
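For reference, step 4 (Convert to direct link) is a simple string rewrite that can live in a small Code node. A minimal sketch, assuming the previous node exposes the shared link in a `url` field and that the link follows the common `?dl=0` format - adjust the field name and pattern to your workflow:

```javascript
// Minimal sketch of the "Convert to direct link" step (n8n Code node, JavaScript).
// Assumes the previous node returns the shared link in `json.url`, e.g.
// https://www.dropbox.com/s/abc123/report.pdf?dl=0 -- field name is an assumption.
const sharedUrl = $input.first().json.url;

// Swap the host so the file is served directly instead of via the preview page,
// and drop the ?dl=0 query that forces the preview.
const directUrl = sharedUrl
  .replace('www.dropbox.com', 'dl.dropboxusercontent.com')
  .replace('?dl=0', '');

return [{ json: { directUrl } }];
```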
Publishing videos across multiple platforms with Blotato (Instagram, YouTube)
Description

This workflow automates video distribution to 9 social platforms simultaneously using Blotato's API. It includes both a scheduled publisher (checks Google Sheets for videos marked "Ready") and a subworkflow (can be called from other workflows). Perfect for creators and marketers who want to eliminate manual posting across Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, Twitter, Bluesky, and Pinterest.

---

How It Works

Scheduled Publisher Workflow

1. Schedule Trigger – Runs daily at 10 PM (configurable).
2. Fetch Video – Pulls the video URL and description from Google Sheets where "ReadyToPost" = "Ready".
3. Upload to Blotato – Sends the video to Blotato's media service.
4. Broadcast to 9 Platforms – Publishes simultaneously to all connected social accounts.
5. Update Sheet – Changes "ReadyToPost" to "Finished" so it won't repost.

---

Subworkflow: Video Publisher (Reusable)

1. Receive Input – Gets the URL, title, and description from the parent workflow.
2. Fetch Credentials – Pulls the Blotato API key from an n8n Data Table.
3. Upload & Distribute – Uploads to Blotato, then posts to all platforms.
4. Completion Signal – Returns to the parent workflow when done.

> 💡 Tip: The subworkflow can be called from ANY workflow – great for posting videos generated by AI workflows, webhook triggers, or manual forms.

---

Test Workflow (Optional)

1. Form Submission – Upload a video file with a title and description.
2. Upload to Dropbox – Generates a shareable URL via the "[SUB] Dropbox Upload Link" subworkflow.
3. Trigger Publisher – Calls the subworkflow above to distribute the video.

---

Setup Instructions

Estimated Setup Time: 20-25 minutes

Step 1: Blotato Account Setup

1. Create an account at the Blotato Dashboard
2. Connect all your social media accounts (the most time-consuming step)
3. Go to Settings and copy your account IDs for each platform
4. Go to API Settings and copy your API key

Step 2: Configure Workflow

Update Social IDs:
- Open the "Assign Social Media IDs" node
- Replace the placeholder IDs with your actual Blotato account IDs:

```json
{
  "instagram_id": "YOUR_ID",
  "youtube_id": "YOUR_ID",
  "tiktok_id": "YOUR_ID",
  ...
}
```

Create Data Table:
- Create an n8n Data Table named "Credentials"
- Add columns: service and token
- Add a row: service = blotato, token = YOUR_API_KEY

Set Up Google Sheet:
- Create a sheet with columns: URL VIDEO, ReadyToPost, Description, Titre (Title)
- Add your video data
- Set ReadyToPost to "Ready" for videos you want to post

Connect Your Sheet:
- Update the "Get my video" node with your Google Sheet ID

> ⚙️ Pro Tip: If you don't need the scheduled version, just use the subworkflow and call it from other workflows.

---

Use Cases

- AI Video Workflows: Automatically post videos generated by Veo, Sora, or other AI models to all platforms.
- Content Schedulers: Queue videos in Google Sheets and let the scheduler post them automatically.
- Batch Publishing: Generate 10 videos, mark them all "Ready", and let the workflow distribute them.
- Marketing Campaigns: Coordinate multi-platform launches with a single click.
- Agencies: Manage multiple client accounts by swapping Blotato credentials in the Data Table.

---

Customization Options

- Remove Unused Platforms: Disconnect any social media nodes you don't use (speeds up execution).
- Change Schedule: Modify the Schedule Trigger to run multiple times per day or on specific days.
- Different File Hosts: Replace Dropbox with Google Drive, S3, or Cloudinary in the test workflow.
- Platform-Specific Captions: Add IF nodes before each platform to customize descriptions or add hashtags (see the caption sketch after the notes below).
- Add Approval Step: Insert a WhatsApp or Telegram notification before posting for manual review.
- Watermarks: Add a Code node to overlay branding before uploading to Blotato.

---

Important Notes

⚠️ Two Workflows in One File:
- Lines 1-600: Scheduled publisher (checks Google Sheets)
- Lines 600+: Subworkflow (called by other workflows)

⚠️ Data Table vs. Hardcoding:
- Scheduled workflow: Hardcoded API keys in HTTP nodes
- Subworkflow: Uses a Data Table for API keys (the recommended approach)

⚠️ Why Use the Subworkflow?
- Can be called from ANY workflow
- Easier to manage API keys (one place to update)
- More flexible for complex automation systems
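For the platform-specific captions mentioned under Customization Options, a single Code node placed before the platform nodes can do the same job as several IF nodes. A minimal sketch, assuming the sheet's Description column is available on the incoming item and that the rough character limits below are acceptable - field names and limits are assumptions to adapt:

```javascript
// Hypothetical Code node: build per-platform captions before the Blotato posting nodes.
// The `Description` field and the character limits are assumptions -- match them to
// your own sheet columns and check current platform limits before relying on them.
const base = $input.first().json.Description ?? '';

const captions = {
  instagram: `${base}\n\n#reels #automation`,   // append hashtags for Instagram
  youtube: base,                                 // long-form descriptions are fine here
  tiktok: base.slice(0, 2000),                   // keep well under TikTok's caption limit
  twitter: base.slice(0, 280),                   // stay within the tweet length limit
};

return [{ json: { captions } }];
```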
Convert websites to audio summaries via WhatsApp using GPT and TTS
How It Works

This workflow transforms any webpage into an AI-narrated audio summary delivered via WhatsApp:

1. Receive URL - The WhatsApp Trigger captures incoming messages and passes them to URL extraction
2. Extract & validate - A Code node extracts URLs using regex and validates the format; an IF node checks for errors (see the sketch at the end of this section)
3. User feedback - Sends either an error message ("Please send valid URL") or a processing status ("Fetching and analyzing... 10-30 seconds")
4. Fetch webpage - A sub-workflow calls Jina AI Reader (https://r.jina.ai/) to fetch JavaScript-rendered content, bypassing bot blocks
5. Summarize content - GPT-4o-mini processes the webpage text in 6000-character chunks, extracts the title, and generates a concise summary
6. Generate audio - OpenAI TTS-1 converts the summary text to natural-sounding audio (Opus format for WhatsApp compatibility)
7. Deliver result - The WhatsApp node sends the audio message back to the user with the summary

Why Jina AI? Many modern websites (like digibyte.io) require JavaScript to load content. Standard HTTP requests only fetch the initial HTML skeleton ("JavaScript must be enabled"). Jina AI executes JavaScript and returns clean, readable text.

Setup Steps

Time estimate: ~20-25 minutes

WhatsApp Business API Setup (10-15 minutes)

1. Create Meta Developer App - Go to https://developers.facebook.com/, create a Business app, and add the WhatsApp product
2. Get Phone Number ID - Use Meta's test number or register your own business phone
3. Generate System User Token - Create it at https://business.facebook.com/latest/settings/system_users (permanent token, no 4-hour expiry)
4. Configure Webhook - Point it to your n8n instance URL and subscribe to "messages" events
5. Verify business - Meta requires 3-5 verification steps (business, app, phone, system user)

Configure n8n Credentials (5 minutes)

1. OpenAI - Add your API key under Credentials → OpenAI (used in 2 places: "Convert Summary to Audio" and the "OpenAI Chat Model" in the sub-workflow)
2. WhatsApp OAuth - Add it in the WhatsApp Trigger node using the System User token from step 3 above
3. WhatsApp API - Add it in all WhatsApp action nodes (Send Error, Send Processing, Send Audio) using the same token

Link Sub-Workflow (3 minutes)

1. Ensure the "[SUB] Get Webpage Summary" workflow is activated
2. In the "Get Webpage Summary" node, select the sub-workflow from the dropdown
3. Verify the workflow ID matches: QglZjvjdZ16BisPN

Update Phone Number IDs (2 minutes)

1. Copy your Phone Number ID from the Meta console
2. Update it in all WhatsApp nodes: Send Error Message, Send Processing Message, Send Audio Summary

Test the Flow (2 minutes)

1. Activate both workflows (sub-workflow first, then main)
2. Send a test message to WhatsApp: https://example.com
3. Verify: the processing message arrives → the audio summary is delivered within 30 seconds

Important Notes

WhatsApp Caveats:
- 24-hour window - You can't auto-message users more than 24 hours after their last message (send "Hi" each morning to reset)
- Verification fatigue - Meta requires multiple business verifications; budget 30-60 minutes the first time
- Test vs Production - Test numbers work for single users; production requires business verification

Audio Format:
- Uses the Opus codec (optimal for WhatsApp, smaller file size than MP3)
- Speed is set to 1.0 (normal pace) - adjust in the "Convert Summary to Audio" node if needed
- Cost: ~$0.015 per minute of audio generated

Jina AI Integration:
- Free tier works for basic use (no API key required)
- Handles JavaScript-heavy sites automatically
- Add an Authorization: Bearer YOUR_KEY header for higher limits
- Alternative: Replace with Playwright/Puppeteer for self-hosted rendering

Cost Breakdown (per summary):
- GPT-4o-mini summarization: ~$0.005-0.015
- OpenAI TTS audio: ~$0.005-0.015
- WhatsApp messages: Free (up to 1,000/month)
- Total: ~$0.01-0.03 per summary

Troubleshooting:
- "Cannot read properties of undefined" → A status update, not a message (the Code node correctly returns null)
- "JavaScript must be enabled" → The website needs Jina AI (already implemented in the "Fetch site texts" node)
- Audio not sending → Check that the binary data field is named data in the TTS node
- No webhook received → Verify your n8n URL is publicly accessible and the webhook subscription includes "messages"

Pro Tips:
- Change the voice in the TTS node: alloy (neutral), echo (male), nova (female), shimmer (soft)
- Adjust the summary length: Modify chunkSize: 6000 in the sub-workflow's Text Splitter node (lower = faster but less detailed)
- Add rate limiting: Insert a Code node after the trigger to track user requests per hour
- Store summaries: Add a database node after "Clean up" to archive summaries for later retrieval

Use Cases:
- Executive commuting - Consume industry news hands-free
- Research students - Cover 3x more sources while multitasking
- Visually impaired users - Access any webpage via natural audio
- Sales teams - Research prospects on the go
- Content creators - Monitor competitors while exercising
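The "Extract & validate" step referenced above boils down to a short regex check. A minimal sketch of that Code node, assuming the WhatsApp Trigger delivers the message text at messages[0].text.body (adjust the path to your trigger's actual payload; here status updates are simply dropped rather than passed through):

```javascript
// Minimal sketch of the "Extract & validate" Code node (n8n, JavaScript).
// The input path below is an assumption -- inspect your WhatsApp Trigger output.
const payload = $input.first().json;
const text = payload.messages?.[0]?.text?.body ?? '';

// Delivery/read status updates have no text body; drop them here.
if (!text) {
  return [];
}

// Grab the first http(s) URL in the message, if any.
const match = text.match(/https?:\/\/[^\s]+/);

return [{
  json: {
    url: match ? match[0] : null,
    error: match ? null : 'No valid URL found in message',
  },
}];
```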
Upload large files to Dropbox with chunking & web UI progress tracking
Dropbox Large File Upload System

How It Works

This workflow enables uploading large files (300MB+) to Dropbox through a web interface with real-time progress tracking. It bypasses Dropbox's 150MB single-request limit by breaking files into 8MB chunks and uploading them sequentially using Dropbox's upload session API.

Upload Flow:

1. User accesses the page - Visits /webhook/upload-page and sees an HTML form with a file picker and a folder path input
2. Selects a file - Chooses a file and clicks "Upload to Dropbox"
3. JavaScript initiates the session - Calls /webhook/start-session → Dropbox creates an upload session → returns a sessionId
4. Chunk upload loop - JavaScript splits the file into 8MB chunks and, for each chunk:
   - Calls /webhook/append-chunk with the sessionId, offset, and chunk binary data
   - Dropbox appends the chunk to the session
   - The progress bar updates (e.g., 25%, 50%, 75%)
5. Finalize upload - After all chunks are uploaded, calls /webhook/finish-session with the final offset and target path
6. File committed - Dropbox commits all chunks into the complete file at the specified path (e.g., /Uploads/video.mp4)

(See the browser-side sketch at the end of this section for the chunk loop.)

Why chunking? The Dropbox API has a 150MB limit for single upload requests. The upload session API (upload_session/start, append_v2, finish) allows unlimited file sizes by chunking.

Technical Architecture:
- Four webhook endpoints handle the different stages (serve UI, start, append, finish)
- All chunk data is sent as multipart/form-data with binary blobs
- The Dropbox API requires cursor metadata (session_id, offset) in the Dropbox-API-Arg header
- autorename: true prevents file overwrites

Setup Steps

Time estimate: ~20-25 minutes (first time)

1. Create Dropbox app - Go to the Dropbox App Console:
   - Click "Create app"
   - Choose the "Scoped access" API
   - Select the "Full Dropbox" access type
   - Name your app (e.g., "n8n File Uploader")
   - Under the Permissions tab, enable files.content.write
   - Copy the App Key and App Secret
2. Configure n8n OAuth2 credentials - In n8n:
   - Create a new "Dropbox OAuth2 API" credential
   - Paste the App Key and App Secret
   - Set the OAuth Redirect URL to your n8n instance (e.g., https://your-n8n.com/rest/oauth2-credential/callback)
   - Complete the OAuth flow to get an access token
3. Connect credentials to HTTP nodes - Add your Dropbox OAuth2 credential to these three nodes:
   - "Dropbox Start Session"
   - "Dropbox Append Chunk"
   - "Dropbox Finish Session"
4. Activate workflow - Click the "Active" toggle to generate the production webhook URLs
5. Customize default folder (optional) - In the "Respond with HTML" node:
   - Find the line: <input type="text" id="dropboxFolder" value="/Uploads/" ...
   - Change /Uploads/ to your preferred default path
6. Get upload page URL - Copy the production webhook URL from the "Serve Upload Page" node (e.g., https://your-n8n.com/webhook/upload-page)
7. Test upload - Visit the URL, select a small file first (~50MB), choose a folder path, and click Upload

Important Notes

File Size Limits:
- Standard Dropbox API: 150MB max per request
- This workflow: Unlimited (tested with 300MB+ files)
- Chunk size: 8MB (configurable via the CHUNK_SIZE variable in the HTML's JavaScript)

Upload Behavior:
- Files with the same name are auto-renamed (e.g., video.mp4 → video (1).mp4) due to autorename: true
- Upload is synchronous - the browser must stay open until it completes
- If the upload fails mid-process, partial chunks remain in the Dropbox session (they expire after 24 hours)

Security Considerations:
- Webhook URLs are public - anyone with the URL can upload to your Dropbox
- Add authentication if needed (HTTP Basic Auth on the webhook nodes)
- Consider rate limiting for production use

Dropbox API Quotas:
- Free accounts: 2GB storage, 150GB bandwidth/day
- Plus accounts: 2TB storage, unlimited bandwidth
- Upload sessions expire after 4 hours of inactivity

Progress Tracking:
- A real-time progress bar shows the percentage (0-100%)
- Status messages: "Starting upload...", "✓ Upload complete!", "✗ Upload failed: [error]"
- The final response includes the file path, size, and Dropbox file ID

Troubleshooting:
- If chunks fail: Check that the Dropbox OAuth token hasn't expired (refresh if needed)
- If the session is not found: Ensure the sessionId is passed correctly between steps
- If finish fails: Verify the target path exists and the app has write permissions
- If the page doesn't load: Activate the workflow first to generate the webhook URLs

Performance:
- 8MB chunks = ~37 requests for a 300MB file
- Upload speed depends on your internet connection and Dropbox API rate limits
- Typical: 2-5 minutes for a 300MB file on a good connection

Pro tip: Test with a small file (10-20MB) first to verify credentials and flow, then try larger files. Monitor the n8n execution list to see each webhook call and troubleshoot any failures. For production, consider adding error handling and retry logic in the JavaScript.
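For reference, the browser-side chunk loop served by the "Respond with HTML" node follows the pattern below. A simplified sketch against the webhook paths described above - the request field names (sessionId, offset, chunk, path) are assumptions, so check the actual HTML in the node before reusing:

```javascript
// Simplified sketch of the browser-side upload loop (plain JavaScript).
// Webhook paths match the flow described above; form field and response
// field names are assumptions -- verify against the "Respond with HTML" node.
const CHUNK_SIZE = 8 * 1024 * 1024; // 8 MB chunks

async function uploadToDropbox(file, folder) {
  // 1. Open a Dropbox upload session via the n8n webhook.
  const start = await fetch('/webhook/start-session', { method: 'POST' });
  const { sessionId } = await start.json();

  // 2. Append the file chunk by chunk, tracking the byte offset.
  let offset = 0;
  while (offset < file.size) {
    const chunk = file.slice(offset, offset + CHUNK_SIZE);
    const form = new FormData();
    form.append('sessionId', sessionId);
    form.append('offset', offset);
    form.append('chunk', chunk);
    await fetch('/webhook/append-chunk', { method: 'POST', body: form });

    offset += chunk.size;
    console.log(`Progress: ${Math.round((offset / file.size) * 100)}%`);
  }

  // 3. Commit the session into a single file at the target path.
  await fetch('/webhook/finish-session', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId, offset, path: `${folder}${file.name}` }),
  });
}
```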