Generate cinematic videos from text prompts with GPT-4o, Fal.AI Seedance & Audio
**Who's it for?**

This workflow is built for:

- AI storytellers, content creators, YouTubers, and short-form video marketers
- Anyone looking to transform text prompts into cinematic AI-generated videos, fully automatically
- Educators, trainers, or agencies creating story-based visual content at scale

---

**What it does**

This n8n workflow automatically turns a simple text prompt into a multi-scene cinematic video using the powerful Fal.AI Seedance V1.0 model (developed by ByteDance, the creators of TikTok). It combines the creativity of GPT-4o, the motion synthesis of Seedance, and the automation power of n8n to generate AI videos with ambient sound in a publish-ready format.

---

**How it works**

1. Accepts a prompt from Google Sheets (configurable fields such as duration, aspect ratio, resolution, and scene count)
2. Uses OpenAI GPT-4o to write a vivid cinematic narrative
3. Splits the story into *n* separate scenes
4. For each scene:
   - GPT generates a structured cinematic description (characters, camera, movement, sound)
   - The Seedance V1.0 model (via the Fal.AI API) renders a 5s animated video
   - Optional: adds ambient audio via Fal's MM-Audio model
5. Finally:
   - Merges all scene videos using Fal's FFmpeg API
   - Optionally uploads the result to YouTube automatically

---

**Why this is special**

- Fal.AI Seedance V1.0 is a highly advanced motion video model developed by ByteDance, capable of generating expressive, stylized 5–6 second cinematic clips from text.
- This workflow supports full looping, scene count validation, and wait-polling for long render jobs.
- The entire story, breakdown, and scene design are AI-generated: no manual effort needed.
- Output is export-ready: MP4 with sound, ideal for YouTube Shorts, Reels, or TikTok.
---

**Requirements**

- n8n (self-hosted recommended)
- API keys:
  - Fal.AI (https://fal.ai)
  - OpenAI (GPT-4o or GPT-3.5)
  - Google Sheets
- Example Google Sheet

---

**How to set it up**

1. Clone the template into your n8n instance.
2. Configure credentials:
   - Fal.AI header token
   - OpenAI API key
   - Google Sheets OAuth2
   - (Optional) YouTube API OAuth
3. Prepare a Google Sheet with these columns: `story` (short prompt), `numberofscene`, `duration` (per clip), `aspect_ratio`, `resolution`, `model`.
4. Run manually or trigger on sheet updates.

---

**How to customize**

- Modify the storytelling tone in the GPT prompts (e.g., switch to fantasy, horror, or sci-fi).
- Change Seedance model parameters such as style or seed.
- Add subtitles or branding overlays to the final video.
- Integrate LINE, Notion, or Telegram for auto-sharing.

---

**Example output**

Prompt: "A rabbit flies to the moon on a dragonfly and eats watermelon together"

Result: 3 scenes, each 5s, with cinematic camera pans and soft ambient audio, auto-uploaded to YouTube.
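As a rough sketch of the per-scene render step, the request sent to Fal.AI might look like the following. The endpoint path is a placeholder and the parameter names (`prompt`, `duration`, `aspect_ratio`, `resolution`) are assumptions for illustration; check the Fal.AI Seedance model page for the exact schema.

```python
import json
from urllib import request

# Placeholder: substitute the real Seedance queue endpoint from the Fal.AI docs.
FAL_ENDPOINT = "https://queue.fal.run/<seedance-model-path>"


def build_scene_payload(scene_prompt, duration=5, aspect_ratio="9:16", resolution="720p"):
    """Assemble the JSON body for one scene render (field names assumed)."""
    return {
        "prompt": scene_prompt,
        "duration": duration,
        "aspect_ratio": aspect_ratio,
        "resolution": resolution,
    }


def submit_scene(payload, fal_key):
    """POST one scene to the Fal queue API and return the queued-job response."""
    req = request.Request(
        FAL_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Key {fal_key}", "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

In the workflow itself, this corresponds to the HTTP Request node that submits each scene, followed by wait-polling nodes that check the render job until it completes.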
Create or update a post in WordPress
No description available.
Auto tweet: spreadsheet-to-tweet automation
🎉 Do you want to master AI automation so you can save time and build cool stuff? I've created a welcoming Skool community for non-technical yet resourceful learners. 👉🏻 Join the AI Atelier 👈🏻

---

This is probably the most straightforward way to schedule your tweets. This n8n workflow addresses a simple problem: how to publish consistently on X.

📋 Blog post
📺 YouTube video

The workflow is direct and to the point. In a Google spreadsheet, write all your tweets, one tweet per line. Then, at a defined pace (for example, every 6 hours), the workflow publishes the first tweet from the list to your X account and removes it from the sheet.

This method is really simple but very efficient if you are looking for a direct and super fast way to share regularly on X!

Created by the n8ninja.
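The sheet-handling logic described above is essentially a queue pop: take the first row, publish it, and delete it. A minimal sketch of that step (the X posting itself happens in a separate node):

```python
def pop_next_tweet(rows):
    """Take the first queued tweet and return (tweet, remaining_rows).

    Mirrors the workflow: publish row 1 of the spreadsheet, then
    remove it so the next run picks up the following tweet.
    """
    if not rows:
        return None, []
    return rows[0], rows[1:]
```

Running this on every schedule tick drains the sheet one tweet at a time, which is why one tweet per line is all the setup the spreadsheet needs.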
Extract and structure Thai documents to Google Sheets using Typhoon OCR and Llama 3.1
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package and custom command execution. Make sure to install the required dependencies locally.

---

**Who is this for?**

This template is for developers, operations teams, and automation builders in Thailand (or any Thai-speaking environment) who regularly process PDFs or scanned documents in Thai and want to extract structured text into a Google Sheet. It is ideal for:

- Local government document processing
- Thai-language enterprise paperwork
- AI automation pipelines requiring Thai OCR

---

**What problem does this solve?**

Typhoon OCR is one of the most accurate OCR tools for Thai text. However, integrating it into an end-to-end workflow usually requires manual scripting and data wrangling. This template solves that by:

- Running Typhoon OCR on PDF files
- Using AI to extract structured data fields
- Automatically storing results in Google Sheets

---

**What this workflow does**

1. Trigger: run manually or from any automation source
2. Read Files: load local PDF files from a `doc/` folder
3. Execute Command: run Typhoon OCR on each file using a Python command
4. LLM Extraction: send the OCR markdown to an AI model (e.g., GPT-4 or OpenRouter) to extract fields
5. Code Node: parse the LLM output as JSON
6. Google Sheets: append structured data to a spreadsheet

---

**Setup**

1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: `pip install typhoon-ocr`
   - Poppler, installed and added to the system PATH (needed for `pdftoppm` and `pdfinfo`)
2. Create folders: create a folder called `doc` in the same directory where n8n runs (or mount it via Docker).
3. Google Sheet: create a Google Sheet with the following column headers (you can use the example Google Sheet as a reference):

| book_id | date | subject | detail | signed_by | signed_by2 | contact | download_url |
| ------- | ---- | ------- | ------ | --------- | ---------- | ------- | ------------ |
4. API keys: export `TYPHOON_OCR_API_KEY` and `OPENAI_API_KEY` in your environment (or set them inside the command string in the Execute Command node).

---

**How to customize this workflow**

- Replace the LLM provider in the Basic LLM Chain node (currently supports OpenRouter).
- Change the output fields to match your data structure (adjust the prompt and the Google Sheet headers).
- Add trigger nodes (e.g., Dropbox upload, Webhook) to automate input.

---

**About Typhoon OCR**

Typhoon is a multilingual LLM and toolkit optimized for Thai NLP. It includes typhoon-ocr, a Python OCR library designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines: perfect for government paperwork, PDF reports, and multilingual documents in Southeast Asia.

---
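LLM replies often arrive wrapped in markdown fences or surrounded by commentary, so the Code node needs defensive parsing before the fields can be appended to the sheet. A minimal sketch of that step (the field names follow the sheet headers above):

```python
import json
import re


def parse_llm_json(raw: str) -> dict:
    """Extract the JSON object from an LLM reply that may be wrapped
    in ```json fences or preceded by explanatory text."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in LLM output")
    return json.loads(match.group(0))
```

Raising on malformed output (rather than appending an empty row) makes failures visible in the n8n execution log, where they are easier to debug.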
Create a branded AI chatbot for websites with Flowise multi-agent chatflows
This workflow integrates Flowise multi-agent chatflows into a custom-branded n8n chatbot, enabling real-time interaction between users and AI agents powered by large language models (LLMs).

---

**Key advantages**

- ✅ Easy integration with Flowise: uses a low-code HTTP node to send user questions to Flowise's API (`/api/v1/prediction/FLOWISE_ID`) and receive intelligent responses. Supports multi-agent chatflows, allowing complex, dynamic interactions.
- 🎨 Customizable chatbot UI: includes pre-built JavaScript for embedding the n8n chatbot into any website, with customization options such as welcome messages, branding, placeholder text, chat modes (e.g., popup or embedded), and language support.
- 🔐 Secure & configurable: authorization via Bearer token headers for Flowise API access, with clearly marked notes in the workflow for setting variables like `FLOWISE_URL` and `FLOW_ID`.

---

**How it works**

1. Chat trigger: the workflow starts with the When chat message received node, which acts as a webhook for incoming chat messages from users.
2. HTTP request to Flowise: the received message is forwarded to the Flowise node, which sends a POST request to the Flowise API endpoint (`https://FLOWISE_URL/api/v1/prediction/FLOWISE_ID`). The request includes the user's input as a JSON payload (`{"question": "{{ $json.chatInput }}"}`) and uses HTTP header authentication (e.g., `Authorization: Bearer FLOWISE_API_KEY`).
3. Response handling: the response from Flowise is passed to the Edit Fields node, which maps the output (`$json.text`) for further processing or display.

**Set up steps**

1. Configure the Flowise integration: replace `FLOWISE_URL` and `FLOWISE_ID` in the HTTP Request node with your Flowise instance URL and flow ID. Ensure the Authorization header is set correctly in the credentials (e.g., `Bearer FLOWISE_API_KEY`).
2. Embed the n8n chatbot: use the provided JavaScript snippet in the sticky notes to embed the chatbot on your website.
3. Replace `YOUR_PRODUCTION_WEBHOOK_URL` with the webhook URL generated by the When chat message received node. Customize the chatbot's appearance and behavior (e.g., welcome messages, language, UI elements) using the `createChat` configuration options.
4. Optional branding: adjust the sticky note examples to include branding details such as custom messages, colors, or metadata for the chatbot.
5. Activate the workflow: toggle the workflow to "Active" in n8n and test the chat functionality end to end.

---

**Ideal use cases**

- Embedding branded AI assistants into websites
- Connecting Flowise-powered agents with customer support chatbots
- Creating dynamic, smart conversational flows with LLMs via n8n automation

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
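The HTTP Request node's call can be sketched outside n8n as follows. The `/api/v1/prediction/{id}` path, the `{"question": ...}` payload, and the Bearer header match the integration described above; the helper names themselves are illustrative.

```python
import json
from urllib import request


def build_flowise_request(base_url, flow_id, api_key, question):
    """Return the (url, headers, payload) the HTTP Request node would send."""
    url = f"{base_url}/api/v1/prediction/{flow_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"question": question}
    return url, headers, payload


def flowise_query(base_url, flow_id, api_key, question):
    """POST a question to Flowise and return the reply text ($json.text)."""
    url, headers, payload = build_flowise_request(base_url, flow_id, api_key, question)
    req = request.Request(url, data=json.dumps(payload).encode(), headers=headers)
    with request.urlopen(req) as resp:
        return json.load(resp)["text"]
```

Extracting `["text"]` from the response is exactly what the Edit Fields node does with `$json.text`.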
Index legal documents for hybrid search with Qdrant, OpenAI & BM25
**Index Legal Dataset to Qdrant for Hybrid Retrieval**

*This pipeline is the first part of "Hybrid Search with Qdrant & n8n, Legal AI". The second part, "Hybrid Search with Qdrant & n8n, Legal AI: Retrieval", covers retrieval and simple evaluation.*

**Overview**

This pipeline transforms a Q&A legal corpus from Hugging Face (isaacus) into vector representations and indexes them to Qdrant, providing the foundation for hybrid search, which combines:

- Dense vectors (embeddings) for semantic similarity search
- Sparse vectors for keyword-based exact search

After running this pipeline, you will have a Qdrant collection with your legal dataset ready for hybrid retrieval over BM25 and dense embeddings: either mxbai-embed-large-v1 or text-embedding-3-small.

**Options for embedding inference**

This pipeline gives you two approaches for generating dense vectors:

- Qdrant Cloud Inference, with conversion to vectors handled directly in Qdrant
- An external provider, e.g. OpenAI, for generating embeddings

**Prerequisites**

- A cluster on Qdrant Cloud:
  - a paid cluster in the US region if you want to use Qdrant Cloud Inference
  - a free-tier cluster if using an external provider (here, OpenAI)
- Qdrant cluster credentials: you'll be guided on how to obtain both the URL and API key from the Qdrant Cloud UI when setting up your cluster
- An OpenAI API key (if you're not using Qdrant Cloud Inference)

P.S. To ask retrieval or Qdrant-related questions, join the Qdrant Discord. Star the Qdrant n8n community node repo <3
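Assuming the collection is created over Qdrant's REST API with one named dense vector and one named BM25 sparse vector, the schema might look like the sketch below. The vector names (`dense`, `bm25`) and the 1536-dimension size (for text-embedding-3-small) are illustrative; they must match whatever your indexing nodes write.

```python
import json
from urllib import request


def hybrid_collection_config(dense_size=1536):
    """Collection schema with one dense and one sparse (BM25) vector space."""
    return {
        "vectors": {"dense": {"size": dense_size, "distance": "Cosine"}},
        # The IDF modifier tells Qdrant to weight sparse terms BM25-style.
        "sparse_vectors": {"bm25": {"modifier": "idf"}},
    }


def create_hybrid_collection(cluster_url, api_key, name, config):
    """PUT the collection config to a Qdrant Cloud cluster."""
    req = request.Request(
        f"{cluster_url}/collections/{name}",
        data=json.dumps(config).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="PUT",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

With this schema in place, each indexed point carries both a `dense` embedding and a `bm25` sparse vector, which is what the retrieval pipeline in part two queries against.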
Web security scanner for OWASP compliance with Markdown reports
**How the n8n OWASP Scanner works & how to set it up**

**How it works (simple flow)**

1. Input: enter a target URL + endpoint (e.g., https://example.com, /login)
2. Scan: the workflow executes 5 parallel HTTP tests (headers, cookies, CORS, HTTPS, methods)
3. Analyze: pure JS logic checks OWASP ASVS (Application Security Verification Standard) rules (no external tools)
4. Merge: combines all findings into one Markdown report
5. Output: auto-generates and downloads `scan-2025-11-16_210900.md` (example filename)
6. Email: (optional) forwards the report to an email address using Gmail

---

**Setup in 3 steps (2 minutes)**

1. Import the workflow: copy the full JSON (from "Export Final Workflow"), then in n8n go to Workflows → Import from JSON → paste → Import.
2. (Optional) Connect your Gmail credentials in the last node to auto-email the report.
3. Click Execute the workflow, enter a URL in the new window, then click Submit. Alternatively, download or receive the Markdown report directly from the Markdown to File node.

---

(Supports any HTTP/HTTPS endpoint. Works in n8n Cloud or self-hosted.)
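The headers portion of the analysis step amounts to checking the target's response headers against a list of ASVS-relevant requirements. The sketch below uses an illustrative subset of checks, not the workflow's exact rule set, and is written in Python for readability (the workflow itself uses JS Code nodes):

```python
# Illustrative subset of security headers the scanner might require.
REQUIRED_HEADERS = {
    "strict-transport-security": "enforce HTTPS (HSTS)",
    "x-content-type-options": "block MIME sniffing",
    "content-security-policy": "restrict script sources",
    "x-frame-options": "prevent clickjacking",
}


def audit_headers(response_headers):
    """Return one finding per missing security header (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [
        f"Missing {name}: {why}"
        for name, why in REQUIRED_HEADERS.items()
        if name not in present
    ]
```

The Merge step then concatenates findings like these from all five tests into the final Markdown report.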
Run bulk RAG queries from CSV with Lookio
This template processes a CSV of questions and returns an enriched CSV with RAG-based answers produced by your Lookio assistant. Upload a CSV that contains a column named `Query`, and the workflow will loop through every row, call the Lookio API, and append a `Response` column containing the assistant's answer. It's ideal for batch tasks like drafting RFP responses, pre-filling support replies, generating knowledge-checked summaries, or validating large lists of product/customer questions against your internal documentation.

**Who is this for?**

- Knowledge managers & technical writers: produce draft answers to large question sets using your company docs.
- Sales & proposal teams: auto-generate RFP answer drafts informed by internal docs.
- Support & operations teams: bulk-enrich FAQs or support ticket templates with authoritative responses.
- Automation builders: integrate Lookio-powered retrieval into bulk data pipelines.

**What it does / What problem does this solve?**

- Automates bulk queries: eliminates the manual process of running many individual lookups.
- Ensures answers are grounded: responses come from your uploaded documents via Lookio, reducing hallucinations.
- Produces ready-to-use output: delivers an enriched CSV with a new `Response` column for downstream use.
- Simple UX: users only need to upload a CSV with a `Query` column and download the resulting file.

**How it works**

1. Form submission: the user uploads a CSV via the Form Trigger.
2. Extract & validate: Extract all rows reads the CSV and Aggregate rows checks for a `Query` column.
3. Per-row loop: Split Out and Loop Over Queries iterate over the rows; Isolate the Query column normalizes the data.
4. Call Lookio: Lookio API call posts each query to your assistant and returns the answer.
5. Build output: Prepare output appends `Response` values, and Generate enriched CSV creates the downloadable file delivered by Form ending and file download.
**Why use Lookio for high-quality RAG?**

While building a native RAG pipeline in n8n offers granular control, achieving consistently high-quality and reliable results requires significant effort in data processing, chunking strategy, and retrieval logic optimization. Lookio addresses these challenges by providing a managed RAG service accessible via a simple API. It handles the entire backend pipeline, from processing various document formats to employing advanced retrieval techniques, letting you integrate a production-ready knowledge source into your workflows. This approach lets you focus on building your automation in n8n rather than managing the complexities of RAG infrastructure.

**How to set up**

1. Create a Lookio assistant: sign up at https://www.lookio.app/, upload documents, and create an assistant.
2. Get credentials: copy your Lookio API key and assistant ID.
3. Configure the workflow nodes: in the Lookio API call HTTP Request node, replace the `apikey` header value with your Lookio API key and update `assistantid` with your assistant ID (replace placeholders like `<your-lookio-api-key>` and `<your-assistant-id>`). Ensure the Form Trigger is enabled and accepts a .csv file.
4. CSV format: ensure the input CSV has a column named `Query` (case-sensitive as configured).
5. Activate the workflow: run a test upload and download the enriched CSV.

**Requirements**

- An n8n instance able to host forms and run workflows
- A Lookio account (API key) and an assistant ID

**How to take it further**

- Add rate limiting / retries: insert error-handling and delay nodes to respect API limits for large batches.
- Improve the speed: drastically reduce processing time by parallelizing the queries instead of running them one after the other in the loop; for example, use HTTP Request nodes that trigger a sub-workflow.
- Store results: add an Airtable or Google Sheets node to archive questions and responses for audit and reuse.
- Post-process answers: add an LLM node to summarize or standardize responses, or to add confidence flags.
- Trigger variations: replace the Form Trigger with a Google Drive or Airtable trigger to process CSVs automatically from a folder or table.
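The per-row loop boils down to: read the CSV, validate the `Query` column, call the assistant for each row, and write back a `Response` column. A standalone sketch, with the Lookio call stubbed out as a plain function since the real endpoint and auth live in the HTTP Request node:

```python
import csv
import io


def enrich_csv(csv_text, answer_fn):
    """Add a Response column by calling answer_fn on each row's Query.

    Raises ValueError when the required Query column is missing,
    mirroring the validation step in the workflow.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows or "Query" not in rows[0]:
        raise ValueError("input CSV must have a 'Query' column")
    for row in rows:
        row["Response"] = answer_fn(row["Query"])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Swapping `answer_fn` for the real API call is all that separates this sketch from the workflow's behavior, which is also what makes the parallelization suggestion above easy to reason about: each row is independent.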
Send chat message notifications from Tawk.to to Gmail
This automation workflow captures incoming chat messages from your Tawk.to live chat widget and sends alert emails via Gmail to notify your support team instantly. It is designed to help you respond promptly to visitors and improve your customer support experience.

---

**Prerequisites**

- Tawk.to account: an active Tawk.to account with a configured live chat widget on your website.
- Gmail account: a Gmail account with API access enabled and configured in n8n for sending emails.
- n8n instance: access to an n8n workflow automation instance where you will import and configure this workflow.

---

**Step-by-step setup instructions**

1. Configure the Tawk.to webhook:
   - Log in to your Tawk.to dashboard.
   - Navigate to Administration > Webhooks.
   - Click Add Webhook and enter the following:
     - URL: your n8n webhook URL from the Receive Tawk.to Request node (e.g., https://your-n8n-instance.com/webhook/a4bf95cd-a30a-4ae0-bd2a-6d96e6cca3b4)
     - Method: POST
     - Events: select the chat message event (e.g., Visitor Message or Chat Message Received)
   - Save the webhook configuration.
2. Configure Gmail credentials in n8n:
   - In your n8n instance, go to Credentials.
   - Add a new Gmail OAuth2 credential: follow Google's instructions to create a project, enable the Gmail API, and obtain a client ID and secret.
   - Authenticate and authorize n8n to send emails via your Gmail account.
3. Import and activate the workflow:
   - Import the provided workflow JSON into n8n.
   - Verify that the Receive Tawk.to Request webhook node path matches the webhook URL configured in Tawk.to.
   - Enter the email address the alerts should go to in the Send alert email node's `sendTo` parameter.
   - Activate the workflow.

---

**Workflow explanation**

- Receive Tawk.to Request: this webhook node listens for POST requests from Tawk.to containing chat message data.
- Format the message: extracts relevant data from the incoming payload, such as chat ID, visitor name, country, and message text, and assigns them to new fields for easy use downstream.
- Send alert email: uses the Gmail node to send a notification email to your support team with all relevant chat details in a clear, concise text email.

---

**Customization guidance**

- Email recipient: update the `sendTo` field in the Send alert email node to your support team's email address.
- Email content: modify the message template in the Send alert email node's `message` parameter to suit your tone or include additional details like timestamps or chat URLs.
- Additional processing: extend the workflow by adding nodes for logging chats, triggering Slack notifications, or storing messages in a database.

---

By following these instructions, your support team will receive immediate email alerts whenever a new chat message arrives on your website, improving response times and customer satisfaction.
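The Format the message step boils down to flattening the webhook payload into a few email-ready fields. The field paths below (`chatId`, `visitor.name`, `visitor.country`, `message.text`) are assumptions based on typical Tawk.to payloads; verify them against a test webhook delivery from your own dashboard.

```python
def format_alert(payload):
    """Flatten a Tawk.to webhook payload into email-ready fields.

    Field paths are assumed, not guaranteed -- inspect a real
    webhook delivery before relying on them.
    """
    visitor = payload.get("visitor", {})
    return {
        "chat_id": payload.get("chatId", "unknown"),
        "visitor_name": visitor.get("name", "Anonymous"),
        "country": visitor.get("country", "?"),
        "message": payload.get("message", {}).get("text", ""),
    }
```

Using `.get()` with defaults keeps the alert email from failing when an optional field (such as the visitor's name) is absent from the payload.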
Daily Magento 2 customer sync to Google Contacts & Sheets without duplicates
Automatically sync newly registered Magento 2 customers to Google Contacts and Google Sheets every 24 hours, with full duplication control and seamless automation.

This workflow is a plug-and-play customer contact automation system designed for Magento 2 store owners, marketers, and CRM teams. It fetches customer records registered within the last 24 hours (from 00:00:00 to 23:59:59), checks against an existing Google Sheet to avoid reprocessing, and syncs only the new ones into Google Contacts. This keeps your contact list fresh and up to date, without clutter or duplicates.

✅ **What this workflow does**

- Automates customer syncing: every day, it fetches newly registered Magento 2 customers via the API based on the exact date range (midnight to midnight).
- Deduplicates using Google Sheets: a master Google Sheet tracks already-synced emails. Before adding a customer, the workflow checks this list and skips anyone already present.
- Creates Google Contacts automatically: for each unique customer, it creates a new contact in your Google Contacts, saving fields like first name, last name, and email.
- Logs new entries to Google Sheets: it also records the Magento 2 customer group, `created_at`, `website_id`, and `store_id`, and after syncing adds each new email to the tracking sheet, building a cumulative record of synced contacts.
- Fully scheduled & automated: can be scheduled with the Cron node to run daily (e.g., 12:05 AM) with no manual intervention required.

🔧 **Modules used**

- HTTP Request (Magento 2 API)
- Date & Time (for filtering registrations)
- Google Sheets (for reading/writing synced emails)
- Google Contacts (for contact creation)
- Set, IF, and Merge nodes (for control logic)
- Cron (for scheduling the automation)

💼 **Use cases**

- Keep your email marketing tools synced with Magento 2 customer data.
- Build a CRM-friendly contact base in Google Contacts without duplicates.
- Share customer data with sales or support teams through synced Google Sheets.
- Reduce manual work and human error in data transfer processes.

🔒 **Credentials required**

- Magento 2 Bearer Auth: set up as a credential in n8n using your Magento 2 API access token.
- Google API

📂 **Category**

E-commerce → Magento 2 (Adobe Commerce)

💬 **Need help?**

Having trouble setting it up or want to customize this workflow further? Feel free to reach out; I'm happy to help with setup, customization, or Magento 2 API integration issues.

👤 **Author**

Kanaka Kishore Kandregula, Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
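The deduplication step is a set-membership test against the tracking sheet. A sketch of that logic (lowercasing emails for comparison is an illustrative choice, not necessarily what the IF node does):

```python
def filter_new_customers(customers, synced_emails):
    """Keep only customers whose email is not already in the tracking sheet.

    customers: list of dicts from the Magento 2 API, each with an "email" key.
    synced_emails: email strings read from the Google Sheet.
    """
    seen = {email.strip().lower() for email in synced_emails}
    return [c for c in customers if c["email"].strip().lower() not in seen]
```

Only the customers returned here proceed to the Google Contacts node; their emails are then appended to the sheet so the next daily run skips them.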
Export WordPress posts with categories and tags to Google Sheets for SEO audits
**Who's it for**

This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization.

**What it does**

This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all this enriched data to a Google Sheet, providing a quick, clean way to audit your site's content and taxonomy structure.

**How it works**

1. Form trigger: start the workflow by submitting a form with your website URL and the number of posts to analyze.
2. Fetch WordPress data: the workflow sends three API requests to collect posts, categories, and tags.
3. Merge data: all the data is combined into one stream using the Merge node.
4. Code transformation: a Code node replaces category and tag IDs with their actual names.
5. Google Sheets export: posts are appended to a Google Sheet with the columns URL, Title, Categories, and Tags.
6. Completion form: once the list is created, you get a confirmation message and a link to your sheet.

If the WordPress API isn't available, the workflow automatically displays an error message to help you troubleshoot.

**Requirements**

- A WordPress site with the REST API enabled (`/wp-json/wp/v2/`).
- A Google account connected to n8n with access to Google Sheets.
- A Google Sheet containing the columns: URL, Title, Categories, Tags.

**How to set up**

1. Import this workflow into n8n.
2. Connect your Google Sheets account under credentials.
3. Make sure your WordPress site's API is publicly accessible.
4. Adjust the post limit (`per_page`) in the form node if needed.
5. Run the workflow and check your Google Sheet for results.

**How to customize**

- Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying HTTP Request nodes.
- Replace Google Sheets with another integration (like Airtable or Notion).
- Extend the Code node to include SEO metadata such as meta descriptions or featured images.
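The Code node's ID-to-name mapping can be sketched like this, using the shapes returned by the `/wp-json/wp/v2/` posts, categories, and tags endpoints (posts carry `link`, `title.rendered`, and arrays of numeric `categories`/`tags` IDs):

```python
def resolve_taxonomy(posts, categories, tags):
    """Replace numeric category/tag IDs on each post with their names,
    producing rows matching the sheet columns URL/Title/Categories/Tags."""
    cat_names = {c["id"]: c["name"] for c in categories}
    tag_names = {t["id"]: t["name"] for t in tags}
    return [
        {
            "URL": p["link"],
            "Title": p["title"]["rendered"],
            "Categories": ", ".join(cat_names.get(i, str(i)) for i in p.get("categories", [])),
            "Tags": ", ".join(tag_names.get(i, str(i)) for i in p.get("tags", [])),
        }
        for p in posts
    ]
```

Falling back to the raw ID (`str(i)`) when a lookup misses keeps the audit sheet complete even if a taxonomy request was paginated short.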