Jaruphat J.
Project Manager who is passionate about Automation & AI and continuously explores innovative ways to improve business processes through intelligent workflow automation. Let's connect and automate the future!
Templates by Jaruphat J.
Generate video from prompt using Vertex AI Veo 3 and upload to Google Drive
**Who's it for**

This template is perfect for content creators, AI enthusiasts, marketers, and developers who want to automate the generation of cinematic videos using Google Vertex AI's Veo 3 model. It is also ideal for anyone experimenting with generative AI video in n8n.

**What it does**

This workflow:

- Accepts a text prompt and a GCP access token via a form.
- Sends the prompt to the Veo 3 preview model using Vertex AI's predictLongRunning endpoint.
- Waits for the video rendering to complete.
- Fetches the final result and converts the base64-encoded video to a file.
- Uploads the resulting .mp4 to your Google Drive.

**Output**

**How to set up**

1. Enable the Vertex AI API in your GCP project: https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com
2. Authenticate with GCP using Cloud Shell or a local terminal:

        gcloud auth login
        gcloud config set project [YOUR_PROJECT_ID]
        gcloud auth application-default set-quota-project [YOUR_PROJECT_ID]
        gcloud auth print-access-token

3. Copy the token and use it in the form when running the workflow. ⚠️ This token lasts about one hour; regenerate it as needed.
4. Connect your Google Drive OAuth2 credentials to allow the file upload.
5. Import this workflow into n8n and execute it via the form trigger.

**Requirements**

- n8n (v1.94.1+)
- A Google Cloud project with the Vertex AI API enabled and billing enabled
- A way to get an access token (gcloud auth print-access-token)
- A Google Drive OAuth2 credential connected to n8n

**How to customize the workflow**

- Modify durationSeconds, aspectRatio, and generateAudio in the HTTP Request node to match your use case.
- Replace the Google Drive upload node with alternatives such as Dropbox, S3, or YouTube upload.
- Extend the workflow with subtitles, audio dubbing, or LINE/Slack alerts.

Step-by-step flow of the major nodes: Prompt Input → Vertex Predict → Wait → Fetch Result → Convert to File → Upload

**Best practices followed**

- No hardcoded API tokens: the GCP token is entered via the form, not stored in the workflow
- All nodes are renamed with a clear purpose
- All editable configuration is grouped in a Set node

**External references**

GCP Veo API docs: https://cloud.google.com/vertex-ai/docs/generative-ai/video/overview

**Disclaimer**

This workflow uses official Google Cloud APIs and requires a valid GCP project. Generate the access token securely using the gcloud CLI; do not embed tokens in the workflow itself.

**Notes on the GCP access token**

To use the Vertex AI API securely in n8n, run the following on your local machine or in GCP Cloud Shell:

    gcloud auth login
    gcloud config set project your-project-id
    gcloud auth print-access-token

Paste the token into the workflow form field YOUR_ACCESS_TOKEN when submitting. Do not hardcode the token into HTTP or Set nodes — enter it each time or use a secure credential vault.
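For reference, here is a minimal sketch of the request body the HTTP Request node sends to the predictLongRunning endpoint, written as an n8n Code-node expression. The endpoint path and model ID shown in the comments are assumptions based on the general Vertex AI URL pattern, not copied from this workflow; verify them against the Veo documentation linked above.

```js
// Hedged sketch of the predictLongRunning request body.
// $json.prompt is the text collected by the form trigger; the parameter names
// (durationSeconds, aspectRatio, generateAudio) match the customization notes above.
const body = {
  instances: [{ prompt: $json.prompt }],
  parameters: {
    durationSeconds: 8,        // adjust per use case
    aspectRatio: '16:9',
    generateAudio: true
  }
};

// Assumed endpoint shape (verify the region, project, and preview model ID):
// POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/
//      locations/us-central1/publishers/google/models/veo-3.0-generate-preview:predictLongRunning
// Header: Authorization: Bearer <access token entered in the form>
return [{ json: body }];
```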
Generate cinematic videos from text prompts with GPT-4o, Fal.AI Seedance & Audio
**Who's it for?**

This workflow is built for:

- AI storytellers, content creators, YouTubers, and short-form video marketers
- Anyone looking to transform text prompts into cinematic AI-generated videos fully automatically
- Educators, trainers, or agencies creating story-based visual content at scale

---

**What it does**

This n8n workflow automatically turns a simple text prompt into a multi-scene cinematic video using the powerful Fal.AI Seedance V1.0 model (developed by ByteDance, the creators of TikTok). It combines the creativity of GPT-4o, the motion synthesis of Seedance, and the automation power of n8n to generate AI videos with ambient sound in a publish-ready format.

---

**How it works**

1. Accepts a prompt from Google Sheets (configurable fields such as duration, aspect ratio, resolution, scene count)
2. Uses OpenAI GPT-4o to write a vivid cinematic narrative
3. Splits the story into n separate scenes
4. For each scene:
   - GPT generates a structured cinematic description (characters, camera, movement, sound)
   - The Seedance V1.0 model (via the Fal.AI API) renders a 5s animated video
   - Optional: ambient audio is added via Fal's MM-Audio model
5. Finally, all scene videos are merged using Fal's FFmpeg API and optionally uploaded to YouTube automatically

---

**Why this is special**

- Fal.AI Seedance V1.0 is a highly advanced motion-video model developed by ByteDance, capable of generating expressive, stylized 5–6 second cinematic clips from text.
- This workflow supports full looping, scene-count validation, and wait-polling for long render jobs.
- The entire story, breakdown, and scene design are AI-generated — no manual effort needed.
- Output is export-ready: MP4 with sound, ideal for YouTube Shorts, Reels, or TikTok.

---

**Requirements**

- n8n (self-hosted recommended)
- API keys: Fal.AI (https://fal.ai), OpenAI (GPT-4o or 3.5)
- Google Sheets
- Example Google Sheet

---

**How to set it up**

1. Clone the template into your n8n instance.
2. Configure credentials: Fal.AI header token, OpenAI API key, Google Sheets OAuth2, and (optional) YouTube API OAuth.
3. Prepare a Google Sheet with these columns: story (short prompt), number_of_scene, duration (per clip), aspect_ratio, resolution, model.
4. Run manually or trigger on Sheet update.

---

**How to customize**

- Modify the storytelling tone in the GPT prompts (e.g., switch to fantasy, horror, or sci-fi)
- Change Seedance model parameters such as style or seed
- Add subtitles or branding overlays to the final video
- Integrate LINE, Notion, or Telegram for auto-sharing

---

**Example output**

Prompt: "A rabbit flies to the moon on a dragonfly and eats watermelon together" → Result: 3 scenes, each 5s, cinematic camera pans, soft ambient audio, auto-uploaded to YouTube

**Result**
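To illustrate the wait-polling mentioned above, a Code node along these lines can decide whether to loop back through a Wait node or continue to the download step. The field names (status, status_url, response_url) follow Fal.AI's queue-API convention and are assumptions here; check them against your actual HTTP node output.

```js
// Minimal polling-decision sketch for a Fal.AI queue job (assumed field names).
const status = $json.status;

if (status === 'COMPLETED') {
  // Downstream nodes can GET response_url to fetch the rendered scene clip.
  return [{ json: { done: true, responseUrl: $json.response_url } }];
}

// Still IN_QUEUE or IN_PROGRESS: route back through a Wait node and poll status_url again.
return [{ json: { done: false, statusUrl: $json.status_url } }];
```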
Extract and structure Thai documents to Google Sheets using Typhoon OCR and Llama 3.1
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package and custom command execution. Make sure to install the required dependencies locally.

---

**Who is this for?**

This template is for developers, operations teams, and automation builders in Thailand (or any Thai-speaking environment) who regularly process PDFs or scanned documents in Thai and want to extract structured text into a Google Sheet. It is ideal for:

- Local government document processing
- Thai-language enterprise paperwork
- AI automation pipelines requiring Thai OCR

---

**What problem does this solve?**

Typhoon OCR is one of the most accurate OCR tools for Thai text. However, integrating it into an end-to-end workflow usually requires manual scripting and data wrangling. This template solves that by:

- Running Typhoon OCR on PDF files
- Using AI to extract structured data fields
- Automatically storing results in Google Sheets

---

**What this workflow does**

1. Trigger: run manually or from any automation source
2. Read Files: load local PDF files from a doc/ folder
3. Execute Command: run Typhoon OCR on each file using a Python command
4. LLM Extraction: send the OCR markdown to an AI model (e.g., GPT-4 or OpenRouter) to extract fields
5. Code node: parse the LLM output as JSON
6. Google Sheets: append the structured data to a spreadsheet

---

**Setup**

1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - Poppler, installed and added to the system PATH (needed for pdftoppm and pdfinfo)
2. Create folders: create a folder called doc in the same directory where n8n runs (or mount it via Docker).
3. Google Sheet: create a Google Sheet with the following column headers:

   | book_id | date | subject | detail | signed_by | signed_by2 | contact | download_url |
   | ------- | ---- | ------- | ------ | --------- | ---------- | ------- | ------------ |

   You can use this example Google Sheet as a reference.
4. API keys: export TYPHOON_OCR_API_KEY and OPENAI_API_KEY in your environment (or set them inside the command string in the Execute Command node).

---

**How to customize this workflow**

- Replace the LLM provider in the Basic LLM Chain node (currently supports OpenRouter)
- Change the output fields to match your data structure (adjust the prompt and the Google Sheet headers)
- Add trigger nodes (e.g., Dropbox upload, Webhook) to automate input

---

**About Typhoon OCR**

Typhoon is a multilingual LLM and toolkit optimized for Thai NLP. It includes typhoon-ocr, a Python OCR library designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines. Perfect for government paperwork, PDF reports, and multilingual documents in Southeast Asia.
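The Code-node step described above can be as simple as the sketch below. It assumes the LLM returns a single JSON object, possibly wrapped in Markdown code fences, whose keys already match the sheet headers; the input field names are assumptions to adapt to your LLM node.

```js
// Hedged sketch of the "parse LLM output as JSON" Code node.
const raw = $json.text || $json.output || '';
const cleaned = raw.replace(/`{3}(json)?/g, '').trim();   // strip Markdown code fences

let fields;
try {
  fields = JSON.parse(cleaned);
} catch (e) {
  throw new Error('LLM did not return valid JSON: ' + e.message);
}

// Keys are assumed to match the Google Sheet headers (book_id, date, subject, ...).
return [{ json: fields }];
```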
Extract data from Thai Government letters with Mistral OCR and store in Google Sheets
**LINE OCR workflow to extract and save Thai government letters to Google Sheets**

This template automates the extraction of structured data from Thai government letters received via LINE or uploaded to Google Drive. It uses Mistral AI for OCR and OpenAI for information extraction, and saves the results to a Google Sheet.

---

**Who's it for?**

- Thai government agencies or teams receiving official documents via LINE or Google Drive
- Automation developers working with document intake and OCR
- Anyone needing to extract fields from scanned Thai letters and store structured information

---

**What it does**

This n8n workflow:

- Receives documents from two sources: a LINE webhook (via the Messaging API) and Google Drive (new-file trigger)
- Checks the file type (PDF or image)
- Runs OCR with Mistral AI (document or image model)
- Uses OpenAI to extract key metadata such as book_id, subject, recipient (to), signatory, date, and contact info
- Stores the structured data in Google Sheets
- Replies to the LINE user with the extracted info, or moves files into archive folders (Drive)

---

**How to set it up**

1. Create a Google Sheet with a tab named data and the following columns (see the example Google Sheet): book_id, date, subject, to, attach, detail, signed_by, signed_by_position, contact_phone, contact_email, download_url
2. Set up the required credentials: googleDriveOAuth2Api, googleSheetsOAuth2Api, httpHeaderAuth for the LINE Messaging API, openAiApi, mistralCloudApi
3. Define environment variables: LINE_CHANNEL_ACCESS_TOKEN, GDRIVE_INVOICE_FOLDER_ID, GSHEET_ID, MISTRAL_API_KEY
4. Deploy the webhook to receive files from the LINE Messaging API (path: /line-invoice)
5. Monitor Drive uploads using the Google Drive Trigger

---

**How to customize the workflow**

- Adjust the information-extraction schema in the OpenAI Information Extractor node to match your document layout
- Add logic for different document types if you have more than one format
- Modify the LINE reply message format or use a Flex Message
- Update the Move File node if you want to archive to a different folder

---

**Requirements**

- n8n self-hosted or cloud instance
- Google account with access to Drive and Sheets
- LINE Developer account
- OpenAI API key
- Mistral Cloud API key

---

**Notes**

- Community nodes used: @n8n/n8n-nodes-base.mistralAi
- This workflow supports both document images and PDF files
- File handling is done dynamically via MIME type
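For the LINE reply step, the payload posted to https://api.line.me/v2/bot/message/reply can be assembled roughly as in the sketch below. The node name and the extractor field names are assumptions based on the sheet columns above, so adjust them to your workflow.

```js
// Hedged sketch of the LINE reply payload built in a Code node before the HTTP Request node.
// Node and field names are assumptions — adapt them to your webhook and extractor nodes.
const replyToken = $('LINE Webhook').first().json.body.events[0].replyToken;
const doc = $json;   // structured output of the information-extraction step

return [{
  json: {
    replyToken,
    messages: [{
      type: 'text',
      text: `Book ID: ${doc.book_id}\nSubject: ${doc.subject}\nDate: ${doc.date}`
    }]
  }
}];
```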
LINE BOT - Google Sheets file lookup with AI agent
This workflow integrates a LINE BOT, an AI Agent (GPT), Google Sheets, and Google Drive to let users search for file URLs using natural language. The AI Agent extracts the filename from the message, searches for the file in Google Sheets, and returns the corresponding Google Drive URL via the LINE BOT.

- Supports natural language queries (e.g., "Find file 1.pdf for me")
- AI-powered filename extraction
- Google Sheets lookup for file URLs
- Auto-response via LINE BOT

**How to use this template**

1. Download & import: copy and save the template code as a .json file, then in the n8n editor click Import and upload the file.
2. Update required fields: replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID and YOUR_LINE_ACCESS_TOKEN with your LINE BOT channel access token.
3. Activate & test: click Execute Workflow to test manually, then set the webhook URL in the LINE Developers Console.

**Features of this template**

- Supports natural language queries (e.g., "Find file 1.pdf for me")
- AI-powered filename extraction using OpenAI (GPT-4/3.5)
- Real-time file lookup in Google Sheets
- Automatic LINE BOT response
- Fully automated workflow
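To make the lookup step concrete, a Code node along these lines can match the filename extracted by the AI Agent against the rows read from Google Sheets. The node name and column names (filename, url) are assumptions, not the template's exact schema.

```js
// Hedged sketch: match the AI-extracted filename to a sheet row and build the reply text.
const wanted = ($json.filename || '').toLowerCase().trim();
const rows = $('Google Sheets').all();   // all rows returned by the lookup node (assumed name)

const match = rows.find(r => (r.json.filename || '').toLowerCase() === wanted);

return [{
  json: {
    replyText: match
      ? `Here is your file: ${match.json.url}`
      : `Sorry, I could not find "${wanted}" in the sheet.`
  }
}];
```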
Process Thai documents with TyphoonOCR & AI to Google Sheets (multi-page PDF)
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package, pdfseparate from poppler-utils, and custom command execution. Make sure to install all required dependencies locally.

---

**Who is this for?**

This template is designed for developers, back-office teams, and automation builders (especially in Thailand or Thai-speaking environments) who need to process multi-file, multi-page Thai PDFs and automatically export structured results to Google Sheets. It is ideal for:

- Government and enterprise document processing
- Thai-language invoices, memos, and official letters
- AI-powered automation pipelines that require Thai OCR

---

**What problem does this solve?**

Typhoon OCR is one of the most accurate OCR tools for Thai text, but integrating it into an end-to-end workflow usually requires manual scripting and handling of multi-page PDFs. This template solves that by:

- Splitting PDFs into individual pages
- Running Typhoon OCR on each page
- Aggregating the text back into a single document
- Using AI to extract structured fields
- Automatically saving the structured data into Google Sheets

---

**What this workflow does**

1. Trigger: manual execution or any n8n trigger node
2. Load Files: read PDFs from a local doc/multipage folder
3. Split PDF Pages: use pdfinfo and pdfseparate to break PDFs into pages (see the sketch after this section)
4. Typhoon OCR: run OCR on each page via Execute Command
5. Aggregate: combine the per-page OCR text
6. LLM Extraction: use AI (e.g., GPT-4, OpenRouter) to extract fields into JSON
7. Parse JSON: convert the structured JSON into a tabular format
8. Google Sheets: append one row per file to a Google Sheet
9. Cleanup: delete the temporary split pages and move processed PDFs into a Completed folder

---

**Setup**

1. Install requirements:
   - Python 3.10+
   - typhoon-ocr: pip install typhoon-ocr
   - poppler-utils: provides pdfinfo and pdfseparate
   - qpdf: backup page counting
2. Create folders:
   - /doc/multipage for incoming files
   - /doc/tmp for split pages
   - /doc/multipage/Completed for processed files
3. Google Sheet: create a Google Sheet with column headers like: book_id | date | subject | to | attach | detail | signed_by | signed_by2 | contact_phone | contact_email | contact_fax | download_url
4. API keys: export TYPHOON_OCR_API_KEY and OPENAI_API_KEY (or use credentials in n8n)

---

**How to customize this workflow**

- Replace the LLM provider in the "Structure Text to JSON with LLM" node (supports OpenRouter, OpenAI, etc.)
- Adjust the JSON schema and parsing logic to match your documents
- Update the Google Sheets mapping to fit your desired fields
- Add trigger nodes (Dropbox, Google Drive, Webhook) to automate file ingestion

---

**About Typhoon OCR**

Typhoon is a multilingual LLM and NLP toolkit optimized for Thai. It includes typhoon-ocr, a Python OCR package designed for Thai-centric documents. It is open source, highly accurate, and works well in automation pipelines. Perfect for government paperwork, PDF reports, and multi-language documents in Southeast Asia.

---

**Deployment option**

You can also deploy this workflow easily using the Docker image provided in my GitHub repository: https://github.com/Jaruphat/n8n-ffmpeg-typhoon-ollama. This Docker setup already bundles n8n, ffmpeg, Typhoon OCR, and Ollama, so you can run the whole environment without installing each dependency manually.
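As a sketch of the page-splitting step, a Code node between the pdfinfo call and the per-page OCR command could emit one item per page along these lines. The paths and field names are assumptions; adapt them to your Execute Command nodes and folder layout.

```js
// Hedged sketch: read the page count from pdfinfo's stdout and emit one item per page,
// each carrying the pdfseparate command the next Execute Command node will run.
const stdout = $json.stdout || '';
const pages = parseInt((stdout.match(/Pages:\s+(\d+)/) || [])[1] || '0', 10);
const file = $json.filePath;   // e.g. /doc/multipage/letter.pdf (assumed field name)

const items = [];
for (let p = 1; p <= pages; p++) {
  items.push({
    json: {
      page: p,
      splitCommand: `pdfseparate -f ${p} -l ${p} "${file}" "/doc/tmp/page-${p}.pdf"`
    }
  });
}
return items;
```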
Generate AI product ad images from Google Sheets using Fal.ai and OpenAI
⚠️ Note: All sensitive credentials should be set via n8n credentials or environment variables. Do not hardcode API keys in nodes.

---

**Who's it for**

Marketers, creators, and automation builders who want to generate UGC-style ad images automatically from a Google Sheet. Ideal for e-commerce SKUs, agencies, or teams that need many variations quickly.

---

**What it does (overview)**

This template turns a spreadsheet row into ad images ready for campaigns.

- Zone 1 — Create Ad Image: reads product rows, downloads the image, analyzes it, generates prompts, and appends the results back into the Google Sheet.
- Zone 2 — Create Image (Fal nano-banana): generates ad image variations, polls the Fal.ai API until done, uploads the output to Drive, and updates the sheet with output URLs.

---

**Requirements**

- Fal.ai API key (env: FAL_KEY)
- Google Sheets / Google Drive OAuth2 credentials
- OpenAI (Vision/Chat) for image analysis
- A Google Sheet with columns for product and output
- Google Drive files set to "Anyone with link → Viewer" so the APIs can fetch them

---

**How to set up**

1. Credentials: add Google Sheets + Google Drive (OAuth2), Fal.ai (header auth with Authorization: Key {{$env.FAL_KEY}}), and OpenAI.
2. Google Sheet: create sheets with the following headers.

   Sheet: product
   product_id | product_name | product_image_url | product_description | campaign | brand_notes | constraints | num_variations | aspect_ratio | model_target | status

   Sheet: ad_image
   scene_ref | product_name | prompt | status | output_url

3. Import the workflow: use the provided JSON and confirm that the node credentials resolve.
4. Run: start with Zone 1 to verify the prompt-only flow, then test Zone 2 for image generation.

---

**Zone 1 — Create Ad Image (prompt-only)**

Reads the product row, normalizes the Drive link, analyzes the image, generates structured ad prompts, and appends them to the ad_image sheet.

---

**Zone 2 — Create Image (Fal nano-banana)**

Reads the product row, converts the Drive link, generates the image(s) with Fal nano-banana, polls until complete, uploads to Drive, and updates the sheet.

---

**Node settings (high level)**

Drive Link Parser (Set):

```js
{{ (() => {
  const u = $json.product || '';
  const q = u.match(/[?&]id=([\-\w]{25,})/);
  const d = u.match(/\/d\/([\-\w]{25,})/);
  const any = u.match(/[\-\w]{25,}/);
  const id = q?.[1] || d?.[1] || (any ? any[0] : '');
  return id ? 'https://drive.google.com/uc?export=view&id=' + id : '';
})() }}
```

---

**How to customize the workflow**

- Adjust the AI prompts to change the ad style (luxury, cozy, techy).
- Change the aspect ratio for TikTok/IG/Shorts (9:16, 1:1, 16:9).
- Extend the Sheet schema with campaign labels, audiences, or hashtags.
- Add distribution (Slack/LINE/Telegram) after the Drive upload.

---

**Troubleshooting**

- "JSON parameter needs to be valid JSON" → ensure expressions return objects, not strings.
- 403 on images → make the Drive files public (Viewer) and convert the links.
- Job never completes → check status_url, and retry with -fast models or at off-peak times.

---

**Template metadata**

Uses: Google Sheets, Google Drive, HTTP Request, Wait/If/Switch, Code, OpenAI Vision/Chat, Fal.ai models (nano-banana)

---

**Visuals**

- Workflow diagram
- Example product image
- Product image - nano-banana
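For Zone 2, the body submitted to the Fal.ai nano-banana endpoint can be assembled roughly as in the sketch below. The field names (prompt, image_urls, aspect_ratio) are assumptions about the model's schema, not taken from this workflow; check the model page on fal.ai for the current request format.

```js
// Hedged sketch of the request body for one image variation (field names assumed).
const body = {
  prompt: $json.prompt,                       // generated in Zone 1, read from the ad_image sheet
  image_urls: [$json.product_image_url],      // the converted uc?export=view Drive link
  aspect_ratio: $json.aspect_ratio || '1:1'
};

// Sent by the HTTP Request node with header: Authorization: Key {{$env.FAL_KEY}}
// The queue response's status_url / response_url then drive the polling loop described above.
return [{ json: body }];
```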