Create or update a post in WordPress
No description available.
Extract and structure Thai documents to Google Sheets using Typhoon OCR and Llama 3.1
⚠️ Note: This template requires a community node and works only on self-hosted n8n installations. It uses the Typhoon OCR Python package and custom command execution. Make sure to install the required dependencies locally.

---

Who is this for?

This template is for developers, operations teams, and automation builders in Thailand (or any Thai-speaking environment) who regularly process PDFs or scanned documents in Thai and want to extract structured text into a Google Sheet. It is ideal for:

- Local government document processing
- Thai-language enterprise paperwork
- AI automation pipelines requiring Thai OCR

---

What problem does this solve?

Typhoon OCR is one of the most accurate OCR tools for Thai text. However, integrating it into an end-to-end workflow usually requires manual scripting and data wrangling. This template solves that by:

- Running Typhoon OCR on PDF files
- Using AI to extract structured data fields
- Automatically storing the results in Google Sheets

---

What this workflow does

1. Trigger: Run manually or from any automation source
2. Read Files: Load local PDF files from a doc/ folder
3. Execute Command: Run Typhoon OCR on each file using a Python command (see the sketch at the end of this description)
4. LLM Extraction: Send the OCR markdown to an AI model (e.g., GPT-4 or OpenRouter) to extract fields
5. Code Node: Parse the LLM output as JSON
6. Google Sheets: Append the structured data to a spreadsheet

---

Setup

Install requirements

- Python 3.10+
- typhoon-ocr: pip install typhoon-ocr
- Poppler, installed and added to the system PATH (needed for pdftoppm and pdfinfo)

Create folders

Create a folder called doc in the same directory where n8n runs (or mount it via Docker).

Google Sheet

Create a Google Sheet with the following column headers:

| book_id | date | subject | detail | signed_by | signed_by2 | contact | download_url |
| ------- | ---- | ------- | ------ | --------- | ---------- | ------- | ------------ |

You can use this example Google Sheet as a reference.

API keys

Export TYPHOON_OCR_API_KEY and OPENAI_API_KEY in your environment (or set them inside the command string in the Execute Command node).

---

How to customize this workflow

- Replace the LLM provider in the Basic LLM Chain node (currently supports OpenRouter)
- Change the output fields to match your data structure (adjust the prompt and the Google Sheet headers)
- Add trigger nodes (e.g., Dropbox Upload, Webhook) to automate input

---

About Typhoon OCR

Typhoon is a multilingual LLM and toolkit optimized for Thai NLP. It includes typhoon-ocr, an open-source Python OCR library designed for Thai-centric documents. It is highly accurate, works well in automation pipelines, and is a good fit for government paperwork, PDF reports, and multilingual documents in Southeast Asia.
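As a point of reference, the per-file OCR step roughly boils down to the following minimal sketch, assuming the typhoon-ocr package exposes an ocr_document() helper and reads TYPHOON_OCR_API_KEY from the environment; the loop over a doc/ folder mirrors the template's Read Files step.

```python
# Minimal sketch of the Execute Command step, run outside n8n.
# Assumes typhoon-ocr exposes ocr_document() and picks up the
# TYPHOON_OCR_API_KEY environment variable.
import os
from pathlib import Path

from typhoon_ocr import ocr_document  # pip install typhoon-ocr

os.environ.setdefault("TYPHOON_OCR_API_KEY", "<your-key>")

for pdf in Path("doc").glob("*.pdf"):
    # Returns the recognized text as markdown, which the workflow
    # then forwards to the LLM extraction step.
    markdown = ocr_document(str(pdf))
    print(f"--- {pdf.name} ---\n{markdown}")
```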
Index legal documents for hybrid search with Qdrant, OpenAI & BM25
Index Legal Dataset to Qdrant for Hybrid Retrieval

*This pipeline is the first part of "Hybrid Search with Qdrant & n8n, Legal AI". The second part, "Hybrid Search with Qdrant & n8n, Legal AI: Retrieval", covers retrieval and simple evaluation.*

Overview

This pipeline transforms a Q&A legal corpus from Hugging Face (isaacus) into vector representations and indexes them to Qdrant, providing the foundation for hybrid search, which combines:

- Dense vectors (embeddings) for semantic similarity search;
- Sparse vectors for keyword-based exact search.

After running this pipeline, you will have a Qdrant collection with your legal dataset ready for hybrid retrieval over BM25 and dense embeddings: either mxbai-embed-large-v1 or text-embedding-3-small (see the sketch after this description).

Options for embedding inference

This pipeline gives you two approaches for generating dense vectors:

- Using Qdrant Cloud Inference, with conversion to vectors handled directly in Qdrant;
- Using an external provider, e.g. OpenAI, for generating embeddings.

Prerequisites

- A cluster on Qdrant Cloud:
  - a paid cluster in the US region if you want to use Qdrant Cloud Inference;
  - a Free Tier cluster if you are using an external provider (here, OpenAI).
- Qdrant cluster credentials: you'll be guided on how to obtain both the URL and API_KEY from the Qdrant Cloud UI when setting up your cluster;
- An OpenAI API key (if you're not using Qdrant Cloud Inference).

P.S. To ask retrieval or Qdrant-related questions, join the Qdrant Discord. Star the Qdrant n8n community node repo <3
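For orientation, here is a minimal sketch (outside n8n) of what a hybrid-ready collection looks like with the qdrant-client Python library. The collection name legal_qa, the vector names dense and bm25, and the 1536-dim size (matching text-embedding-3-small) are illustrative assumptions, not the workflow's exact settings.

```python
# Sketch: a Qdrant collection with both a dense vector and a sparse
# (BM25-style) vector, as hybrid retrieval requires. Names and the
# 1536-dim size are assumptions for illustration.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="https://<cluster-url>", api_key="<api-key>")

client.create_collection(
    collection_name="legal_qa",
    vectors_config={
        # 1536 matches OpenAI's text-embedding-3-small.
        "dense": models.VectorParams(size=1536, distance=models.Distance.COSINE),
    },
    sparse_vectors_config={
        # The IDF modifier makes the sparse side behave like BM25 scoring.
        "bm25": models.SparseVectorParams(modifier=models.Modifier.IDF),
    },
)
```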
Analyze feedback and send a message on Mattermost
This workflow analyzes the sentiment of feedback submitted by users and posts negative results to a Mattermost channel.

- Typeform Trigger node: Whenever a user submits a response to the Typeform, this node triggers the workflow and returns the response the user submitted in the form.
- Google Cloud Natural Language node: Analyzes the sentiment of the response and returns a score.
- IF node: Checks whether the score returned by the Google Cloud Natural Language node is negative (smaller than 0). If it is, the node returns true; otherwise, false (see the sketch after this description).
- Mattermost node: Connected to the true branch of the IF node, so it executes only when the sentiment score is negative, posting a message to a channel in Mattermost.
- NoOp node: Optional; removing it makes no difference to the functioning of the workflow.

This workflow can be used by Product Managers to analyze product feedback, or by HR to analyze employee feedback. You can even use it for sentiment analysis of Tweets: to do so, replace the Typeform Trigger node with the Twitter node.

Note: You will need a Trigger node or Start node to start the workflow.

Instead of posting a message on Mattermost, you can save the results in a database, a Google Sheet, or Airtable. Replace the Mattermost node with the node of your choice (or add it after the Mattermost node) to write the result to your database.

You can learn to build this workflow on the documentation page of the Google Cloud Natural Language node.
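For reference, the analyze-then-branch logic of the two middle nodes looks roughly like this outside n8n, using Google's official Python client; the 0 threshold mirrors the IF node, and the print is a stand-in for the Mattermost message.

```python
# Sketch of the sentiment check the workflow performs, using the
# google-cloud-language client (pip install google-cloud-language).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

feedback = "The new release is confusing and keeps crashing."
document = language_v1.Document(
    content=feedback, type_=language_v1.Document.Type.PLAIN_TEXT
)
sentiment = client.analyze_sentiment(
    request={"document": document}
).document_sentiment

if sentiment.score < 0:  # negative feedback -> alert the channel
    print(f"Negative feedback (score {sentiment.score:.2f}): {feedback}")
```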
Web security scanner for OWASP compliance with Markdown reports
How the n8n OWASP Scanner Works & How to Set It Up

How it works (simple flow):

1. Input: Enter a target URL + endpoint (e.g., https://example.com, /login)
2. Scan: The workflow executes 5 parallel HTTP tests (Headers, Cookies, CORS, HTTPS, Methods)
3. Analyze: Pure JS logic checks OWASP ASVS (Application Security Verification Standard) rules, with no external tools (see the sketch after this description)
4. Merge: Combines all findings into one Markdown report
5. Output: Auto-generates and downloads scan-2025-11-16_210900.md (example filename)
6. Email: (Optional) Forwards the report to an email address using Gmail

---

Setup in 3 steps (2 minutes)

1. Import the workflow: copy the full JSON (from "Export Final Workflow"), then in n8n → Workflows → Import from JSON → paste → Import.
2. (Optional) Connect your Gmail credentials in the last node to auto-email the report.
3. Click Execute the workflow, enter a URL in the new window, then click 'submit'. You can alternatively download or receive the Markdown report directly from the Markdown to File node.

---

(Supports any HTTP/HTTPS endpoint. Works in n8n Cloud or self-hosted.)
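To illustrate the kind of check the Analyze step performs, here is a minimal, hypothetical sketch of a security-header test in Python; the exact headers and rules the workflow's JS logic applies may differ.

```python
# Hypothetical sketch of one scanner check: fetch a page and flag
# missing security headers. The header list is illustrative; the
# workflow's own ASVS rules may differ.
import requests

REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "X-Frame-Options",
]

def check_headers(url: str) -> list[str]:
    """Return one Markdown finding per missing security header."""
    response = requests.get(url, timeout=10)
    return [
        f"- **Missing header:** `{header}`"
        for header in REQUIRED_HEADERS
        if header not in response.headers
    ]

findings = check_headers("https://example.com/login")
print("\n".join(findings) or "No header findings.")
```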
Send chat message notifications from Tawk.to to Gmail
This automation workflow captures incoming chat messages from your Tawk.to live chat widget and sends alert emails via Gmail to notify your support team instantly. It is designed to help you respond promptly to visitors and improve your customer support experience.

---

Prerequisites

- Tawk.to account: You must have an active Tawk.to account with a configured live chat widget on your website.
- Gmail account: A Gmail account with API access enabled and configured in n8n for sending emails.
- n8n instance: Access to an n8n workflow automation instance where you will import and configure this workflow.

---

Step-by-Step Setup Instructions

1. Configure the Tawk.to webhook
   - Log in to your Tawk.to dashboard.
   - Navigate to Administration > Webhooks.
   - Click Add Webhook and enter the following:
     - URL: Your n8n webhook URL from the Receive Tawk.to Request node (e.g., https://your-n8n-instance.com/webhook/a4bf95cd-a30a-4ae0-bd2a-6d96e6cca3b4)
     - Method: POST
     - Events: Select the chat message event (e.g., Visitor Message or Chat Message Received)
   - Save the webhook configuration.
2. Configure Gmail credentials in n8n
   - In your n8n instance, go to Credentials.
   - Add a new Gmail OAuth2 credential: follow Google's instructions to create a project, enable the Gmail API, and obtain a client ID and secret.
   - Authenticate and authorize n8n to send emails via your Gmail account.
3. Import and activate the workflow
   - Import the provided workflow JSON into n8n.
   - Verify that the Receive Tawk.to Request webhook node path matches the webhook URL configured in Tawk.to.
   - Enter the email address you want the alerts sent to in the Send alert email node's sendTo parameter.
   - Activate the workflow.

---

Workflow Explanation

- Receive Tawk.to Request: This webhook node listens for POST requests from Tawk.to containing chat message data.
- Format the message: Extracts relevant data from the incoming payload, such as chat ID, visitor name, country, and message text, and assigns them to new fields for easy use downstream (see the sketch after this description).
- Send alert email: Uses the Gmail node to send a notification email to your support team with all relevant chat details formatted in a clear, concise text email.

---

Customization Guidance

- Email recipient: Update the sendTo field in the Send alert email node to specify your support team's email address.
- Email content: Modify the message template in the Send alert email node's message parameter to suit your tone, or include additional details like timestamps or chat URLs.
- Additional processing: Extend the workflow by adding nodes for logging chats, triggering Slack notifications, or storing messages in a database.

---

By following these instructions, your support team will receive immediate email alerts whenever a new chat message arrives on your website, improving response times and customer satisfaction.
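For orientation, the Format the message step boils down to something like the following; the payload field names (chatId, visitor.name, visitor.country, message.text) are assumptions about a typical Tawk.to webhook body, so verify them against a real request before relying on them.

```python
# Hypothetical sketch of the "Format the message" step. The incoming
# field names are assumptions about Tawk.to's webhook payload; inspect
# an actual request body and adjust the keys accordingly.
payload = {
    "chatId": "abc-123",
    "visitor": {"name": "Somsak", "country": "TH"},
    "message": {"text": "Hi, I need help with my order."},
}

formatted = {
    "chat_id": payload.get("chatId", ""),
    "visitor_name": payload.get("visitor", {}).get("name", "Unknown"),
    "visitor_country": payload.get("visitor", {}).get("country", ""),
    "message_text": payload.get("message", {}).get("text", ""),
}

# The alert email body the Gmail node would send downstream.
print(
    f"New chat from {formatted['visitor_name']} "
    f"({formatted['visitor_country']}), chat {formatted['chat_id']}:\n"
    f"{formatted['message_text']}"
)
```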
Automate job matching with Gemini AI, Decodo scraping & resume analysis to Telegram
AI Job Matcher with Decodo, Gemini AI & Resume Analysis

Sign up for Decodo — get better pricing here

Who's it for

This workflow is built for job seekers, recruiters, founders, automation builders, and data engineers who want to automate job discovery and intelligently match job listings against resumes using AI. It's ideal for anyone building job boards, candidate matching systems, hiring pipelines, or personal job alert automations with n8n.

What this workflow does

This workflow automatically scrapes job listings from SimplyHired using Decodo residential proxies, extracts structured job data with a Gemini AI agent, downloads resumes from Google Drive, extracts and summarizes resume content, and surfaces the most relevant job opportunities. It stores structured results in a database and sends real-time notifications via Telegram, creating a scalable, low-maintenance AI-powered job matching pipeline.

How it works

1. A schedule trigger starts the workflow automatically
2. Decodo fetches job search result pages from SimplyHired
3. Job card HTML is extracted from the page
4. A Gemini AI agent converts the raw HTML into structured job data
5. Resume PDFs are downloaded from Google Drive
6. Resume text is extracted from the PDF files
7. A Gemini AI agent summarizes key resume highlights
8. Job and resume data are stored in a database
9. Matching job alerts are sent via Telegram

How to set up

1. Add your Decodo API credentials
2. Add your Google Gemini API key
3. Connect Google Drive for resume access
4. Configure your Telegram bot
5. Set up your database (Google Sheets by default)
6. Update the job search URL with your keywords and location

Requirements

- Self-hosted n8n instance
- Decodo account (community node)
- Google Gemini API access
- Google Drive access
- Telegram bot token
- Google Sheets or another database

> Note: This template uses a community node (Decodo) and is intended for self-hosted n8n only.

How to customize the workflow

- Replace SimplyHired with another job board or aggregator
- Add job–resume matching or scoring logic (see the sketch after this description)
- Extend the resume summary with custom fields
- Swap Google Sheets for PostgreSQL, Supabase, or Airtable
- Route notifications to Slack, Email, or Webhooks
- Add pagination or multi-resume processing
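If you add the matching or scoring logic suggested above, a simple starting point is keyword overlap between the resume summary and each job description. This is a hypothetical sketch, not part of the template; a real pipeline might score with Gemini embeddings instead.

```python
# Hypothetical scoring sketch for the "add job-resume matching" idea:
# rank jobs by keyword overlap with the resume summary.
import re

def keywords(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def score(resume_summary: str, job_description: str) -> float:
    """Fraction of resume keywords that appear in the job posting."""
    resume_kw = keywords(resume_summary)
    if not resume_kw:
        return 0.0
    return len(resume_kw & keywords(job_description)) / len(resume_kw)

resume = "Senior Python engineer: data pipelines, n8n automation, PostgreSQL"
jobs = {
    "Data Engineer": "Build data pipelines in Python and PostgreSQL",
    "Office Manager": "Coordinate schedules and order supplies",
}
for title, desc in sorted(jobs.items(), key=lambda j: -score(resume, j[1])):
    print(f"{score(resume, desc):.2f}  {title}")
```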