Intelligent email organization with AI-powered content classification for Gmail
This workflow leverages AI to analyze incoming Gmail messages and automatically apply relevant labels based on the email content.

The default configuration includes the following labels:

- Newsletter: Subscription updates or promotional content.
- Inquiry: Emails requesting information or responses.
- Invoice: Billing and payment-related emails.
- Proposal: Business offers or collaboration opportunities.
- Action Required: Emails demanding immediate tasks or actions.
- Follow-up Reminder: Emails prompting follow-up actions.
- Task: Emails containing actionable tasks.
- Personal: Non-work-related emails.
- Urgent: Time-sensitive or critical communications.
- Bank: Banking alerts and financial statements.
- Job Update: Recruitment or job-related communications.
- Spam/Junk: Unwanted or irrelevant bulk emails.
- Social/Networking: Notifications from social platforms.
- Receipt: Purchase confirmations and receipts.
- Event Invite: Invitations or calendar-related messages.
- Subscription Renewal: Reminders for subscription expirations.
- System Notification: Technical alerts from services or systems.

You can customize the labels and their definitions for your specific use case.

How it works:
1. The workflow periodically retrieves new Gmail messages.
2. Only emails without existing labels, regardless of read status, are sent to the AI for analysis.
3. The email content (subject and body) is analyzed by an AI model to determine the appropriate label.
4. The labels identified by the AI are applied to each email accordingly.

Note: This scheduled approach proved far more reliable than the default Gmail trigger, which is why the workflow was switched from a Gmail trigger to a schedule. By selectively processing only unlabeled emails, it ensures comprehensive labeling while significantly reducing AI processing costs.

Setup Steps:
1. Configure credentials for Gmail and your chosen AI service (e.g., OpenAI).
2. Ensure labels exist in your Gmail account matching the workflow definitions.
3. Adjust the AI prompt to match your labeling needs (a sketch follows below).
4. Optionally customize the polling interval (default: every 2 minutes).

This workflow streamlines your email management, keeping your inbox organized effortlessly while optimizing resource usage.
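As a rough illustration of what the adjustable classification prompt could look like, here is a minimal n8n Code-node sketch. The field names (`subject`, `textPlain`) follow typical Gmail node output and the prompt wording is an assumption, not the template's exact prompt:

```javascript
// Sketch: build a single-label classification prompt per email.
// Verify the incoming field names against your Gmail node's actual output.
const labels = [
  'Newsletter', 'Inquiry', 'Invoice', 'Proposal', 'Action Required',
  'Follow-up Reminder', 'Task', 'Personal', 'Urgent', 'Bank', 'Job Update',
  'Spam/Junk', 'Social/Networking', 'Receipt', 'Event Invite',
  'Subscription Renewal', 'System Notification',
];

return items.map((item) => {
  const { subject, textPlain } = item.json; // assumed Gmail node fields
  item.json.prompt =
    `Classify this email into exactly one of: ${labels.join(', ')}.\n` +
    `Reply with the label name only.\n\n` +
    `Subject: ${subject}\n\nBody:\n${textPlain}`;
  return item;
});
```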
WebSecScan: AI-powered website security auditor
WebSecScan: AI-Powered Website Security Auditor

This n8n workflow provides comprehensive website security analysis by leveraging OpenAI's models to detect vulnerabilities, configuration issues, and security misconfigurations. The workflow generates a professional HTML security report delivered directly via Gmail.

Key Features
- Dual-Layer Security Analysis: Performs parallel security audits using specialized OpenAI agents:
  - Header Configuration Audit: Analyzes HTTP headers, CORS policies, CSP implementation, and cookie security
  - Vulnerability Assessment: Identifies XSS vectors, information disclosure, and client-side weaknesses
- Detailed Security Grading: Automatically calculates a security grade (A+ to F) based on findings severity and quantity
- Professional Report Generation: Creates a comprehensive HTML report with:
  - Security grade visualization
  - Color-coded vulnerability categories
  - Detailed recommendations with example configuration fixes
  - Header presence/absence indicators
  - Implementation guidance for remediation
- Non-Invasive Testing: Performs analysis without active scanning or exploitation attempts

Technical Implementation
- Multi-Agent Architecture: Utilizes two specialized OpenAI agents with custom prompts tailored for security analysis
- Advanced Header Analysis: Detects presence and proper implementation of critical security headers (a simplified sketch appears after this description):
  - Content-Security-Policy
  - Strict-Transport-Security
  - X-Content-Type-Options
  - X-Frame-Options
  - Referrer-Policy
  - Permissions-Policy
- Intelligent Issue Detection: Uses JavaScript processing to analyze OpenAI outputs and count critical/warning issues
- Responsive HTML Report: Dynamically generates a mobile-friendly report with detailed findings and recommendations

Setup Requirements

OpenAI API Configuration
1. Create an OpenAI API key at platform.openai.com
2. In n8n, go to Settings → Credentials → New → OpenAI API
3. Enter your API key and save

Gmail Integration
1. Navigate to Settings → Credentials → New → Gmail OAuth2 API
2. Complete the OAuth authentication flow
3. Configure the recipient email in the "Send Security Report" node

Workflow Customization (Optional)
- Modify the form title/description in the Landing Page node
- Upgrade from gpt-4o-mini to gpt-4o for more comprehensive analysis
- Add additional recipients to the email report

Usage Instructions
1. Activate the workflow and access the form via the generated URL
2. Enter any website URL to analyze (including the http:// or https:// prefix)
3. Receive a detailed security report via email within minutes
4. Share findings with your development team to implement fixes

---

This workflow represents a non-invasive security assessment tool. For production environments, complement it with professional penetration testing services.
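For intuition, the header-audit idea can be approximated with a Code node like the one below. This is a minimal sketch, not the template's actual code: it assumes the HTTP Request node ran with the full response enabled (so headers are available), and the grading thresholds are illustrative assumptions:

```javascript
// Sketch: check a site's response headers for critical security headers
// and derive a rough grade from how many are missing.
const requiredHeaders = [
  'content-security-policy',
  'strict-transport-security',
  'x-content-type-options',
  'x-frame-options',
  'referrer-policy',
  'permissions-policy',
];

// Assumes the HTTP Request node returned headers with lowercase keys.
const headers = items[0].json.headers ?? {};
const missing = requiredHeaders.filter((h) => !(h in headers));

// Illustrative grading: one grade step per missing header.
const grades = ['A+', 'A', 'B', 'C', 'D', 'F'];
const grade = grades[Math.min(missing.length, grades.length - 1)];

return [{ json: { missing, grade } }];
```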
AI data extraction with dynamic prompts and Airtable
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for data extraction tasks where attributes are unknown or need to remain flexible. The general idea behind this pattern is that the prompts for the attributes to be extracted live outside the template, so they can be changed at any time - without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the video demo I did for n8n Studio here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Check out the example Airtable here: https://airtable.com/appAyH3GCBJ56cfXl/shrXzR1Tj99kuQbyL
Looking for the Baserow version? https://n8n.io/workflows/2780-ai-data-extraction-with-dynamic-prompts-and-baserow/

How it works
Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to the "input" or the fields, and automatically updates the rows accordingly.
The key is that Airtable fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. The n8n template reads these column descriptions, aka "prompts", and uses them as instructions to perform tasks on the "input" (see the sketch below).
In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it - such as full name, address, last position, years of experience, etc.

How to use
First publish this template and ensure it's accessible via its webhook URL. You then have to run the "create airtable webhooks" mini-flow to configure your Airtable to send change events to the n8n template. This mini-flow exists in the template, but you'll have to update the IDs. Check the template for more instructions.

Requirements
- Airtable for tables/database
- OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
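A minimal sketch of the core trick - treating Airtable field descriptions as extraction prompts - might look like this in an n8n Code node. It assumes a prior HTTP Request node fetched the table schema from Airtable's Meta API; the table and column names are hypothetical:

```javascript
// Sketch: turn Airtable field descriptions into per-column extraction prompts.
// Assumes items[0].json.tables came from GET /v0/meta/bases/{baseId}/tables.
const tables = items[0].json.tables;
const table = tables.find((t) => t.name === 'Resumes'); // hypothetical table name

const prompts = table.fields
  .filter((f) => f.description && f.name !== 'Input') // skip the context column itself
  .map((f) => ({ column: f.name, instruction: f.description }));

// e.g. [{ column: "Full Name", instruction: "Extract the candidate full name" }, ...]
return [{ json: { prompts } }];
```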
Extract and organize Colombian invoices with Gmail, GPT-4o & Google Workspace
🧾 Personal Invoice Processor

This n8n workflow automates the extraction and organization of personal invoices in Colombia received via Gmail. It includes the following key steps:

🔁 Flow Summary
1. Email Trigger
   - Polls Gmail every 30 minutes for emails with .zip attachments (assumed to contain invoices).
   - Expects ZIP files following DIAN standards.
2. ZIP File Handling
   - Extracts all files.
   - Filters only PDF and XML files for processing.
3. Data Extraction & Processing
   - Uses a LangChain Agent + OpenAI (GPT-4o-mini) to extract:
     - Document type (Factura / Nota Crédito)
     - Invoice number (Número de factura)
     - Issue date (YYYY-MM-DD)
     - Issuer and recipient NIT (without the verification digit)
     - Issuer's legal name (razón social)
     - Subtotal, IVA, Total
     - CUFE
     - Purchase summary (max 20 words, formatted as a sentence)
4. Validation
   - Ensures Total = Subtotal + IVA using a calculator node (see the sketch below).
5. Storage
   - Uploads the original PDF to Google Drive.
   - Renames the file to YYYY-MM-DD-NUMERO_FACTURA.pdf.
   - Inserts or updates invoice details in Google Sheets using a unique key (NITEmisor + NumeroFactura) to prevent duplication.

---

> ⚙️ Designed for personal use with minimal latency tolerance and high automation reliability.
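For illustration, the validation step could be approximated with a Code node like this. The field names and the rounding tolerance are assumptions, not the template's exact logic:

```javascript
// Sketch: verify that Total = Subtotal + IVA for each extracted invoice.
// A small tolerance absorbs rounding differences in currency values.
const TOLERANCE = 0.01;

return items.map((item) => {
  const { subtotal, iva, total } = item.json; // hypothetical field names
  item.json.totalsMatch = Math.abs((subtotal + iva) - total) <= TOLERANCE;
  return item;
});
```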
Convert web page to PDF using ConvertAPI
Who is this for?
Developers and organizations that need to convert web pages to PDF.

What problem is this workflow solving?
Converting a web page to PDF.

What this workflow does
- Converts a web page to PDF.
- Stores the PDF file in the local file system.

How to customize this workflow to your needs
1. Open the HTTP Request node.
2. Adjust the URL parameter (all endpoints can be found here).
3. Use your API Token for authentication: pass the token in the Authorization header as a Bearer token. You can manage your API Tokens in the User panel → Authentication.
4. Change the url parameter to the web page you want to convert to PDF.
5. Optionally, add additional body parameters for the converter (see the sketch below).
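Outside n8n, the same call can be sketched with a plain fetch request. The endpoint path and parameter names below follow ConvertAPI's v2 REST conventions as I understand them; verify them against ConvertAPI's endpoint docs before relying on this:

```javascript
// Sketch: call ConvertAPI's web-to-PDF converter directly (Node 18+).
// Replace YOUR_API_TOKEN and the target url before running.
const response = await fetch('https://v2.convertapi.com/convert/web/to/pdf', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_API_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    Parameters: [{ Name: 'Url', Value: 'https://example.com' }],
  }),
});

const result = await response.json();
// Converted files typically arrive base64-encoded under result.Files.
console.log(result.Files?.[0]?.FileName);
```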
Comprehensive SEO Audit with GPT-4 Specialists using Analytics, Search Console & PageSpeed
🤖 Automated SEO Audit with a Team of AI Specialists

This workflow performs a comprehensive, automated monthly SEO and performance audit for any website. It uses a "team" of specialized AI agents to analyze data from multiple sources, aggregates their findings, and generates a final strategic report. Every month, it automatically fetches data from Google Analytics, Google Search Console, and Google PageSpeed Insights, and also performs a live crawl of the target website's homepage.

Key Features
- Fully Automated: Runs on a schedule to deliver monthly reports without manual intervention.
- Multi-Source Analysis: Gathers data from four key marketing sources for a 360° view.
- AI Agent Team: Uses a sophisticated multi-agent system where each AI specializes in one area (Analytics, Performance, Technical SEO).
- Master Analyst: A final AI agent synthesizes all specialist reports into a single, actionable strategic plan.
- Automated Storage: All individual and final reports are automatically saved to a designated Google Sheet.

---

⚙️ Setup Instructions

To use this template, you must configure your credentials and set your target website.

1. Set Your Target Domain (Crucial!): Find the Set Target Website node at the beginning of the workflow. In the "Value" field, replace https://www.your-website.com with the URL of the website you want to audit. This will update the URL across the entire workflow automatically.
2. Configure the Schedule Trigger: Click on the Schedule Trigger node to set when you want the monthly report to run.
3. Connect Your Google Credentials:
   - Google Analytics: Select your credential in the Get a report node.
   - Google Search Console: Select your credential in the Search Console (HTTP Request) node.
   - Google Sheets: Select your credential in all Google Sheets nodes.
4. Google PageSpeed API Key: Go to the "Credentials" tab in n8n and create a new "Generic Credential" with the type "API Key - Query Param". Name it Google API Key. The "Parameter Name" must be key. Paste your PageSpeed API key into the "API Key" field. Go back to the PageSpeed Insight node, select "API Key - Query Param" for Authentication, and choose your new credential (a direct-call sketch follows this description).
5. Connect OpenAI Credentials: This template uses multiple OpenAI Chat Model nodes. Configure each one with your OpenAI API key.
6. Set Your Google Sheet: In each of the Google Sheets nodes, replace the hardcoded "Document ID" with the ID of your own Google Sheet where you want to store the reports.

---

🔬 Workflow Explained

- Phase 1: Data Collection: The Schedule Trigger starts the workflow. Four parallel branches collect data from Google Analytics, PageSpeed Insights, Search Console, and a direct website crawl.
- Phase 2: Data Processing & Specialist Analysis: Each data source is processed by a dedicated Code node to format the data. The formatted data is then sent to a specialized AI agent (ANALYTICS SPECIALIST, PERFORMANCE SPECIALIST, etc.) for in-depth analysis.
- Phase 3: Report Aggregation: A Merge node waits for all four specialist reports to be completed. A DATA AGGREGATOR node then combines them into a single, comprehensive package.
- Phase 4: Master Synthesis & Storage: The final MASTER ANALYST agent receives the aggregated data and produces a high-level strategic summary with actionable recommendations. This final report is then saved to Google Sheets.
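To sanity-check the PageSpeed credential outside n8n, a direct call to the public PageSpeed Insights v5 API looks roughly like this (the key query parameter matches the credential setup described above; the target URL is a placeholder):

```javascript
// Sketch: query Google PageSpeed Insights v5 directly (Node 18+).
// Replace YOUR_PAGESPEED_KEY and the target url before running.
const url = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
url.searchParams.set('url', 'https://www.your-website.com');
url.searchParams.set('key', 'YOUR_PAGESPEED_KEY');
url.searchParams.set('strategy', 'mobile');

const res = await fetch(url);
const data = await res.json();
// The Lighthouse performance score (0-1) lives under lighthouseResult.categories.
console.log(data.lighthouseResult?.categories?.performance?.score);
```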
From sitemap crawling to vector storage: Creating an efficient workflow for RAG
This template crawls a website from its sitemap, deduplicates URLs in Supabase, scrapes pages with Crawl4AI, cleans and validates the text, then stores content + metadata in a Supabase vector store using OpenAI embeddings. It's a reliable, repeatable pipeline for building searchable knowledge bases, SEO research corpora, and RAG datasets.

⸻

Good to know
• Built-in de-duplication via a scrape_queue table (status: pending/completed/error).
• Resilient flow: waits, retries, and marks failed tasks.
• Costs depend on Crawl4AI usage and OpenAI embeddings.
• Replace any placeholders (API keys, tokens, URLs) before running.
• Respect website robots/ToS and applicable data laws when scraping.

How it works
1. Sitemap fetch & parse — Load sitemap.xml, extract all URLs.
2. De-dupe — Normalize URLs, check the Supabase scrape_queue; insert only new ones (see the normalization sketch below).
3. Scrape — Send URLs to Crawl4AI; poll task status until completed.
4. Clean & score — Remove boilerplate/markup, detect content type, compute quality metrics, extract metadata (title, domain, language, length).
5. Chunk & embed — Split text, create OpenAI embeddings.
6. Store — Upsert into the Supabase vector store (documents) with metadata; update job status.

Requirements
• Supabase (Postgres + vector extension enabled)
• Crawl4AI API key (or header auth)
• OpenAI API key (for embeddings)
• n8n credentials set for HTTP, Postgres/Supabase

How to use
1. Configure credentials (Supabase/Postgres, Crawl4AI, OpenAI).
2. (Optional) Run the provided SQL to create scrape_queue and documents.
3. Set your sitemap URL in the HTTP Request node.
4. Execute the workflow (manual trigger) and monitor Supabase statuses.
5. Query your documents table or vector store from your app/RAG stack.

Potential Use Cases
This automation is ideal for:
• Market research teams collecting competitive data
• Content creators monitoring web trends
• SEO specialists tracking website content updates
• Analysts gathering structured data for insights
• Anyone needing reliable, structured web content for analysis

Need help customizing? Contact me for consulting and support: LinkedIn
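As a rough sketch of the de-dupe step (the normalization rules here are assumptions; the template's own Code node may differ), URL normalization before the scrape_queue check might look like:

```javascript
// Sketch: normalize sitemap URLs so near-duplicates collapse to one key.
// Lowercases the host, strips fragments and trailing slashes, and drops
// common tracking parameters before the queue insert.
function normalizeUrl(raw) {
  const u = new URL(raw);
  u.hash = '';
  u.hostname = u.hostname.toLowerCase();
  ['utm_source', 'utm_medium', 'utm_campaign'].forEach((p) => u.searchParams.delete(p));
  u.pathname = u.pathname.replace(/\/+$/, '') || '/';
  return u.toString();
}

const seen = new Set();
const unique = [];
for (const item of items) {
  const key = normalizeUrl(item.json.url); // assumed field name
  if (!seen.has(key)) {
    seen.add(key);
    unique.push({ json: { url: key, status: 'pending' } });
  }
}
return unique;
```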
Deploy Docker MinIO, API backend for WHMCS/WISECP
Setting up the n8n workflow

Overview
The Docker MinIO WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

Install the Required Workflow on n8n
You have two options:
- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual Installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure the API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – Must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – Directory where user data related to Docker and disks will be stored.
- mount_dir – Default mount point for the container disk (recommended not to change).
Do not modify the following technical parameters:
- screen_left
- screen_right

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string (see the sketch below). All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
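Since the scripts return either JSON or a plain string, a defensive Code node right after the SSH element might normalize the output like this. This is a sketch under the assumption that the SSH node exposes the command output in a stdout field:

```javascript
// Sketch: normalize SSH script output, which may be JSON or a plain string.
return items.map((item) => {
  const raw = (item.json.stdout ?? '').trim(); // 'stdout' is an assumed field name
  try {
    item.json.result = JSON.parse(raw); // structured response from the Bash script
  } catch {
    item.json.result = { message: raw }; // fall back to wrapping the raw string
  }
  return item;
});
```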
Smart chat routing system with Gemini AI and Notion for customer support
Pre‑Built AI Customer Service System for Businesses | n8n, Gemini & Notion

💥 What It Does
Revolutionize your client interactions with this Done‑For‑You AI Customer Service & Lead Routing System. This advanced n8n workflow, powered by Google Gemini and integrated with Notion, is pre-configured and ready to deploy, instantly transforming how you handle inquiries. Stop losing valuable time to manual support and inefficient lead qualification; this system intelligently routes messages, retrieves information from your Notion database, and provides personalized assistance from day one. It's the ultimate shortcut to professional, scalable customer engagement and lead conversion, delivered as a fully set up automation.

---

⚙️ Key Features
- ⚡ Instant AI Lead Routing: Automatically classifies incoming messages (customer service, questions, booking) and directs them to the right AI agent for a seamless user experience.
- 🧠 Multi-Agent AI System: Includes specialized AI agents for comprehensive customer support, product/service inquiries, and automated consultation booking.
- 💡 Notion-Powered Knowledge Base: Leverages your existing Notion databases to pull accurate, contextual information for personalized responses and solutions.
- 🤝 Personalized Customer Support: The Customer Service Agent accesses the Notion CRM to provide tailored support based on customer history and previous interactions.
- 📈 Automated Consultation Booking: The Booking Agent streamlines scheduling by guiding users to your intake forms, qualifying leads effortlessly.

---

😩 Pain Points Solved
- Sick of wasting countless hours on manual customer service inquiries and support?
- Tired of slow response times costing you valuable leads and frustrating clients?
- Struggling to build a complex AI chatbot system from scratch with no prior experience?
- Overwhelmed by disorganized customer data and scattered product information?
- Missing out on potential sales opportunities due to inefficient lead qualification processes?

---

📦 What's Included
- Fully configured n8n AI chatbot workflow for instant deployment
- Pre-integrated Google Gemini language models and AI agents
- Ready-to-connect Notion CRM and knowledge base tools
- Comprehensive, step-by-step deployment and launch guide
- Ongoing access to future updates and enhancements

---

🚀 Call to Action
Launch your AI customer powerhouse today. No setup, no stress, just instant results.

---

🏷️ Tags
done for you ai, n8n workflow, ai chatbot, customer service automation, lead qualification, notion integration, google gemini, pre built system, ai agent, business automation, digital product, ready to use, instant deploy
Beginner's Guide to Workflow Automation with OpenAI, LangChain & API Integrations
How it works
This beginner-friendly workflow demonstrates the core building blocks of n8n. It guides you through:
- Triggers – Start workflows manually, on a schedule, via webhooks, or through chat.
- Data processing – Use Set and Code nodes to create, transform, and enrich data (see the sketch below).
- Logic and branching – Apply conditions with IF nodes and merge different branches back together.
- API integrations – Fetch external data (e.g., users from an API), split arrays into individual items, and extract useful fields.
- AI-powered steps – Connect to OpenAI for generating fun facts or build interactive assistants with chat triggers, memory, and tools.
- Responses – Return structured results via webhooks or summary nodes.

By the end, it demonstrates a full flow: creating data → transforming it → making decisions → calling APIs → using AI → responding with outputs.

Set up steps
Time required: 5–10 minutes.

What you need:
- An n8n instance (cloud or self-hosted).
- Optional: API credentials (e.g., OpenAI) if you want to test AI features.

Setup flow:
1. Import this workflow.
2. Add your API keys where needed (OpenAI, etc.).
3. Trigger the workflow manually or test with webhooks.

> 👉 Detailed node explanations and examples are already included as sticky notes inside the workflow itself, so you can learn step by step as you explore.
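For a taste of the Code-node step, a minimal transform that enriches incoming items might look like this (the input field names are illustrative, not taken from the workflow):

```javascript
// Sketch: a beginner-level n8n Code node that enriches each item.
return items.map((item) => {
  const { firstName, lastName } = item.json; // illustrative input fields
  item.json.fullName = `${firstName} ${lastName}`;
  item.json.processedAt = new Date().toISOString();
  return item;
});
```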
Turn text into cinematic videos using GPT-4.1, Dumpling FLUX.1 Pro, and Veo 3
🧾 What this workflow does
This automation transforms a short text idea into a cinematic video by chaining together three powerful AI services: GPT-4.1 for scene creation, Dumpling AI's FLUX.1 Pro model for visual generation, and the KIE API (Veo 3) for cinematic video creation. The system fully automates the journey from raw concept to video output, returning the final video URL.

---

👤 Who is this for
- Content creators who want to test visual ideas quickly
- Agencies creating moodboards, ad scenes, or pitch visuals
- Solo marketers or founders without video editing skills
- AI automation builders creating content tools

---

⚙️ How to set up

✅ Requirements
- OpenAI GPT-4.1 API key
- Dumpling AI (FLUX.1 Pro model) API token
- KIE API account with access to the Veo 3 endpoint
- Optional: a tool to store or share the final video link (e.g., Google Sheets, Slack)

🔧 Setup steps
1. Start with a text idea. Example: "A lion running through misty mountains at sunrise."
2. GPT-4.1 Node: Expands the idea into two parts:
   - Cinematic Prompt: Describes the atmosphere, emotion, and camera movement.
   - Image Prompt: A vivid single-frame visual to generate the base image.
3. Dumpling AI Node (FLUX.1 Pro): Takes the image prompt and returns a cinematic-style image. You can customize dimensions, steps, seed, and guidance level.
4. KIE API Node (Veo 3): Sends both the cinematic prompt and image to the Veo 3 model. The model returns a video URL (e.g., 3–6 seconds of cinematic footage).
5. Final Output: The video URL is returned by the HTTP node. You can connect this to Airtable, Slack, Telegram, or Google Sheets to log the result or share it with your team.

---

🧠 How it works
1. You input a text idea
2. GPT-4.1 turns it into both a detailed cinematic prompt and a base image prompt (see the sketch below)
3. Dumpling AI generates the image
4. KIE API's Veo 3 turns it into a cinematic video
5. The final video URL is returned for download or embedding

---

💡 Customization Ideas
- Add a Telegram bot to trigger this workflow with an idea via message
- Route video links to a Notion database or content calendar
- Add a loop with rating logic to regenerate low-rated videos
- Use Google Drive to auto-save videos in brand folders
- Automate weekly video ideation for social media using a prompt list

---

This workflow helps you turn raw imagination into cinematic motion using AI. From social content to storyboarding, you can generate compelling visuals in minutes — no design or video background needed.
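The prompt-splitting step can be approximated outside n8n like this. It is a minimal sketch: the JSON keys (cinematicPrompt, imagePrompt) and the system prompt wording are assumptions, not the template's actual prompt:

```javascript
// Sketch: ask GPT-4.1 to split an idea into a cinematic prompt and an image prompt.
const idea = 'A lion running through misty mountains at sunrise';

const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4.1',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'Return JSON with keys "cinematicPrompt" (atmosphere, emotion, camera movement) ' +
          'and "imagePrompt" (a vivid single frame).',
      },
      { role: 'user', content: idea },
    ],
  }),
});

const { choices } = await res.json();
const { cinematicPrompt, imagePrompt } = JSON.parse(choices[0].message.content);
console.log({ cinematicPrompt, imagePrompt });
```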
Create a domains-index API server with full operation access for AI agents
Complete MCP server exposing 14 Domains-Index API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Domains-Index API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Domains-Index API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to /v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Domains (9 endpoints)
• GET /domains/search: Domains Database Search
• GET /domains/tld/{zone_id}: Get TLD records
• GET /domains/tld/{zone_id}/download: Download whole dataset for a TLD
• GET /domains/tld/{zone_id}/search: Domains search for a TLD
• GET /domains/updates/added: Get added domains, latest if date not specified
• GET /domains/updates/added/download: Download added domains, latest if date not specified
• GET /domains/updates/deleted: Get deleted domains, latest if date not specified
• GET /domains/updates/deleted/download: Download deleted domains, latest if date not specified
• GET /domains/updates/list: List of updates

🔧 Info (5 endpoints)
• GET /info/api: Returns API info
• GET /info/stat/: Returns overall statistics
• GET /info/stat/{zone}: Returns statistics for a specific zone
• GET /info/tld/: Returns overall TLD info
• GET /info/tld/{zone}: Returns TLD info for a specific zone

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Domains-Index API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
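For reference, a parameter inside one of the HTTP Request tool nodes is typically wired up with an n8n expression along these lines. The key/description/type argument order reflects my understanding of n8n's $fromAI() helper; the description text is illustrative, so check n8n's documentation for the exact signature:

```javascript
// Sketch: an n8n expression in an HTTP Request tool node's URL or query field.
// $fromAI(key, description, type) lets the connected AI agent supply the value at runtime.
{{ $fromAI('zone_id', 'TLD zone to query, e.g. "com"', 'string') }}
```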