
Phishing analysis - URLScan.io and VirusTotal

This n8n workflow automates the analysis of email messages received in a Microsoft Outlook inbox to identify indicators of compromise (IOCs), specifically suspicious URLs. It can be triggered manually or scheduled to run daily at midnight. The workflow begins by retrieving up to 100 read email messages from the Outlook inbox (note: as configured, the node fetches read messages, but it should retrieve unread ones). It then marks these messages as read so they are not processed again, and splits them into individual items with the Split In Batches node for sequential processing.

For each email, the workflow analyzes the content for URLs, which are treated as potential IOCs. Any URLs found are checked for potential threats by two services in parallel: URLScan.io and VirusTotal. In the first path, URLScan.io scans each URL; if there are no errors, the results from URLScan.io and VirusTotal are merged, and if there are errors, the workflow waits one minute before attempting to retrieve the URLScan results again. The loop then continues with the next email. In the second path, VirusTotal scans the URLs and its results are retrieved.

Finally, the workflow checks that the data field is not empty, filtering out items where no data was found, and sends a summarized Slack message reporting the analyzed email's subject, sender, and date, the URLScan report URL, and the VirusTotal verdict for URLs reported as malicious. Potential issues during setup include configuring the Outlook node to retrieve unread messages, resolving a configuration issue in the VirusTotal node, and handling authentication and API keys for both the URLScan.io and VirusTotal nodes. Proper error handling and testing with various email content types and URLs are also essential to ensure the workflow accurately identifies IOCs and reports them to the Slack channel.
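To make the URL-extraction step concrete, here is a minimal sketch of what such a Code node could look like. This is not the template's actual code; field names such as `body.content` are assumptions based on the Outlook node's typical output.

```javascript
// Hypothetical n8n Code node: extract URLs (potential IOCs) from each email.
// Assumes the Outlook message body is exposed as item.json.body.content.
const urlPattern = /https?:\/\/[^\s"'<>)]+/gi;

return items.map((item) => {
  const body = item.json.body?.content ?? '';
  const urls = [...new Set(body.match(urlPattern) ?? [])]; // de-duplicate hits

  return {
    json: {
      subject: item.json.subject,
      sender: item.json.from,
      urls, // an empty array means "no IOCs found" and is filtered out later
    },
  };
});
```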

By n8n Team

Send multiple emails in Gmail directly via Google Sheets

[Video of the workflow process](https://www.canva.com/design/DAF8VnLBBQA/BKog1CSHs7goAYse3mEzQ/watch?utm_content=DAF8VnLBBQA&utm_campaign=designshare&utm_medium=link&utm_source=editor)

In today's fast-paced digital world, businesses are constantly seeking ways to streamline their processes and enhance customer engagement. One powerful tool for both is n8n, an automation platform that lets users build workflows to automate repetitive tasks.

Benefits of the workflow:
- Efficiency: Automating the sending of emails based on Google Sheets data significantly reduces manual effort and saves time.
- Accuracy: Emails reach the right recipients at the right time, because items are filtered on specific conditions and the current date.
- Personalization: Emails can be personalized from the information in the Google Sheet, enhancing customer engagement.
- Real-time updates: The workflow writes the status of sent emails back to the Google Sheet, providing real-time insight into the communication process.
- Consistency: Automation keeps communication with customers consistent, ensuring a seamless experience.

Workflow overview: The workflow begins with the "Google Sheets Trigger" node, which monitors a specified Google Sheet for new rows. When a new row is detected, the "Filter Status (Waiting for sending)" node filters items on specific conditions, and the "Filter Items by Current Date" node keeps only items matching the current date (see the sketch after this description). The filtered items are forwarded to the "Gmail" node, which composes and sends personalized emails based on the sheet data. Finally, the "Google Sheets" node updates the sheet with the status of the sent emails and other relevant information. Copy this template to get started: Google Sheets.

Workflow nodes documentation: Schedule Trigger, Filter Items by Current Date, Gmail, Google Sheets, Filter Status (Waiting for sending), Set data, Merge field.

Conclusion: This n8n workflow is a powerful solution for automating email communication based on Google Sheets data. By leveraging automation, businesses can improve operational efficiency, accuracy, and customer engagement, while the seamless integration of nodes keeps interactions with customers timely and personalized.
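As an illustration of the "Filter Items by Current Date" step, here is a minimal Code-node sketch. The column name `sendDate` and its YYYY-MM-DD format are assumptions for the example, not fields defined by the template.

```javascript
// Hypothetical sketch: keep only rows whose send date is today.
// Assumes each Google Sheets row carries a sendDate column as YYYY-MM-DD.
const today = new Date().toISOString().slice(0, 10);

return items.filter((item) => item.json.sendDate === today);
```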

By AlQaisi

🤖 AI content generation for auto service 🚘 automate your social media📲!

Who is this for? 🚘 This workflow is designed for auto service / car repair businesses looking to streamline their social media marketing with AI tools and automation via n8n. Whether you're a small garage owner, a car repair shop, an automotive specialist, or an auto mechanic, this tool helps you maintain an active online presence without spending hours creating content. 💪🏼 Although this template is set up for daily auto-service content, the underlying logic is universal: you can easily adapt it to any niche by editing prompts, adding nodes, and creating or uploading a variety of content to any platform. You can use any LLM and generative AI of your choice. Personally, I prefer the smooth, effective results of ChatGPT 4.1 combined with GPT Image 1, but you can generate videos instead of images for your posts as well 😉

What problem is this workflow solving? 🤦‍♂️ Many auto service businesses struggle to post engaging content consistently due to time constraints or a lack of marketing expertise. 💎 This workflow solves that by automating the content creation and posting process, keeping your social media channels fresh and active and ultimately attracting more customers.

What this workflow does:
- Generates daily social media posts tailored to the auto service niche using AI.
- Allows easy customization of post and image prompts.
- Integrates research links through the Tavily internet search tool for relevant content.
- Supports starting posts from reference article links via Google Sheets.
- Automatically publishes posts to your connected social media platforms.
- Enables scheduled or trigger-based posting for maximum flexibility.

How it works? Easy, actually ☺️ AI creates daily social media content made just for auto service. You simply edit the prompts for posts and images and set up news or article research links via the Tavily internet search tool. You can also start the workflow with a reference article link through Google Sheets. 🎯 Consistently posting relevant, timely niche content helps attract new customers, all while AI and n8n tools keep the process seamless and cost-effective.

Set up steps: I kept it quick and simple for you ✨
- If you're happy with the current LLM and image model configurations, just connect your OpenAI API credentials to enable AI content generation.
- Link your social media accounts (Facebook, Telegram, X, etc.) for autoposting.
- Optionally connect Google Sheets if you want to trigger posts from sheet updates with reference links.
- Customize prompts as needed to match your brand voice, style, and marketing tasks.
- Choose between: 1) scheduled automatic generation and posting at consistent times, which social algorithms favor; 2) a Google Sheets trigger with a reference link; 3) a manual start.

How to customize this workflow to your needs:
- Switch AI models and edit prompts to better reflect your specific services or promotions.
- Add or modify research links in Tavily to keep your posts timely and relevant.
- Adjust posting schedules to match your audience's peak engagement times.
- Expand or reduce the number of integrated social platforms to fit your marketing strategy.
- Use Google Sheets to batch-upload ideas or curate specific content topics.

After adjusting a few settings, activate the workflow and let it run. 🤖 The system will then automatically publish your content across your selected social platforms, saving you time and effort.
📌 You'll find more detailed tips and additional AI models for customizing the generation process inside the workflow via sticky notes. 💬 Need help? For additional guidance, feel free to message me; here's my profile in the n8n community for direct contact 👈 click!

By N8ner

Generate YouTube video metadata (timestamps, tags, description, ...)

For who?
- Content creators
- YouTube automation
- Marketing teams

---

How it works:
1. Enter the ID of the YouTube channel to trigger the workflow when a new video is posted.
2. Apify scrapes the channel's latest video.
3. The workflow waits until the dataset is completed in Apify, then fetches it.
4. It verifies that metadata has not already been generated, then generates it with an LLM.
5. It formats all the generated data and updates the YouTube video.

📺 YouTube video tutorial: [Watch here](https://www.youtube.com/watch?v=HaQPAa6l5bU)

---

SETUP

Input (YouTube channel): Go to the channel's page on YouTube and look at the page URL. The channel ID is the value after channel/ in the URL; add it after "?channel_id=". You can also use free online tools to retrieve a channel ID (a small helper is sketched after this description).

Output (YouTube video update): Connect your YouTube account to your n8n instance via the Google Cloud Console. You can find tutorials by searching "youtube api OAuth" on Google.

APIs: For the following third-party integrations, replace ==[YOUR_API_TOKEN]== with your API token, or connect your account via client ID / secret to your n8n instance:
- Apify: https://docs.apify.com/api/v2/getting-started
- YouTube: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.youtube/

---

👨‍💻 More workflows: https://n8n.io/creators/nasser/
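For reference, extracting the channel ID from a channel URL can be sketched in a few lines of JavaScript. This is an illustration only; the template itself expects you to paste the ID manually.

```javascript
// Illustrative helper: pull the channel ID out of a YouTube channel URL.
// Only handles URLs of the form .../channel/<ID>; @handle URLs need an API lookup.
function extractChannelId(url) {
  const match = url.match(/channel\/([A-Za-z0-9_-]+)/);
  return match ? match[1] : null;
}

extractChannelId('https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw');
// => 'UC_x5XG1OV2P6uZZ5FSM9Ttw'
```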

By Nasser

WordPress - AI chatbot to enhance user experience - with Supabase and OpenAI

This is the first version of a template for a RAG/GenAI app using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How it works

This template includes three workflows:
- Workflow 1: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- Workflow 2: Handle upserts for WordPress content when edits are made.
- Workflow 3: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template? It can be applied to various use cases:
- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance the experience of visitors searching for information.
- Gain insights into the kinds of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking whether similar content already exists. Useful for internal linking.

Prerequisites
- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.
- Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial embedding

This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables. Nodes: Postgres - Create Documents Table (structured to support OpenAI embedding models with 1536 dimensions), Postgres - Create Workflow Execution History Table. These two nodes create tables in Supabase: the documents table, which stores embeddings of your website content, and the n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts by tracking the workflow execution ID and execution timestamp.

Step 1: Retrieve and merge WordPress pages and posts. Nodes: WordPress - Get All Posts, WordPress - Get All Pages, Merge WordPress Posts and Pages. These three nodes retrieve all content and metadata from your posts and pages and merge them. Important: apply filters to avoid generating embeddings for all site content.

Step 2: Set fields, apply the filter, and transform HTML to Markdown. Nodes: Set Fields, Filter - Only Published & Unprotected Content, HTML to Markdown. These three nodes prepare the content for embedding by setting up the necessary fields for content embeddings and document metadata; filtering to include only published and unprotected content (protected=false), so private or unpublished content is excluded from your GenAI application; and converting HTML to Markdown, which improves performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate embeddings, store documents in Supabase, and log the workflow execution. Nodes: Supabase Vector Store (with sub-nodes Embeddings OpenAI, Default Data Loader, and Token Splitter), Aggregate, Supabase - Store Workflow Execution. This step generates embeddings for the content, stores it in Supabase, and logs the workflow execution details. The Embeddings OpenAI node generates vector embeddings for the content, and the Default Data Loader prepares the content for embedding storage.
The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important note: be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers. The Token Splitter segments content into manageable sizes to comply with token limits, and the Aggregate node ensures the last node runs only once (for a single item). The Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed. This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once, for the initial embedding; Workflow 2, described below, handles all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle document upserts

Content on a website follows a lifecycle: it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages newly added content and updated content.

Step 1: Retrieve WordPress content on a regular CRON schedule. Nodes: CRON - Every 30 Seconds, Postgres - Get Last Workflow Execution, WordPress - Get Posts Modified After Last Workflow Execution, WordPress - Get Pages Modified After Last Workflow Execution, Merge Retrieved WordPress Posts and Pages. A CRON job (set to run every 30 seconds in this template, adjustable as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. The HTTP nodes then use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date, capturing both newly added and recently updated content. The retrieved content is then merged for further processing (see the request sketch after this description).

Step 2: Set fields and apply the filter. Nodes: Set Fields2, Filter - Only Published and Unprotected Content. Same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.

Step 3: Loop over items to identify and route updated vs. newly added content. Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :) Nodes: Loop Over Items, Postgres - Filter on Existing Documents, Switch. The existing_documents route (documents with matching IDs found in metadata) runs Supabase - Delete Row if Document Exists, which removes any existing entry for the document in preparation for an update; Aggregate2, which aggregates documents on Supabase by ID so that Set Fields3 is executed only once per WordPress content item, avoiding duplicate execution; and Set Fields3, which sets the fields required for embedding updates. The new_documents route (no matching document IDs found in metadata) runs Set Fields4, which configures fields for embedding newly added content. In this step, a loop processes each item and directs it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per WordPress content item, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase vector store, and workflow execution table update. The HTML to Markdown node mirrors Workflow 1, Step 2; refer to that section for details on how HTML content is converted to Markdown for improved embedding performance and relevance. The content is then stored in the Supabase vector store to manage embeddings efficiently, and lastly the workflow execution table is updated. These nodes mirror the Workflow 1, Step 3 nodes.

Workflow 3: An example GenAI app with WordPress content: a chatbot to embed on your website

Step 1: Retrieve Supabase documents, aggregate, and set fields after a chat input. Nodes: When Chat Message Received, Supabase - Retrieve Documents from Chat Input, Embeddings OpenAI1, Aggregate Documents, Set Fields. When a user sends a message to the chat, the prompt (the user's question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1, Step 0) retrieves documents relevant to the question, enabling a more accurate and relevant response. The Supabase vector store retriever fetches the matching documents including their metadata, the Aggregate Documents node consolidates the retrieved data, and Set Fields organizes the data into a more readable input for the AI agent. Using the AI agent directly without these nodes would prevent metadata from reaching the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response: by including it, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call the AI agent, respond to the user, and store the chat conversation history. Nodes: AI Agent (with sub-nodes OpenAI Chat Model and Postgres Chat Memories), Respond to Webhook. This step calls the AI agent to generate an answer, responds to the user, and stores the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
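As a sketch of how Workflow 2's "modified after" requests could be built: names and parameters below are illustrative, though the WordPress REST API does support the modified_after query parameter (since WordPress 5.7).

```javascript
// Hypothetical Code node: build the WordPress REST URLs for the upsert queries.
// lastRun is the timestamp read from the n8n_website_embedding_histories table.
const lastRun = $json.last_execution_timestamp; // e.g. '2024-01-01T00:00:00'
const base = 'https://example.com/wp-json/wp/v2'; // replace with your site

const query = `modified_after=${encodeURIComponent(lastRun)}&status=publish&per_page=100`;

return [
  {
    json: {
      postsUrl: `${base}/posts?${query}`,
      pagesUrl: `${base}/pages?${query}`,
    },
  },
];
```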

By Dataki

Voiceflow demo support chatbot

Submission overview for the Voiceflow demo workflow. View the YouTube video for this workflow here.

Who is this for? This workflow is ideal for businesses and developers using Voiceflow to power AI voice chatbots. It benefits teams that want to enhance chatbot functionality through integrations with platforms like Zendesk, Google Calendar, and Airtable.

What problem is this workflow solving? The workflow addresses the need for seamless integration of chatbot interactions with backend systems. It automates customer service tasks such as ticket creation, meeting scheduling, and data reporting, reducing manual effort and enhancing efficiency.

What does this workflow do?
- Customer lookup: checks the database for existing customers and returns relevant details or a "NOT_FOUND" status (see the sketch after this description).
- Zendesk ticket creation: automates the creation of support tickets for customer issues.
- Meeting scheduling: integrates with Google Calendar to provide availability and schedule meetings.
- Transcript reporting: aggregates interaction data and sends it to Airtable for analysis by the product team.

Setup
1. Configure your Voiceflow chatbot to connect to this workflow via a webhook.
2. Set up the required integrations: the Zendesk API for ticket creation, the Google Calendar API for scheduling, and the Airtable API for storing transcripts.
3. Customize the workflow's nodes to match your use case, such as database fields or API endpoints.
4. Deploy the workflow on your n8n instance and test the integrations.

How to customize this workflow to your needs
- Adjust database queries to match your customer data schema.
- Modify the Zendesk ticket payload to include additional fields or custom formats.
- Update Google Calendar configurations for different scheduling requirements.
- Add or remove Airtable fields based on the product team's analysis needs.

This template adheres to n8n's submission guidelines, ensuring clarity, relevance, and broad applicability for users in customer service, product development, and automation.
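A minimal sketch of the customer-lookup branch, assuming the database query returns zero or one rows and that Voiceflow branches on a status field. All field names here are assumptions, not taken from the template.

```javascript
// Hypothetical Code node: normalize the lookup result for Voiceflow.
// `items` holds whatever rows the preceding database node returned.
if (items.length === 0) {
  return [{ json: { status: 'NOT_FOUND' } }];
}

return [{ json: { status: 'FOUND', customer: items[0].json } }];
```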

By Angel Menendez

Build your own PostgreSQL MCP server

This n8n template demonstrates how to build a simple PostgreSQL MCP server to manage a PostgreSQL database for domains such as HR, payroll, sales, inventory, and more! This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres

How it works

An MCP server trigger is used and connected to 5 tools: 2 PostgreSQL and 3 custom workflow. The 2 PostgreSQL tools are simple read-only queries, so the PostgreSQL tool can be used directly. The 3 custom workflow tools handle select, insert, and update queries, as these operations require a bit more discretion. While it may be easier to let the agent write raw SQL queries, it is a little safer to allow only parameters instead. The custom workflow tool lets us define this restricted schema for tool input, which we then use to construct the SQL statement ourselves (see the sketch after this description). All 3 custom workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler. Finally, the standard PostgreSQL node handles the select, insert, and update operations, and the responses are sent back to the MCP client.

How to use

This PostgreSQL MCP server allows any compatible MCP client to manage a PostgreSQL database through select, insert, and update operations. You will need a database available before you can use this server. Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/integrating-with-claude-desktop

Try the following queries in your MCP client:
- "Please help me check if Alex has an entry in the users table. If not, please help me create a record for her."
- "What was the top selling product in the last week?"
- "How many high priority support tickets are still open this morning?"

Requirements
- A PostgreSQL database. This can be an external database such as Supabase or one you host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow

If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific business purpose, e.g. confine querying and editing to HR-only tables before giving access to people in that department. Remember to set the MCP server to require credentials before going to production and sharing it with others!
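To illustrate the "construct the SQL statement ourselves" idea, here is a hedged sketch of an insert handler. The allow-list, field names, and schema are assumptions; the point is that the agent supplies only a table, columns, and values, never raw SQL.

```javascript
// Hypothetical handler inside the "Execute workflow" branch for inserts.
// Identifiers are validated against an allow-list; values become query parameters.
const ALLOWED_TABLES = ['users', 'tickets', 'sales']; // adjust to your schema

const { table, columns, values } = $json;

if (!ALLOWED_TABLES.includes(table)) {
  throw new Error(`Table not allowed: ${table}`);
}
if (!columns.every((c) => /^[a-z_][a-z0-9_]*$/i.test(c))) {
  throw new Error('Invalid column name');
}

const placeholders = columns.map((_, i) => `$${i + 1}`).join(', ');
const query = `INSERT INTO ${table} (${columns.join(', ')}) VALUES (${placeholders})`;

// Hand `query` and `values` to the PostgreSQL node's query-parameters option.
return [{ json: { query, values } }];
```

Keeping identifiers allow-listed and values parameterized is what makes the restricted-schema approach safer than letting the agent emit raw SQL.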

By Jimleuk

Generate and insert data into a Postgres database

This is Workflow 1 in the blog tutorial Database activity monitoring and alerting.

Prerequisites
- A Postgres database set up and its credentials.
- Basic knowledge of JavaScript and SQL.

Nodes
- A Cron node starts the workflow every minute.
- A Function node generates the sensor data: a preset sensor ID, a randomly generated value, a timestamp, and a notification flag (preset to false). A sketch of this node follows below.
- A Postgres node inserts the data into a Postgres database.

You can create the table for this workflow with the following SQL statement:

```sql
CREATE TABLE n8n (id SERIAL, sensor_id VARCHAR, value INT, timestamp TIMESTAMP, notification BOOLEAN);
```
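A minimal sketch of that Function node, matching the table's columns. The preset sensor ID is an assumption; any string works as long as it matches your queries.

```javascript
// Generate one sensor reading per run, shaped for the Postgres insert.
return [
  {
    json: {
      sensor_id: 'sensor-001',                 // preset ID
      value: Math.round(Math.random() * 100),  // randomly generated reading
      timestamp: new Date().toISOString(),
      notification: false,                     // preset as false
    },
  },
];
```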

By tanaypant

Get articles from Hacker News

Companion workflow for Hacker News node docs

By amudhan

Automated DHL shipment tracking bot for web forms and email inquiries

This n8n template automates responses to customer inquiries about DHL shipment status, handling requests from both web forms and emails.

Use cases
- Automate customer support: provide 24/7 instant answers to the common "Where is my order?" question without human intervention.
- Reduce support tickets: decrease the volume of repetitive tracking inquiries by giving customers immediate, self-service information.
- Enhance customer experience: offer a consistent and rapid response across multiple channels (your website and email), letting customers use their preferred method of contact.

Good to know
- A DHL API key is required: register on the DHL Developer Portal to get one.
- This workflow requires Gmail credentials (OAuth2) to monitor incoming emails and send replies.
- The webhook URL must be configured in your website's contact or tracking form to receive submissions.

How it works
1. The workflow is initiated by one of two triggers: a Webhook (from a website form) or a Gmail Trigger (when a new email arrives).
2. A Merge node combines the data from both triggers into a single, unified flow.
3. The "Extract Tracking Number" Code node parses the tracking number from either the form data or the email body, and also extracts the customer's name and email address (see the sketch after this description).
4. The HTTP Request node sends the extracted tracking number to the DHL API to fetch the latest shipment status.
5. The "Format Response Message" Code node takes the API response and composes a user-friendly message for the customer, including the case where no tracking information is found.
6. An If node checks the original source of the inquiry to determine whether it came from the webhook or email.
7. If the request came from the webhook, a Respond to Webhook node sends the tracking data back as a JSON response; if it came from an email, the Gmail node sends the formatted message as an email reply to the customer.

How to use

Configure the triggers:
- Webhook Trigger: copy the Test URL and set it as the action endpoint for your web form. Once you activate the workflow, use the Production URL: https://your-n8n-instance.com/webhook/dhl-tracking-inquiry
- Gmail Trigger: connect your Gmail account using OAuth2 credentials and set the desired filter conditions (e.g., unread emails with a specific subject).

Set up the DHL API: open the "Get DHL Tracking Status" (HTTP Request) node, navigate to the "Headers" tab, and replace YOUR_DHL_API_KEY with your actual DHL API key:

```json
{ "DHL-API-Key": "YOUR_DHL_API_KEY" }
```

Configure the Gmail send node: connect the same Gmail credentials to the "Send Gmail Response" node and customize options like the replyTo address as needed. Then activate the workflow.

Requirements
- A DHL Developer Portal account to obtain an API key.
- A Gmail account configured with OAuth2 in n8n.

Customising this workflow
- Add more carriers: duplicate the HTTP Request node and response-formatting logic to support other shipping carriers like FedEx or UPS.
- Log inquiries: add a node to save inquiry details (tracking number, customer email, status) to a Google Sheet or database for analytics.
- Advanced error handling: implement more robust handling, such as sending a Slack notification to your support team if the DHL API is down or returns an unexpected error.
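A hedged sketch of the "Extract Tracking Number" Code node. The regex covers only common DHL formats (10-digit express waybills and JJD-prefixed eCommerce IDs) and every field name is an assumption; adjust both to your actual form and email payloads.

```javascript
// Hypothetical extraction step: find a tracking number and customer email
// in either the webhook form data or the raw email text.
const text = [$json.body, $json.text, $json.snippet].filter(Boolean).join(' ');

const trackingMatch = text.match(/\bJJD\d{16,}\b|\b\d{10}\b/i);
const emailMatch = text.match(/[\w.+-]+@[\w-]+\.[\w.-]+/);

return [
  {
    json: {
      trackingNumber: trackingMatch ? trackingMatch[0] : null,
      customerEmail: emailMatch ? emailMatch[0] : null,
      source: $json.webhookUrl ? 'webhook' : 'email', // crude check, illustrative only
    },
  },
];
```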

By Yusuke Yamamoto

Domain extractor

This workflow cleanses URLs and efficiently extracts their domain names. It handles a wide range of URL formats, including those with unconventional or multi-part top-level domains (TLDs) such as co.uk. You can also use it to extract the domain from an email address, and it will check whether the domain belongs to a free email provider (gmail.com, outlook.com, etc.).

How it works

The workflow analyzes the provided URL, removing any unnecessary elements, then identifies and extracts the domain name, ensuring compatibility with a diverse array of TLDs. It relies on an extensive list of TLDs to guarantee accurate domain extraction for virtually any URL. To view the complete list of supported top-level domains, see: TLD List on GitHub

How to use it

Call this workflow using the "Execute Workflow" node. You can pass either an email variable or a url variable. For emails, the workflow also detects free mail providers such as Yahoo, Google, etc.
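A simplified sketch of the extraction logic, with a tiny sample of multi-part TLDs standing in for the full GitHub list. Both lists below are illustrative subsets, not the template's data.

```javascript
// Minimal domain extractor for URLs and email addresses.
const MULTI_PART_TLDS = ['co.uk', 'org.uk', 'com.au']; // tiny subset of the real list
const FREE_PROVIDERS = ['gmail.com', 'outlook.com', 'yahoo.com']; // sample only

function extractDomain(input) {
  // Accept either an email address or a URL.
  const host = input.includes('@')
    ? input.split('@').pop()
    : new URL(/^https?:\/\//.test(input) ? input : `https://${input}`).hostname;

  const parts = host.replace(/^www\./, '').split('.');
  const lastTwo = parts.slice(-2).join('.');
  const keep = MULTI_PART_TLDS.includes(lastTwo) ? 3 : 2;
  return parts.slice(-keep).join('.');
}

const domain = extractDomain('alice@example.co.uk'); // => 'example.co.uk'
const isFree = FREE_PROVIDERS.includes(domain);      // => false
```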

By Lucas Perret

Export WordPress posts to spreadsheet

Export WordPress posts to a spreadsheet and download the .csv file to your local machine.

By Przemek Chojecki