13 templates found

Automated resume job matching engine with Bright Data MCP & OpenAI 4o mini

Notice: Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Automated Resume Job Matching Engine is an intelligent workflow for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards. This workflow is tailored for:
- HR Tech Founders - building next-gen recruiting products
- Recruiters & Talent Sourcers - seeking automated candidate-job fit evaluation
- Job Boards & Portals - enriching the user experience with AI-driven job recommendations
- Career Coaches & Resume Writers - offering personalized job fit analysis
- AI Developers - automating large-scale matching tasks using LinkedIn and job data

What problem is this workflow solving?
Manually matching a resume to a job description is time-consuming, biased, and inefficient. In addition, accessing live job postings and candidate profiles means overcoming web scraping limitations. This workflow solves:
- Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and a candidate resume using OpenAI 4o mini
- Pagination handling for high-volume job data
- End-to-end automation from scraping to delivery via webhook, persisting the matched-job response to disk

What this workflow does
Bright Data MCP for job data extraction:
- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn with the pre-defined filtering criteria
OpenAI 4o mini LLM matching engine:
- Extracts textual job description information from the paginated job data via the Bright Data MCP scrape_as_html tool
- The AI Job Matching node compares the job description and the candidate resume to generate match scores with insights
Data delivery:
- Sends the final match report to a webhook notification endpoint
- Persists the AI-matched job response to disk

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- A Bright Data account, with the setup described in the Setup section below
- A Google Gemini API key (visit Google AI Studio)
- The Bright Data MCP Server @brightdata/mcp installed
- The n8n-nodes-mcp community node installed

Setup
- Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
- Install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the Web Unlocker proxy zone mcp_unlocker in the Bright Data control panel.
- In n8n, configure the OpenAI account credentials.
- In n8n, configure the MCP Client (STDIO) credentials to connect with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.
- Update the Set input fields for the candidate resume, keywords, and other filtering criteria.
- Update the Webhook HTTP Request node with the webhook endpoint of your choice.
- Update the file name and path used to persist results on disk.
How to customize this workflow to your needs
Target different job boards:
- Set the input fields with sites like Indeed, ZipRecruiter, or Monster
Customize matching criteria:
- Adjust the prompt inside the AI Job Match node
- Include scoring metrics like skills match %, experience relevance, or cultural fit
Automate scheduling:
- Use a Cron node to periodically check for new jobs matching a profile
- Set triggers based on webhook or input form submissions
Output customization:
- Add Markdown/PDF formatting for report summaries
- Extend with Google Sheets export for internal analytics
Enhance data security:
- Mask personal info before sending it to external endpoints
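The pagination handling mentioned above can be sketched as a simple accumulate-until-done loop. Here, fetchJobsPage is a hypothetical stand-in for the Bright Data MCP job-listing call, not the workflow's exact node; the page/hasMore shape is an assumption for illustration.

```javascript
// Sketch of pagination handling: keep requesting pages until the source
// reports no more results. fetchJobsPage is a hypothetical stand-in for
// the Bright Data MCP job-listing call used by the workflow.
async function collectAllJobs(fetchJobsPage, maxPages = 10) {
  const jobs = [];
  for (let page = 1; page <= maxPages; page++) {
    const { items, hasMore } = await fetchJobsPage(page);
    jobs.push(...items);
    if (!hasMore) break; // stop once the last page is reached
  }
  return jobs;
}
```

The maxPages cap keeps a misbehaving source from looping forever, which matters when the workflow runs unattended.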

By Ranjan Dailata
2888

YouTube video content analyzer & summarizer with Gemini AI

This workflow takes two inputs: a YouTube video URL (required) and a description of what information to extract from the video. If the description ("what you want") field is left empty, the default prompt generates a detailed summary and description of the video's contents. You can ask for something more specific using this field. ++ Don't forget to make the workflow Active and use the production URL from the form node.

Benefits
- Instant summary generation - convert hours of YouTube watching into familiar, structured paragraphs in less than a minute
- Live integration - generate a summary or extract information from a YouTube video whenever, wherever
- Virtually complete automation - all you need to do is add the video URL and describe what you want to know
- Presentation - ask for a specific structure or tone to better help you understand or study the video's contents

How It Works
Smart form interface:
- A simple n8n form captures the video URL and a description of what's to be extracted
- Designed for rapid and repeated completion anywhere, anytime
Description check:
- JavaScript determines whether the description was filled in or left empty
- If the description field was left empty, the default prompt is: "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary."
- If the description field is filled, that input is used to describe what information to extract from the video
HTTP request:
- Uses the Gemini API, specifically the video understanding endpoint
- Makes a POST HTTP request passing the video URL and the description of what information to extract

Setup Instructions
HTTP Request setup:
- Sign up for a Google Cloud account, join the Developer Program, and get your Gemini API key
- Get the curl snippet for the Gemini Video Understanding API
- The video understanding relies on inputs from the form, code, and HTTP request nodes, so correct mapping is essential for the workflow to function correctly
- Feel free to reach out for additional help or clarification at my Gmail: terflix45@gmail.com, and I'll get back to you as soon as I can

Setup Steps
Code node setup:
The code node is used as a filter to ensure a description prompt is always passed on. Use the JavaScript code below for that effect:

// Ensure every item carries a description prompt before the HTTP request.
for (const item of $input.all()) {
  if ((item.json['What do you want?'] || '').trim() === '') {
    item.json['What do you want?'] =
      'Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary';
  }
}
return $input.all();
// End of Code

HTTP Request:
- To use Gemini Video Understanding, you'll need your Gemini API key
- Go to https://ai.google.dev/gemini-api/docs/video-understanding#youtube. This link takes you directly to the snippet. Select the REST programming language, copy the curl command, then paste it into the HTTP Request node
- Replace "Please summarize the video in 3 sentences." with the code node's output, which should be either the default description or the one entered by the user (the second output field variable)
- Replace "https://www.youtube.com/watch?v=9hE5-98ZeCg" with the n8n form node's first output field, which should be the YouTube video URL variable
- Replace $GEMINI_API_KEY with your API key
Redirect:
- Use an n8n form node with page type "Final Ending" to redirect the user to the initial form for another analysis, or to a preferred destination
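The request body you end up with after the replacements above can be sketched as follows. The contents/parts/file_data field names follow the REST snippet in the Gemini video-understanding docs; treat this as an illustration to check your mapping against, not the workflow's exact node configuration.

```javascript
// Build the JSON body for the Gemini generateContent call described above,
// pairing the user's prompt with the YouTube URL.
function buildGeminiVideoRequest(videoUrl, prompt) {
  return {
    contents: [
      {
        parts: [
          { text: prompt },                      // replaced by the code node's output
          { file_data: { file_uri: videoUrl } }, // replaced by the form's URL field
        ],
      },
    ],
  };
}

const body = buildGeminiVideoRequest(
  "https://www.youtube.com/watch?v=9hE5-98ZeCg",
  "Please summarize the video in 3 sentences."
);
console.log(JSON.stringify(body));
```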

By Adrian Bent
2161

Gather leads in Google Sheet and Mailchimp

Gather leads into Mailchimp and automate your marketing and sales process.

By emmanuelchilaka779
1130

Auto-generate social media posts from URLs with AI, Telegram & multi-platform posting

How it works
This workflow turns any URL sent to a Telegram bot into ready-to-publish social posts:
- Trigger: Telegram message (checks whether it contains a URL).
- Fetch & parse: Downloads the page and extracts readable text + title.
- AI writing: Generates platform-specific copy (Facebook, Instagram, LinkedIn).
- Image: Creates an AI image and stores it in Supabase Storage.
- Publish: Posts to Facebook Pages, Instagram Business, LinkedIn.
- Logging: Updates Google Sheets with post URLs and sends a Telegram confirmation (image + links).

Setup
- Telegram – create a bot and connect it via n8n Telegram credentials.
- OpenAI / Gemini – add the API key in n8n Credentials and select it in the AI nodes.
- Facebook/Instagram (Graph API) – create a credential called facebookGraph with:
  • accessToken (page-scoped or system user)
  • pageId (for Facebook Page photos)
  • igUserId (Instagram Business account ID)
  • optional fbApiVersion (default v19.0)
- LinkedIn – connect with OAuth2 in the LinkedIn node (leave as credential).
- Supabase – credential supabase with url and apiKey. Ensure a bucket exists (the default used in the Set node is social-media).
- Google Sheets – replace YOUR_GOOGLE_SHEET_ID and Sheet1, and grant your n8n Google OAuth2 access.

Notes
• No API keys are stored in the template; everything runs via n8n Credentials.
• You can change the bucket name, image size/quality, and AI prompts in the respective nodes.
• The Telegram confirmation message includes direct permalinks to the published posts.

Required credentials
• Telegram Bot
• OpenAI (or Gemini)
• Facebook/Instagram Graph
• LinkedIn OAuth2
• Supabase (url + apiKey)
• Google Sheets OAuth2

Inputs
• A Telegram message that contains a URL.

Outputs
• Social posts published on Facebook, Instagram, LinkedIn.
• Row appended/updated in Google Sheets with post URLs and image link.
• Telegram confirmation with the generated image + post links.
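The trigger's URL check can be sketched with a small helper like the one below. This is an illustrative regex, not the workflow's exact node logic; a stricter pattern may be needed for edge cases like trailing punctuation.

```javascript
// Minimal sketch of the trigger step: extract the first http(s) URL from an
// incoming Telegram message, or return null if the message has none.
function extractUrl(messageText) {
  const match = messageText.match(/https?:\/\/\S+/);
  return match ? match[0] : null;
}
```

A message without a URL would then short-circuit the workflow before any fetching or AI writing happens.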

By Karol Otręba
1095

Get all the entries from Contentful

No description available.

By Harshil Agrawal
847

English vocabulary bot for Telegram with Gemini & random word API

📄 What this workflow does
Every 3 hours, the workflow fetches 3 random English words, asks Gemini to generate a short Vietnamese vocabulary digest (word, Vietnamese meaning, and an example sentence), and sends it to a Telegram chat. Perfect for steady, low-effort vocab exposure in groups.

👤 Who is this for
- English learners who want a gentle, automated learning cadence.
- Teachers/tutors who share daily vocab in Telegram groups.
- Community admins who want lightweight, useful content for members.
- Anyone who prefers bite-sized language learning on autopilot.

✅ Requirements
- Gemini API access (configured in n8n).
- Telegram bot token + chat ID (the chat that should receive messages).
- Internet access to the Random Word API (no API key required).
- n8n instance with outbound HTTPS access.

⚙️ How to set up
- Add your Gemini credentials in n8n (the Google Gemini/PaLM node).
- Add your Telegram credentials and set the chatId in the "Send a text message" node.
- (Optional) Adjust the schedule interval (default: every 3 hours).
- (Optional) Change the number of words by updating the HTTP Request URL param words=3.
- (Optional) Edit the Gemini prompt language/content to fit your style (currently Vietnamese output).
- Run once to test; confirm the message arrives in Telegram.

🔁 How it works
- Schedule Trigger → runs every 3 hours.
- HTTP Request → calls random-word-api to get 3 words.
- Edit Fields (Set) → wraps the API response under word.
- Aggregate → prepares the word field for the LLM.
- Message a model (Gemini) → creates a Vietnamese digest: English word, Vietnamese meaning, and an example sentence for each word.
- Send a text message (Telegram) → posts the digest to your specified chat.

💡 About Margin AI
Margin AI is an AI-services agency that acts as your AI Service Companion. We design intelligent, human-centric automation solutions, turning your team's best practices into scalable, automated workflows and tools. Industries like marketing, sales, and operations benefit from our tailored AI consulting, automation tools, chatbot development, and more.
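The HTTP Request and Set steps above can be sketched as two tiny helpers. The base URL is an assumption (a hosted Random Word API instance that accepts the words= parameter mentioned above); only the words= parameter and the wrapping under a single word field come from the workflow description.

```javascript
// Build the Random Word API request URL for a given word count (words=3 by
// default, matching the workflow's HTTP Request node). The host is an
// illustrative assumption; use whatever instance your workflow calls.
function buildRandomWordUrl(count = 3) {
  return `https://random-word-api.vercel.app/api?words=${count}`;
}

// Mirror the Edit Fields (Set) step: wrap the API's word array under a
// single `word` field for the Gemini prompt.
function wrapWords(words) {
  return { word: words.join(", ") };
}
```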

By Cong Nguyen
845

Telegram shopping assistant: Amazon product search with Apify & OpenRouter AI

Automate Amazon searches to Telegram with AI-powered scraping
This workflow connects Amazon product lookups to Telegram using AI-enhanced scraping and automation. It lets users send a product name to a Telegram bot and instantly receive pricing, discounts, and product links, all pulled dynamically from Amazon.

Who's it for
- Amazon affiliates
- Telegram shopping groups
- Product reviewers & resellers
- Deal-focused communities
- Anyone wanting fast price checks without browsing

How it works
- The Telegram Trigger receives messages from the user.
- An AI classifier (via OpenRouter & LangChain) detects whether the user is asking for a product.
- If yes, it sends the query to Apify's Amazon Scraper to fetch real product listings.
- The scraped data (price, discount, rating, link) is formatted into a Telegram response.
- If it is not a product query, an AI fallback response is sent instead.

Requirements
- Telegram Bot Token
- Apify API Token
- OpenRouter API Key (or a compatible LLM provider)
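The formatting step above can be sketched as a function that turns one scraped product record into a Telegram-ready text message. The field names (title, price, discount, rating, url) are illustrative assumptions; match them to the actual Apify scraper output.

```javascript
// Turn one scraped product record into a Telegram message. Optional fields
// (discount, rating) are only included when the scraper returned them.
function formatProduct(p) {
  const lines = [`🛒 ${p.title}`, `Price: ${p.price}`];
  if (p.discount) lines.push(`Discount: ${p.discount}`);
  if (p.rating) lines.push(`Rating: ${p.rating}⭐`);
  lines.push(p.url);
  return lines.join("\n");
}
```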

By Parth Pansuriya
835

Generate marketing campaign ROI reports with Google Sheets, GPT-4o and Email

This n8n workflow pulls campaign data from Google Sheets, summarizes it using OpenAI, and sends a performance recap via Outlook email.

---

✅ Step 1: Connect Google Sheets
- In n8n, go to Credentials → click New Credential
- Select Google Sheets OAuth2 API
- Log in with your Google account and authorize
- Use a spreadsheet with:
  - Column names in the first row
  - Data in rows 2–100
- Example format: 📄 Sample Marketing Sheet

---

✅ Step 2: Connect OpenAI
- Go to OpenAI API Keys
- Make sure you have a payment method set under Billing
- In n8n, create a new OpenAI API credential
- Paste your API key and save

---

📬 Need Help?
Feel free to contact me if you run into issues:
📧 robert@ynteractive.com
🔗 LinkedIn

By Robert Breen
671

Declutter Gmail: archive inactive emails with GPT-4 classification

Who is this for?
This workflow is for professionals, entrepreneurs, or anyone overwhelmed by a cluttered Gmail inbox. If you want to automatically archive low-priority emails using AI, this is the perfect hands-free solution.

What does it solve?
Your inbox fills up with old, read emails that no longer need your attention, but manually archiving them takes time. This workflow uses AI to scan each email and intelligently decide whether it should be archived, needs a reply, or is spam. It helps you:
- Declutter your Gmail inbox automatically
- Identify important vs. unimportant emails
- Save time with smart email triage

How it works
- A scheduled trigger runs the workflow (you set how often).
- It fetches all read emails older than 45 days from Gmail.
- Each email is passed to an AI model (GPT-4) that classifies it as:
  - Actionable
  - Archive
- If the AI recommends archiving, the workflow archives the email from your inbox.
- All other emails are left untouched so you can review them as needed.

How to set up?
- Connect your Gmail (OAuth2) and OpenAI API credentials.
- Open the "Schedule Trigger" node and choose how often the workflow should run (e.g., daily, weekly).
- Optionally adjust the Gmail filter in the "List Old Emails" node to change which emails are targeted.
- Start the workflow and let AI clean up your inbox automatically.

How to customize this workflow to your needs
- Change the Gmail filter: edit the query in the Gmail node to include other conditions (e.g., older_than:30d, specific labels, unread only).
- Update the AI prompt: modify the prompt in the Function node to detect more nuanced categories like "Meeting Invite" or "Newsletter."
- Adjust schedule frequency: change how often the cleanup runs (e.g., hourly, daily).
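The Gmail filter described above ("read emails older than 45 days") can be sketched as a small query builder using Gmail's standard search operators. The helper name and options object are illustrative, not the workflow's exact node expression.

```javascript
// Build a Gmail search query for read mail older than a given number of
// days, with optional extra terms (e.g. "-label:important").
function buildGmailQuery({ olderThanDays = 45, extraTerms = [] } = {}) {
  return ["is:read", `older_than:${olderThanDays}d`, ...extraTerms].join(" ");
}
```

Calling it with no arguments reproduces the default filter; passing olderThanDays: 30 matches the older_than:30d customization suggested above.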

By Matt Chong | n8n Creator
403

Product concept to 3D showcase with Claude AI & DALL-E packaging design

This is an advanced n8n workflow for transforming product concepts into 3D showcase videos with AI packaging design and auto-rotation rendering.

🎯 Core Capabilities
- AI product concept generation - uses Claude Sonnet 4 to analyze product prompts and generate comprehensive 3D specifications
- Automated packaging design - DALL-E 3 generates professional packaging visuals
- Texture map generation - creates PBR-ready texture maps for realistic materials
- 3D scene script generation - produces complete Blender Python scripts with:
  - Product geometry based on shape
  - Professional 3-point lighting (key, fill, rim)
  - 360° rotation animation (8 seconds)
  - Camera setup and render settings
- Preview rendering - generates photorealistic 3D preview images
- Video processing - handles encoding and upload to video hosting services
- Database storage - saves all showcase data for tracking
- Status monitoring - checks render progress with automatic retry logic

📋 Required Setup
API credentials needed:
- Anthropic API (for Claude AI)
- OpenAI API (for DALL-E image generation)
- Replicate API (optional, for additional rendering)
- Video hosting service (Cloudflare Stream or similar)
- PostgreSQL database

🔧 How to Use
1. Import the JSON - copy the artifact content and import it directly into n8n
2. Configure credentials - add your API keys in the n8n credentials manager
3. Activate the workflow - enable the webhook trigger
4. Send a request to the webhook endpoint:

POST /product-showcase
{
  "productPrompt": "A premium organic energy drink in a sleek aluminum can with nature-inspired graphics"
}

📤 Output Includes
- Product specifications (dimensions, materials, colors)
- Packaging design image URL
- Texture map URLs
- Downloadable Blender script
- 3D preview render
- Video showcase URL
- Rendering metadata
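The status-monitoring capability above (checking render progress with automatic retry) can be sketched as a bounded polling loop. checkStatus is a hypothetical stand-in for the workflow's render-progress request, and the "done" string is an illustrative status value.

```javascript
// Poll a render-status function until it reports "done", giving up after
// maxAttempts polls. In the workflow this would wrap the render-status
// HTTP request; here checkStatus is injected so the sketch is testable.
async function waitForRender(checkStatus, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === "done") return true;
  }
  return false; // render did not finish within the retry budget
}
```

A real implementation would also sleep between polls; that delay is omitted here to keep the sketch minimal.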

By Oneclick AI Squad
215

Discord AI trading assistant with proper position sizing (metatrader5)

AI Trading Assistant with MetaTrader 5 and position sizing capabilities

Trade forex and XAUUSD/gold assets with this n8n template. It demonstrates an AI-powered Discord bot that monitors trading commands in a private server channel and automatically executes them on MetaTrader 5, using natural language processing to parse flexible trade syntax and manage signal queues with correct position sizing for lot sizes.

Use cases are many: try creating a private Discord trading room where you can execute MT5 trades with simple messages like "buy EU 1% 30 pip SL 3R TP", building a shared trading server where team members can post signals that auto-execute on everyone's MT5, or developing a trading journal bot that logs and executes your strategy calls in real time!

Good to know
- Always test on demo accounts first - this bot executes real trades based on Discord messages, so thorough testing is critical
- Requires an OpenAI API key for natural language processing (GPT-4o-mini model); costs approximately $0.0001-0.0003 per trade command
- Discord OAuth2 authentication required - you'll need to create a Discord application and configure webhook permissions
- The workflow polls Discord every 7 seconds by default; adjust the Schedule Trigger interval to balance responsiveness vs. API rate limits
- Only processes messages from specified users without emoji reactions - prevents duplicate processing and allows filtering by username
- The MT5 Trading Signal Handler workflow must be running - this bot sends orders to the webhook endpoints from the companion workflow
- Supports both market orders (instant execution) and limit orders (pending at a specific price)
- Uses two-stage parsing: a regex-based parser for speed, with an LLM fallback for complex or ambiguous commands
- No message history - only processes the most recent Discord message per polling cycle
- Built-in signal management commands - users can check pending signals or clear the queue via natural language

How it works
- Scheduled polling: the Schedule Trigger fires every 7 seconds, setting the Discord server ID and channel ID credentials, then fetching the most recent message from the specified channel
- Message filtering: the workflow only processes messages from the specified username (e.g., "elijahfx") that haven't been reacted to yet, preventing duplicate processing
- Processing indicator: once a valid message is detected, the bot reacts with a 🔄 emoji to show it's working, providing immediate user feedback
- AI classification: the first LLM (GPT-4o-mini) classifies the message intent into six categories: trade_execution, trade_inquiry, signal_query, signal_clear, help_request, or off_topic
- Regex parsing: for trade execution messages, a JavaScript Code node extracts parameters using regex patterns, detecting order type (market vs. limit), direction (buy/sell), asset shortcuts (EU→EURUSD, GU→GBPUSD), risk percentage, stop loss, and take profit
- LLM fallback: if the regex parser can't confidently extract all required parameters (needs_llm_review: true), a second LLM validates completeness and extracts missing fields, ensuring flexible syntax handling
- Parameter validation: the workflow checks whether all required parameters are present - market orders need direction, asset, risk %, and SL pips; limit orders additionally require an entry price
- Order type routing: a Switch node routes complete orders to the appropriate HTTP endpoint - market orders to /webhook/mt5-market-order, limit orders to /webhook/mt5-limit-order
- MT5 execution: HTTP Request nodes send the parsed parameters to your local MT5 webhook endpoints, which execute the trades via the companion MT5 Trading Signal Handler workflow
- Response handling: the workflow receives confirmation from MT5, then posts a Discord message confirming successful execution or reporting errors with details
- Signal management: for signal_query commands, the bot fetches and displays pending signals from the MT5 queue; for signal_clear commands, it purges all pending signals via HTTP request
- Help system: when users request help, the bot posts comprehensive instructions covering trade syntax, asset shortcuts, required parameters, and example commands
- Query handling: non-trade questions about trading are answered directly by the AI, providing a conversational assistant experience

How to use
- Create a Discord bot application at discord.com/developers and obtain OAuth2 credentials with message read/write and reaction permissions
- Configure Discord OAuth2 credentials in all Discord nodes (use the same credential across all nodes for consistency)
- Set your OpenAI API key in the three "4o mini" nodes for LLM processing
- Update the "set credentials and params" node with your Discord server ID and channel ID (right-click in Discord with Developer Mode enabled to copy IDs)
- Ensure the MT5 Trading Signal Handler workflow is imported, activated, and running on the same n8n instance (default: localhost:5678)
- Activate this workflow and verify the Schedule Trigger is set to your desired polling interval (7 seconds default)
- In the "only get user's message that has no reacts" Filter node, change the username from "elijahfx" to your Discord username
- Post "help" in your Discord channel to test connectivity and receive the command reference guide
- Try a test market order: buy EU 1% 30 pip SL 3R TP, and watch for the 🔄 reaction and confirmation message
- Test a limit order: buy limit EU at 1.0850 1% risk 30 pip SL 2R TP
- Query pending signals: show me pending signals or any open signals?
- Clear the queue: clear all pending signals or cancel all signals

Requirements
- n8n instance running locally or on a private VPS (cloud-hosted n8n may have network restrictions for localhost webhooks)
- Discord bot application with OAuth2 credentials and appropriate permissions (read messages, send messages, add reactions)
- OpenAI API account with access to the GPT-4o-mini model (approximately $0.15 per 1 million input tokens)
- MT5 Trading Signal Handler workflow running on the same n8n instance (provides the webhook endpoints)
- Discord server where you have admin permissions to add the bot
- MetaTrader 5 terminal with the custom EA running and polling the signal queue
- Basic understanding of Discord bot setup, n8n workflows, and trading concepts

Customising this workflow
- Add multiple-user support by removing the username filter or converting it to a whitelist of approved traders
- Implement authentication by checking for specific Discord roles before processing trade commands (prevents unauthorized trading)
- Add trade logging by inserting a database node (PostgreSQL, Airtable) after successful execution to maintain a trade journal
- Create Discord embeds instead of plain-text responses using Discord's embed formatting for richer, more professional replies
- Add risk limits in a Code node before the HTTP requests to cap maximum position sizes or enforce daily loss limits
- Build a dashboard by connecting to Google Sheets or Notion to track all executed trades, win rates, and performance metrics
- Implement approval workflows where limit orders post to a separate channel for review before execution
- Add TradingView alerts as an alternative input by creating a webhook trigger that accepts TradingView JSON payloads
- Create strategy templates where users can save and recall complete trade setups with a single command like "execute strategy A"
- Enable multi-account trading by routing orders to different MT5 instances based on user roles or command prefixes
- Try a popular use case such as building a signal subscription service where premium Discord members get auto-execution while free members only see the signals!

Purchasing this n8n workflow also includes the MetaTrader 5 and n8n Integration for Forex and Gold Trading via Webhooks workflow (the two are sold together, and vice versa), along with the MQL code for the Expert Advisor listener, for a total price of 120 USD.

Questions? If you have questions or need help with this workflow, feel free to reach out: elijahmamuri@gmail.com elijahfxtrading@gmail.com

---

Important Disclaimer: This workflow is provided for educational purposes and demo account testing only. It demonstrates how to build Discord bots with AI-powered natural language processing and webhook integrations. Always test thoroughly on demo accounts, implement proper authentication and security measures, and understand that automated trading involves substantial risk. The bot executes trades based on Discord messages - ensure only authorized users have access. No warranties, financial advice, or guarantees are provided. See the full disclaimer in the workflow for complete terms.
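The first stage of the two-stage parsing described above can be sketched as a single regex pass over a command like "buy EU 1% 30 pip SL 3R TP". The shortcut map, field names, and exact pattern below are illustrative, not the workflow's actual Code node; commands the regex cannot handle fall through to the LLM stage.

```javascript
// Regex stage of the trade-command parser: extract direction, asset, risk %,
// SL pips, and TP (in R multiples). Unparseable commands are flagged for the
// LLM fallback instead of being rejected outright.
const ASSET_SHORTCUTS = { EU: "EURUSD", GU: "GBPUSD", GOLD: "XAUUSD" };

function parseTradeCommand(text) {
  const m = text.match(
    /\b(buy|sell)\s+(\w+)\s+([\d.]+)%\s+([\d.]+)\s*pip\s*SL(?:\s+([\d.]+)R\s*TP)?/i
  );
  if (!m) return { needs_llm_review: true }; // hand off to the LLM stage
  const [, direction, asset, risk, slPips, tpR] = m;
  return {
    direction: direction.toLowerCase(),
    asset: ASSET_SHORTCUTS[asset.toUpperCase()] || asset.toUpperCase(),
    riskPercent: parseFloat(risk),
    slPips: parseFloat(slPips),
    tpR: tpR ? parseFloat(tpR) : null,
    needs_llm_review: false,
  };
}
```

Running the regex first keeps the common, well-formed commands fast and cheap; only the ambiguous remainder pays for an LLM call.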

By Cj Elijah Garay
113

Multi-channel customer sentiment tracker with real-time analytics and alerting

How It Works
Scheduled processes retrieve customer feedback from multiple channels. The system performs sentiment analysis to classify tone, then uses OpenAI models to extract themes, topics, and urgency indicators. All processed results are stored in a centralized database for trend tracking. Automated rules identify high-risk or negative-sentiment items and trigger alerts to the relevant teams. Dashboards and workflow automation then visualize insights and support follow-up actions.

Setup Instructions
- Data sources: connect social media APIs, survey tools, and customer support platforms.
- AI analysis: configure the OpenAI API with sentiment and theme-extraction prompts.
- Database: set up a feedback storage schema in your utility database.
- Alerts: configure email notifications and CRM triggers for priority issues.
- Dashboards: link your analytics and reporting tools for real-time insights.

Prerequisites
Social media/survey API credentials; OpenAI API key; database access; CRM system credentials; email notification setup

Use Cases
Customer sentiment tracking; product feedback aggregation; support ticket prioritization; brand monitoring; trend identification

Customization
Adjust sentiment thresholds; add new feedback sources; modify categorization rules

Benefits
Reduces analysis time by 85%; captures actionable insights; enables rapid response to issues
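The automated alerting rule described above can be sketched as a simple threshold filter over processed feedback items. The score range [-1, 1], the default threshold, and the field names are illustrative assumptions; the actual thresholds are what the Customization section says you should adjust.

```javascript
// Flag feedback items that should trigger an alert: sentiment at or below a
// threshold, or an explicit high-urgency marker from the theme extraction.
function needsAlert(item, threshold = -0.5) {
  return item.sentimentScore <= threshold || item.urgency === "high";
}

function itemsToAlert(items, threshold = -0.5) {
  return items.filter((item) => needsAlert(item, threshold));
}
```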

By Cheng Siong Chin
111