Sync Google Sheets data with MySQL
This workflow performs several data integration and synchronization tasks between Google Sheets and a MySQL database. Here is a step-by-step description of what it does:

Manual Trigger: The workflow starts when the user clicks "Execute Workflow."
Schedule Trigger: Schedules the workflow to run at specific intervals on weekdays (Monday to Friday) between 6 AM and 10 PM, ensuring regular data synchronization.
Google Sheet Data: Connects to a specific Google Sheets document and retrieves data from the "Form Responses 1" sheet, filtering by the "DB Status" column.
SQL Get inquiries from Google: Retrieves data from a MySQL table named "ConcertInquiries" where the "source_name" is "GoogleForm."
Rename GSheet variables: Renames the columns retrieved from Google Sheets and transforms the data into a format suitable for MySQL, assigning "GoogleForm" as the "source_name."
Compare Datasets: Compares the data retrieved from Google Sheets and MySQL based on the timestamp and source_name fields, identifying changes and updates.
No reply too long?: Checks whether there has been no reply within the last four hours, using the "timestamp" field from the Google Sheets data.
DB Status assigned?: Checks whether the "DB Status" field is not empty in the compared dataset.
Update GSheet status: If the previous conditions are met, updates the "DB Status" field in Google Sheets with the corresponding value from the MySQL dataset.
DB Status in sync?: Checks whether the "source_name" field in Google Sheets is not empty.
Sync MySQL data: If the previous conditions are met, updates the "source_name" field in the MySQL database to "GoogleFormSync."
Send Notifications: If the "No reply too long?" condition is met, sends notifications or performs other actions as needed.
Sticky Notes: Provide additional information and documentation links for users.
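For illustration, the field-renaming step could be written as an n8n Code node along the lines of the sketch below. The Google Sheets column names used here (Timestamp, Name, Email) are hypothetical; the template itself performs this mapping with its own node configuration, so treat this only as a sketch of the transformation.

```javascript
// Minimal sketch of the "Rename GSheet variables" idea (hypothetical column names).
// Maps each Google Sheets row to the shape expected by the MySQL "ConcertInquiries" table.
return items.map((item) => ({
  json: {
    timestamp: item.json['Timestamp'],   // GSheet column -> MySQL column
    name: item.json['Name'],
    email: item.json['Email'],
    source_name: 'GoogleForm',           // constant marker used by the comparison step
  },
}));
```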
Split out binary data
This workflow helps with processing binary data. You'll often have binary objects with keys such as attachment0, attachment1, attachment_2, etc. attached to your items, for example when reading an incoming email. This binary data is hard to process because it's not an array you can simply loop through. This workflow solves the problem with a Function node that takes all incoming items together with their binary data and returns a single item per file, each with a data key containing the binary file.
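A minimal sketch of what such a node can look like is shown below. It assumes an n8n Code/Function node running once for all items; the template's own implementation may differ in details such as the output field names.

```javascript
// Sketch: flatten all binary properties of all incoming items into one item per file.
const results = [];
for (const item of items) {
  for (const key of Object.keys(item.binary ?? {})) {
    results.push({
      json: { fileName: item.binary[key].fileName ?? key }, // keep a hint of the source
      binary: { data: item.binary[key] },                   // single, predictable key
    });
  }
}
return results;
```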
Get all members of a Discord server with a specific role
Use Case
This workflow retrieves all members of a Discord server (guild) who have a specific role. Because the Discord API only returns a limited number of users per call, the workflow uses Google Sheets to remember which member was received last, so it can page through all members of a given role in batches of 100 (a sketch of the paging pattern follows this description).

Setup
Add your Google Sheets and Discord credentials.
Create a Google Sheets document that contains ID as a column. This is used to remember which member was received last.
Edit the fields in the setup node "Setup: Edit this to get started". You can read up on how to get the Discord IDs via this link.
Link to your Discord server in the Discord nodes.
Activate the workflow.
Call the production webhook URL in your browser.

Requirements
Admin rights in the Discord server and access to the Discord developer portal
Google Sheets
Minimum n8n version 1.28.0

Potential Use cases
Writing a direct message to all members of a certain role
Analysing user growth on Discord regularly
Analysing role distributions on Discord regularly
Saving new members in a Discord ...

Keywords
Discord API, Getting all members from Discord via API, Google Sheets and Discord automation, How to get all Discord members via API
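The template itself does the paging with the n8n Discord node plus Google Sheets, but the underlying pattern looks roughly like the sketch below. BOT_TOKEN, GUILD_ID and ROLE_ID are placeholders, and the template reads the "last member ID" from the spreadsheet rather than a local variable.

```javascript
// Rough sketch of the Discord paging pattern the workflow relies on (Node 18+).
const BOT_TOKEN = 'your-bot-token';
const GUILD_ID = '123456789012345678';
const ROLE_ID = '987654321098765432';

async function fetchMembersWithRole(afterId = '0') {
  const url = `https://discord.com/api/v10/guilds/${GUILD_ID}/members?limit=100&after=${afterId}`;
  const res = await fetch(url, { headers: { Authorization: `Bot ${BOT_TOKEN}` } });
  const members = await res.json();                         // up to 100 guild members
  const withRole = members.filter((m) => m.roles.includes(ROLE_ID));
  const lastId = members.length ? members[members.length - 1].user.id : null;
  return { withRole, lastId };                               // lastId becomes "after" next run
}
```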
Indeed job scraper with AI filtering & company research using Apify and Tavily
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically fetches the resulting dataset, extracts information from each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that information for outreach. All you need to do is run the Apify actor; the database then updates with the processed data.

Benefits:
Complete Job Search Automation - A webhook monitors the Apify actor, which sends an integration request and starts the process
AI-Powered Filter - Uses ChatGPT to analyze content/context, identify company goals, and filter based on the job description
Smart Duplicate Prevention - Automatically tracks processed job listings in a database to avoid redundancy
Multi-Platform Intelligence - Combines Indeed scraping with web research via Tavily and enriches each listing
Niche Focus - Processes content from multiple niches (currently 6, hardcoded), but this can be changed to fit other niches (just adjust the prompt in the "job filter" node)

How It Works:
Indeed Job Discovery: Search Indeed and apply filters for relevant job listings, then copy the resulting URL into Apify. Apify's Indeed job scraper scrapes the listings from that URL, stores them in a dataset, and initiates the integration.
Incoming Data Processing: Loops over 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts. Multiple filters ensure all required fields were scraped and meet our metrics (a website must exist and the number of employees must be < 250). Duplicate job listings are removed from the incoming batch before processing (a sketch of this filter follows the setup steps below).
Job Analysis & Filter: An additional filter removes any job listing from the incoming batch if it already exists in the Google Sheets database. All new job listings are then passed to ChatGPT, which uses information about the job post/description to determine whether it is relevant to us. Each relevant job gets a new field "verdict", either true or false, and only the ones where verdict is true are kept.
Enrich & Update Database: Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in Google Sheets with information about the job listing, the company, and a decision maker at that company. Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch and filters again until all job listings are processed.

Required Google Sheets Database Setup:
Before running this workflow, create a Google Sheets database with these exact column headers.

Essential Columns:
jobUrl - Unique identifier for job listings
title - Position title
descriptionText - Description of the job listing
hiringDemand/isHighVolumeHiring - Are they hiring at high volume?
hiringDemand/isUrgentHire - Are they hiring with high urgency?
isRemote - Is this job remote?
jobType/0 - Job type: in person, remote, part-time, etc.
companyCeo/name - CEO name collected from Tavily's search
icebreaker - Column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this, called "Personalized IJSFE")
scrapedCeo - CEO name collected from the Apify scraper
email - Email listed on the job listing
companyName - Name of the company that posted the job
companyDescription - Description of the company that posted the job
companyLinks/corporateWebsite - Website of the company that posted the job
companyNumEmployees - Number of employees the company listed
location/country - Location where the job takes place
salary/salaryText - Salary on the job listing

Setup Instructions:
Create a new Google Sheet with these column headers in the first row. Name the sheet whatever you please.
Connect your Google Sheets OAuth credentials in n8n.
Update the document ID in the workflow nodes.
The merge logic relies on the id column to prevent duplicate processing, so this structure is essential for the workflow to function correctly.
Feel free to reach out for additional help or clarification at my gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

Set Up Steps:
Configure Apify Integration: Sign up for an Apify account and obtain an API key. Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL). Use the Apify node with Resource: Dataset, Operation: Get items, and use your API key as the credentials.
Set Up AI Services: Add OpenAI API credentials for job filtering. Add Tavily API credentials for company research. Set up appropriate rate limiting for cost control.
Database Configuration: Create the Google Sheets database with the provided column structure. Connect Google Sheets OAuth credentials. Configure the merge logic for duplicate detection.
Content Filtering Setup: Customize the AI prompts for your specific niche, requirements or interest. Adjust the filtering criteria to fit your needs.
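The pre-filter and duplicate check described above could look roughly like the Code-node sketch below. The column names follow the list above; the node name "Read Google Sheet" is hypothetical, and parsing of companyNumEmployees may need adapting to however the scraper formats that field.

```javascript
// Sketch of the metrics filter + duplicate check (illustrative, not the template's exact nodes).
const existingUrls = new Set(
  $('Read Google Sheet').all().map((row) => row.json.jobUrl)  // hypothetical node name
);

return items.filter((item) => {
  const job = item.json;
  const hasWebsite = Boolean(job['companyLinks/corporateWebsite']);
  const smallEnough = Number(job.companyNumEmployees) < 250;   // adapt if the field is not numeric
  const isNew = !existingUrls.has(job.jobUrl);
  return hasWebsite && smallEnough && isNew;
});
```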
Convert natural language to video JSON prompts with GPT and Gemini for Veo 3
The Prompt converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision. Think of it as your personal video prompt translator that speaks fluent cinematic!

💡 Why Use Prompt Converter?
Save time: Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
Avoid guesswork: Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
Improve output quality: Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
Gain a creative edge: Turn vague ideas into vivid video concepts with AI-powered enhancement, your video project's secret weapon.

⚡ Perfect For
Video creators: Content developers wanting quick, precise video prompt formatting without coding hassles.
AI enthusiasts: Developers and hobbyists exploring Langchain and Google Gemini for media generation.
Marketing teams: Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

🔧 How It Works
⏱ Trigger: The user submits a free-text prompt via message or webhook.
📎 Process: The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3.
🤖 Smart Logic: Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely.
💌 Output: The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.

🔐 Quick Setup
Import the JSON file into your n8n instance
Add credentials: Azure OpenAI, Gemini API, OpenRouter API
Customize: Adjust prompt templates or default parameters in the Prompt converter node
Test: Run the workflow with sample text prompts to see videos come to life

🧩 You'll Need
An active n8n instance
Azure OpenAI API
Gemini API key
OpenRouter API (alternative AI option)

🛠️ Level Up Ideas
Add integration with video hosting platforms to auto-upload generated videos

🧠 Nodes Used
Prompt Input (Chat Trigger)
OpenAI (Azure OpenAI GPT model)
Alternative (OpenRouter API)
Prompt converter (Langchain chain LLM for JSON conversion)
JSON parser (structured output extraction)
Generate a video (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
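The exact JSON schema is defined inside the Prompt converter node's template, so the object below is only a hypothetical illustration of the kind of structured output the JSON parser step might emit; every field name here is an assumption, not the template's actual schema.

```javascript
// Hypothetical example of a structured video prompt (field names are illustrative only).
const videoPrompt = {
  description: 'A lone hiker reaches a misty mountain summit at sunrise',
  style: 'cinematic, photorealistic',
  camera: { shot: 'wide establishing shot', movement: 'slow dolly-in' },
  lighting: 'golden hour, volumetric fog',
  duration_seconds: 8,
  aspect_ratio: '16:9',
};
console.log(JSON.stringify(videoPrompt, null, 2));
```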
Qualify replies from Pipedrive persons with AI
About the workflow
The workflow reads every reply received from a cold email campaign and qualifies whether the lead is interested in a meeting. If the lead is interested, a deal is created in Pipedrive. You can add as many email inboxes as you need!

Setup:
Add credentials to the Gmail, OpenAI and Pipedrive nodes.
Add an in_campaign field in Pipedrive for persons. In Pipedrive, click on your credentials at the top right, go to Company settings > Data fields > Person and click Add custom field. Make it a single option [TRUE/FALSE].
If you have only one email inbox, you can delete one of the Gmail nodes.
If you have more than two email inboxes, you can duplicate a Gmail node as many times as you like. Just connect it to the Get email node, and you are good to go!
In the Gmail inbox nodes, select Inbox under Label names and uncheck Simplify.
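The qualification step is essentially an LLM classification over the reply text. The prompt and output format below are a hypothetical sketch, not the template's actual wording; the OpenAI node in the workflow has its own prompt, and the field holding the reply body depends on your Gmail node output.

```javascript
// Hypothetical sketch of the qualification step in a Code node.
const reply = $json.text ?? '';   // reply body from the Gmail node (field name may vary)

const prompt = `You are qualifying replies to a cold email campaign.
Reply with JSON: {"interested": true|false}
Email reply:
"""${reply}"""`;

// The OpenAI node would send `prompt` to the model; a downstream IF node can then route
// only items where the parsed response has interested === true on to Pipedrive.
return [{ json: { prompt } }];
```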
Parse PDF, DOCX & images with Mistral OCR via Google Drive with Slack alerts
Use cases
Monitor a Google Drive folder, parsing PDF, DOCX and image files into a destination folder, ready for further processing (e.g. RAG ingestion, translation, etc.). Keep a processing log in Google Sheets and send Slack notifications.

How it works
Trigger: Watch a Google Drive folder for new and updated files.
Create a uniquely named destination folder and copy the input file into it.
Parse the file using Mistral Document, extracting content and handling non-OCRable images separately.
Save the data returned by Mistral Document into the destination Google Drive folder (raw JSON file, Markdown files, and images) for further processing.

How to use
Google Drive and Google Sheets nodes: Create Google credentials with access to Google Drive and Google Sheets. Read more about Google Credentials. Update all Google Drive and Google Sheets nodes (14 nodes total) to use the credentials.
Mistral node: Create Mistral Cloud API credentials. Read more about Mistral Cloud Credentials. Update the OCR Document node to use the Mistral Cloud credentials.
Slack nodes: Create Slack OAuth2 credentials. Read more about Slack OAuth2 credentials. Update the two Slack nodes, Send Success Message and Send Error Message: set the credentials and select the channel where you want to send the notifications (channels can be different for success and errors).
Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration. Ensure the spreadsheet can be accessed as Editor by the account used by the Google credentials above.
Create a directory for input files and a directory for output folders/files. Ensure the directories can be accessed by the account used by the Google credentials.
Update the File Created, File Updated and Workflow Configuration nodes following the steps in the green Notes.

Requirements
Google account with Google API access
Mistral Cloud account with access to a Mistral API key
Slack account with access to a Slack client ID and client secret
Basic n8n knowledge: understanding of triggers, expressions, and credential management

Who's it for
Anyone building a data pipeline that ingests files to be OCRed for further processing.

🔒 Security
All credentials are stored as n8n credentials. The only information stored in this workflow that could be considered sensitive are the Google Drive directory and Sheet IDs. These directories and the spreadsheet should be secured according to your needs.

Need Help?
Reach out on LinkedIn or Ask in the Forum!
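For a rough idea of the "save raw JSON, Markdown and images" step, the sketch below splits a Mistral OCR response into per-page items. The field names (pages, markdown, images, image_base64) reflect the Mistral OCR API at the time of writing; verify them against the actual output of the OCR Document node, since the template may handle this differently.

```javascript
// Sketch: split a Mistral OCR response into per-page Markdown items plus embedded images.
const ocr = items[0].json;
const out = [];

for (const page of ocr.pages ?? []) {
  out.push({ json: { type: 'markdown', pageIndex: page.index, content: page.markdown } });
  for (const img of page.images ?? []) {
    out.push({ json: { type: 'image', pageIndex: page.index, id: img.id, base64: img.image_base64 } });
  }
}
return out;
```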
[eBay] negotiation API MCP server
Complete MCP server exposing 2 Negotiation API operations to AI agents.

Need help? Want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community

⚡ Quick Setup
Import this workflow into your n8n instance
Add Negotiation API credentials
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Negotiation API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example after this description)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 FindEligibleItems (1 endpoint)
• GET /findeligibleitems: Find Eligible Listings
🔧 SendOfferToInterestedBuyers (1 endpoint)
• POST /sendoffertointerestedbuyers: Send Discount Offer

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Negotiation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
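The $fromAI() placeholder style used in the tool nodes is shown below as a sketch. The parameter names, descriptions and body fields here are illustrative, not the exact eBay schema or the template's exact expressions.

```javascript
// Illustrative only: how $fromAI() placeholders appear inside a tool node's parameters.
// In the HTTP Request node for GET /findeligibleitems, a query parameter can be set to
// an expression so the connected AI agent supplies the value at call time, e.g.:
//   {{ $fromAI('limit', 'Maximum number of eligible listings to return', 'string') }}
// A JSON body for POST /sendoffertointerestedbuyers can embed placeholders the same way:
const bodyTemplate = {
  message: "{{ $fromAI('message', 'Short note to interested buyers', 'string') }}",
  offeredItems: [
    { listingId: "{{ $fromAI('listingId', 'Listing to send the offer for', 'string') }}" },
  ],
};
console.log(JSON.stringify(bodyTemplate, null, 2));
```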
Transfer workflows with credentials & sub-workflow management between n8n instances
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Export Workflows Between n8n Instances
Copy workflows between n8n instances, with optional credential export and automatic sub-workflow adjustments.

🧠 How it Works
This workflow copies a selected workflow from a SOURCE n8n server to a TARGET server and guides you through safe checks:
Name conflict check: If a workflow with the same name exists on the target, the export is stopped (sketched after this description).
Sub-workflows: Detects calls to sub-workflows. If all sub-workflows exist on the target (same names), references are auto-updated and the export continues. If any are missing, the form shows what's missing and lets you cancel or proceed anyway.
Credentials: Detects nodes using credentials and lets you export those credentials along with the workflow. The workflow can only apply credential corrections for the credentials that you choose to export with it. At the end, the form lists which credentials were successfully exported.
💡 For in-depth behavior and edge cases, see the Notes inside the workflow (Setup, How It Works, and Credential Issues).

🚀 How to Use
Run this workflow on your SOURCE server.
Follow the step-by-step form: pick the workflow to export, choose whether to include credentials, and review the sub-workflow checks.
Done.

⚙️ Setup
Create an n8n API key on both servers (SOURCE and TARGET).
On the SOURCE server, create two n8n API credentials in n8n: one for SOURCE and one for TARGET (using the respective base URL and key).
Configure the nodes in this workflow with these two credentials.
Detailed step-by-step instructions are available in the workflow notes.

✅ Once configured, you'll be ready to migrate workflows between servers in just a few clicks.
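The name-conflict check boils down to listing workflows on the TARGET server via n8n's public API and comparing names, roughly as below. TARGET_URL and TARGET_API_KEY are placeholders; the template uses its configured n8n API credential rather than a hard-coded fetch, and pagination via nextCursor is omitted here.

```javascript
// Sketch of the name-conflict check against the TARGET server using the n8n public API.
const TARGET_URL = 'https://target.example.com';
const TARGET_API_KEY = 'your-target-api-key';

async function workflowNameExists(name) {
  const res = await fetch(`${TARGET_URL}/api/v1/workflows?limit=250`, {
    headers: { 'X-N8N-API-KEY': TARGET_API_KEY },
  });
  const { data } = await res.json();              // { data: [...workflows], nextCursor }
  return data.some((wf) => wf.name === name);     // stop the export if true
}
```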
Scrape Shopify store data with RapidAPI and save to Google Sheets
An automated workflow that scrapes Shopify store information and product data using the Shopify Scraper API from RapidAPI, triggered by a user submitting a website URL, then logs the data into Google Sheets for easy access and analysis.

Node-by-Node Explanation
On form submission: Triggers when a user submits a Shopify store website URL.
Store Info Scrap Request: Sends a POST request to shopify-scraper4.p.rapidapi.com/shopinfo.php to fetch store metadata (name, location, domain, etc.).
Products Scarp Request: Sends a POST request to shopify-scraper4.p.rapidapi.com/products.php to retrieve detailed product data (titles, prices, tags, etc.).
Append Store Info Google Sheets: Appends store metadata to the "Shop Info" sheet in Google Sheets.
Append Products Data In Google Sheets: Appends product data to the "Products" sheet in Google Sheets.

Use Case
Ideal for businesses or analysts who want to quickly gather Shopify store insights and product catalogs without manual data collection, enabling data-driven decision-making or competitive analysis.

Benefits
Automates Shopify data extraction with the Shopify Scraper API on RapidAPI.
Saves time by collecting and organizing data automatically in Google Sheets.
Easily scalable and adaptable for multiple Shopify stores.

🔑 How to Get an API Key from RapidAPI Shopify Scraper
Follow these steps to get your API key and start using it in the workflow:
Visit the API page: 👉 Click here to open Shopify Scraper API on RapidAPI
Log in or sign up: Use your Google, GitHub, or email account to sign in. If you're new, complete a quick sign-up.
Subscribe to a pricing plan: Go to the Pricing tab on the API page, select a plan (free or paid, depending on your needs), and click Subscribe.
Access your API key: Navigate to the Endpoints tab, look for the X-RapidAPI-Key under Request Headers, and copy the value shown. This is your API key.
Use the key in your workflow: In the n8n HTTP Request node, replace "x-rapidapi-key": "your key" with "x-rapidapi-key": "YOUR_ACTUAL_API_KEY".
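For orientation, the call made by the Store Info Scrap Request node looks roughly like the sketch below. The body parameter name ('url') and content type are assumptions; check the endpoint documentation on RapidAPI, since the template's HTTP Request node already has the correct request shape configured.

```javascript
// Sketch of the RapidAPI call behind the Store Info Scrap Request node (Node 18+).
const RAPIDAPI_KEY = 'YOUR_ACTUAL_API_KEY';

async function getShopInfo(storeUrl) {
  const res = await fetch('https://shopify-scraper4.p.rapidapi.com/shopinfo.php', {
    method: 'POST',
    headers: {
      'content-type': 'application/x-www-form-urlencoded',
      'x-rapidapi-key': RAPIDAPI_KEY,
      'x-rapidapi-host': 'shopify-scraper4.p.rapidapi.com',
    },
    body: new URLSearchParams({ url: storeUrl }),   // assumed parameter name
  });
  return res.json();                                // store metadata (name, domain, ...)
}
```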
Automated resume tailoring with Telegram Bot, LinkedIn & OpenRouter AI
This n8n workflow lets you effortlessly tailor your resume for any job using Telegram and LinkedIn. Simply send a LinkedIn job URL or paste a job description to the Telegram bot, and the workflow will:
Extract the job information (using an optional proxy if needed)
Fetch your resume in JSON Resume format (hosted on GitHub Gist or elsewhere)
Use an OpenRouter-powered LLM agent to automatically adapt your resume to match the job requirements
Generate both HTML and PDF versions of your tailored resume
Return the PDF file and shareable download links directly in Telegram

The workflow is open-source and designed with privacy in mind. You can host the backend yourself to keep your data entirely under your control. It requires a Telegram Bot, a public JSON Resume, and an OpenRouter account. Proxy support is available for LinkedIn scraping. Perfect for anyone looking to quickly customize their resume for multiple roles with minimal manual effort!
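If you have not used the JSON Resume format before, it is an open standard (jsonresume.org) whose top-level shape looks roughly like the minimal sketch below; the real schema has many more optional sections, and the values here are placeholders.

```javascript
// Minimal JSON Resume sketch (see jsonresume.org for the full schema).
const resume = {
  basics: {
    name: 'Jane Doe',
    label: 'Backend Engineer',
    email: 'jane@example.com',
    summary: 'Engineer with 6 years of experience building APIs.',
  },
  work: [
    {
      name: 'Acme Corp',
      position: 'Senior Engineer',
      startDate: '2021-03-01',
      highlights: ['Led migration to event-driven architecture'],
    },
  ],
  skills: [{ name: 'Databases', keywords: ['PostgreSQL', 'Redis'] }],
};
```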
Perform, get scans 🛠️ urlscan.io tool MCP server 💪 all 3 operations
Need help? Want access to this workflow plus many more paid workflows and live Q&A sessions with a top verified n8n creator? Join the community

Complete MCP server exposing all urlscan.io Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every urlscan.io Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n urlscan.io tool node with full error handling

📋 Available Operations (3 total)
Every possible urlscan.io Tool operation is included:
🔧 Scan (3 operations)
• Get a scan
• Get many scans
• Perform a scan

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native urlscan.io Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every urlscan.io Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.