
Simple social: Instagram single image post with Facebook API

**Who is this workflow for?**
This workflow is designed for businesses, social media managers, content creators, and developers who need to automate posting single images to Instagram using the Facebook API. It is ideal for anyone looking to streamline their social media posting process, saving time and ensuring consistent content delivery.

**Use case / problem solved**
Manually posting images and captions on Instagram is time-consuming, especially for businesses and content creators managing multiple accounts. This workflow automates the process from image preparation to publishing, reducing manual effort and increasing efficiency.

**What this workflow does**
1. Trigger initialization: the workflow starts with a manual trigger that can be swapped for other triggers (e.g., HTTP webhook or schedule).
2. Set parameters: a node sets the essential parameters: image URL, Instagram business account ID, and caption.
3. Prepare Instagram media: a node sends the image and caption to the Facebook API for pre-publication processing.
4. Check media upload status: the workflow verifies that media preparation is complete.
5. Conditional check: if preparation succeeded, the workflow proceeds to publish; otherwise it takes an error-handling path.
6. Publish media: the media is published on Instagram if the conditions are met.
7. Post-publish check: the workflow checks the status after publication.
8. Conditional check for publication: if the publication status is "PUBLISHED", the success path runs; otherwise the failure path runs.
9. Email notifications: the workflow sends emails to indicate successful or unsuccessful outcomes.

**Setup**
A quick setup video in Italian with English subtitles: https://youtu.be/obWJFJvg_6g
- Add API credentials: ensure valid Facebook API credentials are added and configured.
- Permissions required: your app needs ads_management, business_management, instagram_basic, instagram_content_publish, and pages_read_engagement. App review may be required for external user access.
- Node configuration: customize the Set Instagram Parameters node to specify the image URL, caption, and Instagram business account ID.
- Trigger adaptation: adapt the initial trigger to fit your workflow's requirements (e.g., schedule, webhook).

**How to customize this workflow**
- Change the image URL and caption: modify the Set Instagram Parameters node.
- Trigger customization: replace the manual trigger with another trigger, such as a webhook, to automate posting based on external events.
- Notifications: adjust the email nodes to send customized messages or trigger other workflows based on the outcome.

**Limitations**
- Image format: only JPEG images are supported. Extended JPEG formats such as MPO and JPS are not compatible.
- Unsupported tags: shopping tags, branded content tags, and filters are not supported.
- Instagram TV: publishing to Instagram TV is not supported.
- Rate limit: Instagram accounts are limited to 50 API-published posts within a rolling 24-hour period; carousels count as a single post. Check usage with GET /{ig-user-id}/content_publishing_limit.

**Example usage**
Imagine managing a business account that needs consistent posts. You can schedule this workflow or trigger it manually to post images with captions at the right time, keeping your audience engaged without manual posting effort.
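For readers curious what the Facebook API calls look like underneath the n8n nodes, here is a minimal TypeScript sketch of the documented two-step Graph API publish flow (create a media container, then publish it). The API version and helper names are illustrative assumptions.

```typescript
// Minimal sketch of the two-step Instagram publish flow this workflow
// wraps (Facebook Graph API). The version segment, account ID, and
// token are placeholders -- substitute your own values.
const GRAPH = "https://graph.facebook.com/v19.0";

async function publishImage(igUserId: string, token: string, imageUrl: string, caption: string) {
  // Step 1: create a media container (pre-publication processing)
  const create = await fetch(
    `${GRAPH}/${igUserId}/media?` +
      new URLSearchParams({ image_url: imageUrl, caption, access_token: token }),
    { method: "POST" }
  );
  const { id: creationId } = await create.json();

  // Step 2: publish the container once preparation has finished
  const publish = await fetch(
    `${GRAPH}/${igUserId}/media_publish?` +
      new URLSearchParams({ creation_id: creationId, access_token: token }),
    { method: "POST" }
  );
  return publish.json(); // { id: <published media id> }
}
```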

By Edoardo Guzzi
28363

Sync Google Sheets data with MySQL

This workflow performs data integration and synchronization between Google Sheets and a MySQL database. Step by step:

1. Manual Trigger: the workflow starts when the user clicks "Execute Workflow".
2. Schedule Trigger: schedules the workflow to run at specific intervals on weekdays (Monday to Friday) between 6 AM and 10 PM, ensuring regular data synchronization.
3. Google Sheet Data: connects to a specific Google Sheets document and retrieves data from the "Form Responses 1" sheet, filtering by the "DB Status" column.
4. SQL Get inquiries from Google: retrieves rows from the MySQL table "ConcertInquiries" where "source_name" is "GoogleForm".
5. Rename GSheet variables: renames the columns retrieved from Google Sheets, transforms the data into a format suitable for MySQL, and sets "source_name" to "GoogleForm".
6. Compare Datasets: compares the Google Sheets and MySQL data on the timestamp and source_name fields, identifying changes and updates (see the sketch below).
7. No reply too long?: checks whether there has been no reply within the last four hours, using the "timestamp" field from the Google Sheets data.
8. DB Status assigned?: checks that the "DB Status" field is not empty in the compared dataset.
9. Update GSheet status: if the previous conditions are met, updates the "DB Status" field in Google Sheets with the corresponding value from the MySQL dataset.
10. DB Status in sync?: checks that the "source_name" field in Google Sheets is not empty.
11. Sync MySQL data: if the previous conditions are met, updates the "source_name" field in MySQL to "GoogleFormSync".
12. Send Notifications: if the "No reply too long?" condition is met, sends notifications or performs other actions as needed.
13. Sticky Notes: provide additional information and documentation links for users.
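As an illustration of the Compare Datasets step, here is a hedged TypeScript sketch that matches rows on (timestamp, source_name) the way the description implies. The field names and return shape are assumptions for illustration, not the node's actual internals.

```typescript
// Hedged sketch of the dataset comparison: rows from Google Sheets and
// MySQL are matched on (timestamp, source_name); unmatched or differing
// rows are flagged for the two update branches described above.
interface InquiryRow { timestamp: string; source_name: string; db_status?: string; }

function compareDatasets(sheetRows: InquiryRow[], dbRows: InquiryRow[]) {
  const key = (r: InquiryRow) => `${r.timestamp}|${r.source_name}`;

  const dbIndex = new Map<string, InquiryRow>();
  for (const r of dbRows) dbIndex.set(key(r), r);

  const updatesForSheet: InquiryRow[] = []; // MySQL has a status the sheet lacks
  const newForDb: InquiryRow[] = [];        // sheet rows not yet in MySQL

  for (const row of sheetRows) {
    const match = dbIndex.get(key(row));
    if (!match) {
      newForDb.push(row);
    } else if (match.db_status && match.db_status !== row.db_status) {
      updatesForSheet.push({ ...row, db_status: match.db_status });
    }
  }
  return { updatesForSheet, newForDb };
}
```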

By n8n Team
9231

🎦🚀 YouTube video comment analysis agent

This n8n workflow helps YouTube creators analyze video details and comments to generate a comprehensive, actionable report. It provides insights into video performance, audience engagement, and viewer feedback, helping creators identify trends, interests, and opportunities for future content creation.

✨ Key Features
- Video performance analysis: extracts metrics like views, likes, and comments to evaluate the video's success.
- Comment sentiment analysis: determines the tone of comments (positive, neutral, or negative) to understand audience sentiment.
- Recurring themes detection: identifies common topics or questions in comments to highlight viewer interests.
- Engagement drivers: pinpoints which aspects of the video resonated most with viewers.
- Actionable recommendations: offers strategies for creating follow-up content or improving future videos.
- Keyword suggestions: extracts frequently mentioned terms for better discoverability on YouTube.
- Collaboration opportunities: suggests potential partnerships based on viewer feedback or related channels.

🛠️ How to Use
1. Set up workflow variables: add your GOOGLE_API_KEY and the VIDEO_ID of the YouTube video you want to analyze in the "Workflow Variables" node. Ensure your Google API key has access to the YouTube Data API.
2. Run the workflow: trigger it manually or from another workflow via the "Execute Workflow Trigger" node. The workflow fetches video details and comments using pagination so all data is captured, as sketched below.
3. Generate insights: the workflow processes video details and comments into a detailed report with actionable insights, including sentiment analysis, engagement drivers, content opportunities, and audience profiling.
4. View or share results: the report is converted to Markdown and can be emailed via Gmail or saved to Google Drive as a document.

🌟 Value from This Workflow
- Gain a deeper understanding of your audience's preferences and feedback.
- Identify trends and engagement drivers to replicate success in future videos.
- Discover new content opportunities based on viewer questions and suggestions.
- Improve discoverability by leveraging keyword suggestions extracted from comments.
- Build stronger connections with your audience by addressing their needs effectively.
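For reference, the paginated comment fetch corresponds to the YouTube Data API's commentThreads.list endpoint. A minimal TypeScript sketch, assuming the GOOGLE_API_KEY and VIDEO_ID variables named above:

```typescript
// Sketch of the paginated comment fetch performed by this workflow.
// Follows nextPageToken until every comment thread has been retrieved.
async function fetchAllComments(apiKey: string, videoId: string): Promise<string[]> {
  const comments: string[] = [];
  let pageToken: string | undefined;

  do {
    const url = new URL("https://www.googleapis.com/youtube/v3/commentThreads");
    url.search = new URLSearchParams({
      part: "snippet",
      videoId,
      maxResults: "100",
      key: apiKey,
      ...(pageToken ? { pageToken } : {}),
    }).toString();

    const res = await (await fetch(url)).json();
    for (const item of res.items ?? []) {
      comments.push(item.snippet.topLevelComment.snippet.textDisplay);
    }
    pageToken = res.nextPageToken; // undefined once the last page is reached
  } while (pageToken);

  return comments;
}
```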

By Joseph LePage
4860

Send RSS feed data to webhook

Filters RSS articles based on keywords, checks links against MongoDB to keep only ones not seen before, then sends the results to different webhooks.
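A minimal sketch of what such a uniqueness check can look like in TypeScript with the official mongodb driver; the connection string, database, collection, and field names are assumptions for illustration.

```typescript
// Hedged sketch of the dedupe-and-forward step: each article link is
// looked up in MongoDB, and only unseen links are sent to the webhook.
import { MongoClient } from "mongodb";

async function forwardIfNew(link: string, title: string, webhookUrl: string) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const seen = client.db("rss").collection("seen_links");

  if (!(await seen.findOne({ link }))) {
    await seen.insertOne({ link, title, firstSeen: new Date() }); // remember it
    await fetch(webhookUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ link, title }),
    });
  }
  await client.close();
}
```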

By dave
3968

Get all members of a Discord server with a specific role

**Use case**
This workflow retrieves all members of a Discord server (guild) who have a specific role. Because the Discord API returns only a limited number of users per call, the workflow uses Google Sheets to track which user it received last, returning members of a given role in batches of 100 (see the sketch below).

**Setup**
1. Add your Google Sheets and Discord credentials.
2. Create a Google Sheets document that contains ID as a column. This is used to remember which member was received last.
3. Edit the fields in the setup node "Setup: Edit this to get started". You can read up on how to get the Discord IDs via this link.
4. Link to your Discord server in the Discord nodes.
5. Activate the workflow.
6. Call the production webhook URL in your browser.

**Requirements**
- Admin rights on the Discord server and access to the Discord developer portal
- Google Sheets
- Minimum n8n version: 1.28.0

**Potential use cases**
- Writing a direct message to all members of a certain role
- Analysing user growth on Discord regularly
- Analysing role distributions on Discord regularly
- Saving new members in a Discord ...

**Keywords**
Discord API, getting all members from Discord via API, Google Sheets and Discord automation, how to get all Discord members via API
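For context, the batching described above maps onto Discord's List Guild Members endpoint, which pages with an `after` cursor. A minimal TypeScript sketch, with role filtering done client-side; the persisted `lastId` stands in for the Google Sheets ID column.

```typescript
// Sketch of one batch of the member fetch: Discord's
// GET /guilds/{id}/members endpoint is paged, so the last user ID seen
// is stored between runs and passed back as the ?after= cursor.
async function fetchMemberBatch(
  guildId: string, botToken: string, roleId: string, afterId?: string
) {
  const url = new URL(`https://discord.com/api/v10/guilds/${guildId}/members`);
  url.searchParams.set("limit", "100");
  if (afterId) url.searchParams.set("after", afterId); // resume from last stored ID

  const res = await fetch(url, { headers: { Authorization: `Bot ${botToken}` } });
  const members: Array<{ user: { id: string }; roles: string[] }> = await res.json();

  return {
    withRole: members.filter((m) => m.roles.includes(roleId)), // client-side role filter
    lastId: members.at(-1)?.user.id, // persist this for the next batch
  };
}
```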

By Niklas Hatje
3610

YouTube advanced RSS generator with Telegram integration

Overview video: https://youtu.be/EtzJmrmCiUY

**Overview**
The [n8n] YouTube Channel Advanced RSS Feeds Generator workflow generates various RSS feed formats for YouTube channels without requiring API access or administrative permissions. It utilizes third-party services to extract data, making it extremely user-friendly and accessible.

**Key use cases and benefits**
- Content aggregation: easily gather and syndicate content from any public YouTube channel.
- No API key required: avoid the complexities and limitations of Google's API.
- Multiple formats: supports ATOM, JSON, MRSS, Plaintext, Sfeed, and direct YouTube XML feeds.
- Flexibility: input can be a YouTube channel or video URL, ID, or username.

**Services/APIs utilized**
- commentpicker.com: for retrieving YouTube channel IDs.
- rss-bridge.org: to generate the various RSS formats.

**Configuration instructions**
1. Start the workflow: activate the workflow in your n8n instance.
2. Input details: enter the YouTube channel or video URL, ID, or username via the provided form trigger.
3. Run the workflow: execute it to receive links to 13 different RSS feeds, including community and video content feeds (a sketch of the feed URLs follows below).

**Additional notes**
- Customization: you can modify the RSS feed formats or integrate additional services as needed.
- Support and contributions: for support, questions, or contributions, please visit the n8n community forum or the GitHub repository. We welcome contributions from the community!
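A hedged sketch of how such feed links can be assembled. The native YouTube XML feed URL is stable and documented; the rss-bridge.org instance path and parameter names shown here follow its public YouTube bridge but are assumptions that should be verified against the rss-bridge documentation.

```typescript
// Sketch of feed-URL assembly for a known channel ID. The native feed
// needs no key; rss-bridge renders the same channel in other formats.
function buildFeedLinks(channelId: string) {
  // Official, keyless YouTube channel feed (Atom/XML)
  const native = `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;

  // rss-bridge parameters below are assumptions -- check the bridge docs
  const bridge = (format: string) =>
    "https://rss-bridge.org/bridge01/?" +
    new URLSearchParams({
      action: "display",
      bridge: "Youtube",
      context: "By channel id",
      c: channelId,
      format, // e.g. Atom, Json, Mrss, Plaintext, Sfeed
    });

  return { native, atom: bridge("Atom"), json: bridge("Json"), mrss: bridge("Mrss") };
}
```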

By Nskha
2847

Indeed job scraper with AI filtering & company research using Apify and Tavily

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes Indeed job listings via Apify, automatically retrieves the resulting dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a Google Sheets database with that information for outreach. All you need to do is run the Apify actor; the database then updates with the processed data.

**Benefits**
- Complete job-search automation: a webhook monitors the Apify actor, which sends an integration event and starts the process.
- AI-powered filter: uses ChatGPT to analyze content and context, identify company goals, and filter based on the job description.
- Smart duplicate prevention: automatically tracks processed job listings in a database to avoid redundancy.
- Multi-platform intelligence: combines Indeed scraping with web research via Tavily to enrich each listing.
- Niche focus: currently handles 6 hardcoded niches, but this can be changed to fit others (just re-prompt the "job filter" node).

**How it works**
1. Indeed job discovery: search Indeed, apply filters for relevant listings, and copy the results URL into Apify. Apify's Indeed job scraper scrapes the listings from that URL, stores them in a dataset, and triggers the integration.
2. Incoming data processing: loops over 500 items (configurable) with a batch size of 55 (configurable) to avoid API timeouts, as sketched below. Multiple filters ensure all required fields are scraped and meet the required metrics (the website must exist and the employee count must be under 250). Duplicate listings are removed from the incoming batch.
3. Job analysis and filtering: an additional filter removes any listing from the incoming batch that already exists in the Google Sheets database. All new listings are then passed to ChatGPT, which uses the job post and description to determine relevance. Each relevant job gets a "verdict" field, either true or false, and only jobs where the verdict is true are kept.
4. Enrich and update database: uses Tavily to search for a decision maker (it doesn't always find one) and populates a Google Sheets row with information about the job listing, the company, and the decision maker. The workflow then waits 1 minute 30 seconds to avoid Google Sheets and ChatGPT API timeouts, and loops back to filter the next batch until all listings are processed.

**Required Google Sheets database setup**
Before running this workflow, create a Google Sheets database with these exact column headers:
- jobUrl — unique identifier for job listings
- title — position title
- descriptionText — description of the job listing
- hiringDemand/isHighVolumeHiring — are they hiring at high volume?
- hiringDemand/isUrgentHire — are they hiring with high urgency?
- isRemote — is this job remote?
- jobType/0 — job type: in person, remote, part-time, etc.
- companyCeo/name — CEO name collected from Tavily's search
- icebreaker — column for holding custom icebreakers for each listing (not completed in this workflow; I will upload another, called "Personalized IJSFE", that does this)
- scrapedCeo — CEO name collected from the Apify scraper
- email — email listed on the job listing
- companyName — name of the company that posted the job
- companyDescription — description of the company that posted the job
- companyLinks/corporateWebsite — website of the company that posted the job
- companyNumEmployees — number of employees the company lists
- location/country — where the job is located
- salary/salaryText — salary on the job listing

**Setup instructions**
1. Create a new Google Sheet with these column headers in the first row; name the sheet whatever you please.
2. Connect your Google Sheets OAuth credentials in n8n.
3. Update the document ID in the workflow nodes.

The merge logic relies on the id column to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

**Set up steps**
1. Configure the Apify integration: sign up for an Apify account and obtain an API key. Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL). Use the Apify node with Resource: Dataset, Operation: Get items, and your API key as credentials.
2. Set up AI services: add OpenAI API credentials for job filtering and Tavily API credentials for company research, and set appropriate rate limits for cost control.
3. Database configuration: create the Google Sheets database with the column structure above, connect your Google Sheets OAuth credentials, and configure the merge logic for duplicate detection.
4. Content filtering setup: customize the AI prompts for your specific niche, requirements, or interests, and adjust the filtering criteria to fit your needs.

Feel free to reach out for additional help or clarification at terflix45@gmail.com and I'll get back to you as soon as I can.
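To make the batching and duplicate logic concrete, here is a small TypeScript sketch of the filtering described in steps 2-3. The types and field names mirror the sheet columns above; the batch size of 55 matches the description, and everything else is illustrative.

```typescript
// Sketch of batching with duplicate prevention: anything whose jobUrl is
// already in the sheet is dropped, the batch is deduped internally, and
// only AI-approved rows (verdict === true) move on to enrichment.
interface JobListing { jobUrl: string; title: string; verdict?: boolean; }

function nextBatch(all: JobListing[], existingUrls: Set<string>, batchSize = 55) {
  const fresh = all.filter((j) => !existingUrls.has(j.jobUrl)); // skip known rows

  const byUrl = new Map<string, JobListing>();
  for (const j of fresh) byUrl.set(j.jobUrl, j); // dedupe within the batch

  return [...byUrl.values()].slice(0, batchSize);
}

function keepRelevant(batch: JobListing[]) {
  // "verdict" is set upstream by the ChatGPT job-filter node
  return batch.filter((j) => j.verdict === true);
}
```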

By Adrian Bent
2436

Turn any PDF into a clean Google Doc with Mistral OCR

Upload a PDF and instantly get a neatly formatted Google Doc with all the readable text: no manual copy-paste, no messy line breaks.

**What this workflow does**
- Accepts PDF uploads via a public form
- Sends the file to Mistral Cloud for high-accuracy OCR
- Detects and merges page images with their extracted text
- Cleans headers, footers, broken lines, and noise (see the cleanup sketch below)
- Creates a new Google Doc in your chosen Drive folder
- Writes the polished Markdown text into the document

**What you need**
- Mistral Cloud API key with OCR access
- Google Docs & Drive credentials connected in n8n
- Drive folder ID for new documents
- A PDF file to process (up to 100 MB)

**Setup**
1. Import the workflow into n8n and activate credentials.
2. In "Trigger • Form Submission", copy the webhook URL and share it or embed it.
3. In "Create • Google Doc", replace the default folder ID with yours.
4. Fill in your Mistral API key under the Mistral Cloud API credentials.
5. Save and activate the workflow.
6. Visit the form, upload a PDF, name your future doc, and submit.
7. Open Drive to view your newly generated, clean Google Doc.

**Example use cases**
- Convert annual reports into editable text for analysis.
- Extract readable content from scan-only invoices for bookkeeping.
- Turn magazine PDFs into draft blog posts.
- Digitize lecture handouts for quick search and annotation.
- Convert image-heavy landing pages and advertorials into editable text for AI to analyze structure and content.
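The "cleans headers, footers, broken lines" step can be pictured as post-processing of the per-page OCR output. A hedged TypeScript sketch, where the recurrence threshold and line-length cutoff are illustrative assumptions rather than the workflow's exact settings:

```typescript
// Sketch of post-OCR cleanup: drop short lines that recur on most pages
// (likely headers/footers), then rejoin lines broken mid-sentence.
function cleanOcrMarkdown(pages: string[]): string {
  // Count how often each short line recurs across pages
  const counts = new Map<string, number>();
  for (const page of pages) {
    for (const line of page.split("\n")) {
      const t = line.trim();
      if (t && t.length < 60) counts.set(t, (counts.get(t) ?? 0) + 1);
    }
  }
  // Treat a line as boilerplate if it appears on >= 60% of pages (assumption)
  const isBoilerplate = (t: string) => (counts.get(t) ?? 0) >= pages.length * 0.6;

  return pages
    .map((page) =>
      page
        .split("\n")
        .filter((l) => !isBoilerplate(l.trim()))
        .join("\n")
        // merge lines broken mid-sentence (lowercase text continues)
        .replace(/([a-z,;])\n(?=[a-z])/g, "$1 ")
    )
    .join("\n\n");
}
```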

By Hunyao
1651

Multimodal Telegram bot with voice, image & video analysis using Claude & Gemini

**What it's for**
This is a base template for anyone developing a Telegram AI agent. It accepts multiple input types (voice, picture, video, and text) and processes them with an AI model of your choosing to get you started. From there, you can connect any tools you see fit to the AI Agent in your n8n workflows.

**How it works**
1. Input: a Telegram message arrives in a bot chat.
2. Processing: a Switch node determines the message type: voice, picture, video, or text (see the sketch below). The template currently uses OpenAI and Gemini to analyze voice, photo, and video content, but feel free to swap these nodes for other models.
3. AI Agent processing: the LLM of your choosing examines the message and, based on the system prompt, generates an output.
4. Output: the AI output is sent back as a Telegram message.

**How to use**
1. Create your chat bot and generate an access token: search for BotFather in Telegram, type "/newbot", follow the instructions to create the access token, and copy it.
2. Create credentials in n8n: open the Telegram Trigger node, click "create credential", paste the access token, and save.
3. Create an LLM access token (this differs per LLM; search for your LLM + "API"): create an account with the LLM platform, buy credits to use the LLM API, generate an access token, and paste it into the LLM node.

**Requirements**
- Telegram bot access token
- Google Gemini access token (for picture and video messages)
- OpenAI access token (for voice messages)
- An LLM access token of your preference for the AI Agent

**Customizing this workflow**
- To personalize the AI output, adjust the system prompt (give context or directions on the AI's role).
- Add tools to the AI Agent to give it more utility beyond a personalized LLM (e.g., calendars, databases).
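The Switch-node routing in step 2 amounts to checking which payload field Telegram populated on the incoming message. A minimal TypeScript sketch of that classification; the route names are illustrative:

```typescript
// Sketch of the Switch-node logic: a Telegram message object carries at
// most one of these payload fields, which selects the processing branch.
type Route = "voice" | "photo" | "video" | "text" | "unsupported";

function classify(message: Record<string, unknown>): Route {
  if (message.voice) return "voice"; // -> OpenAI transcription branch
  if (message.photo) return "photo"; // -> Gemini image analysis branch
  if (message.video) return "video"; // -> Gemini video analysis branch
  if (message.text)  return "text";  // -> straight to the AI Agent
  return "unsupported";
}
```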

By Keith Uy
1355

AI-powered LinkedIn content generator with OpenAI GPT-4 and DALL-E

This n8n template helps you build a full AI-powered LinkedIn content generator with just a few clicks. Paired with the free WeWeb UI template, it becomes a ready-to-use web app where users can:
- Add their own OpenAI API key
- Customize the prompt and define 6 content topics
- Edit the AI-generated topics
- Choose when to generate LinkedIn posts, complete with hashtags and an optional image

**Who this is for**
Perfect for marketers, indie hackers, and solopreneurs who want to build their personal brand on LinkedIn while staying in control of what gets posted.

🧠 What Makes This Different
Unlike most AI agents, you stay fully in control:
- You define the tone and focus via the prompt.
- You choose which topics to keep or modify.
- You decide when to generate a post.
- You can build on top of this and create your own SaaS product.
It's also modular and extendable: hook it up to your backend, add user login, or feed AI improvements based on user input.

⚙️ How It Works
1. Triggering events: the app includes 3 pre-configured triggers, ready to be hooked into your WeWeb frontend. Just update the webhook URLs after duplicating the n8n workflow.
2. Topic generation: a call is made to OpenAI (GPT-4) to generate topic ideas based on your prompt (see the sketch below).
3. Post creation: once topics are approved or edited, GPT-4 writes full posts with suggested hashtags.
4. Image generation (optional): if enabled, a DALL·E call generates a relevant image.
5. Everything stays local: all data and images are handled locally; no cloud storage setup needed.

🧪 Requirements & Setup
No fancy infrastructure required. Here's what helps you get started:
- Free WeWeb account (recommended) to use the frontend UI template
- OpenAI account with API access (for GPT-4 and DALL·E)
- n8n account (self-hosted or cloud) to run the backend workflow
The template is completely free to use. Since each user adds their own OpenAI API key, you don't need to worry about usage costs or rate limits on your end.

🔧 Want to Go Further?
This setup is beginner-friendly, but developers can:
- Add user accounts
- Save post history
- Feed user feedback back into the prompt logic
- Launch their own branded version as a SaaS
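To illustrate the topic-generation call in step 2, here is a hedged TypeScript sketch against OpenAI's chat completions endpoint. The prompt wording, the "exactly 6 topics" instruction, and the line-based parsing are assumptions, not the template's exact configuration.

```typescript
// Sketch of the topic-generation step: the user's own API key is
// forwarded and GPT-4 is asked for a fixed number of topic ideas.
async function generateTopics(userApiKey: string, brandPrompt: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${userApiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You generate LinkedIn content topics." },
        { role: "user", content: `${brandPrompt}\nReturn exactly 6 topics, one per line.` },
      ],
    }),
  });
  const data = await res.json();
  // One topic per line, empty lines dropped (parsing is an assumption)
  return data.choices[0].message.content.split("\n").filter(Boolean);
}
```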

By WeWeb
1230

General 3D presentation workflow with Midjourney, GPT-4o-image and Kling APIs

**Who is this template for?**
This workflow creates 360° or 180° spinning videos of high-quality 3D models with the PiAPI API. Good for:
- Designers: turn inspiration into 3D designs and spin them to examine concrete details efficiently.
- Online shoppers: show potential products from all angles in video and preview the overall texture of models.
- Content creators (including toy bloggers): make fun videos of collectible models.
- 3D beginners: get simple spinning animations easily and conveniently.

**How to customize this workflow to your needs**
Using this workflow usually takes four steps:
1. Fill in the x-api-key in the Midjourney Generator node and the Generate Kling Video node, and fill in the header parameters of the GPT-4o Image Generator (e.g., "Bearer " + your X-API-Key).
2. Enter your model prompt based on your inspiration.
3. Click Test Workflow.
4. Get the video URL from the last node. (The generation nodes follow PiAPI's submit-and-poll task pattern, sketched below.)

**Use case**
The prompt node captures the main features of the creation. An example for reference:

Input prompt: A blind box character design, in the chibi style, a super cute little girl wearing a white long-sleeved dress and pearl earrings with her head bowed in a prayer pose, facing upwards, wearing an oversized off-white dress with large round pearls on the shoulders, minimalist simple dress with ruffles, against a beige background, a full-body shot in a three-quarter profile view, with a black, blue, and gray color scheme, soft lighting, 3D rendering, clay material, high detail, in the Pixar style. Clean white skin, brown renaissance braided bun. --ar 1:1 --niji 6

Output video, for reference:
<video src="https://static.piapi.ai/n8n-instruction/general-3d-presentation/example1.mp4" controls />

More example results for reference:
<video src="https://static.piapi.ai/n8n-instruction/general-3d-presentation/example3.mp4" controls />
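The PiAPI nodes follow a submit-and-poll task pattern common to asynchronous generation APIs. A hedged TypeScript sketch of that pattern; the endpoint paths, payload shape, and response fields below are assumptions for illustration and should be checked against the PiAPI documentation.

```typescript
// Sketch of an asynchronous generation task: submit, then poll with the
// same x-api-key header the workflow nodes use, until a result URL appears.
async function runTask(apiKey: string, payload: object): Promise<string> {
  const headers = { "x-api-key": apiKey, "Content-Type": "application/json" };

  // Endpoint path and response fields are assumptions -- verify in the docs
  const created = await (await fetch("https://api.piapi.ai/api/v1/task", {
    method: "POST", headers, body: JSON.stringify(payload),
  })).json();
  const taskId = created.data.task_id;

  // Poll every 10 s until the task finishes, then return the output URL
  for (;;) {
    await new Promise((r) => setTimeout(r, 10_000));
    const status = await (await fetch(
      `https://api.piapi.ai/api/v1/task/${taskId}`, { headers }
    )).json();
    if (status.data.status === "completed") return status.data.output.video_url;
    if (status.data.status === "failed") throw new Error("PiAPI task failed");
  }
}
```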

By PiAPI
1213

Web crawler: Convert websites to AI-ready markdown in Google Sheets

Transform any website into a structured knowledge repository with this intelligent crawler. It extracts hyperlinks from the homepage, filters images from content pages, and aggregates full Markdown-formatted content, perfect for fueling AI agents or building comprehensive company dossiers without manual effort.

📋 What This Template Does
This advanced workflow acts as a lightweight web crawler: it scrapes the homepage to discover all internal links (mimicking sitemap extraction), deduplicates and validates them, separates image assets from textual pages, then fetches and converts non-image page content to clean Markdown. Results are appended to Google Sheets for easy analysis, export, or integration into vector databases.
- Automatically discovers and processes subpage links from the homepage
- Filters out duplicates and non-HTTP links for efficient crawling
- Converts scraped content to Markdown for AI-ready formatting
- Categorizes and stores images, links, and full content in a single sheet row per site

🔧 Prerequisites
- Google account with Sheets access for data storage
- n8n instance (cloud or self-hosted)
- Basic understanding of URLs and web links

🔑 Required Credentials: Google Sheets OAuth2 API Setup
1. Go to console.cloud.google.com → APIs & Services → Credentials.
2. Click "Create Credentials" → select "OAuth client ID" → choose "Web application".
3. Add the authorized redirect URI https://your-n8n-instance.com/rest/oauth2-credential/callback (replace with your n8n URL).
4. Download the client ID and secret, then add them to n8n as a "Google Sheets OAuth2 API" credential.
5. During setup, grant access to the Google Sheets scopes (e.g., spreadsheets) and test the connection by listing a sheet.

⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance.
2. In the "Set Website" node, update the website_url value to your target site (e.g., https://example.com).
3. Assign your Google Sheets credential to the three "Add ... to Sheet" nodes.
4. Update the documentId and sheetName in those nodes to your target spreadsheet ID and sheet name/ID.
5. Ensure your sheet has the columns "Website", "Links", "Scraped Content", and "Images".
6. Activate the workflow and trigger it manually to test scraping. A sketch of the link-discovery step follows below.

🎯 Use Cases
- Knowledge base creation: crawl a company's site to aggregate all content into Sheets, then export to Notion or a vector DB for internal wikis.
- AI agent training: extract structured Markdown from industry sites to fine-tune LLMs on domain-specific data like legal docs or tech blogs.
- Competitor intelligence: build dossiers by crawling rival websites, separating assets and text for SEO audits or market analysis.
- Content archiving: preserve dynamic sites (e.g., news portals) as static knowledge dumps for compliance or historical research.

⚠️ Troubleshooting
- No links extracted: verify the homepage has `<a>` tags; test with a simple site like example.com and check the HTTP response in executions.
- Sheet update fails: confirm the column names match exactly (case-sensitive) and the credential has edit permissions; try a new blank sheet.
- Content truncated: Google Sheets limits cells to ~50k characters; adjust the .slice(0, 50000) in "Add Scraped Content to Sheet" or split into multiple rows.
- Rate limiting errors: add a "Wait" node with a 1-2 s delay after "Scrape Links" if the site blocks rapid requests.
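A minimal TypeScript sketch of the link-discovery step described above: extract hrefs from the homepage, normalize and deduplicate them, split images from content pages, and truncate stored content to the ~50k-character Sheets cell limit. The regex-based extraction is an illustrative simplification of what the workflow's nodes do.

```typescript
// Sketch of homepage link discovery and image/page separation.
const IMAGE_EXT = /\.(png|jpe?g|gif|webp|svg)(\?|$)/i;

async function discoverLinks(websiteUrl: string) {
  const html = await (await fetch(websiteUrl)).text();

  // Pull every <a href="..."> value (simplified; a real parser is safer)
  const hrefs = [...html.matchAll(/<a\s[^>]*href="([^"#]+)"/gi)].map((m) => m[1]);

  // Resolve relative links, drop malformed and non-HTTP ones, dedupe
  const unique = [...new Set(
    hrefs
      .map((h) => { try { return new URL(h, websiteUrl).href; } catch { return null; } })
      .filter((u): u is string => !!u && u.startsWith("http"))
  )];

  return {
    images: unique.filter((u) => IMAGE_EXT.test(u)),  // asset links
    pages:  unique.filter((u) => !IMAGE_EXT.test(u)), // content pages to scrape
  };
}

// Google Sheets caps a cell at roughly 50k characters
const truncateForSheet = (markdown: string) => markdown.slice(0, 50000);
```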

By Daniel Nkencho
1090