10 templates found

Easy image captioning with Gemini 1.5 Pro

This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro, a multimodal LLM that can accept and analyse images. It is a simple example of how easy it is to build powerful AI models into your repetitive tasks.

How it works
For this demo, we import a public image from a popular stock photography website, Pexels.com, into the workflow using the HTTP Request node. With multimodal LLMs there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we resize the image using the Edit Image node for faster processing. The image is used as an input to the Basic LLM node by defining a "user message" entry with the binary (data) type. The LLM node has the Gemini 1.5 Pro language model attached, and we prompt it to generate a caption title and text appropriate for the image it sees. Once generated, the caption text is positioned over the original image to complete the task. The positioning is calculated relative to the number of characters produced, using the Code node (a sketch of this idea follows below). An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/fauto,qauto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

Requirements
- Google Gemini API key
- Access to Google Drive

Customising the workflow
Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it - try GPT-4o, Claude, or LLaVA (via Ollama). Google Drive is only used for demonstration purposes; feel free to swap it out for other triggers, such as webhooks, to fit your use case.
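Since the caption length varies, the Code node can derive x/y offsets from the character count and the image dimensions. A minimal TypeScript sketch of that idea (the function name, font size, and spacing constants are illustrative assumptions, not the template's actual code; in an n8n Code node the equivalent would be plain JavaScript):

```typescript
// Sketch of a positioning calculation driven by caption length.
interface Caption { title: string; text: string; }

function captionPlacement(caption: Caption, imageWidth: number, imageHeight: number) {
  const fontSize = 24;                 // assumed font size in px
  const avgCharWidth = fontSize * 0.6; // rough width per character
  const lineWidth = caption.text.length * avgCharWidth;
  // Estimate how many lines the caption wraps onto within 90% of the image width.
  const lines = Math.ceil(lineWidth / (imageWidth * 0.9));
  return {
    // Centre the caption horizontally...
    x: Math.max(0, Math.round((imageWidth - Math.min(lineWidth, imageWidth * 0.9)) / 2)),
    // ...and keep it near the bottom edge, nudging it up as the text gets longer.
    y: imageHeight - lines * fontSize * 1.4 - 20,
  };
}
```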

By Jimleuk
13894

Translate & repost Twitter threads in multiple languages with OpenAI

Twitter Thread (Flood) Translator & Poster

What it does
- Thread Extraction: Automatically detects and extracts all tweets from a provided Twitter thread (flood) link.
- Translation: Translates each extracted tweet into your target language using OpenAI.
- Rewriting: Rewrites each translated tweet to maintain the original meaning while improving clarity or style.
- Automated Posting: Posts the rewritten tweets as a new thread on Twitter using twitterapi.io, preserving the original thread structure.

How it works
Accepts a Twitter thread (flood) link as input. Extracts all tweets from the thread in their original order. Each tweet is sent to OpenAI for translation into your desired language (see the sketch after this description). The translated tweets are then rewritten for clarity and natural flow, while keeping the original meaning intact. The processed tweets are automatically posted as a new thread on your Twitter account via twitterapi.io.

Setup Steps
1. Create a Notion Database: Set up a database page in Notion to store and manage your Twitter links and workflow data.
2. Configure Notion Integration: Add the created database page ID to the Notion nodes in your workflow.
3. Set Twitter API Credentials: Add your twitterapi.io API key to the relevant nodes.
4. Add Twitter Account Details: Enter your Twitter account username/email and password for authentication.
5. Set Up OpenAI Credentials: Provide your OpenAI API credentials to enable translation and rewriting.
6. Subworkflow Integration: Create a separate workflow for the subworkflow logic and call it using the Execute Workflow node for modular automation.
7. Set Desired Language & Thread Link: Change the target language and Twitter thread (flood) link directly in the Manual Trigger node to customize each run.

Benefits
- Ultra Low Cost: Total cost for a 15-tweet thread (flood) is just $0.016 USD ($0.015 for twitterapi.io + $0.001 for the OpenAI API). Actual cost may vary depending on the density of tweets in the thread.
- End-to-End Automation: Go from thread extraction to translation, rewriting, and reposting in one workflow.
- Multilingual Support: Effortlessly translate and republish Twitter threads in any supported language.

Note: Detailed configuration instructions and node explanations are included as sticky notes within the workflow canvas.

Ideal for:
- Content creators looking to reach new audiences by translating and republishing Twitter threads
- Social media managers automating multilingual content workflows
- Anyone wanting to streamline the process of thread extraction, translation, and posting

Notes
This workflow cannot post images or videos to Twitter; it handles text-only threads.
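For reference, the per-tweet translate-and-rewrite step can be pictured as one OpenAI chat completion per tweet. The sketch below is an assumption about how such a call might look, not the template's exact node configuration; the model name and prompt wording are placeholders:

```typescript
import OpenAI from "openai";

// Illustrative per-tweet translation + rewrite call (assumed settings).
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function translateTweet(text: string, targetLanguage: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model choice
    messages: [
      {
        role: "system",
        content:
          `Translate the tweet into ${targetLanguage}, then rewrite it for clarity ` +
          `and natural flow while preserving the original meaning. Stay under 280 characters.`,
      },
      { role: "user", content: text },
    ],
  });
  // Fall back to the original text if the model returns nothing.
  return response.choices[0].message.content ?? text;
}
```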

By enes cingoz
5275

Validate website of new companies in HubSpot

This workflow uses a HubSpot Trigger to check for new companies. It then checks that the company's website exists using the HTTP Request node; if it doesn't, a message is sent to Slack. To configure this workflow you will need to set the credentials for the HubSpot and Slack nodes. You will also need to select the Slack channel used for sending the message.
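Conceptually, the HTTP node's check boils down to a reachability test like the sketch below; the timeout value and the Slack message wording are assumptions, not the workflow's exact settings:

```typescript
// Minimal sketch of the "does the website exist?" check.
async function websiteIsReachable(url: string): Promise<boolean> {
  try {
    const response = await fetch(url, {
      method: "GET",
      redirect: "follow",
      signal: AbortSignal.timeout(10_000), // assumed 10 s timeout
    });
    return response.ok; // a 2xx status means the site answered
  } catch {
    return false;       // DNS failure, timeout, or connection error
  }
}

// If the check fails, the workflow posts a message along the lines of
// `Company ${companyName} has an unreachable website: ${url}` to the chosen Slack channel.
```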

By Jonathan
1474

LinkedIn lead generation: Auto DM system with comment triggers using Unipile & NocoDB

Short Description
This LinkedIn automation workflow monitors post comments for specific trigger words and automatically sends direct messages with lead magnets to engaged users. The system checks connection status, handles non-connected users with connection requests, and prevents duplicate outreach by tracking all interactions in a database.

Key Features
- Comment Monitoring: Scans LinkedIn post comments for customizable trigger words
- Connection Status Check: Determines if users are 1st-degree connections
- Automated DMs: Sends personalized messages with lead magnet links to connected users
- Connection Requests: Asks non-connected users to connect via comment replies
- Duplicate Prevention: Tracks interactions in NocoDB to avoid repeat messages
- Message Rotation: Uses different comment reply variations for authenticity
- Batch Processing: Handles multiple comments with built-in delays

Who This Workflow Is For
- Content creators looking to convert post engagement into leads
- Coaches and consultants sharing valuable LinkedIn content
- Anyone wanting to automate lead capture from LinkedIn posts

How It Works
1. Setup: Configure post ID, trigger word, and lead magnet link via form
2. Comment Extraction: Retrieves all comments from the specified post using Unipile
3. Trigger Detection: Filters comments containing the specified trigger word
4. Connection Check: Determines if commenters are 1st-degree connections
5. Smart Routing: Connected users receive DMs, others get connection requests
6. Database Logging: Records all interactions to prevent duplicates

Setup Requirements
Required Credentials
- Unipile API Key: For LinkedIn API access
- NocoDB API Token: For database tracking

Database Structure (see the record-shape sketch after this description)
Table: leads
- linkedin_id: LinkedIn user ID
- name: User's full name
- headline: LinkedIn headline
- url: Profile URL
- date: Interaction date
- posts_id: Post reference
- connection_status: Network distance
- dm_status: Interaction type (sent/connection request)

Customization Options
- Message Templates: Modify DM and connection request messages
- Trigger Words: Change the words that activate the workflow
- Timing: Adjust delays between messages (8-12 seconds by default)
- Reply Variations: Add more comment reply options for authenticity

Installation Instructions
1. Import the workflow into your n8n instance
2. Set up the NocoDB database with the required table structure
3. Configure Unipile and NocoDB credentials
4. Set environment variables for the Unipile root URL and LinkedIn account ID
5. Test with a sample post before full use
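A rough TypeScript rendering of the leads record and the duplicate check described above; only the column names come from the listing, while the value types and the helper function are assumptions:

```typescript
// Shape of a row in the NocoDB `leads` table, as described in the database structure.
interface Lead {
  linkedin_id: string;        // LinkedIn user ID
  name: string;               // user's full name
  headline: string;           // LinkedIn headline
  url: string;                // profile URL
  date: string;               // interaction date (ISO string assumed)
  posts_id: string;           // post reference
  connection_status: string;  // network distance, e.g. "1st"
  dm_status: "sent" | "connection request";
}

// Duplicate-prevention sketch: skip commenters already contacted for this post.
// `existingLeads` would come from the NocoDB lookup; names here are illustrative.
function shouldContact(commenterId: string, postId: string, existingLeads: Lead[]): boolean {
  return !existingLeads.some(l => l.linkedin_id === commenterId && l.posts_id === postId);
}
```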

By Alexandra Spalato
1361

Generate company stories from LinkedIn with Bright Data & Google Gemini

Who this is for
The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate. This workflow is tailored for:
- Marketing Professionals: Seeking to generate compelling company narratives for campaigns.
- Sales Teams: Aiming to understand potential clients through summarized company insights.
- Content Creators: Looking to craft stories or articles based on company data.
- Recruiters: Interested in obtaining concise overviews of companies for talent acquisition strategies.

What problem is this workflow solving?
Manually gathering and summarizing company information from LinkedIn can be time-consuming and inconsistent. This workflow automates the process, ensuring:
- Efficiency: Quick extraction and summarization of company data.
- Consistency: Standardized summaries for uniformity across use cases.
- Scalability: Ability to process multiple companies without additional manual effort.

What this workflow does
The workflow performs the following steps:
1. Input Acquisition: Receives a company's name or LinkedIn URL as input.
2. Data Extraction: Utilizes Bright Data to scrape the company's LinkedIn profile.
3. Information Parsing: Processes the extracted HTML content to retrieve relevant company details.
4. Summarization: Employs Google Gemini to generate a concise company story.
5. Output Delivery: Sends the summarized content to a specified webhook or email address.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token (see the request sketch after this description).
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL by navigating to the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Input Variations: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- Data Points: Adjust the HTML Data Extractor node to retrieve additional details such as employee count, industry, or headquarters location.
- Summarization Style: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- Output Destinations: Configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
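As a hedged illustration of the scraping step, the request behind the Header Auth setup might look like the sketch below. The endpoint, body fields, and zone name follow Bright Data's Web Unlocker direct API as an assumption; verify them against your own account before relying on this:

```typescript
// Assumed sketch of the Web Unlocker request issued for the LinkedIn URL.
async function fetchLinkedInProfile(linkedinUrl: string): Promise<string> {
  const response = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      // The Web Unlocker token from the Header Auth credential described above.
      Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // assumed zone name
      url: linkedinUrl,
      format: "raw",         // return raw HTML for the HTML Data Extractor node
    }),
  });
  return response.text();
}
```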

By Ranjan Dailata
1047

Deploy Docker n8n, API backend for WHMCS/WISECP

Setting up the n8n workflow

Overview
The Docker n8n WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site.

Installation Steps

Install the Required Workflow on n8n
You have two options:
- Option 1: Use the Latest Version from the n8n Marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n.
- Option 2: Manual Installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).
Do not modify the following technical parameters: screen_left, screen_right.

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server. The main section allows you to add custom parameters to the server block in the proxy server configuration file. The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string (see the sketch after this description). All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
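Because the Bash scripts can return either JSON or a plain string, the workflow has to normalize their output before replying to the module. A small, assumed sketch of that normalization (the names and result shape are illustrative, not taken from the PUQcloud template):

```typescript
// Normalize stdout from an SSH-executed script into a single result shape.
interface ScriptResult { ok: boolean; data?: unknown; message?: string; }

function parseScriptOutput(stdout: string): ScriptResult {
  const trimmed = stdout.trim();
  try {
    return { ok: true, data: JSON.parse(trimmed) };       // structured JSON response
  } catch {
    return { ok: trimmed.length > 0, message: trimmed };  // plain-string response
  }
}
```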

By PUQcloud
745

Daily RAG research paper hub with arXiv, Gemini AI, and Notion

Fetch user-specific research papers from arXiv on a daily schedule, process and structure the data, and create or update entries in a Notion database, with support for data delivery.

- Paper Topic: single query keyword
- Update Frequency: daily updates, with fewer than 20 entries expected per day

Tools
- Platform: n8n, for end-to-end workflow configuration
- AI Model: Gemini-2.5-Flash, for daily paper summarization and data processing
- Database: Notion, with two tables: Daily Paper Summary and Paper Details
- Messaging: Feishu (IM bot notifications), Gmail (email notifications)

Data Retrieval: arXiv API
arXiv provides a public API that allows users to query research papers by topic or by predefined categories (see the arXiv API User Manual). Key notes:
- Response Format: the API returns data as a typical Atom response.
- Timezone & Update Frequency: the arXiv submission process operates on a 24-hour cycle. Newly submitted articles become available in the API only at midnight after they have been processed. Feeds are updated daily at midnight Eastern Standard Time (EST), so a single request per day is sufficient.
- Request Limits: the maximum number of results per call (max_results) is 30,000, and results must be retrieved in slices of at most 2,000 at a time, using the max_results and start query parameters.
- Time Format: the expected format is [YYYYMMDDTTTT+TO+YYYYMMDDTTTT], where TTTT is given in 24-hour time to the minute, in GMT.

Scheduled Task
- Execution Frequency: daily
- Execution Time: 6:00 AM
- Time Parameter Handling (JS): according to arXiv's update rules, the scheduled task should query the previous day's (T-1) submittedDate data (see the query-builder sketch after this description).

Data Extraction
Data cleaning rules (convert to standard JSON):
- Remove Header: keep only the 【entry】【/entry】 blocks representing paper items.
- Single Item: each 【entry】【/entry】 represents a single item.

Field Processing Rules
- 【id】【/id】 ➡️ id: extract content. Example: 【id】http://arxiv.org/abs/2409.06062v1【/id】 → http://arxiv.org/abs/2409.06062v1
- 【updated】【/updated】 ➡️ updated: convert timestamp to yyyy-mm-dd hh:mm:ss
- 【published】【/published】 ➡️ published: convert timestamp to yyyy-mm-dd hh:mm:ss
- 【title】【/title】 ➡️ title: extract text content
- 【summary】【/summary】 ➡️ summary: keep text, remove line breaks
- 【author】【/author】 ➡️ author: combine all authors into an array, e.g. [ "Ernest Pusateri", "Anmol Walia" ] (for a Notion multi-select field)
- 【arxiv:comment】【/arxiv:comment】 ➡️ ignore / discard
- 【link type="text/html"】 ➡️ html_url: extract URL
- 【link type="application/pdf"】 ➡️ pdf_url: extract URL
- 【arxiv:primarycategory term="cs.CL"】 ➡️ primarycategory: extract the term value
- 【category】 ➡️ category: merge all 【category】 values into an array, e.g. [ "eess.AS", "cs.SD" ] (for a Notion multi-select field)
- Add empty fields: github, huggingface

Data Processing
Analyze and summarize paper data using AI, then standardize the output as JSON:
- Single-paper basic information analysis and enhancement
- Daily paper summary and multilingual translation

Data Storage: Notion Database
1. Create a corresponding database in Notion with the same predefined field names.
2. In Notion, create an integration under Integrations, grant it access to the database, and obtain the corresponding Secret Key.
3. Use the Notion "Create a database page" node to configure the field mapping and store the data.
Notes:
- "Create a database page" only adds new entries; existing data will not be updated.
- The updated and published timestamps of arXiv papers are in UTC.
- Notion single-select and multi-select fields only accept arrays; they do not automatically parse comma-separated strings, so format them as proper arrays.
- Notion does not accept null values, which causes a 400 error.

Data Delivery
Set up two channels for message delivery, email and IM, and define the message format and content.

Email: Gmail
Gmail OAuth 2.0 (see the official documentation). Configure your OAuth consent screen. Steps:
1. Enable the Gmail API
2. Create the OAuth consent screen
3. Create OAuth client credentials
4. Audience: add test users while in Testing status
Message format: HTML (Model: OpenAI GPT, used to design an HTML email template)

IM: Feishu (Lark)
Bots in groups: use bots in groups.
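As an illustration of the Time Parameter Handling step, the sketch below builds the previous day's (T-1) submittedDate window in GMT and assembles an arXiv API query URL. The search field (all:), result limit, and sort options are assumptions; only the date format and the T-1 rule come from the description above:

```typescript
// Build yesterday's submittedDate window (GMT) in the YYYYMMDDTTTT format arXiv expects.
function buildArxivQuery(keyword: string): string {
  const fmt = (d: Date) =>
    d.toISOString().slice(0, 16).replace(/[-T:]/g, ""); // e.g. "2024-09-06T00:00" -> "202409060000"
  const now = new Date();
  // Today at 00:00 GMT marks the end of the window; the start is 24 hours earlier.
  const end = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate()));
  const start = new Date(end.getTime() - 24 * 60 * 60 * 1000);
  const range = `submittedDate:[${fmt(start)}+TO+${fmt(end)}]`;
  return (
    `http://export.arxiv.org/api/query?search_query=all:${encodeURIComponent(keyword)}+AND+${range}` +
    `&start=0&max_results=200&sortBy=submittedDate&sortOrder=descending`
  );
}

// Example: buildArxivQuery("retrieval augmented generation") returns the URL the HTTP Request node would call.
```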

By dongou
696

Generate personalized promotion emails with GPT-5 and Gmail context analysis

Description
This sophisticated workflow automates personalized email campaigns for musicians and band managers. The system processes contact databases, analyzes previous Gmail conversation history, and uses AI to generate contextually appropriate emails tailored to different contact categories (venues, festivals, media, playlists).

Key Features
- Multi-category support: bookers, festivals, media, playlist curators
- Conversation context analysis: maintains relationship history from Gmail
- AI-powered personalization: custom prompts for each contact type
- Multi-language support: localized content and prompts
- Gmail integration: automatic draft creation with signatures
- Bulk processing: handle hundreds of contacts efficiently

Use Cases
- Album/single promotion campaigns
- Tour booking automation
- Festival submission management
- Playlist pitching campaigns
- Media outreach automation
- Venue relationship management

Perfect For
- Independent musicians and bands
- Music managers and booking agents
- Record labels with multiple artists
- PR agencies in the music industry
- Festival organizers (for artist outreach)

Required Setup
Credentials & APIs:
- Gmail OAuth2 (read messages + create drafts permissions)
- Google Sheets API (for AutomatizationHelper configuration)
- OpenAI API or a compatible LLM (for content generation)
Required Files:
- Contact Database (CSV): your venue/media/festival contacts
- AutomatizationHelper (Google Sheets): campaign configuration, prompts, links

Example Data
📁 Download Example Files
The folder contains:
- Sample contact database (CSV)
- AutomatizationHelper template (CSV + Google Sheets)
- Detailed setup instructions (README)

Data Structure (see the record-shape sketch after this description)
Contact Database fields:
- venue_name - Organization name
- category - booker/festival/media/playlisting
- email_1 - Primary email (required)
- email_2 - Secondary email (optional, for CC)
- active - active/inactive (for filtering)
- language - EN/DE/etc. (for localization)
AutomatizationHelper fields:
- LANGUAGE - Language code
- CATEGORY - Contact type
- LATEST_SINGLE - Spotify/Apple Music link
- LATEST_VIDEO - YouTube/Vimeo link
- EPK - Electronic Press Kit URL
- SIGNATURE - HTML email signature
- PROMPT - AI prompt for this category
- SUBJECT - Email subject template

Setup Instructions
Step 1: Prepare Your Data
- Download the example files from the Google Drive folder
- Replace the sample data with your real contacts and band information
- Customize the AI prompts for your communication style
- Update the signature with your contact details
Step 2: Configure APIs
- Set up Gmail OAuth2 credentials in n8n
- Configure Google Sheets API access
- Add your OpenAI API key for content generation
Step 3: Import & Configure Workflow
- Import the workflow JSON
- Connect your credentials to the respective nodes
- Update the Google Sheets URL in the AutomatizationHelper node
- Test with a small contact sample first
Step 4: Customize & Run
- Adjust the AI prompts in AutomatizationHelper for your style
- Update contact categories as needed
- Run the workflow - drafts will be created in Gmail for review

Tips
- Start small: test with 5-10 contacts first
- Review drafts: always review AI-generated content before sending
- Update regularly: keep your AutomatizationHelper current with your latest releases
- Monitor responses: track which prompts work best for different categories
- Language mixing: you can have contacts in multiple languages

Important Notes
- Emails are created as Gmail drafts - manual review is recommended
- Respects Gmail API rate limits automatically
- Conversation history analysis works best with existing email threads
- HTML signatures are added automatically (a workaround for a Gmail API limitation)
- Handles multiple languages simultaneously
- Maintains conversation context across campaigns
- Generates unique content for each contact

Template Author
Questions or need help with setup?
Email: xciklv@gmail.com
LinkedIn: https://www.linkedin.com/in/vaclavcikl/
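For quick reference, the two data structures above can be written as record shapes. The field names are taken from the listing; the value types and unions are assumptions:

```typescript
// One row of the contact database (CSV).
interface Contact {
  venue_name: string;                                          // organization name
  category: "booker" | "festival" | "media" | "playlisting";
  email_1: string;                                             // primary email (required)
  email_2?: string;                                            // secondary email (optional, used for CC)
  active: "active" | "inactive";                               // filtering flag
  language: string;                                            // e.g. "EN", "DE" (for localization)
}

// One row of the AutomatizationHelper sheet (per language and category).
interface AutomatizationHelperRow {
  LANGUAGE: string;       // language code
  CATEGORY: string;       // contact type
  LATEST_SINGLE: string;  // Spotify / Apple Music link
  LATEST_VIDEO: string;   // YouTube / Vimeo link
  EPK: string;            // Electronic Press Kit URL
  SIGNATURE: string;      // HTML email signature
  PROMPT: string;         // AI prompt for this category
  SUBJECT: string;        // email subject template
}
```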

By Václav Čikl
105

Generate AI search visibility datasets with Claude and GPT for tracking platforms

This n8n workflow automatically generates a comprehensive dataset of 50 AI search prompts tailored to a specific company. It combines AI-powered company research with structured prompt generation to create monitoring queries for tracking brand visibility across AI search engines such as ChatGPT, Perplexity, Claude, and Gemini. The dataset is ready for use and can be uploaded to any major AI search analytics platform (like ALLMO.ai) or used in your own model.

Who's it for & Use Cases
SEO/GEO marketing teams, growth managers, GTM engineers, and founders who want to:
- Create custom prompt datasets for visibility tracking platforms like ALLMO.ai
- Generate industry-specific search queries for AI model monitoring

How It Works
Phase 1: Company Research
- Start the workflow via the form and input your company name and website URL
- GPT-5 Mini with web search collects company information, including buyer personas, key features, and value proposition
Phase 2: Prompt Generation
- Claude Sonnet 4.5 generates and refines natural language prompts based on the Phase 1 findings
- English prompts are automatically translated into German
Phase 3: Export & Implementation
- Wait for processing (around 2-5 minutes total, depending on website complexity)
- English and German prompt sets are merged with metadata and structured into table format
- Download the CSV file containing 50 prompts ready for import into AI search monitoring systems (ALLMO.ai, etc.)

How to Set Up
Just enter your API credentials in the Claude and ChatGPT nodes.

How to Expand
- You can update the system prompts for the "prompt writing engine" to create more prompts.
- You can update or add more translations.

Output Structure
- 25 English prompts + 25 German prompts (can be changed flexibly)
- Each prompt is tagged with: company name, industry, category, language, and AI model for simple tracking (see the sketch after this description)
- Ready for direct import into any GEO/ALLMO visibility tracking system

Requirements
API Credentials:
- Anthropic API (Claude Sonnet 4.5)
- OpenAI API (GPT-5 Mini with web search capability)
Data Input:
- Valid company website URL (publicly accessible)
- Company name as it should appear in tracking
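A sketch of what one row of the exported CSV might look like, based on the tagging described in the output structure; the column names and exact types are assumptions:

```typescript
// Assumed shape of one row in the generated visibility dataset.
interface VisibilityPromptRow {
  prompt: string;        // the natural-language search query
  company: string;       // company name
  industry: string;      // derived during the company research phase
  category: string;      // prompt category from the generation phase
  language: "en" | "de"; // 25 prompts per language by default
  ai_model: string;      // target AI search engine, e.g. "ChatGPT", "Perplexity"
}

// Example row, purely illustrative:
const example: VisibilityPromptRow = {
  prompt: "What are the best tools for tracking brand visibility in AI search?",
  company: "Acme GmbH",
  industry: "Marketing software",
  category: "category comparison",
  language: "en",
  ai_model: "ChatGPT",
};
```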

By Niclas Aunin
44

Classify and convert GitHub issues to Jira tickets with OpenAI

AI-Powered GitHub Issue to Jira Ticket Automation
Bridge the gap between your development and project management workflows with this intelligent n8n template. This isn't just a simple sync; it uses an AI agent to analyze, classify, and intelligently route new GitHub issues into the correct Jira ticket type, saving you countless hours of manual triage.

---

🚀 Key Features
- AI-Powered Issue Classification: Leverages an AI Agent (powered by OpenAI) to analyze the content of a new GitHub issue and determine its type (e.g., bug, task, improvement).
- Intelligent Routing: Automatically creates the corresponding ticket type in Jira. Bugs in GitHub become Bugs in Jira; other issues become Tasks.
- Seamless Integration: Triggers instantly when a new issue is created in your specified GitHub repository.
- Rich Data Transfer: Migrates the issue title and body directly into the Jira ticket description for full context.
- Structured AI Output: Uses a structured output parser to ensure the AI's classification is reliable and consistent.
- Beginner-Friendly & Educational: The workflow is annotated with sticky notes explaining each step of the process, making it a great tool for learning how to use AI in n8n.

---

⚙️ How It Works
The workflow is designed for clarity and power, moving from issue creation to ticket generation in four automated steps.
1. Trigger: New GitHub Issue. The GitHub Trigger node constantly listens for new issues being created in your designated repository. Once an issue is opened, the workflow springs into action.
2. Analyze: AI Classification. The issue's title and body are passed to an AI Agent. Using a prompt designed for classification and an OpenAI Chat Model, the agent determines if the issue is a bug, task, or another category. A Structured Output Parser ensures the AI returns clean, usable JSON data (e.g., {"type": "bug"}); a sketch of this output and the routing it drives follows this description.
3. Route: Conditional Logic. An IF node checks the structured output from the AI agent. If the type is "bug," the workflow proceeds down the "bug" path. Otherwise, it follows the default path for "tasks."
4. Create: Jira Ticket Generation. Depending on the route, the corresponding Jira node is activated. A new ticket is created in your specified Jira project with the appropriate issue type (Bug or Task) and includes the full context from the original GitHub issue.

---

🛠️ Setup Steps & Credentials
To get this AI-powered workflow running, you'll need to configure a few credentials:
1. GitHub: Create a GitHub credential in n8n. In the GitHub Trigger node, select your credential and specify the owner (your GitHub username or organization) and the repository you want to monitor.
2. OpenAI: Obtain an API key from platform.openai.com. Create an OpenAI credential in n8n. In the OpenAI Chat Model node, select your newly created credential. You can also experiment with different models like gpt-4.1-mini for speed or gpt-4o for higher accuracy.
3. Jira: Create a Jira API token from your Atlassian account settings. Create a Jira credential in n8n using your email, API token, and Atlassian domain. In both the Create Jira Ticket (Bug) and Create Jira Ticket (Task) nodes, select your Jira credential and set your target project key.

---

💡 Customization & Learning
This workflow is a powerful starting point. Here are a few ways you can customize it and learn more:
- Expand Classification: Modify the AI Agent's prompt and the IF node to handle more issue types, like improvement, documentation, or feature-request.
- Add More Data: Enhance the Jira nodes to include labels, assignees, or priority levels based on the AI's output or the original GitHub issue's properties.
- Swap AI Models: Try different language models by replacing the OpenAI node with one for Google Gemini, Anthropic Claude, or others supported by n8n.
- Error Handling: Add a path for what to do if the AI fails to classify an issue, such as sending a notification to a Slack channel for manual review.

---

📋 Requirements
- An active n8n instance
- GitHub API credentials
- OpenAI API credentials
- Jira API credentials
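To make the classification-and-routing step concrete, here is a small sketch of the structured output and the IF-node logic it drives; only the {"type": "bug"} shape comes from the description above, the rest is illustrative:

```typescript
// The structured output parser guarantees JSON like {"type": "bug"}.
interface IssueClassification { type: string; } // e.g. "bug", "task", "improvement"

// Routing sketch equivalent to the IF node: only "bug" takes the Bug path,
// everything else falls through to the default Task path described above.
function jiraIssueType(c: IssueClassification): "Bug" | "Task" {
  return c.type === "bug" ? "Bug" : "Task";
}

// Example: jiraIssueType({ type: "bug" }) === "Bug"; jiraIssueType({ type: "docs" }) === "Task"
```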

By Issam AGGOUR
34