
Free AI image generator - n8n automation workflow with Gemini/ChatGPT

This n8n template demonstrates how to use AI to generate custom images from scratch - fully automated, prompt-driven, and ready to deploy at scale. Use cases are many: marketing visuals, character art, digital posters, storyboards, or even daily image generation for personal purposes.

How It Works
- The flow is triggered by a chat message in n8n or via Telegram.
- The default image size is 1080 x 1920 pixels. To use a different size, update the values in the "Fields - Set Values" node before triggering the workflow.
- The input is parsed into a clean, structured prompt using a multi-step transformation process.
- The AI Agent sends the final prompt to Google Gemini's image model for generation (you can also integrate OpenAI or other chat models).
- The raw image data returned by the AI Agent is passed through a series of Code nodes that convert it into a form suitable for previewing and downloading (see the sketch at the end of this description).
- An HTTP Request node then fetches the result so you can preview the image.
- You can send the image back to the chat in n8n or Telegram, or save it locally to disk.

How To Use
1. Download the workflow package.
2. Import the package into your n8n interface.
3. Set up credentials in the following nodes: "Telegram Trigger"; "AI Agent - Create Image From Prompt"; "Telegram Response" or "Save Image To Disk" (depending on your preference).
4. Activate either the "Telegram Response" or the "Save Image To Disk" node to specify where you want the image delivered.
5. Open the chat interface (via n8n or Telegram).
6. Type your image prompt or a detailed description and send it.
7. Wait a few seconds for the workflow to finish.
8. Check the result in your chosen destination.

Requirements
- Google Gemini account with image generation access.
- Telegram bot access and chat setup (optional).
- Connection to local storage (optional).

How To Customize
The default image size is 1080 x 1920 pixels and the default image model is "flux"; you can customize both values in the "Fields - Set Values" node. Supported image model options include "flux", "kontext", "turbo", and "gptimage". In the "AI Agent - Create Image From Prompt" node, you can also change the AI chat model. By default it uses Google Gemini, but you can easily replace it with OpenAI ChatGPT, Microsoft AI Copilot, or any other compatible provider.

Need Help?
Join our community on different platforms for support, inspiration, and tips from others.
Website: https://www.agentcircle.ai/
Etsy: https://www.etsy.com/shop/AgentCircle
Gumroad: http://agentcircle.gumroad.com/
Discord Global: https://discord.gg/d8SkCzKwnP
FB Page Global: https://www.facebook.com/agentcircle/
FB Group Global: https://www.facebook.com/groups/aiagentcircle/
X: https://x.com/agent_circle
YouTube: https://www.youtube.com/@agentcircle
LinkedIn: https://www.linkedin.com/company/agentcircle
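To make the decoding step concrete, here is a minimal sketch of what one of those Code nodes might look like. It assumes the AI Agent step outputs the generated image as a base64 string in a field named imageBase64; the field name and output shape are illustrative assumptions, not part of the template:

```javascript
// Minimal sketch of a decoding Code node (mode: Run Once for All Items).
// Assumption: the previous node outputs { imageBase64: "<base64 string>" }.
const results = [];

for (const item of items) {
  // Strip a possible data-URL prefix such as "data:image/png;base64,".
  const raw = String(item.json.imageBase64 || '').replace(/^data:image\/\w+;base64,/, '');

  // Decode the base64 payload into a Buffer.
  const buffer = Buffer.from(raw, 'base64');

  results.push({
    json: { bytes: buffer.length },
    // prepareBinaryData turns the Buffer into n8n binary data,
    // which makes the image previewable and downloadable.
    binary: { data: await this.helpers.prepareBinaryData(buffer, 'image.png') },
  });
}

return results;
```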

By Agent Circle

Save your workflows into a GitHub repository

Basics
Provides a mechanism to save all your workflows into a GitHub repository and path of your choosing. These can then be shared across your entire org and used to track changes (in case you make any sad 'oopsies').

Flow
- Obtains and creates a listing of currently configured workflows.
- Iterates through each workflow, looking at:
  - the GitHub source (if present), and
  - the actual workflow code (from n8n).
- The workflow code is sorted and compared for any changes (see the sketch at the end of this description).
- If changed (or new), the workflow is saved / archived into GitHub.

Configuration
Most of the configuration is done in the Globals node, which houses the repo details for the GitHub nodes. The only other dependency is that it looks for a credential named 'GitHub' by default; if you use something other than that precise wording, you will need to change the credential used on the respective nodes. We gave it 'Manage' rights, but that was only so it could override a requirement for checks to complete; most would probably only need 'Write' privileges.

Background
Well, we initially started using n8n just as a Kubernetes-based service with its DB running inside the pod. It worked great for getting to know n8n, and we just kept all our workflows and credentials listed in a readme. Fast forward about a year: we have migrated this into our 'production' toolsets and maintain a bunch of team workflows inside it (not company-wide, but LOTS of team fun). While trying to spin up a copy of our production RDS database, the *actual* production database was deleted, and in doing so AWS was nice enough to wipe our snapshots too! Yea! Thankfully it only took us a few hours to get everything back up and running thanks to this workflow, so I'm sharing it for everyone to benefit. We have used it to restore old workflows and changes, and now to test our full DR procedures! (Ok, I might have taken that a bit far.)
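The "sorted and compared" step can be pictured like this: a minimal sketch of a Code node that canonicalizes a workflow's JSON before diffing it against the copy stored in GitHub. The input field names (workflow, githubCopy) are illustrative assumptions:

```javascript
// Minimal sketch: canonicalize workflow JSON so that key ordering
// does not produce false-positive diffs against the GitHub copy.
function sortKeys(value) {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value && typeof value === 'object') {
    return Object.keys(value).sort().reduce((acc, key) => {
      acc[key] = sortKeys(value[key]);
      return acc;
    }, {});
  }
  return value;
}

// Assumed inputs: item.json.workflow (from n8n) and item.json.githubCopy (from GitHub).
return items.map((item) => {
  const local = JSON.stringify(sortKeys(item.json.workflow), null, 2);
  const remote = JSON.stringify(sortKeys(item.json.githubCopy ?? {}), null, 2);
  return { json: { ...item.json, changed: local !== remote, serialized: local } };
});
```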

By Brian Burnett

Create an automated customer support assistant with GPT-4o and GoHighLevel SMS

📌 AI Agent via GoHighLevel SMS with Website-Based Knowledgebase

This n8n workflow enables an AI agent to interact with users through GoHighLevel SMS, leveraging a knowledgebase dynamically built by scraping the company's website.

---

❓ Problem It Solves

Traditional customer support systems often require manual data entry and lack real-time updates from the company's website. This workflow automates the process by:
- Scraping the company's website at set intervals to update the knowledgebase.
- Integrating with GoHighLevel SMS to provide users with timely and accurate information.
- Utilizing AI to interpret user queries and fetch relevant information from the updated knowledgebase.

---

🧰 Pre-requisites

Before deploying this workflow, ensure you have:
- An active n8n instance (self-hosted or cloud).
- A valid OpenAI API key (or any compatible AI model).
- A Bright Data account with Web Unlocker set up.
- A GoHighLevel SMS LeadConnector account.
- A GoHighLevel Marketplace App configured with the necessary scopes.
- The n8n-nodes-brightdata community node installed for Bright Data integration (if self-hosted).

---

⚙️ Setup Instructions

1. Install the Bright Data community node in n8n (self-hosted instances):
   - Navigate to Settings → Community Nodes.
   - Click Install.
   - In the search bar, enter n8n-nodes-brightdata.
   - Select the node from the list and click Install.
   - Docs: https://docs.n8n.io/integrations/community-nodes/installation/gui-install

2. Configure Bright Data credentials:
   - Obtain your API key from Bright Data.
   - In n8n, go to Credentials → New and select HTTP Request.
   - Set authentication to Header Auth.
   - In Name, enter Authorization.
   - In Value, enter Bearer <your_api_key_from_Bright_Data>.
   - Save the credentials.

3. Configure OpenAI credentials:
   - Add your OpenAI API key to the relevant nodes.
   - If you want to use a different model, replace all OpenAI nodes accordingly.

4. Set up the GoHighLevel integration:
   a. Create a GoHighLevel Marketplace App:
      - Go to https://marketplace.gohighlevel.com
      - Click My Apps → Create App.
      - Set Distribution Type to Sub-Account.
      - Add the following scopes: locations.readonly, contacts.readonly, contacts.write, opportunities.readonly, opportunities.write, users.readonly, conversations/message.readonly, conversations/message.write.
      - Add your n8n OAuth Redirect URL as a redirect URI in the app settings.
      - Save and copy the Client ID and Client Secret.
   b. Configure GoHighLevel credentials in n8n:
      - Go to Credentials → New and choose OAuth2 API.
      - Input the Client ID and Client Secret.
      - Authorization URL: https://auth.gohighlevel.com/oauth/authorize
      - Access Token URL: https://auth.gohighlevel.com/oauth/token
      - Scopes: the same list as above.
      - Save and authenticate to complete setup.
      - Docs: https://docs.n8n.io/integrations/builtin/credentials/highlevel

---

🔄 Workflow Functionality (Summary)

- Scheduled Scraping: Scrapes the website at user-defined intervals. The Edit Fields node defines the homepage or site to scrape; a Bright Data node (self-hosted) or an HTTP Request node (cloud users) performs the scraping (see the sketch at the end of this description).
- Knowledgebase Update: The scraped content is stored or indexed.
- GoHighLevel SMS: Incoming user queries are received through SMS.
- AI Processing: The AI matches queries to relevant content.
- Response Delivery: AI-generated answers are sent back via SMS.

---

🧩 Use Cases

- Customer Support Automation: Provide instant, accurate responses.
- Lead Qualification: Automatically answer potential customer inquiries.
- Internal Knowledge Distribution: Keep staff updated via SMS based on website info.

---

🛠️ Customization

- Scraping URLs: Adjust targets in the Edit Fields node.
- Model Swap: Replace the OpenAI nodes to use a different LLM.
- Format Response: Customize the output to match your tone or brand.
- Other Channels: Expand to include chat apps or email responses.
- Vector Databases: It is advisable to store the data in a third-party vector database service such as Pinecone or Supabase.
- Chat Memory Node: This workflow uses Redis as chat memory, but you can use n8n's built-in chat memory instead.

---

✅ Summary

This n8n workflow combines Bright Data's scraping tools and GoHighLevel's SMS interface with AI query handling to deliver a real-time, conversational support experience. Ideal for businesses that want to turn their website into a live knowledge source via SMS, this agent keeps itself updated, smart, and customer-ready.
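For cloud users taking the HTTP Request route, the scraping call is simple enough to sketch in a Code node. This is an illustrative example only: the endpoint, zone name, and body fields are assumptions that should be verified against Bright Data's current Web Unlocker documentation:

```javascript
// Illustrative sketch of a Web Unlocker style request. Endpoint, zone name,
// and body fields are assumptions - check Bright Data's docs before use.
const html = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.brightdata.com/request',
  headers: {
    Authorization: 'Bearer YOUR_BRIGHT_DATA_API_KEY', // use the Header Auth credential in production
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',        // placeholder zone name
    url: 'https://example.com',   // target site, normally taken from the Edit Fields node
    format: 'raw',                // ask for raw HTML
  }),
});

// Pass the raw HTML on to the knowledgebase-update step.
return [{ json: { html } }];
```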

By Cyril Nicko Gaspar

E-commerce product fine-tuning with Bright Data and OpenAI

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automates the process of scraping product data from e-commerce websites and using it to fine-tune a custom OpenAI GPT model for generating high-quality marketing copy and product descriptions.

Main Use Cases
- Fine-tune OpenAI models with real product data from hundreds of supported e-commerce websites for marketing content generation.
- Create custom AI models specialized in writing compelling product descriptions across different industries and platforms.
- Automate the entire pipeline from data collection to model training using Bright Data's extensive scraper library.
- Generate marketing copy using your custom-trained model via an interactive chat interface.

How it works
The workflow operates in two main phases, model training and model usage, organized into these stages:

Data Collection & Processing
- Manually triggered to start the fine-tuning process.
- Uses Bright Data's web scraper to extract product information from any supported e-commerce platform (Amazon, eBay, Shopify stores, Walmart, Target, and hundreds of other websites).
- Collects product titles, brands, features, descriptions, ratings, and availability status from your chosen platform.
- Easily customizable to scrape different websites by simply changing the dataset configuration and product URLs.

Training Data Preparation
- A Code node processes the scraped product data to create training examples in OpenAI's required JSONL format (see the sketch at the end of this description).
- For each product, it generates a complete training example with:
  - a system message defining the AI's role as a marketing assistant;
  - a user prompt containing specific product details (title, brand, features, original description snippet);
  - an assistant response providing an ideal marketing description template.
- Compiles all training examples into a single JSONL file ready for OpenAI fine-tuning.

Model Fine-Tuning
- Uploads the training file to OpenAI using the OpenAI File Upload node.
- Initiates a fine-tuning job via an HTTP Request to OpenAI's fine-tuning API, using GPT-4o-mini as the base model.
- The fine-tuning process runs on OpenAI's servers to create your custom model.

Interactive Chat Interface
- Provides a chat trigger that allows real-time interaction with your fine-tuned model.
- An AI Agent node connects to your custom-trained OpenAI model.
- Users can chat with the model to generate product descriptions, marketing copy, or other content based on the training.

Custom Model Integration
- The OpenAI Chat Model node is configured to use your specific fine-tuned model ID.
- Delivers responses trained on your product data for consistent, high-quality marketing content.

Summary Flow
Manual Trigger → Scrape E-commerce Products (Bright Data) → Process & Format Training Data (Code) → Upload Training File (OpenAI) → Start Fine-Tuning Job (HTTP Request) | Parallel: Chat Trigger → AI Agent → Custom Fine-Tuned Model Response

Benefits
- Fully automated pipeline from raw product data to trained AI model.
- Works with hundreds of different e-commerce websites through Bright Data's extensive scraper library.
- Creates specialized models trained on real e-commerce data for authentic marketing copy across various industries.
- Scalable solution that can be adapted to different product categories, niches, or websites.
- Interactive chat interface for immediate access to your custom-trained model.
- Cost-effective fine-tuning using OpenAI's most efficient model (GPT-4o-mini).
- Easily customizable with different websites, product URLs, training prompts, and model configurations.

Setup Requirements
- Bright Data API credentials for web scraping (supports hundreds of e-commerce websites).
- OpenAI API key with fine-tuning access.
- Replace placeholder credential IDs and model IDs with your actual values.
- Customize the product URLs list and Bright Data dataset for your specific website and use case.

The workflow can be adapted for any e-commerce platform supported by Bright Data's scraping infrastructure.
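As a companion to the Training Data Preparation stage, here is a minimal sketch of a Code node that turns scraped products into OpenAI chat fine-tuning examples (one {"messages": [...]} object per JSONL line). The input field names (title, brand, features, description, idealDescription) are assumptions about the scraper output, not guaranteed by the template:

```javascript
// Minimal sketch: build OpenAI chat fine-tuning examples (JSONL)
// from scraped products - one JSON object per line.
const lines = items.map(({ json: p }) => JSON.stringify({
  messages: [
    { role: 'system', content: 'You are a marketing assistant that writes compelling product descriptions.' },
    {
      role: 'user',
      content: `Write a product description.\nTitle: ${p.title}\nBrand: ${p.brand}\n` +
               `Features: ${(p.features || []).join(', ')}\n` +
               `Original snippet: ${String(p.description || '').slice(0, 200)}`,
    },
    // The "ideal" answer the model should learn to produce.
    { role: 'assistant', content: p.idealDescription || 'An ideal, on-brand marketing description goes here.' },
  ],
}));

// Hand the JSONL text to the next node (e.g. the OpenAI file upload).
return [{ json: { jsonl: lines.join('\n'), count: lines.length } }];
```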

By Daniel Shashko

Monitor SSL certificate expiry with Google Sheets and email alerts

🔒 SSL Certificate Monitoring & Expiry Alert with Spreadsheet [FREE APIs]

✅ What This Workflow Does

This n8n template automatically monitors the SSL certificates of websites listed in a Google Sheet and sends email alerts if any are expiring within 14 days. It helps you avoid downtime, security issues, and trust warnings due to expired certificates.

---

🧩 Key Features

- 📅 Weekly Automation: Runs every Monday at 7:00 AM (configurable).
- 📄 Google Sheets Integration: Fetches and updates data in a spreadsheet.
- 🔍 SSL Check via API: Uses ssl-checker.io to get certificate details.
- ⚠️ SSL Expiry Filter: Identifies certificates expiring within 14 days.
- 📧 Email Alerts: Sends notifications for certificates close to expiration.

---

📂 Input Spreadsheet Format

Your Google Sheet should have the following columns:

| No | Name         | Link                | SSL Issued On | SSL Expired On | SSL Status |
|----|--------------|---------------------|---------------|----------------|------------|
| 1  | Example Site | https://example.com | 2024-07-01    | 2025-07-01     | Valid      |
| 2  | My Blog      | https://myblog.org  | 2024-07-05    | 2024-07-20     | Expiring   |

Each row should include a valid website URL in the Link column.

---

🛠️ How It Works

1. Scheduled Trigger: Executes weekly (Monday 7:00 AM).
2. Fetch Website List: Reads all website entries from the Google Sheet.
3. Check SSL Certificates: Uses the ssl-checker.io API to retrieve certificate details for each website (see the sketch at the end of this description).
4. Update Spreadsheet: Writes the "Issued On" and "Expired On" fields back to the spreadsheet.
5. Evaluate SSL Expiry: Filters for certificates expiring within 14 days.
6. Check Condition: Determines whether to send alerts based on the filtered results.
7. Send Email Alert: Notifies via email if any certificates are expiring soon.

---

📬 Example Email Output

Subject: ⚠️ ALERT!! SSL EXPIRED

SSL certificates expiring soon:
- example.com (expires in 5 days)
- anotherdomain.net (expires in 3 days)

---

🧰 Setup Requirements

- A Google Sheet with the correct columns and website links.
- SMTP credentials to send alert emails.
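Here is a minimal sketch of the check-and-filter logic as a single Code node. The endpoint path and response fields (result.days_left, result.valid_till) are assumptions about ssl-checker.io's API and should be verified against its documentation:

```javascript
// Illustrative sketch of the SSL check and 14-day filter. Endpoint path and
// response field names are assumptions - verify against ssl-checker.io docs.
const expiring = [];

for (const item of items) {
  const host = new URL(item.json.Link).hostname; // "Link" column from the sheet
  const data = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://ssl-checker.io/api/v1/check/${host}`,
    json: true,
  });

  const daysLeft = data?.result?.days_left;
  if (typeof daysLeft === 'number' && daysLeft <= 14) {
    expiring.push({ json: { host, daysLeft, validTill: data.result.valid_till } });
  }
}

return expiring; // feeds the alert condition and the email node
```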

By Agus Narestha

Client onboarding automation: Tally Forms to Google Drive, Notion & Slack

📝 Automation: Instantly Onboard New Clients from Tally Form to Notion, Google Drive & Slack

This automation streamlines the client onboarding process by integrating Tally, Notion, Google Drive, and Slack. When a potential client submits a Tally form, the automation is triggered via a webhook, automatically handling all onboarding steps without manual intervention.

⚙️ How It Works – Step-by-Step

1. Form Submission Triggered: A new Tally form submission is received via a webhook.
2. Client Data Extraction: The automation extracts essential client details from the form (see the sketch at the end of this description), including:
   - Name
   - Email
   - Project Type
   - Budget
3. Google Drive Folder Creation: A dedicated Google Drive folder is generated using the client's name and project type for storing onboarding assets.
4. Notion Database Entry Creation: A new item is added to a specified Notion database, storing:
   - Client information
   - Project scope
   - Folder link
5. Slack Team Notification: A Slack message is sent to your designated team channel containing all onboarding details, ensuring the team is informed instantly.

✅ Pre-Conditions / Requirements

- A published Tally form collecting client data.
- A connected Google Drive account with folder creation permissions.
- An existing Notion database with columns for name, email, budget, etc.
- A Slack workspace with an active bot/token integrated with the automation tool.

🛠️ Notion Database Structure

Your Notion database should include at least the following fields:
- Name (Text)
- Email (Email)
- Project Type (Select)
- Budget (Select)
- Onboarding Folder Link (URL)

🧩 Customization Guidance

- You can modify the Google Drive folder naming convention to include a timestamp or custom ID.
- Adjust the Slack message formatting to include project-specific tags or mention specific team members.
- Extend the Notion entry to include more fields, such as project deadline or contact notes.
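Here is a minimal sketch of the extraction step as a Code node. Tally webhooks deliver answers as a list of fields; the exact payload shape (data.fields with label/value) is an assumption here and should be verified against a real test submission:

```javascript
// Minimal sketch: extract client details from a Tally webhook payload.
// Assumption: the Webhook node exposes the Tally body as body.data.fields,
// where each field has { label, value }.
const fields = $json.body?.data?.fields ?? [];

// Build a { label: value } map so answers can be looked up by field name.
const byLabel = Object.fromEntries(fields.map((f) => [f.label, f.value]));

return [{
  json: {
    name: byLabel['Name'],
    email: byLabel['Email'],
    projectType: byLabel['Project Type'],
    budget: byLabel['Budget'],
  },
}];
```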

By Muhammad Ahmad

Extract and verify book titles from bookshelf photos using GPT-4o and Google Books

Use Case
Analyze images with multiple subjects. In this use case, I have a bookshelf and am extracting and verifying book titles/authors from a bookshelf photo.

How it works:
1. Webhook: Receives an image URL from a front end in which a user can upload a picture. In this use case, it is an image of a bookshelf.
2. Edit Fields (Set): Saves the image in a consistent location so the AI can find it.
3. Analyze Image: The image is analyzed and titles are extracted from the book spines.
4. Code: Splits the extracted subjects into single items so each can be validated separately; each book becomes its own entity.
5. HTTP Request: Validates each subject by querying Google Books, in case only partial titles were found (see the sketch at the end of this description).
6. Edit Fields (Set): Tidies the result.
7. Code: Aggregates and deduplicates; titles and authors are combined into a list.
8. Respond to Webhook: Returns the list to the front end.

How to use:
Use with a frontend that can capture images and receive back the result. For this use case, Supabase was used to store the uploaded images so the image analyzer could reference them.
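The validation step can be sketched as a Code node hitting the public Google Books API. The input field name "title" is an illustrative assumption about what the split step produces:

```javascript
// Minimal sketch of the validation step: look up each extracted title in
// the public Google Books API and keep the closest match.
const results = [];

for (const item of items) {
  const q = encodeURIComponent(`intitle:${item.json.title}`);
  const data = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://www.googleapis.com/books/v1/volumes?q=${q}&maxResults=1`,
    json: true,
  });

  const info = data?.items?.[0]?.volumeInfo;
  results.push({
    json: {
      extracted: item.json.title,        // what the image analysis produced
      verifiedTitle: info?.title ?? null, // what Google Books thinks it is
      authors: info?.authors ?? [],
    },
  });
}

return results; // feeds the tidy/aggregate/deduplicate steps
```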

By Arlene Martin

Generate illustrated stories with GPT-4, DALL-E 3 and Firebase

Generate complete illustrated stories using AI. This workflow creates engaging narratives with custom DALL-E 3 images for each scene and saves everything to Firebase.

How it works
1. The user fills out a form with the story topic, language, art style, and number of scenes.
2. GPT-4 generates a complete story with scenes, characters, and image prompts (see the sketch at the end of this description).
3. DALL-E 3 creates unique illustrations for each scene.
4. Images are uploaded to Firebase Storage for permanent hosting.
5. The complete story data is saved to Firestore.
6. Returns JSON with the full story and image URLs.

Set up steps (10-15 minutes)
1. Add your OpenAI API credentials.
2. Add Google Service Account credentials for Firebase.
3. Update 3 variables in the Code nodes: OPENAI_API_KEY, FIREBASE_BUCKET, FIREBASE_PROJECT_ID.
4. Activate and test with the form.

Features
- 12 languages supported
- 10 art styles to choose from
- 1-12 scenes per story
- Automatic image hosting on Firebase
- Full story saved to Firestore database
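To make the data flow concrete, here is a sketch of the kind of story object such a workflow might pass between the GPT-4 step and the image/storage steps. The schema is an illustrative assumption, not the template's actual contract:

```javascript
// Illustrative shape of the story object flowing through the workflow.
// Field names are assumptions for illustration only.
const story = {
  topic: 'A fox who learns to fly',
  language: 'en',          // one of the 12 supported languages
  artStyle: 'watercolor',  // one of the 10 art styles
  scenes: [
    {
      index: 1,
      text: 'Once upon a time, a young fox stared at the birds overhead...',
      imagePrompt: 'Watercolor painting of a young fox watching birds at sunset',
      imageUrl: null, // filled in after the DALL-E 3 + Firebase Storage steps
    },
    // ...one entry per scene (1-12)
  ],
};

// After image generation, each scene.imageUrl points at Firebase Storage,
// and the whole object is written to a Firestore document.
console.log(JSON.stringify(story, null, 2));
```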

By Victor Manuel Lagunas Franco