Generate logos and images with consistent visual styles using Imagen 3.0
This n8n template uses AI to generate logos or images that mimic the visual style of other logos or images. The model used to generate the images is Google's Imagen 3.0. With this template, you can automate design and marketing tasks such as creating variants of existing designs, remixing existing assets to validate different styles, and exploring a range of designs that would previously have been too expensive and time-consuming.

How it works
A form trigger captures the source image to reference styles from and a prompt for the target image to generate.
The source image is passed to Gemini 2.0, which analyses it and extracts its visual style and tone as a detailed description.
This visual style description is then combined with the user's initial target image prompt.
The final prompt is given to Imagen 3.0 to generate the images.
A quick webpage is put together with the generated images to present back to the user.
If the user provided an email address, a copy of this HTML page is sent to them.

How to use
Ensure the workflow is live to share the form publicly.
The source image must be accessible to your n8n instance - either a public image on the internet or one within your network.
For best results, select a source image with a strong visual identity, as this allows the LLM to describe it more accurately.
For your prompt, refer to the Imagen prompt guide found here: https://ai.google.dev/gemini-api/docs/image-generation#imagen-prompt-guide

Requirements
Gemini for the LLM and Imagen model.
Cloudinary for the image CDN.
Gmail for email sending.

Customising this workflow
Feel free to swap any of these out for tools and services you prefer. Want to fully automate? Switch the form trigger for a webhook trigger!
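As a rough sketch of the final generation step (not copied from the workflow itself; the model name, endpoint, and sampleCount are assumptions based on the public Gemini API image-generation docs), the combined style description and target prompt end up in a request along these lines:

```bash
# Hypothetical example of the Imagen request the workflow assembles (adjust model/endpoint to your setup)
curl -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/imagen-3.0-generate-002:predict" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "instances": [{
      "prompt": "A minimalist fox logo for a coffee brand. Style reference: bold geometric shapes, flat two-tone orange and charcoal palette, generous negative space, no gradients."
    }],
    "parameters": { "sampleCount": 4 }
  }'
```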
Natural language database queries with dual-agent AI & PostgreSQL integration
AI Database Assistant with Smart Queries & PostgreSQL Integration

Description:
🚀 Transform Your Database into an Intelligent AI Assistant
This workflow creates a smart database assistant that safely handles natural language queries without crashing your system. It features a dual-agent architecture with built-in query limits and PostgreSQL optimization - perfect for commercial applications!

✅ Ideal for:
SaaS developers building database search features 🔍
Database administrators providing safe AI access 🛡️
Business teams needing user-friendly data queries 📊
Anyone wanting ChatGPT-like database interaction 🤖

🔧 How It Works
1️⃣ User asks a question - "Show me the top 10 popular products"
2️⃣ Main AI Agent - Interprets the request and enforces safety limits
3️⃣ SQL Sub-Agent - Generates precise PostgreSQL queries
4️⃣ Database executes - Returns formatted, limited results safely

⚡ Setup Instructions
1️⃣ Prepare Your Database
Ensure PostgreSQL is accessible from n8n
Note your table structure and column names
Set up database connection credentials
2️⃣ Customize the Templates
Replace [YOUR_TABLE_NAME] with your actual table name
Update [YOUR_FIELDS] with your column names
Modify the examples to match your use case
Important: Keep all LIMIT clauses intact!
3️⃣ Configure the Agents
Copy the Main Agent system message to your primary AI node
Copy the Sub-Agent system message to your SQL generator node
Connect the sub-workflow between both agents
4️⃣ Test & Deploy
Test with sample queries like "Show me 5 recent items"
Verify that query limits work (max 50 results)
Deploy and monitor performance

🎯 Why Use This Workflow?
✔️ System Protection - Built-in limits prevent crashes from large queries
✔️ Natural Language - Users ask questions in plain English
✔️ Commercial Ready - Generic templates work with any database
✔️ Dual-Agent Safety - Smart interpretation + precise SQL generation
✔️ PostgreSQL Optimized - Handles complex schemas and data types

🚨 Critical Features
Query Limits: Default 10, maximum 50 results (can be modified)
Error Prevention: No unlimited data retrieval
Smart Routing: Natural language → Safe SQL → Formatted results
Customizable: Works with any PostgreSQL database schema

🔗 Start building your AI database assistant today - safe, smart, and scalable!
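For illustration only (the key name, table, and column names below are hypothetical placeholders, not part of the template), the SQL Sub-Agent's reply for "Show me the top 10 popular products" might look something like this, with the LIMIT clause always preserved:

```json
{
  "sql": "SELECT product_name, category, sales_count FROM your_table_name ORDER BY sales_count DESC LIMIT 10;"
}
```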
B2B lead qualification
This automated n8n workflow qualifies B2B leads via voice calls using the VAPI API and integrates the collected data into Google Sheets. It triggers when a new lead’s phone number is added, streamlining lead qualification and data capture.

What is VAPI?
VAPI is an API service that enables voice call automation, used here to qualify leads by capturing structured data through interactive calls.

Good to Know
VAPI API calls may incur costs based on usage; check VAPI pricing for details.
Ensure Google Sheets access is properly authorized to avoid data issues.
Use credential fields for the HTTP Request node's Bearer token instead of hardcoding it.
Use a placeholder Google Sheet document ID (e.g., "your-sheet-id-placeholder") to avoid leaking private data.

How It Works
Detect when a new phone number is added for a lead using the New Lead Captured node.
Use the Receive Lead Details from VAPI node to capture structured data (name, company, challenges) via a POST request (see the sample payload below).
Trigger an outbound VAPI call to qualify the lead with the Initiate Voice Call (VAPI) node.
Store the collected data in a Google Sheet using the Save Qualified Lead to CRM Sheet node.
Send a success response back to VAPI with the Send Call Data Acknowledgement node.

How to Use
Import the workflow into n8n.
Configure VAPI API credentials in the HTTP Request node using credential fields.
Set up Google Sheets API access and authorize the app.
Create a Google Sheet with the following columns: Name (text), Company (text), Challenges (text), Date (date).
Test with a sample lead phone number to verify call initiation and data storage.
Adjust the workflow as needed and retest.

Requirements
VAPI API credentials
Google Sheets API access

Customizing This Workflow
Modify the Receive Lead Details from VAPI node to capture additional lead fields or adjust call scripts for specific industries.
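For reference, the structured payload that VAPI posts back to the Receive Lead Details from VAPI webhook might look roughly like this (the field names are illustrative; the actual keys depend on how your VAPI assistant's structured data extraction is configured):

```json
{
  "phone": "+15551234567",
  "name": "Jane Doe",
  "company": "Acme Corp",
  "challenges": "Manual lead follow-up is slow and inconsistent"
}
```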
Get real-time NFT insights with OpenSea AI-powered NFT agent tool
Instantly access NFT metadata, collections, traits, contracts, and ownership details from OpenSea! This workflow integrates GPT-4o-mini AI, the OpenSea API, and n8n automation to provide structured NFT data for traders, collectors, and investors.

How It Works
Receives user queries via Telegram, webhooks, or another connected interface.
Determines the correct API tool based on the request (e.g., user profile, NFT metadata, contract details).
Retrieves data from the OpenSea API (requires an API key).
Processes the information using an AI-powered NFT insights engine.
Returns structured insights in an easy-to-read format for quick decision-making.

What You Can Do with This Agent
🔹 Retrieve OpenSea User Profiles → Get user bio, links, and profile info.
🔹 Fetch NFT Collection Details → Get collection metadata, traits, fees, and contract info.
🔹 Analyze NFT Metadata → Retrieve ownership, rarity, and trait-based pricing.
🔹 Monitor NFTs Owned by a Wallet → Track all NFTs under a specific account.
🔹 Retrieve Smart Contract Data → Get blockchain contract details for an NFT collection.
🔹 Identify Valuable Traits → Fetch NFT trait insights and rarity scores.

Example Queries You Can Use
✅ "Get OpenSea profile for 0xA5f49655E6814d9262fb656d92f17D7874d5Ac7E."
✅ "Retrieve details for the 'Azuki' NFT collection."
✅ "Fetch metadata for NFT 5678 from 'Bored Ape Yacht Club'."
✅ "Show all NFTs owned by 0x123... on Ethereum."
✅ "Get contract details for NFT collection 'CloneX'."

Available API Tools & Endpoints
1️⃣ Get OpenSea Account Profile → /api/v2/accounts/{address_or_username} (Retrieve user bio, links, and image)
2️⃣ Get NFT Collection Details → /api/v2/collections/{collection_slug} (Get collection-wide metadata)
3️⃣ Get NFT Metadata → /api/v2/chain/{chain}/contract/{address}/nfts/{identifier} (Retrieve individual NFT details)
4️⃣ Get NFTs Owned by Account → /api/v2/chain/{chain}/account/{address}/nfts (List all NFTs owned by a wallet)
5️⃣ Get NFTs by Collection → /api/v2/collection/{collection_slug}/nfts (Retrieve all NFTs from a specific collection)
6️⃣ Get NFTs by Contract → /api/v2/chain/{chain}/contract/{address}/nfts (Retrieve all NFTs under a contract)
7️⃣ Get Payment Token Details → /api/v2/chain/{chain}/payment_token/{address} (Fetch info on payment tokens used in NFT transactions)
8️⃣ Get NFT Traits → /api/v2/traits/{collection_slug} (Retrieve collection-wide trait data)

Set Up Steps
Get an OpenSea API Key: Sign up at OpenSea API and request an API key.
Configure API Credentials in n8n: Add your OpenSea API key under HTTP Header Authentication.
Connect the Workflow to Telegram, Slack, or a Database (Optional): Use n8n integrations to send alerts to Telegram or Slack, or save results to Google Sheets, Notion, etc.
Deploy and Test: Send a query (e.g., "Azuki latest sales") and receive instant NFT market insights!

Unlock powerful NFT analytics with AI-powered OpenSea insights - start now!
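For example, the HTTP Request tool behind endpoint 2️⃣ reduces to a call like this ('azuki' is just a sample collection slug; the base URL and x-api-key header follow OpenSea's v2 API conventions):

```bash
# Fetch collection-wide metadata for a sample collection
curl -s "https://api.opensea.io/api/v2/collections/azuki" \
  -H "accept: application/json" \
  -H "x-api-key: $OPENSEA_API_KEY"
```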
Real-time X post monitoring & auto-categorization with Airtop
Monitor X for Relevant Posts

Use Case
This automation monitors X (formerly Twitter) search pages in real time and extracts high-signal posts that match your categories of interest. It’s ideal for community engagement, lead discovery, thought leadership tracking, or competitive analysis.

What This Automation Does
Given a search URL and a list of categories, it:
Logs into X using Airtop
Opens the specified search URL
Scrolls through the results
Extracts up to 10 valid, English-language posts
Filters and classifies each post by category (or marks it as [NA] if unrelated)
Returns the structured results as JSON

Input parameters:
airtop_profile — An Airtop browser profile authenticated on X
x_url — X search URL (e.g., https://x.com/search?q=ai agents&f=live)
relevant_categories — Text-based list of categories to classify posts (e.g., "Web automation use cases", "Thought leadership")

Output: A JSON array of posts, each with: writer, time, text, url, category (see the sample below).

How It Works
Trigger: This workflow is triggered by another workflow (e.g., a community engagement pipeline).
Input Setup: Accepts the Airtop profile, search URL, and categories to use for classification.
Session: Starts a browser session using the Airtop profile.
Window Navigation: Opens the provided X search URL.
Extraction: Scrapes up to 10 posts with /status/ in the URL and text in English.
Classification: Each post is labeled with a category if relevant, or [NA] otherwise.
Filtering: Discards [NA] posts.
Output: Returns the list of classified posts.

Setup Requirements
Airtop profile with an active X login.
Airtop API key connected in n8n.
List of category definitions to guide post classification (used in the prompt).

Next Steps
Feed into Engagement Workflows: Pass the results to workflows that reply, retweet, or track posts.
Use in Slack Alerts: Push classified posts into Slack channels for review and reaction.
Customize the Classifier: Refine the categorization logic to include sentiment or company mentions.

Read more about Monitoring X for Relevant Posts
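An illustrative (entirely made-up) output item might look like this:

```json
[
  {
    "writer": "@example_user",
    "time": "2025-05-01T14:32:00Z",
    "text": "We cut our QA reporting time in half by letting a browser agent fill in the test matrix for us.",
    "url": "https://x.com/example_user/status/1234567890123456789",
    "category": "Web automation use cases"
  }
]
```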
Generate student course schedules based on prerequisites with GPT and Google Sheets
Create a Fall 2025 course schedule for each student based on what they’ve already completed, catalog prerequisites, and term availability (Fall/Both). Reads students from Google Sheets → asks an AI agent to select exactly 5 courses (target 15–17 credits, no duplicates, prereqs enforced) → appends each student’s schedule to a schedule tab.

---

🧠 Summary
Trigger: Manual — “When clicking ‘Execute workflow’”
I/O: Google Sheets in → OpenAI decisioning → Google Sheets out
Ideal for: Registrars, advisors, degree-planning prototypes

---

✅ What this template does
Reads: StudentID, Name, Program, Year, CompletedCourses (pipe-separated CourseIDs) from Sheet1
Decides: AI Scheduling Agent chooses 5 courses per student following catalog rules and prerequisites
Writes: Appends StudentID + Schedule strings to the schedule worksheet
Credits target: 15–17 total per term

Catalog rules (enforced in the agent prompt):
Use Fall or Both courses for Fall 2025
Enforce AND prereqs (e.g., CS-102|CS-103 means both)
Priority: Major Core → Major Elective → Gen Ed (include Gen Ed if needed)
No duplicates or already-completed courses
Prefer 200-level progression when prereqs allow

---

⚙️ Setup (only 2 steps)
1) Connect Google Sheets (OAuth2)
In n8n → Credentials → New → Google Sheets (OAuth2), sign in and grant access
In the Google Sheets nodes, select your spreadsheet and these tabs:
Sheet1 (input students)
schedule (output)
> Example spreadsheet (replace with your own):
> - Input: .../edit#gid=0
> - Output: .../edit#gid=572766543
2) Connect OpenAI (API Key)
In n8n → Credentials → New → OpenAI API, paste your key
In the OpenAI Chat Model node, select that credential and a chat model (e.g., gpt-4o)

---

📥 Required input (Sheet1)
Columns: StudentID, Name, Program, Year, CompletedCourses
CompletedCourses: pipe-separated IDs (e.g., GEN-101|GEN-103|CS-101)
Program names should match those referenced in the embedded catalog (e.g., Computer Science BS, Business Administration BBA, etc.)

---

📤 Output (schedule tab)
Columns:
StudentID
Schedule → a selected course string (written one row per course after splitting)
(A sample of the agent's JSON output is shown further below.)

---

🧩 Nodes in this template
Manual Trigger → Get Student Data (Google Sheets) → Scheduling Agent (OpenAI) → Split Schedule → Set Fields → Clear sheet → Append Schedule (Google Sheets)

---

🛠 Configuration tips
If you rename tabs, update both the Get Student Data and Append Schedule nodes
Keep CompletedCourses consistent (use | as the delimiter)
To store the rationale as well, add a column to the output and map it from the agent’s JSON

---

🧪 Test quickly
1) Add 2–3 sample student rows with realistic CompletedCourses
2) Run the workflow and verify:
5 course rows per student in schedule
Course IDs respect prereqs & Fall/Both availability
Credits sum to ~15–17

---

🧯 Troubleshooting
Sheets OAuth error: Reconnect “Google Sheets (OAuth2)” and re-select the spreadsheet & tabs
Empty schedules: Ensure CompletedCourses uses | and that programs/courses align with the provided catalog names
Prereq violations: Check that students actually have all AND-prereqs in CompletedCourses
OpenAI errors (401/429): Verify API key, billing, and rate limits → retry with lower concurrency

---

🔒 Privacy & data handling
Student rows are sent to OpenAI for decisioning. Remove or mask any fields you don’t want shared.
Google Sheets retains input/output. Use spreadsheet sharing controls to limit access.
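As a hypothetical example (the student ID and course IDs below are made up and must come from your own catalog), the Scheduling Agent is expected to return something like this for one student before the Split Schedule step fans the courses out into rows:

```json
{
  "StudentID": "S1023",
  "Schedule": ["CS-201", "CS-210", "MATH-201", "GEN-201", "GEN-102"]
}
```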
---

💸 Cost & performance
OpenAI: Billed per token; cost scales with student count and prompt size
Google Sheets: Free within normal usage limits
Runtime: Typically seconds to a minute depending on rows and rate limits

---

🧱 Limitations & assumptions
Works for Fall 2025 only (as written). For Spring, update the availability rules in the agent prompt
Assumes the catalog in the agent system message is your source of truth
Assumes Program names match catalog variants (case/spacing matter for clarity)

---

🧩 Customization ideas
Add a Max Credits column to cap term credits per student
Include Rationale in the sheet for advisor review
Add a “Gen Ed needed?” flag per student to force at least one Gen Ed selection
Export to PDF or email the schedules to advisors/students

---

🧾 Version & maintenance
n8n version: Tested on recent n8n Cloud builds (2025)
Community nodes: Not required
Maintenance: Update the embedded catalog and offerings each term; keep prerequisites accurate

---

🗂 Tags & category
Category: Education / Student Information Systems
Tags: scheduling, registrar, google-sheets, openai, prerequisites, degree-planning, catalog, fall-term

---

🗒 Changelog
v1.0.0 — Initial release: Sheets in/out, Fall 2025 catalog rules, prereq enforcement, 5-course selection, credits target

---

📬 Contact
Need help customizing this (e.g., cohort logic, program-specific rules, adding rationale to the sheet, or emailing PDFs)?
📧 rbreen@ynteractive.com
🔗 Robert Breen — https://www.linkedin.com/in/robert-breen-29429625/
🌐 ynteractive.com — https://ynteractive.com
Publish HTML content with GitHub Gist and HTML preview
📌 Who’s it for
This subworkflow is designed for developers, AI engineers, or automation builders who generate dynamic HTML content in their workflows (e.g., reports, dashboards, emails) and want a simple way to host and share it via a clean URL, without spinning up infrastructure or uploading to a CMS. It’s especially useful when combined with AI agents that generate HTML content as part of a larger automated pipeline.

⚙️ What it does
This subworkflow:
Accepts raw HTML content as input.
Creates a new GitHub Gist with that content.
Returns the shareable Gist URL, which can then be sent via Slack, Telegram, email, etc.
The result is a lightweight, fast, and free way to publish AI-generated HTML (such as reports, articles, or formatted data outputs) to the web.

🛠️ How to set it up
Add this subworkflow to any parent workflow where HTML is generated.
Pass in a string of valid HTML via the html input parameter.
Configure the GitHub credentials in the HTTP node using an access token with the gist scope.

✅ Requirements
GitHub account and personal access token with gist permissions

🔧 How to customize
Change the filename (report.html) if your use case needs a different format or extension.
Add metadata to the Gist (e.g., description, tags).
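Under the hood, publishing the page amounts to a single call to GitHub's REST API. A minimal sketch of the equivalent request (the description and HTML content below are placeholders):

```bash
# Create a secret Gist containing the generated HTML
curl -X POST https://api.github.com/gists \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  -d '{
    "description": "AI-generated HTML report",
    "public": false,
    "files": {
      "report.html": { "content": "<html><body><h1>Weekly report</h1></body></html>" }
    }
  }'
```

The html_url field in the API response is the kind of shareable link the subworkflow returns to the parent workflow.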
Analyze Reddit content and comments for sentiment with Deepseek AI
Reddit Sentiment Analysis with AI-Powered Insights

Automatically analyze Reddit posts and comments to extract sentiment, emotional tone, and actionable community insights using AI. This n8n workflow combines Reddit's API with AI sentiment analysis to help community managers, researchers, and businesses understand public opinion and engagement patterns on Reddit. Get structured insights including sentiment scores, toxicity levels, trending concerns, and moderation recommendations.

Features
Comprehensive Sentiment Analysis: Categorizes content as Positive, Negative, or Neutral with confidence scores
Emotional Intelligence: Detects emotional tones like excitement, frustration, concern, or sarcasm
Content Categorization: Identifies discussion types (questions, complaints, praise, debates)
Toxicity Detection: Flags potentially harmful content with severity levels
Community Insights: Analyzes engagement quality and trending concerns
Actionable Intelligence: Provides moderation recommendations and response urgency levels
Batch Processing: Efficiently processes multiple posts and their comments
Structured JSON Output: Returns organized data ready for further analysis or integration

How It Works
The workflow follows a two-stage process:
Data Collection: Fetches recent posts from specified subreddits along with their comments
AI Analysis: Processes content through DeepSeek AI to generate comprehensive sentiment and contextual insights

Use Cases
Community Management: Monitor sentiment trends and identify posts requiring moderator attention
Brand Monitoring: Track public opinion about your products or services on Reddit
Market Research: Understand customer sentiment and concerns in relevant communities
Content Strategy: Identify what type of content resonates positively with your audience
Crisis Management: Quickly detect and respond to negative sentiment spikes

Required Credentials
Before setting up this workflow, you'll need the following credentials:

Reddit OAuth2 API
Go to Reddit App Preferences
Click "Create App" or "Create Another App"
Choose "web app" as the app type
Fill in the required fields:
Name: Your app name
Description: Brief description of your app
Redirect URI: http://localhost:8080/oauth/callback (or your n8n instance URL + /oauth/callback)
Note down your Client ID and Client Secret

OpenRouter API
Visit OpenRouter
Sign up for an account
Navigate to your API Keys section
Generate a new API key
Copy the API key for use in n8n

Step-by-Step Setup Instructions

Step 1: Import the Workflow
Copy the workflow JSON from this template
In your n8n instance, click the "+" button to create a new workflow
Select "Import from URL" or "Import from Clipboard"
Paste the workflow JSON and click "Import"

Step 2: Configure Reddit Credentials
Click on any Reddit node (e.g., "Get many posts")
In the credentials dropdown, click "Create New"
Select "Reddit OAuth2 API"
Enter your Reddit app credentials:
Client ID: Your Reddit app client ID
Client Secret: Your Reddit app client secret
Auth URI: https://www.reddit.com/api/v1/authorize
Access Token URI: https://www.reddit.com/api/v1/access_token
Click "Connect my account" and authorize the app
Save the credentials

Step 3: Configure OpenRouter Credentials
Click on the "OpenRouter Chat Model1" node
In the credentials dropdown, click "Create New"
Select "OpenRouter API"
Enter your OpenRouter API key
Save the credentials

Step 4: Test the Webhook
Click on the "Webhook" node
Copy the webhook URL (it will look like: https://your-n8n-instance.com/webhook/reddit-sentiment)
Test the webhook using a tool like Postman or curl with this sample payload:

```json
{
  "subreddit": "technology",
  "query": "AI",
  "limit": 5
}
```

Step 5: Customize the Analysis
Modify the Structured Data Generator prompt: Edit the prompt in the "Structured Data Generator" node to adjust the analysis criteria or output format
Change the AI model: In the "OpenRouter Chat Model1" node, you can switch to different models like anthropic/claude-3-haiku or openai/gpt-4 based on your preferences and budget
Adjust post limits: Modify the limit parameter in the "Get many posts" and "Get many comments in a post" nodes to control how much data you process

Usage Instructions

Making API Calls
Send a POST request to your webhook URL with the following parameters:
Required Parameters:
subreddit: The subreddit name (without r/)
limit: Number of posts to analyze (recommended: 5-15)
Optional Parameters:
query: Search term to filter posts

Example Request:

```bash
curl -X POST https://your-n8n-instance.com/webhook/reddit-sentiment \
  -H "Content-Type: application/json" \
  -d '{
    "subreddit": "CustomerService",
    "limit": 10
  }'
```

Understanding the Output
The workflow returns a JSON array with detailed analysis for each post:

```json
[
  {
    "sentiment_analysis": {
      "overall_sentiment": {
        "category": "Negative",
        "confidence_score": 8
      },
      "emotional_tone": ["frustrated", "concerned"],
      "intensity_level": "High"
    },
    "content_categorization": {
      "discussion_type": "Complaint",
      "key_themes": ["billing issues", "customer support"],
      "toxicity_level": {
        "level": "Low",
        "indicators": "No offensive language detected"
      }
    },
    "contextual_insights": {
      "community_engagement_quality": "Constructive",
      "potential_issues_flagged": ["service disruption"],
      "trending_concerns": ["response time", "resolution process"]
    },
    "actionable_intelligence": {
      "moderator_attention_needed": {
        "required": "Yes",
        "reason": "Customer complaint requiring company response"
      },
      "response_urgency": "High",
      "suggested_followup_actions": [
        "Escalate to customer service team",
        "Monitor for similar complaints"
      ]
    }
  }
]
```

Workflow Nodes Explanation

Data Collection Nodes
Webhook: Receives API requests with subreddit and analysis parameters
Get many posts: Fetches recent posts from the specified subreddit
Split Out: Processes individual posts for analysis
Get many comments in a post: Retrieves comments for each post

Processing Nodes
Loop Over Items: Manages batch processing of multiple posts
Sentiment Analyzer: Primary AI analysis node that processes content
Structured Data Generator: Formats AI output into structured JSON
Code: Parses and cleans the AI response
Respond to Webhook: Returns the final analysis results

Customization Options
Adjusting Analysis Depth
Modify the limit parameters to analyze more or fewer posts/comments
Update the AI prompts to focus on specific aspects (e.g., product mentions, competitor analysis)
Adding Data Storage
Connect database nodes to store analysis results for historical tracking
Add email notifications for high-priority findings
Integrating with Other Tools
Connect to Slack/Discord for real-time alerts
Link to Google Sheets for easy data visualization
Integrate with CRM systems for customer feedback tracking

Tips for Best Results
Choose Relevant Subreddits: Focus on communities where your target audience is active
Monitor Regularly: Set up scheduled executions to track sentiment trends over time
Customize Prompts: Tailor the AI prompts to your specific industry or use case
Respect Rate Limits: Reddit's API has rate limits, so avoid excessive requests
Review AI Output: Periodically check the AI analysis accuracy and adjust prompts as needed

Troubleshooting Common Issues

"Reddit API Authentication Failed"
Verify your Reddit app credentials are correct
Ensure your redirect URI matches your n8n instance
Check that your Reddit app is set as the "web app" type

"OpenRouter API Error"
Confirm your API key is valid and has sufficient credits
Check that the selected model is available
Verify your account has access to the chosen model

"Webhook Not Responding"
Ensure the workflow is activated
Check that the webhook URL is correct
Verify the request payload format matches the expected structure

"AI Analysis Returns Errors"
Review the prompt formatting in the Structured Data Generator
Check if the selected AI model supports the required features
Ensure the input data is not empty or malformed

Performance Considerations
Rate Limits: Reddit allows 60 requests per minute for OAuth applications
AI Costs: Monitor your OpenRouter usage to manage costs
Processing Time: Larger batches will take longer to process
Memory Usage: Consider n8n instance resources when processing large datasets

Contributing
This workflow can be extended and improved. Consider adding:
Support for multiple subreddits in a single request
Historical sentiment tracking and trend analysis
Integration with visualization tools
Custom classification models for industry-specific analysis

---

Ready to start analyzing Reddit sentiment? Import this workflow and start gaining valuable insights into online community discussions!