Generate AI images from text with Fire Flux model on Replicate API
This workflow provides automated access to the Fire Flux AI model through the Replicate API. It eliminates manual interaction with the model and integrates image generation tasks directly into your n8n automation workflows.

Overview
This workflow handles the complete image generation process with the Fire Flux model: API authentication, parameter configuration, request processing, and result retrieval, with built-in error handling and retry logic for reliable automation. A minimal sketch of the underlying Replicate API call follows at the end of this listing.

Model description: an image generation model tailored for local development and personal use.

Key Capabilities
- High-quality image generation from text prompts
- Advanced AI-powered visual content creation
- Customizable image parameters and styles

Tools Used
- n8n: the automation platform that orchestrates the workflow
- Replicate API: access to the Fire/flux AI model
- Fire Flux: the core AI model for image generation
- Built-in error handling: automatic retry logic and comprehensive error management

How to Install
1. Import the workflow: download the .json file and import it into your n8n instance
2. Configure the Replicate API: add your Replicate API token to the 'Set API Token' node
3. Customize parameters: adjust the model parameters in the 'Set Image Parameters' node
4. Test the workflow: run it with your desired inputs
5. Integrate: connect this workflow to your existing automation pipelines

Use Cases
- Content creation: generate unique images for blogs, social media, and marketing materials
- Design prototyping: create visual concepts and mockups for design projects
- Art & creativity: produce artistic images for personal or commercial use
- Marketing materials: generate eye-catching visuals for campaigns and advertisements

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Replicate API access: https://replicate.com (sign up to access powerful AI models)

Tags: n8n, automation, ai, replicate, aiautomation, workflow, nocode, imagegeneration, aiart, texttoimage, visualcontent, aiimages, generativeart, flux, machinelearning, artificialintelligence, aitools, digitalart, contentcreation, productivity, innovation
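The sketch below shows the kind of HTTP call the workflow wraps, as standalone JavaScript (Node 18+). The "fire/flux" model slug is taken from this listing and may differ from the model's actual owner/name on replicate.com, so treat it as an assumption and verify before use.

```javascript
// Standalone sketch of a Replicate prediction request, assuming the model
// is published under the "fire/flux" slug (verify on replicate.com).
const REPLICATE_TOKEN = 'r8_...'; // your Replicate API token

async function generateImage(prompt) {
  // "Prefer: wait" asks Replicate to hold the connection until the
  // prediction finishes; long runs fall back to polling the prediction URL.
  const res = await fetch('https://api.replicate.com/v1/models/fire/flux/predictions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${REPLICATE_TOKEN}`,
      'Content-Type': 'application/json',
      Prefer: 'wait',
    },
    body: JSON.stringify({ input: { prompt } }),
  });
  if (!res.ok) throw new Error(`Replicate error ${res.status}: ${await res.text()}`);
  const prediction = await res.json();
  return prediction.output; // typically an array of image URLs
}

const urls = await generateImage('a watercolor fox in a snowy forest');
console.log(urls);
```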
Catch Mailchimp subscribe events
Companion workflow for the Mailchimp Trigger node documentation.
Automate meeting minutes distribution with Google Sheets and Gmail
Description
This workflow sends a summary of your meeting minutes via Gmail, directly from the notes stored in your Google Sheet.

Context
Taking notes during meetings is important, but sharing them with the team can be time-consuming. This workflow makes it simple: just write down your meeting minutes in a Google Sheet, and n8n will automatically send them by email after each meeting.

Who is this for?
Perfect for anyone who:
- Uses Google Sheets to keep track of meeting notes.
- Wants to automatically share minutes with teammates or stakeholders.
- Values speed, productivity, and automation.

Requirements
- Google account.
- Google Sheet with your meeting minutes. You will need to set up the required columns first: Topic, Status, Owner, Next Step.
- Gmail.

How it works
1. ⏰ The trigger starts after a new row is added to your Google Sheet.
2. 📑 The meeting minutes are retrieved from the sheet (a sketch of this step follows below).
3. 📨 Gmail automatically sends the minutes to the configured recipients.

Steps
🗒️ Use the sticky notes in the n8n canvas to:
- Add your Google credentials (Sheets + Gmail).
- Define your sheet and recipient email addresses.
- Test the workflow to check that the minutes are sent.

You’ll get this: an email containing your full meeting minutes, straight from your notes.

Tutorial video
Watch the YouTube tutorial video.

About me
I’m Yassin, a Project & Product Manager scaling tech products with data-driven project management. 📬 Feel free to connect with me on LinkedIn.
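As a minimal sketch, a Code node between the Sheets and Gmail nodes could turn the retrieved rows into an HTML email body. The column names match those listed above; the output field names ("subject", "html") are illustrative, not the template's exact configuration.

```javascript
// Code-node sketch: build an HTML table of meeting minutes from the sheet rows.
const rows = $input.all().map(item => item.json);

const tableRows = rows
  .map(r => `<tr><td>${r.Topic}</td><td>${r.Status}</td><td>${r.Owner}</td><td>${r['Next Step']}</td></tr>`)
  .join('');

const html = `
  <h2>Meeting minutes</h2>
  <table border="1" cellpadding="6">
    <tr><th>Topic</th><th>Status</th><th>Owner</th><th>Next Step</th></tr>
    ${tableRows}
  </table>`;

// Pass a single item on to the Gmail node.
return [{ json: { subject: `Meeting minutes - ${new Date().toLocaleDateString()}`, html } }];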
Automated email management with Gemini AI: Gmail summarization, labeling, and Notion/Sheets logging
AI Email Manager: Auto Summary, Labeling, and CRM Logging via n8n + Gemini

Overview
This workflow turns your Gmail inbox into a fully autonomous AI email agent that reads, summarizes, categorizes, and organizes emails in real time. Built with n8n, Google Gemini, Notion, and Google Sheets, it’s perfect for founders, freelancers, and agencies who receive a ton of emails daily and want to automate the triage process without losing control.

How It Works
1. Gmail Trigger — detects new incoming emails.
2. Process Email Data — extracts sender info, subject, and content in a clean, structured format.
3. AI Email Analyzer — uses Gemini AI to summarize the email and decide the most relevant label (e.g., Project Updates, Client Requests, Invoices). A sketch of this step follows below.
4. Create Gmail Label (if it doesn't exist) — dynamically creates a new label if the AI recommends one that doesn’t exist.
5. AI Agent + Add Label to Email — applies the correct Gmail label automatically using the message ID.
6. Log to Notion & Google Sheets — every processed email (summary, sender, date, label) is logged for tracking and analytics.

Who It’s For
- Entrepreneurs & founders — manage investor, client, and product-update emails automatically.
- Agencies & teams — classify and track client emails effortlessly across projects.
- Freelancers & consultants — get AI summaries and organize leads without manually labeling emails.
- Tech builders — anyone building AI automation tools and SaaS products around inbox management.
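A minimal sketch of the analyze-and-label step, assuming the Gemini node was prompted to return strict JSON. The output field name ("text"), the expected shape, and the example label list are assumptions, not the template's exact configuration.

```javascript
// Code-node sketch: parse Gemini's output and flag whether a new label is needed.
const raw = $input.first().json.text; // Gemini's text output from the previous node

let analysis;
try {
  analysis = JSON.parse(raw); // expected shape: { summary: string, label: string }
} catch {
  analysis = { summary: raw.slice(0, 200), label: 'Uncategorized' }; // fallback
}

// Compare against labels already fetched from Gmail earlier in the workflow.
const existingLabels = ['Project Updates', 'Client Requests', 'Invoices'];
const needsNewLabel = !existingLabels.includes(analysis.label);

return [{ json: { ...analysis, needsNewLabel } }];
```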
Personalized LinkedIn outreach with GPT-4o, PhantomBuster & Google Sheets
Overview
This sophisticated n8n workflow transforms raw LinkedIn leads into personalized, high-converting connection requests using GPT-4o and PhantomBuster automation. The system processes LinkedIn profile data, generates authentic icebreakers, and automatically sends connection requests twice daily, creating a hands-off lead-generation machine that maintains human authenticity while scaling outreach efforts.

Key Benefits
- 🤖 AI-Powered Personalization: generate unique, human-like icebreakers for every LinkedIn connection request using GPT-4o, ensuring each outreach feels personal and authentic rather than automated.
- ⚡ Automated Workflow Execution: run your entire lead-generation pipeline automatically twice daily (10 AM and 5 PM US time) with zero manual intervention required.
- 📊 Smart Data Management: seamlessly manage leads across multiple Google Sheets with automatic data cleanup, duplicate prevention, and organized lead tracking.
- 🎯 PhantomBuster Integration: leverage PhantomBuster's LinkedIn automation to send connection requests at scale while maintaining platform compliance.
- 📈 Scalable Processing: process leads in optimized batches of 10 to maintain quality while scaling outreach and staying within LinkedIn’s weekly connection-request limits.
- 📧 Real-Time Monitoring: receive email notifications whenever connection requests are sent, keeping you informed of campaign progress.
- 🔄 Continuous Operation: a self-maintaining system that processes new leads, cleans up completed tasks, and prepares for the next cycle automatically.
- 💼 Professional Template System: uses proven icebreaker templates that follow the format "Hey [name], loved seeing [personalized detail]. I'm also into [relevant connection], thought I'd connect."

How It Works
Phase 1: Lead Acquisition & Processing. The workflow begins with scheduled triggers that activate twice daily. Upon activation, the system first cleans up previously processed leads from the source Google Sheet to prevent duplicates. It then retrieves fresh LinkedIn profile data, including names, titles, company information, locations, and profile URLs.
Phase 2: AI-Powered Personalization Engine. Retrieved leads are processed in batches of 10 through a GPT-4o integration. The AI analyzes each LinkedIn profile and generates personalized icebreakers following the template structure above. The system is specifically prompted to paraphrase LinkedIn information rather than copy it directly, so messages feel human-written rather than automated.
Phase 3: Data Storage & Campaign Launch. Processed leads and their AI-generated icebreakers are stored in a dedicated Google Sheet for tracking and analysis. The system then aggregates all processed data and triggers a PhantomBuster agent that executes the actual LinkedIn connection requests using the personalized messages (see the launch-call sketch after this section).
Phase 4: Cleanup & Notification. After a successful campaign launch, the system removes processed leads from the source sheet, sends an email confirmation, and prepares for the next scheduled execution cycle.

Required Setup & Dependencies
Core integrations:
- Google Sheets API access with OAuth2 authentication
- OpenAI API key for GPT-4o access
- PhantomBuster account with API key and a configured LinkedIn agent
- Gmail account for notifications
Google Sheets structure:
- Source sheet: contains raw LinkedIn data (firstName, lastName, title, companyName, location, etc.)
- Destination sheet: stores processed leads with icebreakers and tracking information
PhantomBuster configuration:
- LinkedIn connection-request automation agent
- Proper agent ID configuration in the HTTP Request node
- Valid API key with sufficient credits

Business Use Cases
- Sales Development Representatives (SDRs): automate personalized outreach to potential clients while maintaining the human touch that drives connection acceptance rates.
- Recruitment agencies: scale candidate outreach with personalized messages that reference specific experience and skills from LinkedIn profiles.
- Business development: generate partnership and collaboration opportunities through targeted connection requests to industry leaders.
- Coaches & consultants: build professional networks by connecting with potential clients using AI-generated icebreakers that reference their specific challenges and opportunities.
- Marketing agencies: develop client relationships through personalized outreach that demonstrates understanding of their business and industry.

Revenue Potential
Direct lead generation:
- Process 20 leads daily (10 per execution × 2 runs)
- An average 25% connection acceptance rate ≈ 5 new connections daily
- Converting 10% of connections to qualified leads ≈ 15 qualified leads monthly
- Closing 20% of qualified leads at a $2,000 average deal size ≈ $6,000 monthly revenue
Agency services:
- Offer as a white-label service to clients at $500-1,500 monthly recurring revenue per client
- Manage 10-20 client accounts for $5,000-30,000 monthly recurring revenue
SaaS model:
- Package as a LinkedIn automation SaaS with tiered pricing ($49-299/month)
- Target 100+ subscribers for $5,000-30,000 monthly recurring revenue

Difficulty Level & Build Time
- Difficulty: intermediate to advanced
- Estimated build time: 4-6 hours
- Technical requirements: understanding of API integrations, Google Sheets operations, and basic workflow logic
Setup complexity:
- API key management and authentication setup
- Google Sheets structure creation and permission configuration
- PhantomBuster agent setup and testing
- AI prompt engineering for optimal icebreaker generation

Detailed Setup Steps
1. Google Sheets preparation — create two Google Sheets:
   - Source sheet: columns for firstName, lastName, location, title, companyName, titleDescription, linkedInProfileUrl
   - Destination sheet: all source columns plus id, photourl, icebreaker, and emailstatus fields
2. API credentials configuration:
   - OpenAI: generate an API key with GPT-4o access
   - Google Sheets: set up OAuth2 credentials in n8n
   - PhantomBuster: create an account, set up a LinkedIn connection agent, and obtain an API key
   - Gmail: configure OAuth2 for notification emails
3. PhantomBuster agent setup:
   - Create a LinkedIn connection-request automation agent
   - Configure it with proper message templates and targeting parameters
   - Test agent functionality and note the agent ID for the n8n configuration
4. Workflow import & configuration:
   - Import the provided n8n workflow JSON
   - Update all credential references to match your configured accounts
   - Modify the Google Sheet IDs in all relevant nodes
   - Update the PhantomBuster agent ID and API key in the HTTP Request node
5. AI prompt optimization:
   - Review and customize the GPT-4o prompt for your specific use case
   - Test icebreaker generation with sample data
   - Adjust tone and style parameters as needed
6. Schedule configuration:
   - Set appropriate trigger times based on your target timezone
   - Consider LinkedIn usage patterns for optimal engagement
7. Testing & validation:
   - Run the workflow manually with test data
   - Verify the Google Sheets integration and data flow
   - Test the PhantomBuster integration with a small batch
   - Confirm email notifications are working

Advanced Customization Options
- Enhanced AI personalization: integrate additional data sources (company websites, news articles) for richer context; add industry-specific icebreaker templates; implement A/B testing for message variations.
- CRM integration: connect to Salesforce, HubSpot, or Pipedrive for seamless lead management; add lead scoring based on profile analysis; implement automated follow-up sequences.
- Analytics & reporting: add a detailed tracking and analytics dashboard; implement conversion tracking from connection to closed deal; generate automated performance reports.
- Multi-platform expansion: extend to Twitter/X and Instagram outreach; add email-finder integration for multi-channel campaigns; implement unified contact management across platforms.
- Advanced filtering: add AI-powered lead qualification before outreach; implement company-size, industry, and role-based filtering; add sentiment analysis for optimal timing.

This workflow is a complete, production-ready solution that can immediately start generating leads and revenue while providing a foundation for advanced customization and scaling.
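The launch call made by the HTTP Request node might look like the standalone JavaScript sketch below, based on PhantomBuster's v2 agents/launch endpoint. The agent ID is hypothetical, and the `argument` payload shape depends entirely on the specific Phantom you configured, so treat it as an assumption.

```javascript
// Standalone sketch of launching a PhantomBuster agent with per-lead icebreakers.
const PHANTOMBUSTER_KEY = 'your-api-key';
const AGENT_ID = '1234567890'; // hypothetical agent ID

async function launchConnectionRequests(leads) {
  const res = await fetch('https://api.phantombuster.com/api/v2/agents/launch', {
    method: 'POST',
    headers: {
      'X-Phantombuster-Key-1': PHANTOMBUSTER_KEY,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      id: AGENT_ID,
      // Each lead carries its AI-generated icebreaker as the invite message;
      // the exact argument keys depend on your Phantom's configuration.
      argument: {
        profiles: leads.map(l => ({ url: l.linkedInProfileUrl, message: l.icebreaker })),
      },
    }),
  });
  if (!res.ok) throw new Error(`PhantomBuster launch failed: ${res.status}`);
  return res.json(); // contains the containerId of the launched run
}
```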
Automate GitHub trending data collection with FireCrawl, GPT and Supabase
GitHub Trending to Supabase (Daily, Weekly, Monthly)

Who is this for?
This workflow is for developers, researchers, founders, and data analysts who want a historical dataset of GitHub Trending repositories without manual scraping. It’s ideal for building dashboards, newsletters, or trend analytics on top of a clean Supabase table.

What problem is this workflow solving?
Checking GitHub Trending by hand (daily/weekly/monthly) is repetitive and error-prone. This workflow automates collection, parsing, and storage so you can reliably track changes over time and query them from Supabase.

What this workflow does
- Scrapes GitHub Trending across daily, weekly, and monthly timeframes using FireCrawl.
- Extracts per-project fields: name, url, description, language, stars.
- Adds a type dimension (daily / weekly / monthly) to each row (see the mapping sketch below).
- Inserts structured results into a Supabase table for long-term storage.

Setup
1. Ensure you have an n8n instance (Cloud or self-hosted).
2. Create credentials:
   - FireCrawl API credential (no hardcoded keys in nodes).
   - Supabase credential (URL + Service Role / insert-capable key).
3. Prepare a Supabase table (example):

```sql
CREATE TABLE public.githubtrending (
  id bigint GENERATED ALWAYS AS IDENTITY NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT now(),
  data_date date DEFAULT now(),
  url text,
  project_id text,
  project_desc text,
  code_language text,
  stars bigint DEFAULT '0'::bigint,
  type text,
  CONSTRAINT githubtrending_pkey PRIMARY KEY (id)
);
```

4. Import this workflow JSON into n8n.
5. Run once to validate, then schedule (e.g., daily at 08:00).
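A minimal sketch of a Code node that maps FireCrawl's extracted projects into rows matching the table above. The input shape (items carrying a `projects` array with these field names) is an assumption; adapt the paths to your actual FireCrawl extraction schema.

```javascript
// Code-node sketch: flatten extracted projects into Supabase-ready rows.
const timeframe = 'daily'; // set per branch: 'daily' | 'weekly' | 'monthly'

return $input.all().flatMap(item =>
  (item.json.projects ?? []).map(p => ({
    json: {
      project_id: p.name,              // e.g. "owner/repo"
      url: p.url,
      project_desc: p.description ?? '',
      code_language: p.language ?? 'unknown',
      stars: Number(String(p.stars ?? '0').replace(/,/g, '')), // "12,345" -> 12345
      type: timeframe,
    },
  }))
);
```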
Real-time sales quote creation in Odoo via Telegram with Google Gemini AI
Overview
This template connects Telegram with Odoo so your sales team can create sales quotes and check product availability in real time — just by sending chat messages. It’s designed for sales representatives, distributors, and small business owners who want to manage quotes and product information quickly without logging into Odoo.

⚙️ How It Works
Once configured, this workflow listens to your Telegram bot for incoming messages. Based on the message text, it performs different actions in Odoo (a routing sketch follows below):

Product queries — sales reps can ask about products directly in Telegram:
- “What’s the price of Product B?”
- “How many units of Product A are available?”
The workflow fetches real-time data from Odoo and replies instantly.

Sales quote creation — sales reps can also create new sales quotes by typing messages like:
- “My customer Amazon, his email address is abc@amazon.com, wants to buy 10 pcs of Product A and 15 pcs of Product B.”
The workflow extracts the relevant details, creates a sales quote in Odoo, and sends a confirmation back in Telegram.

🧰 Setup Instructions
1. Create a Telegram bot: go to @BotFather on Telegram, create a new bot, and copy the API token.
2. Prepare Odoo: enable the Sales and Product modules, generate an API key from your Odoo user account, and note your Odoo URL (e.g., https://yourcompany.odoo.com).
3. Import the workflow: open your n8n instance (self-hosted or cloud), click Import Workflow, and upload the provided JSON file.
4. Add credentials: configure your Telegram credentials (bot token) and your Odoo credentials (base URL + API key).
5. Activate the workflow: set it to active to start listening for Telegram messages, then send a sample message to your bot to test.

🧠 Use Cases
- Sales reps capturing orders in the field
- SMEs managing customer inquiries directly from Telegram
- Real-time price and stock lookups without opening Odoo
- Automation of repetitive sales-quote tasks

🎛️ Customization Options
This workflow can be easily adapted to your business needs:
- Change the trigger platform: replace Telegram with WhatsApp, Slack, or Discord using the respective n8n nodes.
- Extend data fields: add fields like delivery date, salesperson, or payment terms.
- Auto-confirm orders: add a node to automatically confirm a sales quote once approved.

✅ Requirements
- Odoo v14 or later (with the Sales module enabled)
- Telegram bot token
- n8n instance (Cloud or self-hosted)

💬 Example Prompts
Product query:
- “What’s the price of Product B?”
- “How many units of Product A are available?”
Order entry:
- “My customer Amazon, his email address is abc@amazon.com, wants to buy 10 pcs of Product A and 15 pcs of Product B.”
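To illustrate the routing idea, here is a minimal Code-node sketch that classifies an incoming Telegram message before it reaches the Gemini extraction step. In the actual template this decision is made by the AI node; the keyword rules below are purely illustrative.

```javascript
// Code-node sketch: tag each Telegram message with an intent for downstream routing.
const text = $input.first().json.message?.text ?? '';

function classify(msg) {
  const m = msg.toLowerCase();
  if (/\bwants to buy\b|\border\b|\bquote\b/.test(m)) return 'create_quote';
  if (/\bprice\b|\bhow many\b|\bavailable\b|\bstock\b/.test(m)) return 'product_query';
  return 'unknown';
}

return [{ json: { text, intent: classify(text) } }];
```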
AI-powered document search with Oracle and ONNX embeddings for recruiting
How it works
1. Create a user for performing hybrid search.
2. Clear existing data, if present.
3. Add documents into the table.
4. Create a hybrid index.
5. Run a semantic search on the Documents table for "prioritize teamwork and leadership experience".
6. Run a hybrid search on the Documents table for the text entered in the chat interface.

Setup Steps
1. Download the ONNX model allMiniLML12v2augmented.zip.
2. Extract the ZIP file on the database server into a directory, for example /opt/oracle/onnx. After extraction, the folder contents should look like:

```bash
bash-4.4$ pwd
/opt/oracle/onnx
bash-4.4$ ls
allMiniLML12_v2.onnx
```

3. Connect as SYSDBA and create the DBA user:

```sql
-- Create DBA user
CREATE USER app_admin IDENTIFIED BY "StrongPassword123"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;

-- Grant privileges
GRANT DBA TO app_admin;
GRANT CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE TO app_admin;
```

4. Create the n8n Oracle DB credentials:
   - hybridsearchuser → for hybrid search operations
   - dbadocuser → for DBA setup (user and tablespace creation)

Run the workflow
- Click the manual trigger: it displays pure semantic-search results.
- Enter search text in the chat interface: it displays results for combined vector and keyword search.

Note
The workflow currently creates the hybrid search user, docuser, with the password visible in plain text inside the n8n Execute SQL node. For better security, consider performing the user creation manually outside n8n. An Oracle 23ai or 26ai database is required.

Reference
Hybrid Search End-to-End Example
Generate AI-powered markdown posts from workflow JSON with Gemini & LlamaIndex
This AI-powered workflow transforms n8n workflow JSON files into publication-ready, SEO-optimized markdown posts for the n8n community. Simply upload your workflow's JSON, and let Google Gemini 2.5 Pro, guided by a LlamaIndex-powered knowledge base of best practices, automatically generate compelling content.

Why Use This Workflow?
- Time savings: reduces the time to create a detailed workflow post from over an hour of manual writing to under 2 minutes.
- Cost reduction: eliminates the need for separate AI content subscriptions or outsourcing content-creation tasks.
- Error prevention: enforces content quality and structural consistency by using a knowledge base of n8n's official guidelines, minimizing formatting errors.

Ideal For
- n8n workflow creators: quickly document and share creations on the community platform without the tedious, time-consuming writing process.
- Developer advocates: standardize and accelerate the production of technical tutorials and workflow showcases.
- Content & marketing teams: streamline the content pipeline for n8n-related blog posts, tutorials, and community engagement initiatives.

How It Works
1. Trigger: the process starts when you upload an n8n workflow JSON file via a simple web form.
2. Data extraction: the workflow automatically extracts the JSON content from the uploaded file (see the sketch after this section).
3. Intelligence layer: an advanced AI agent, powered by Google Gemini 2.5 Pro, analyzes the structure, nodes, and metadata of your workflow.
4. Knowledge retrieval: the agent consults a specialized, in-memory knowledge base built from n8n's content guidelines. This knowledge base is created by parsing documents with LlamaIndex and refined with a Cohere Reranker for maximum accuracy.
5. Content generation: the AI agent synthesizes the technical details from your JSON with the best practices from the knowledge base to write a complete, benefit-driven markdown post.
6. Output & delivery: the final, polished markdown content is generated as the workflow's output, ready to be copied and pasted into the n8n community platform.

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Google Gemini API key | Essential | Powers the core AI content generation |
| LlamaIndex Cloud API key | Essential | Parses documents for the knowledge base |
| Cohere API key | Optional | Improves knowledge base search results |
| Google Drive account | Optional | For automatically updating the knowledge base from a Google Doc |

Installation Steps
1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - Google Gemini: in the "GEmini 2.5 pro" node, create and add your Google Gemini API credential.
   - LlamaIndex: in the three HTTP Request nodes named "Parse Document...", "Monitor Document...", and "Retrieve Parsed...", create an HTTP Header Auth credential. The header name is Authorization and the value is Bearer YOURLLAMAINDEXAPIKEY.
   - Cohere (optional): in the "Reranker Cohere" node, create and add your Cohere API credential.
   - Google Drive (optional): if you plan to auto-update the knowledge base, configure Google Drive OAuth2 credentials for the "Knowledge Base Updated Trigger" and "Download Knowledge Document" nodes.
3. Update environment-specific values: to use the knowledge base auto-update feature, go to the "Knowledge Base Updated Trigger" node and select the Google Drive file containing your content guidelines.
4. Customize settings: the primary system prompt in the "n8ncreator" agent node can be modified to adjust the tone, style, or structure of the generated content.
5. Test execution: run the workflow manually and use the form to upload a sample n8n workflow JSON file to verify that all connections work correctly.

Technical Details

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Form Trigger | Initiates the workflow via a file upload. | Set the "Input Json Workflow" field to required. |
| Langchain Agent | Orchestrates the entire content-creation process. | The system prompt contains all instructions for the AI. |
| ChatGoogleGemini | Provides the core generative AI capabilities. | Select your Gemini model of choice (e.g., gemini-2.5-pro). |
| VectorStoreInMemory | Acts as the agent's knowledge base tool. | Configured to use embeddings from a Google Gemini model. |
| HTTPRequest | Interacts with the LlamaIndex API to parse documents. | Set up with the LlamaIndex API endpoint and authentication. |

Customization Options

Basic adjustments:
- Change AI model: replace the ChatGoogleGemini node with another LLM node (e.g., OpenAI, Anthropic) to use a different provider.
- Adjust system prompt: modify the prompt in the "n8ncreator" node to tailor the output for different platforms (e.g., blog, internal wiki) or change the writing style.

Advanced enhancements:
- Automated publishing: connect the output of the "n8ncreator" node to a Ghost, WordPress, or GitHub node to automatically publish the generated post.
- Add web search: equip the Langchain Agent with a web-search tool so it can fetch live information about new n8n nodes or services.
- Batch processing: replace the Form Trigger with a Read Binary Files node to process an entire folder of workflow JSON files in a single run.

Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|---------------------|-------------------|
| Execution time | ~1 minute per run | Largely dependent on the Gemini API response time. |
| API calls | 1 LLM call per post | Knowledge base updates trigger LlamaIndex/Google calls separately. |
| Error handling | Built-in retry logic for document parsing | Add an error workflow path after the "n8ncreator" node to handle AI generation failures. |

Troubleshooting

Common issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| AI output is generic or incomplete | The input JSON file is invalid or lacks key information (e.g., no node names). | Ensure you are uploading a valid, exported n8n workflow JSON. Verify the workflow has been saved with descriptive node names. |
| LlamaIndex parsing fails | The LlamaIndex API key is incorrect or the source document is inaccessible. | Double-check your LlamaIndex API credential. Ensure the Google Doc sharing settings allow access. |
| Credential error | API keys are missing or incorrect for Gemini, LlamaIndex, or Cohere. | Go to the specified nodes and verify that the correct credentials have been created and selected. |

Created by: khaisa Studio
Category: AI
Tags: AI, Content Generation, Google Gemini, LlamaIndex, Automation
Need custom workflows? Contact us.
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
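A minimal sketch of the data-extraction idea: given the uploaded workflow JSON as a string (e.g., from an extract-from-file step), summarize its nodes for the agent prompt. The input field name ("data") and the summary shape are assumptions, not the template's exact configuration.

```javascript
// Code-node sketch: build a structured summary of the uploaded workflow JSON.
const workflow = JSON.parse($input.first().json.data);

const summary = {
  name: workflow.name,
  nodeCount: (workflow.nodes ?? []).length,
  nodeTypes: [...new Set((workflow.nodes ?? []).map(n => n.type))],
  hasCredentials: (workflow.nodes ?? []).some(n => n.credentials),
};

// The agent receives both the raw JSON and this structured summary.
return [{ json: { summary, raw: JSON.stringify(workflow) } }];
```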
Daily Solana news tracker with GPT-4.1-mini weekly summaries in Google Sheets
Automated Solana News Tracker with AI-Powered Weekly Summaries

Never miss important Solana ecosystem updates again. This production-ready workflow automatically scrapes crypto news daily, intelligently filters duplicates, stores everything in Google Sheets, and generates AI-powered weekly summaries every Monday—completely hands-free.

🎯 What It Does
This intelligent automation runs on autopilot to keep you informed about Solana developments without manual monitoring. Every day at 8 AM PT, it fetches the latest Solana news from CryptoPanic, checks for duplicates against your existing database, and stores only new articles in Google Sheets. On Mondays, it takes an extra step: reading all accumulated articles from the past week and using GPT-4.1-mini to generate a concise, factual summary of key developments and investor takeaways.

- Daily news collection: automatically fetches the latest Solana articles from the CryptoPanic API
- Smart duplicate detection: compares incoming articles against the existing database to prevent redundancy (see the dedupe sketch after this section)
- Data validation: filters out incomplete articles to ensure data quality
- Organized storage: maintains a clean Google Sheets database with timestamps and descriptions
- Weekly AI summaries: analyzes accumulated news every Monday and generates 2-3 sentence insights
- Historical archive: builds a searchable database of both raw articles and weekly summaries

💼 Perfect For
Crypto traders tracking market-moving news • SOL investors monitoring ecosystem growth • Blockchain researchers building historical datasets • Content creators sourcing newsletter material • Portfolio managers needing daily briefings • Anyone wanting Solana updates without information overload

🔧 How It Works
The workflow operates in two distinct modes based on the day of the week. During the daily collection phase (Tuesday-Sunday), it runs at 8 AM PT, fetches the latest Solana news from CryptoPanic, formats the data to extract titles, descriptions, and timestamps, checks each article against your Google Sheets database to identify duplicates, filters out any articles that already exist or have missing data, and appends only valid new articles to your "Raw Data" sheet. On Mondays, the workflow performs all daily tasks plus an additional summarization step: after storing new articles, it retrieves all accumulated news from the "Raw Data" sheet, aggregates all article descriptions into a single text block, sends this consolidated information to GPT-4.1-mini with instructions to create a factual, spartan-toned summary highlighting key investor takeaways, and saves the AI-generated summary with a timestamp to the "Weekly Summary" sheet for historical reference.

✨ Key Features
- Schedule-based execution: runs automatically at 8 AM PT every day without manual intervention
- Intelligent deduplication: title-based matching prevents storing the same article multiple times
- Data quality control: validates required fields before storage to maintain a clean dataset
- Dual-sheet architecture: separate sheets for raw articles and weekly summaries for easy access
- Cost-effective AI: uses GPT-4.1-mini (~$0.001 per summary) for extremely low operating costs
- Scalable storage: Google Sheets handles thousands of articles on the free tier
- Customizable cryptocurrency: easily adapt to track Bitcoin, Ethereum, or any supported coin
- Flexible scheduling: modify the trigger time and summary frequency to match your needs

📋 Requirements
- CryptoPanic account with a free API key (register at cryptopanic.com)
- Google Sheets spreadsheet with two sheets: "Raw Data" (columns: date, title, descripton, summary) and "Weekly Summary" (columns: Date, Summary)
- OpenAI API key for GPT-4.1-mini access (~$0.05/month cost)
- n8n Cloud or self-hosted instance with the Schedule Trigger enabled

⚡ Quick Setup
1. Register for a free CryptoPanic API key and replace [your token] in the "Get Solana News" HTTP Request node URL.
2. Create a new Google Spreadsheet with two sheets: one named "Raw Data" with columns for date, title, descripton (note the typo in the template), and summary; another named "Weekly Summary" with columns for Date and Summary.
3. Connect your Google Sheets OAuth2 credential to all Google Sheets nodes in the workflow.
4. Add your OpenAI API credential to the "Summarize News" node.
5. Test the workflow manually to ensure it fetches and stores news correctly.
6. Activate the workflow to enable daily automatic execution.

🚨 Please note that you cannot get news in real time with a free CryptoPanic API; you'll get news that's current as of yesterday. Consider their pro plan or another platform for real-time news scraping.

🎁 What You Get
Complete end-to-end automation with concise sticky-note documentation at each workflow stage, pre-configured duplicate-detection logic, AI summarization with investor-focused prompts optimized for factual analysis without hype, a dual-sheet Google Sheets structure for raw data and summaries, a flexible schedule trigger you can adjust to any timezone, example data in pinned format showing expected API responses, customization guides for different cryptocurrencies and summary frequencies, and a troubleshooting checklist for common setup issues.

💰 Expected Costs & Performance
The CryptoPanic API is free with reasonable rate limits for personal use. OpenAI GPT-4.1-mini costs approximately $0.001 per summary, totaling about $0.05 per month for weekly summaries. The workflow typically processes 20-50 articles daily and generates one summary weekly from 140-350 accumulated articles. Daily executions complete in 5-10 seconds, while Monday runs with AI summarization take 15-20 seconds. Google Sheets provides free storage for up to 5 million cells, easily handling years of news data.

🔄 Customization Ideas
Track different cryptocurrencies by changing the currencies parameter (btc, eth, ada, doge, etc.). Adjust the schedule trigger to run at different times matching your timezone. Modify the Monday check condition to generate summaries on different days or multiple times per week. Connect Slack, Discord, or Email nodes to receive instant notifications when summaries are generated. Edit the AI prompt to change tone, detail level, or focus on specific aspects like price action, development updates, or partnerships. Add conditional logic to send alerts only when certain keywords appear in the news (like "hack," "partnership," or "upgrade").
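A minimal sketch of the duplicate-detection and Monday-check logic described above. The referenced node name ("Read Raw Data") and the column names are assumptions; the template's actual wiring may differ.

```javascript
// Code-node sketch: title-based dedupe, data validation, and Monday detection.
const existing = new Set(
  $('Read Raw Data').all().map(item => item.json.title?.trim().toLowerCase())
);

// Keep only complete, previously unseen articles.
const fresh = $input.all().filter(item => {
  const { title, description } = item.json;
  if (!title || !description) return false;           // data validation
  return !existing.has(title.trim().toLowerCase());   // title-based dedupe
});

// getDay() returns 1 for Monday; used to branch into the summarization path.
const isMonday = new Date().getDay() === 1;

return fresh.map(item => ({ json: { ...item.json, isMonday } }));
```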
🛠️ ProfitWell tool MCP server 💪 both operations
A complete MCP server exposing all ProfitWell Tool operations to AI agents. Zero configuration needed - both operations are pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
- MCP Trigger: serves as your server endpoint for AI agent requests
- Tool Nodes: pre-configured for every ProfitWell Tool operation
- AI Expressions: automatically populate parameters via $fromAI() placeholders (see the example below)
- Native Integration: uses the official n8n ProfitWell Tool node with full error handling

📋 Available Operations (2 total)
Every possible ProfitWell Tool operation is included:
🔧 Company (1 operation)
- Get settings for your company
🔧 Metric (1 operation)
- Get a metric

🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options
Response format: native ProfitWell Tool API responses with full data structure.
Error handling: built-in n8n error management and retry logic.

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- Claude Desktop: add the MCP server URL to its configuration
- Custom AI apps: use the MCP URL as a tool endpoint
- Other n8n workflows: call MCP tools from any workflow
- API integration: direct HTTP calls to MCP endpoints

✨ Benefits
- Complete coverage: every ProfitWell Tool operation available
- Zero setup: no parameter mapping or configuration needed
- AI-ready: built-in $fromAI() expressions for all parameters
- Production-ready: native n8n error handling and logging
- Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
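For illustration, an $fromAI() placeholder in a tool node's parameter field might look like the line below. The parameter name "metric_id" is hypothetical; use whatever the ProfitWell metric operation actually expects.

```
{{ $fromAI('metric_id', 'The ProfitWell metric to retrieve', 'string') }}
```

At runtime, the connected AI agent fills in the value from the user's request, which is why no manual parameter mapping is needed.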
Control AI agent tool access with Port RBAC and Slack mentions
RBAC for AI agents with n8n and Port

This workflow implements role-based access control for AI agent tools using Port as the single source of truth for permissions. Different users get access to different tools based on their roles, without needing a separate permission database. For example, developers might have access to PagerDuty and AWS S3, while support staff only gets Wikipedia and a calculator. The workflow checks each user's permissions in Port before letting the agent use any tools.

For the full guide with blueprint setup and detailed configuration, see "RBAC for AI Agents with n8n and Port" in the Port documentation.

How it works
The n8n workflow orchestrates the following steps (a sketch of the filtering step follows below):
1. Slack trigger — listens for @mentions and extracts the user ID from the message.
2. Get user profile — fetches the user's Slack profile to get their email address.
3. Port authentication — requests an access token from the Port API using client credentials.
4. Permission lookup — queries Port for the user entity (by email) and reads their allowed_tools array.
5. Unknown user check — if the user doesn't exist in Port, sends an error message and stops.
6. Permission filtering — the "Check permissions" node compares each connected tool against allowed_tools and replaces unauthorized ones with a stub that returns "You are not authorized to use this tool."
7. AI agent — runs with only the permitted tools, using GPT-4 and chat memory.
8. Response — posts the agent output back to the Slack channel.

Setup
- [ ] Connect your Slack account and set the channel ID in the trigger node
- [ ] Add your OpenAI API key
- [ ] Register for free on Port.io
- [ ] Create the rbacUser blueprint in Port (see the full guide for blueprint setup)
- [ ] Add user entities using email as the identifier
- [ ] Replace YOURPORTCLIENTID and YOURPORTCLIENTSECRET in the "Get Port access token" node
- [ ] Connect credentials for any tools you want to use (PagerDuty, AWS, etc.)
- [ ] Update the channel ID in the Slack nodes
- [ ] Invite the bot to your Slack channel
- [ ] You should be good to go!

Prerequisites
- You have a Port account and have completed the onboarding process.
- You have a working n8n instance (self-hosted) with LangChain nodes available.
- Slack workspace with bot permissions to receive mentions and post messages.
- OpenAI API key for the LangChain agent.
- Port client ID and secret for API authentication.
- (Optional) PagerDuty, AWS, or other service credentials for tools you want to control.

⚠️ This template is intended for self-hosted instances only.
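A minimal sketch of the "Check permissions" idea: filter a tool list against the allowed_tools array fetched from Port. The tool names, field paths, and stub helpers are illustrative; the actual node swaps unauthorized tools for a stub before the agent runs.

```javascript
// Code-node sketch: gate the agent's tools using Port's allowed_tools array.
const allowedTools = $input.first().json.allowed_tools ?? [];

// Stand-ins for the real tool integrations (PagerDuty, S3, Wikipedia, ...).
const connectedTools = [
  { name: 'pagerduty', run: () => 'PagerDuty result (stub)' },
  { name: 'aws_s3', run: () => 'S3 result (stub)' },
  { name: 'wikipedia', run: () => 'Wikipedia result (stub)' },
];

// Replace anything the user may not use with a fixed refusal message,
// mirroring the behavior described in step 6 above.
const gatedTools = connectedTools.map(tool =>
  allowedTools.includes(tool.name)
    ? tool
    : { ...tool, run: () => 'You are not authorized to use this tool.' }
);

return [{ json: { toolNames: gatedTools.map(t => t.name) } }];
```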