Website scam risk detector with GPT-4o and SerpAPI
What It Does
This workflow simplifies the task of determining whether a website is legitimate or potentially a scam. By submitting a URL through a form, the user kicks off a multi-agent evaluation. Four dedicated AI agents, each powered by GPT-4o and connected to SerpAPI, analyze different dimensions of the website: domain and technical details, search engine signals, product and pricing patterns, and on-site content. Their findings are then passed to a fifth AI agent, the Analyzer, powered by GPT-4o mini, which consolidates the data, scores the site on a scale of 1–10 for scam likelihood, and presents the findings in a clear, structured format for the user.

Who It's For
This workflow is ideal for anyone who needs to quickly and reliably assess the trustworthiness of a website. Whether you're a consumer double-checking a store before making a purchase, a small business owner validating supplier sites, a cybersecurity analyst conducting threat assessments, or a developer building fraud detection into your platform, this tool offers fast, AI-powered insights without the need for manual research or technical expertise. It's designed for both individuals and teams who value accurate, scalable scam detection.

How It Works
The process begins with a simple form submission where the user enters the URL of the website they want to investigate. Once submitted, the workflow activates four specialized AI agents, each powered by GPT-4o and connected to SerpAPI, to independently analyze the site from different angles:
- Agent 1 examines domain age, SSL certificates, and TLD trustworthiness.
- Agent 2 reviews search engine results, forum mentions, and public scam reports.
- Agent 3 analyzes product pricing patterns and brand authenticity.
- Agent 4 assesses on-site content quality, grammar, legitimacy of claims, and presence of business information.
Each agent returns its findings, which are aggregated and passed to a fifth AI agent, the Analyzer. This final agent, powered by GPT-4o mini, evaluates all the input, assigns a scam likelihood score from 1 to 10, and compiles a neatly formatted summary with organized insights and a disclaimer for context. A sketch of the aggregation step appears at the end of this description.

Setup
1. Obtain an OpenAI API key from platform.openai.com/api-keys.
2. Connect this key to the OpenAI Chat Model for all of the tool agents (Analyzer, Domain & Technical Details, Search Engine Signals, Product & Pricing Patterns, and Content Analysis).
3. Fund your OpenAI account. GPT-4o costs roughly $0.01 per run of the workflow.
4. Create a SerpAPI account at https://serpapi.com/users/sign_up and obtain a SerpAPI key.
5. Use this key to connect the SerpAPI tool for each of the four tool agents (Domain & Technical Details, Search Engine Signals, Product & Pricing Patterns, and Content Analysis).

Tip: SerpAPI allows 100 free searches each month, and this workflow uses roughly 5–15 SerpAPI searches per run. If you want to run the workflow more often than that, create multiple SerpAPI accounts, each with its own API key, and switch to another account's key once the current account's 100 free searches are used up.

Disclaimer
This tool is designed to assist in evaluating the potential risk of websites using AI-generated insights. The scam likelihood score and analysis provided are based on publicly available information and should not be considered a definitive or authoritative assessment. This tool does not guarantee the accuracy, safety, or legitimacy of any website. Users should perform their own due diligence and use independent judgment before engaging with any site. n8n, OpenAI, their affiliates, and the creators of this workflow are not responsible for any loss, damages, or consequences arising from the use of this tool or the actions taken based on its results.
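The aggregation step referenced above can be pictured as a small n8n Code node that gathers the four agents' outputs into a single item for the Analyzer. This is a minimal sketch only; the node names and the `output` / form field names are assumptions and must be adapted to the actual workflow.

```javascript
// Code node (Run Once for All Items) placed between the four agents and the Analyzer.
// Node names and field names below are illustrative assumptions.
const agentNodes = [
  'Domain & Technical Details',
  'Search Engine Signals',
  'Product & Pricing Patterns',
  'Content Analysis',
];

const findings = {};
for (const name of agentNodes) {
  // Each agent node is assumed to return its analysis in an `output` field.
  findings[name] = $(name)
    .all()
    .map((item) => item.json.output ?? '')
    .join('\n');
}

return [
  {
    json: {
      // 'Website URL' is a placeholder for whatever the form field is actually called.
      url: $('On form submission').first().json['Website URL'],
      findings,
    },
  },
];
```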
Flux dev image generation (Fal.ai) to Google Drive
This workflow automates AI-based image generation using the Fal.ai Flux API. Define custom prompts and image parameters, then generate, monitor, and save the output directly to Google Drive. Streamline your creative automation with ease and precision.

Who is this for?
This template is for content creators, developers, automation experts, and creative professionals looking to integrate AI-based image generation into their workflows. It's ideal for generating custom visuals with the Fal.ai Flux API and automating storage in Google Drive.

What problem is this workflow solving?
Manually generating AI-based images, checking their status, and saving the results can be tedious. This workflow automates the entire process, from requesting image generation and monitoring its progress to downloading the result and saving it directly to a Google Drive folder.

What this workflow does
- Sets custom image parameters: define the prompt, resolution, guidance scale, and steps for AI image generation.
- Sends a request to Fal.ai: initiates the image generation process using the Fal.ai Flux API.
- Monitors image status: checks for completion and waits if needed.
- Downloads the generated image: fetches the completed image once it is ready.
- Saves to Google Drive: automatically uploads the generated image to a specified Google Drive folder.

Setup
Prerequisites:
• Fal.ai API Key: obtain it from the Fal.ai platform and set it as the Authorization header in HTTP Header Auth credentials.
• Google Drive OAuth credentials: connect your Google Drive account in n8n.
Configuration:
• Update the "Edit Fields" node with your desired image parameters (see the sketch after this description):
• Prompt: describe the image (e.g., "Thai young woman net idol 25 yrs old, walking on the street").
• Width/Height: define the image resolution (default: 1024x768).
• Steps: number of inference steps (e.g., 30).
• Guidance Scale: controls image adherence to the prompt (e.g., 3.5).
• Set your Google Drive folder ID in the "Google Drive" node to save the image.
Run the workflow:
• Trigger the workflow manually to generate the image.
• The workflow waits, checks the status, and saves the final output seamlessly.

Customization
• Modify image parameters: adjust the prompt, resolution, steps, and guidance scale in the "Edit Fields" node.
• Change storage location: update the Google Drive node with a different folder ID.
• Add notifications: integrate an email or messaging node to alert you when the image is ready.
• Additional outputs: expand the workflow to send the generated image to Slack, Dropbox, or other platforms.

This workflow streamlines AI-based image generation and storage, offering flexibility and customization for creative automation.
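As a sketch of how the "Edit Fields" values could be turned into a request body for the Fal.ai HTTP Request node, the snippet below assumes payload field names such as `image_size`, `num_inference_steps`, and `guidance_scale`; these are assumptions and should be checked against the Flux endpoint you actually call.

```javascript
// Code node: build the Fal.ai request body from the "Edit Fields" parameters.
// All payload field names are assumptions; verify them against the Flux API docs.
const p = $('Edit Fields').first().json;

return [
  {
    json: {
      body: {
        prompt: p.prompt,
        image_size: {
          width: Number(p.width) || 1024,
          height: Number(p.height) || 768,
        },
        num_inference_steps: Number(p.steps) || 30,
        guidance_scale: Number(p.guidance_scale) || 3.5,
      },
    },
  },
];
```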
Create LinkedIn contributions with AI and notify users on Slack
This workflow automates the process of gathering LinkedIn advice articles, extracting their content, and generating unique contributions for each article using an AI model. The contributions are then posted to a Slack channel and stored in a NocoDB database for record-keeping. The workflow is triggered weekly so that new articles are continuously collected and responded to.

Who is this for?
This workflow is designed for professionals, marketers, and content creators looking to boost their LinkedIn presence by regularly engaging with LinkedIn advice articles. It's especially useful for those who want to be seen as a "thought leader" or "top voice" in their niche by contributing relevant and unique advice to trending topics.

What problem is this workflow solving?
Manually searching for relevant LinkedIn articles, reading through them, and crafting thoughtful contributions can be time-consuming. This workflow solves that by automating the process of finding new articles, extracting key content, and generating AI-powered contributions. It helps users stay consistently active on LinkedIn, contributing value to trending discussions.

What this workflow does
1. Triggers weekly: the workflow runs every Monday at 8:00 AM.
2. Searches Google for LinkedIn advice articles: uses a predefined Google search URL to find the latest LinkedIn advice articles based on the user's area of expertise.
3. Extracts LinkedIn article links: a Code node extracts all LinkedIn advice article links from the search results (see the sketch after this description).
4. Retrieves article content: for each article link, the workflow retrieves the HTML content and extracts the article title, topics, and existing contributions.
5. Generates AI-powered contributions: the workflow sends the extracted article content to an AI model, which generates unique, helpful advice for each topic within the article.
6. Posts to Slack & NocoDB: the AI-generated contributions, along with the article links, are posted to a designated Slack channel and stored in a NocoDB database for future reference.

Setup
- Google search URL: update the Google search URL with the relevant LinkedIn advice query for your field (e.g., site:linkedin.com/advice "marketing automation").
- Slack integration: connect your Slack account and specify the Slack channel where you want the contributions to be posted.
- NocoDB integration: set up your NocoDB project to store the generated contributions along with the article titles and links.

How to customize this workflow
- Change search terms: modify the Google search URL to focus on a different LinkedIn topic or expertise area.
- Adjust trigger frequency: the workflow runs weekly by default, but you can change the schedule trigger to run more or less often.
- Enhance contribution quality: customize the AI model's prompt to generate contributions that align with your brand voice or content strategy.

Workflow summary
This workflow helps users maintain a consistent presence on LinkedIn by automating the discovery of new advice articles and generating unique contributions using AI. It is ideal for professionals who want to engage with LinkedIn content regularly without spending too much time manually searching and drafting responses.
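The link-extraction Code node mentioned above could look roughly like the sketch below. The input field name (`data`) is an assumption and must match whatever the preceding HTTP Request node actually outputs.

```javascript
// Code node: pull every linkedin.com/advice URL out of the Google results HTML.
const html = $input.first().json.data ?? '';

const links = [...html.matchAll(/https:\/\/www\.linkedin\.com\/advice\/[^\s"'<>]+/g)]
  .map((m) => m[0].replace(/&amp;/g, '&'));

// De-duplicate and return one item per article link.
const unique = [...new Set(links)];
return unique.map((link) => ({ json: { link } }));
```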
Indeed job scraper with AI filtering & company research using Apify and Tavily
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow scrapes job listings on Indeed via Apify, automatically fetches that dataset, extracts information about each listing, filters jobs by relevance, finds a decision maker at the company, and updates a database (Google Sheets) with that information for outreach. All you need to do is run the Apify actor and the database will update with the processed data.

Benefits
- Complete job search automation: a webhook monitors the Apify actor, which sends an integration request and starts the process.
- AI-powered filter: uses ChatGPT to analyze content and context, identify company goals, and filter based on the job description.
- Smart duplicate prevention: automatically tracks processed job listings in a database to avoid redundancy.
- Multi-platform intelligence: combines Indeed scraping with web research via Tavily and enriches each listing.
- Niche focus: currently handles 6 hardcoded niches, but this can be changed to fit other niches (just re-prompt the "job filter" node).

How It Works
Indeed job discovery:
- Search Indeed and apply filters for relevant job listings, then copy the results URL and use it in Apify.
- Apify's Indeed job scraper scrapes job listings from the URL of interest.
- The actor automatically scrapes the information, stores it in a dataset, and initiates an integration that calls the n8n webhook.
Incoming data processing:
- Loops over up to 500 items (can be changed) with a batch size of 55 items (can be changed) to avoid running into API timeouts.
- Multiple filters ensure the required fields were scraped and meet our metrics (website must exist and number of employees < 250).
- Duplicate job listings are removed from the incoming batch before processing.
Job analysis & filtering:
- An additional filter removes any job listing from the incoming batch that already exists in the Google Sheets database.
- All new job listings are passed to ChatGPT, which uses the information in the job post and description to determine whether it is relevant to us.
- Each relevant job gets a new "verdict" field, either true or false, and only the jobs where the verdict is true are kept.
Enrich & update database:
- Uses Tavily to search for a decision maker (it doesn't always find one) and populates a row in the Google Sheet with information about the job listing, the company, and a decision maker at that company.
- Waits 1 minute and 30 seconds to avoid Google Sheets and ChatGPT API timeouts, then loops back to the next batch to start filtering again until all job listings are processed.

A sketch of the duplicate-check and filter step appears at the end of this description.

Required Google Sheets Database Setup
Before running this workflow, create a Google Sheets database with these exact column headers.
Essential columns:
- jobUrl - unique identifier for job listings
- title - position title
- descriptionText - description of the job listing
- hiringDemand/isHighVolumeHiring - are they hiring at high volume?
- hiringDemand/isUrgentHire - are they hiring with high urgency?
- isRemote - is this job remote?
- jobType/0 - job type: in person, remote, part-time, etc.
- companyCeo/name - CEO name collected from Tavily's search
- icebreaker - column for holding custom icebreakers for each job listing (not completed in this workflow; I will upload another that does this called "Personalized IJSFE")
- scrapedCeo - CEO name collected from the Apify scraper
- email - email listed on the job listing
- companyName - name of the company that posted the job
- companyDescription - description of the company that posted the job
- companyLinks/corporateWebsite - website of the company that posted the job
- companyNumEmployees - number of employees the company listed
- location/country - location where the job takes place
- salary/salaryText - salary on the job listing

Setup instructions:
1. Create a new Google Sheet with these column headers in the first row. Name the sheet whatever you please.
2. Connect your Google Sheets OAuth credentials in n8n.
3. Update the document ID in the workflow nodes.
The merge logic relies on the id column to prevent duplicate processing, so this structure is essential for the workflow to function correctly.

Feel free to reach out for additional help or clarification at my gmail: terflix45@gmail.com and I'll get back to you as soon as I can.

Set Up Steps
1. Configure the Apify integration:
   - Sign up for an Apify account and obtain an API key.
   - Get the Indeed job scraper actor and use Apify's integration to send an HTTP request to your n8n webhook (if the test URL doesn't work, use the production URL).
   - Use the Apify node with Resource: Dataset, Operation: Get items, and your API key as the credentials.
2. Set up AI services:
   - Add OpenAI API credentials for job filtering.
   - Add Tavily API credentials for company research.
   - Set up appropriate rate limiting for cost control.
3. Database configuration:
   - Create the Google Sheets database with the column structure above.
   - Connect your Google Sheets OAuth credentials.
   - Configure the merge logic for duplicate detection.
4. Content filtering setup:
   - Customize the AI prompts for your specific niche, requirements, or interests.
   - Adjust the filtering criteria to fit your needs.
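The duplicate-check and pre-filter logic described above could be sketched as a single Code node like the one below. In the real workflow this is split across Filter and Merge nodes; the node name `Get existing rows` and the exact field parsing are assumptions.

```javascript
// Code node sketch: drop jobs that fail the filters or already exist in the sheet.
const existing = new Set(
  $('Get existing rows').all().map((row) => row.json.jobUrl) // rows already in Google Sheets
);

const seen = new Set();
const output = [];

for (const item of $input.all()) {
  const job = item.json;

  // Required-field and size filters (website must exist, fewer than 250 employees).
  // How companyNumEmployees is formatted depends on the scraper output.
  if (!job['companyLinks/corporateWebsite']) continue;
  if (Number(job.companyNumEmployees) >= 250) continue;

  // Skip duplicates within this batch and jobs already in the sheet.
  if (seen.has(job.jobUrl) || existing.has(job.jobUrl)) continue;
  seen.add(job.jobUrl);

  output.push({ json: job });
}

return output;
```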
Notify user in Slack of quarantined email and create Jira ticket if opened
This n8n workflow serves as an incident response and notification system for handling potentially malicious emails flagged by Sublime Security. It begins with a Webhook trigger that Sublime Security uses to initiate the workflow by POSTing an alert. The workflow then extracts message details from Sublime Security using an HTTP Request node, based on the provided messageId, and subsequently splits into two parallel paths.

In the first path, the workflow looks up a Slack user by email, aiming to find the recipient of the email that triggered the alert. If a user is found in Slack, a notification is sent to them, explaining that they have received a potentially malicious email that has been quarantined and is under investigation. This notification includes details such as the email's subject and sender.

The second path checks whether the flagged email has been opened by inspecting the read_at value from Sublime Security. If the email was opened, the workflow prepares a table summarizing the flagged rules and creates a corresponding issue in Jira Software. The Jira issue contains information about the email, including its subject, sender, and recipient, along with the flagged rules. A sketch of this check and the rule-table formatting appears after this description.

Issues that someone might encounter when setting up this workflow for the first time include potential problems with the Slack user lookup if the user information is not available or if the Slack API integration is not configured correctly. Additionally, the issue creation in Jira Software may not work as expected, as indicated by the note that mentions a possible node replacement. Thorough testing and validation with sample data from Sublime Security alerts can help identify and resolve any potential issues during setup.
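The second path can be pictured as the Code node sketch below: it checks the read_at value and, if the message was opened, builds a flagged-rules table for the Jira description. Field names (`read_at`, `flagged_rules`, `name`, `severity`) are assumptions about the Sublime Security payload and must be matched to the actual alert data.

```javascript
// Code node sketch: escalate only if the quarantined email was opened.
const msg = $input.first().json;

if (!msg.read_at) {
  // Not opened: nothing to escalate on this path.
  return [];
}

const rules = msg.flagged_rules ?? [];
// Jira wiki-markup table summarizing the flagged rules.
const table = [
  '||Rule||Severity||',
  ...rules.map((r) => `|${r.name}|${r.severity ?? 'unknown'}|`),
].join('\n');

return [
  {
    json: {
      summary: `Quarantined email opened: ${msg.subject}`,
      description: `Sender: ${msg.sender}\nRecipient: ${msg.recipient}\n\n${table}`,
    },
  },
];
```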
Multimodal Telegram bot with voice, image & video analysis using Claude & Gemini
What it's for:
This is a base template for anyone developing a Telegram AI agent. It accepts multiple input types (voice, picture, video, and text) and processes them with an AI model of your choosing to get you started. From there, you can connect any tools you see fit to the AI Agent in your n8n workflows.

How it works:
- Input: a Telegram message sent to a bot chat.
- n8n processing: a Switch node determines the message type: voice message, picture message, video message, or text message. (The template currently uses OpenAI and Gemini to analyze voice, photo, and video content, but feel free to swap these nodes for other models.) A sketch of this routing logic appears at the end of this description.
- AI Agent processing: the LLM of your choosing examines the message and, based on the system prompt, generates an output.
- Output: the AI output is sent back as a Telegram message.

How to use:
1. Create your chat bot and generate an access token: search for BotFather in Telegram, type "/newbot", follow the instructions to create the bot, and copy the access token.
2. Create credentials in n8n: open the Telegram Trigger node, click to create a credential, paste the access token, and save.
3. Create an LLM access token (this differs per LLM; search for your LLM + "API"): create an account on the LLM platform, buy credits to use the LLM API, generate an access token, and paste it into the LLM node.

Requirements:
- Telegram bot access token
- Google Gemini access token (for picture and video messages)
- OpenAI access token (for voice messages)
- LLM access token (your preference for the AI Agent)

Customizing this workflow:
- To personalize the AI output, adjust the system prompt (give context or directions on the AI's role).
- Add tools to the AI Agent to give it more utility beyond a personalized LLM (for example: calendars, databases, etc.).
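The routing decision made by the Switch node can be illustrated with the Code node sketch below, which inspects which field is present on the incoming Telegram update (the `voice`, `photo`, `video`, and `text` fields come from the Telegram Bot API message object).

```javascript
// Code node sketch of the same decision the Switch node makes.
const message = $input.first().json.message ?? {};

let type = 'text';
if (message.voice) type = 'voice';
else if (message.photo) type = 'photo';
else if (message.video) type = 'video';

return [
  {
    json: {
      type,                             // used by downstream branches
      chatId: message.chat?.id,         // reply target for the Telegram send node
      text: message.text ?? message.caption ?? '',
    },
  },
];
```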
AWS news monitoring & LinkedIn content automation with Claude 3 & Feishu
AWS News Analysis and LinkedIn Automation Pipeline
Transform AWS industry news into engaging LinkedIn content with AI-powered analysis and automated approval workflows.

Who's it for
This template is perfect for:
- Cloud architects and DevOps engineers who want to stay current with AWS developments
- Content creators looking to automate their AWS news coverage
- Marketing teams needing consistent, professional AWS content
- Technical leaders who want to share industry insights on LinkedIn
- AWS consultants building thought leadership through automated content

How it works
This workflow creates a comprehensive AWS news analysis and content generation pipeline with two main flows:

Flow 1: News Collection and Analysis
- Scheduled RSS monitoring: automatically fetches the latest AWS news from the official AWS RSS feed daily at 8 PM.
- AI-powered analysis: uses AWS Bedrock (Claude 3 Sonnet) to analyze each news item, extracting a professional summary, key themes and keywords, an importance rating (Low/Medium/High), and a business impact assessment.
- Structured data storage: saves the analyzed news to a Feishu Bitable with approval status tracking.

Flow 2: LinkedIn Content Generation
- Manual approval trigger: a Feishu automation sends approved news items to the webhook.
- AI content creation: AWS Bedrock generates professional LinkedIn posts with attention-grabbing headlines, technical insights from a Solutions Architect perspective, business impact analysis, and a call-to-action for engagement.
- Automated publishing: posts directly to LinkedIn with relevant hashtags.

How to set up

Prerequisites
- AWS Bedrock access with the Claude 3 Sonnet model enabled
- Feishu account with Bitable access
- LinkedIn company account with posting permissions
- n8n instance (self-hosted or cloud)

Detailed Configuration Steps

AWS Bedrock Setup
Step 1: Enable the Claude 3 Sonnet model
1. Log into your AWS Console and navigate to AWS Bedrock.
2. Go to Model access in the left sidebar.
3. Find Anthropic Claude 3 Sonnet and click Request model access.
4. Fill out the access request form (usually approved within minutes).
5. Once approved, verify the model appears in your Model access list.

Step 2: Create an IAM user and credentials
1. Go to the IAM Console and click Users → Create user. Name it n8n-bedrock-user.
2. Attach the policy AmazonBedrockFullAccess (or create a custom policy with minimal permissions).
3. Go to the Security credentials tab → Create access key, choose "Application running outside AWS", and download the credentials CSV file.

Step 3: Configure in n8n
1. In n8n, go to Credentials → Add credential and select the AWS credential type.
2. Enter your Access Key ID and Secret Access Key.
3. Set Region to your preferred AWS region (e.g., us-east-1) and test the connection.

Useful links: AWS Bedrock Documentation, Claude 3 Sonnet Model Access, AWS Bedrock Pricing.

Feishu Bitable Configuration
Step 1: Create a Feishu account and app
1. Sign up at Feishu International and create a new Bitable (multi-dimensional table).
2. Go to the Developer Console → Create App.
3. Enable Bitable permissions in your app and generate an App Token and App Secret.

Step 2: Create the Bitable structure
Create a new Bitable with these columns:
- title (Text)
- pubDate (Date)
- summary (Long Text)
- keywords (Multi-select)
- rating (Single Select: Low, Medium, High)
- link (URL)
- approval_status (Single Select: Pending, Approved, Rejected)
Get your App Token and Table ID: the App Token is found in the app settings, and the Table ID is found in the Bitable URL (tbl...). A sketch of mapping the AI analysis onto these columns follows below.
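The mapping from the Bedrock analysis onto the Bitable columns above could look like the Code node sketch below. The shape of the analysis object (`summary`, `keywords`, `rating`) is an assumption; adapt it to whatever your AI Agent node actually returns.

```javascript
// Code node sketch: shape one analyzed RSS item into the Bitable schema.
const item = $input.first().json;
const analysis = typeof item.output === 'string' ? JSON.parse(item.output) : item.output;

return [
  {
    json: {
      title: item.title,
      pubDate: item.pubDate,
      summary: analysis.summary,
      keywords: analysis.keywords,        // Multi-select column
      rating: analysis.rating,            // Low / Medium / High
      link: item.link,
      approval_status: 'Pending',         // default until reviewed in Feishu
    },
  },
];
```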
Step 3: Set up the automation
1. In your Bitable, go to Automation → Create automation.
2. Trigger: When field value changes → select the approval_status field.
3. Condition: approval_status equals "Approved".
4. Action: Send HTTP request with Method: POST, URL: your n8n webhook URL (from Flow 2), Headers: Content-Type: application/json, Body: {{record}}.

Step 4: Configure Feishu credentials in n8n
1. Install the Feishu Lite community node (self-hosted only).
2. Add a Feishu credential with your App Token and App Secret.
3. Test the connection.

Useful links: Feishu Developer Documentation, Bitable API Reference, Feishu Automation Guide.

LinkedIn Company Account Setup
Step 1: Create a LinkedIn app
1. Go to the LinkedIn Developer Portal and click Create App.
2. Fill in the app details: app name (e.g., AWS News Automation), LinkedIn Page (select your company page), app logo, and accept the legal agreement.

Step 2: Configure OAuth2 settings
1. In your app, go to the Auth tab.
2. Add the redirect URL: https://your-n8n-instance.com/rest/oauth2-credential/callback
3. Request these scopes: w_member_social (post on behalf of members), r_liteprofile (read basic profile), r_emailaddress (read email address).

Step 3: Get company page access
1. Go to your LinkedIn company page and navigate to Admin tools → Manage admins.
2. Ensure you have the Content admin or Super admin role.
3. Note your Company Page ID (found in the page URL).

Step 4: Configure LinkedIn credentials in n8n
1. Add a LinkedIn OAuth2 credential and enter your Client ID and Client Secret.
2. Complete the OAuth2 flow by clicking "Connect my account".
3. Select your company page for posting.

Useful links: LinkedIn Developer Portal, LinkedIn API Documentation, LinkedIn OAuth2 Guide.

Workflow Activation
Final setup steps:
1. Import the workflow JSON into n8n.
2. Configure all credential connections: AWS Bedrock, Feishu, and LinkedIn OAuth2.
3. Update the webhook URL in the Feishu automation to match your n8n instance.
4. Activate the scheduled trigger (daily at 8 PM).
5. Test with a manual webhook trigger using sample data.
6. Verify that the Feishu Bitable receives data.
7. Test the approval workflow and LinkedIn posting.

Requirements

Service requirements
- AWS Bedrock with Claude 3 Sonnet model access: an AWS account with the Bedrock service enabled, an IAM user with Bedrock permissions, and model access approval for Claude 3 Sonnet.
- Feishu Bitable for news storage and the approval workflow: a Feishu account (International or Lark), a developer app with Bitable permissions, and automation capabilities for webhook triggers.
- LinkedIn company account for automated posting: a LinkedIn company page with admin access, a LinkedIn Developer app with posting permissions, and OAuth2 authentication set up.
- n8n community nodes: Feishu Lite node (self-hosted only).

Technical requirements
- n8n instance (self-hosted recommended for community nodes)
- Webhook endpoint accessible from the Feishu automation
- Internet connectivity for API calls and RSS feeds
- Storage space for workflow execution logs

Cost considerations
- AWS Bedrock: ~$0.01-0.05 per news analysis
- Feishu: free tier available, paid plans for advanced features
- LinkedIn: free API access with rate limits
- n8n: self-hosted (free) or cloud subscription

How to customize the workflow

Content customization
- Modify the AI prompts in the AI Agent nodes to change tone, focus, or target audience.
- Adjust the hashtags in the LinkedIn posting node for different industries.
- Change the scheduling frequency by modifying the Schedule Trigger settings.

Integration options
- Replace LinkedIn with Twitter/X, Facebook, or other social platforms.
- Add Slack notifications for approved content before posting.
- Integrate with CRM systems to track content performance.
- Add content calendar integration for better planning.

Advanced features
- Multi-language support by modifying the AI prompts for different regions.
- Content categorization by adding tags for different AWS services.
- Performance tracking by integrating analytics platforms.
- Team collaboration by adding approval workflows with multiple reviewers.

Technical modifications
- Change the RSS sources to monitor other AWS blogs or competitor news.
- Adjust the AI models to use different Bedrock models or external APIs.
- Add data validation nodes for better error handling.
- Implement retry logic for failed API calls.

Important Notes

Service limitations
- This template uses community nodes (Feishu Lite) and requires self-hosted n8n.
- Geo-restrictions may apply to AWS Bedrock models in certain regions.
- Rate limits may affect high-frequency posting; adjust scheduling accordingly.
- Content moderation is recommended before automated posting.
- Cost considerations: each AI analysis costs approximately $0.01-0.05 USD per news item.

Troubleshooting Common Issues
AWS Bedrock issues:
- Model not found: ensure Claude 3 Sonnet access is approved in your region.
- Access denied: verify IAM permissions include Bedrock service access.
- Rate limiting: implement retry logic or reduce analysis frequency.
Feishu integration issues:
- Authentication failed: check that the App Token and App Secret are correct.
- Table not found: verify the Table ID matches your Bitable URL.
- Automation not triggering: ensure the webhook URL is accessible and returns a 200 status.
LinkedIn posting issues:
- OAuth2 errors: re-authenticate the LinkedIn credentials.
- Posting failed: verify company page admin permissions.
- Rate limits: LinkedIn has daily posting limits for company pages.

Security Best Practices
- Never hardcode credentials in workflow nodes.
- Use environment variables for sensitive configuration.
- Regularly rotate API keys and access tokens.
- Monitor API usage to prevent unexpected charges.
- Implement error handling for failed API calls.
Evaluation metric example: Check if tool was called
AI evaluation in n8n
This is a template for n8n's evaluation feature. Evaluation is a technique for gaining confidence that your AI workflow performs reliably, by running a test dataset containing different inputs through the workflow. By calculating a metric (score) for each input, you can see where the workflow is performing well and where it isn't.

How it works
This template shows how to calculate a workflow evaluation metric: whether a specific tool was called by the agent.
- We use an evaluation trigger to read in our dataset. It is wired up in parallel with the regular trigger so that the workflow can be started from either one.
- We make sure that the agent outputs the list of tools that it used.
- We then check whether the expected tool (from the dataset) is in that list (a sketch of this check appears below).
- Finally, we pass this information back to n8n as a metric.
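The metric check could be expressed as the Code node sketch below: score 1 if the expected tool from the dataset appears in the agent's reported tool list, otherwise 0. The field names (`expected_tool`, `tools_used`) and node names are assumptions and must match your dataset and agent output.

```javascript
// Code node sketch: compute a 0/1 "tool was called" metric.
const expectedTool = $('Evaluation Trigger').first().json.expected_tool;
const toolsUsed = $input.first().json.tools_used ?? [];

return [
  {
    json: {
      // Pass this value to n8n's evaluation metric node downstream.
      tool_called: toolsUsed.includes(expectedTool) ? 1 : 0,
    },
  },
];
```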
Automate your Magento 2 weekly sales & performance reports
Automatically fetches last week's Magento 2 orders, summarises sales and product performance, logs the data to Google Sheets, and emails a polished report, all without logging into the admin or buying extra modules. Perfect for store owners and agencies.

✅ What This Workflow Does:
This workflow automatically fetches the last 7 days of orders from your Magento 2 store, summarizes key metrics (orders, revenue, products sold), groups product performance data, generates a Google Sheet with two tabs (Weekly Summary & Product Breakdown), and emails the report to your team in a modern dashboard-style HTML format, all without ever logging into the Magento admin. A sketch of the order summarization step appears after this description.

🔧 Modules Used:
- 🕒 Schedule Trigger
- ⚙️ Code (JavaScript)
- 🌐 HTTP Request
- 📊 Google Sheets
- ✉️ Gmail (HTML email with custom design)

💼 Use Cases:
- E-commerce store managers who want weekly sales snapshots without logging into Magento 2.
- Agencies managing multiple Magento clients and reporting to them weekly.
- Business owners who want automated insights without paying for extra reporting modules.
- Marketing or growth teams that need product-wise performance regularly in Google Sheets.

🔒 Credentials Required:
- Magento 2 API (Bearer Token)
- Google Sheets OAuth2
- Gmail OAuth2 (for sending HTML email reports)

📂 Category: E-commerce / Magento 2 / Reporting / Automation

✨ Why This Saves Hours:
This workflow replaces manual Magento report exports, repetitive data cleanup, and weekly email formatting. Store managers often spend 1–2 hours a week collecting this data; now it takes 0 minutes. Just set it and forget it. You don't need to buy reporting extensions, log into the admin, or manually touch a spreadsheet again.

🤝 Need a Custom Workflow?
If you need this tailored for your own workflow, store, or client, 📩 let's connect. I build custom, high-performance n8n automations for e-commerce, growth, and reporting.

👤 Author
Kanaka Kishore Kandregula
Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
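The summarization step in the Code (JavaScript) node could look roughly like the sketch below. It assumes the HTTP Request node returned a Magento 2 REST orders search response with an `items` array; the order and order-item field names follow the standard Magento schema but should be verified against your store's response.

```javascript
// Code node sketch: aggregate weekly revenue and per-SKU product performance.
const orders = $input.first().json.items ?? [];

let revenue = 0;
const products = {}; // sku -> { name, qty, total }

for (const order of orders) {
  revenue += Number(order.grand_total) || 0;
  for (const line of order.items ?? []) {
    const sku = line.sku;
    products[sku] ??= { name: line.name, qty: 0, total: 0 };
    products[sku].qty += Number(line.qty_ordered) || 0;
    products[sku].total += Number(line.row_total) || 0;
  }
}

return [
  {
    json: {
      orderCount: orders.length,
      revenue: Number(revenue.toFixed(2)),
      productBreakdown: Object.entries(products).map(([sku, p]) => ({ sku, ...p })),
    },
  },
];
```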
Moderate your Discord server using GPT-5 & Google Sheets (Learning system)
Discord AI Content Moderator with Learning System
This n8n template demonstrates how to automatically moderate Discord messages using AI-powered content analysis that learns from your community standards. It continuously monitors your server, intelligently flags problematic content while allowing context-appropriate language, and provides a complete audit trail for all moderation actions.

Use cases are many: try moderating a forex trading community where enthusiasm runs high, protecting a gaming server from toxic behavior while keeping banter alive, or maintaining professional standards in a business Discord without being overly strict!

Good to know
- This workflow uses OpenAI's GPT-5 Mini model, which incurs API costs per message analyzed (approximately $0.001-0.003 per moderation check depending on message volume).
- The workflow runs every minute by default; adjust the Schedule Trigger interval based on your server activity and budget.
- Discord API rate limits apply; the batch processor includes 1.5-second delays between deletions to prevent rate limiting.
- You'll need a Google Sheet to store training examples; a template link is provided in the workflow notes.
- The AI analyzes context and intent, not just keywords: "I f*cking love this community" won't be deleted, but "you guys are sh*t" will be.
- Deleted messages cannot be recovered from Discord; the admin notification channel preserves the content for review.

How it works
1. The Schedule Trigger activates every minute to check for new messages requiring moderation.
2. The workflow fetches training data from Google Sheets containing labeled examples of messages to delete (with reasons) and messages to keep.
3. It retrieves the last 10 messages from your specified Discord channel using the Discord API.
4. A preparation node formats both the training examples and the recent messages into a structured prompt with a unique index for each message.
5. The AI Agent (powered by GPT-5 Mini) analyzes each message against your community standards, considering intent and context rather than just keywords.
6. The AI returns a JSON array of message indices that violate guidelines (e.g., [0, 2, 5]).
7. A parsing node extracts these indices, validates them, removes duplicates, and maps them to the actual Discord message objects (see the sketch after this description).
8. The batch processor loops through each flagged message one at a time to prevent API rate limiting and ensure proper error handling.
9. Each message is deleted from Discord using its exact message ID.
10. A 1.5-second wait prevents hitting Discord's rate limits between operations.
11. Finally, an admin notification is posted to your designated admin channel with the deleted message's author, ID, and original content for audit purposes.

How to use
1. Replace the Discord Server ID, Moderated Channel ID, and Admin Channel ID in the "Edit Fields" node with your server's specific IDs.
2. Create a copy of the provided Google Sheets template with columns: messagecontent, shoulddelete (YES/NO), and reason.
3. Connect your Discord OAuth2 credentials (requires bot permissions for reading messages, deleting messages, and posting to channels).
4. Add your OpenAI API key to access GPT-5 Mini.
5. Customize the AI Agent's system message to reflect your specific community standards and tone.
6. Adjust the message fetch limit (default: 10) based on your server activity; higher limits cost more per run but catch more violations.
7. Consider changing the Schedule Trigger from every minute to every 3-5 minutes if you have a smaller community.

Requirements
- Discord OAuth2 credentials for bot authentication with message read, delete, and send permissions
- Google Sheets API connection for accessing the training data knowledge base
- OpenAI API key for GPT-5 Mini model access
- A Google Sheet formatted with message examples, deletion labels, and reasoning
- Discord Server ID and Channel IDs (moderated + admin), which you can get by enabling Developer Mode in Discord

Customising this workflow
- Try building an emoji-based feedback system where admins can react to notifications with ✅ (correct deletion) or ❌ (wrong deletion) to automatically update your training data.
- Add a severity scoring system that issues warnings for minor violations before deleting messages.
- Implement a user strike system that tracks repeat offenders and automatically applies temporary mutes or bans.
- Expand the AI prompt to categorize violations (spam, harassment, profanity, etc.) and route different types to different admin channels.
- Create a weekly digest that summarizes moderation statistics and trending violation types.
- Add support for monitoring multiple channels by duplicating the Discord message fetch nodes with different channel IDs.
- Integrate with a database instead of Google Sheets for faster lookups and more sophisticated training data management.

If you have questions
Feel free to contact me here: elijahmamuri@gmail.com or elijahfxtrading@gmail.com
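The parsing node described in step 7 could be sketched as the Code node below: it extracts the JSON array of indices from the AI's reply, validates and de-duplicates them, and maps each index back to the fetched Discord message. The node names (`AI Agent`, `Get Messages`) and the `output` field are assumptions.

```javascript
// Code node sketch: turn "[0, 2, 5]" from the AI into the actual flagged messages.
const raw = $('AI Agent').first().json.output ?? '[]';
const messages = $('Get Messages').all().map((i) => i.json);

// Tolerate extra text around the array, e.g. "Violations: [0, 2, 5]".
const match = raw.match(/\[[^\]]*\]/);
const indices = match ? JSON.parse(match[0]) : [];

const flagged = [...new Set(indices)]
  .filter((i) => Number.isInteger(i) && i >= 0 && i < messages.length)
  .map((i) => messages[i]);

// One item per message to delete; the Discord delete node uses `id` downstream.
return flagged.map((m) => ({
  json: { id: m.id, author: m.author?.username, content: m.content },
}));
```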
Track Singapore COE prices with AI forecasting & smart buy recommendations via Telegram
Introduction
Automates Singapore COE price tracking with AI forecasts and buy/wait recommendations. Weekly scraping collects LTA data, enriches it with economic indicators, predicts 6-month trends, and alerts users via Telegram/email, helping car buyers and fleet managers make data-driven purchase decisions while avoiding manual tracking.

How it Works
Weekly trigger scrapes LTA COE → validates → stores in Google Sheets → calculates indicators → AI forecasts trends → multi-scenario analysis → generates buy/wait signals → sends actionable alerts.

Setup Steps
1. Add OpenAI/NVIDIA API credentials in n8n
2. Authenticate Google Sheets and create the spreadsheet
3. Configure a Telegram bot or Gmail SMTP
4. Set the weekly trigger (Thursday 9AM SGT, post-bidding)
5. Adjust alert thresholds in the conditional nodes

Workflow
Schedule Trigger → Scrape COE → Validate → Store Sheets → Fetch Historical → Calculate Indicators → AI Prediction → Merge Economics → Multi-Scenario Analysis → Compare Conditions → Generate Dashboard → Send Alerts

Workflow Steps
- Scraping: fetch LTA COE results with retry logic
- Validation: check completeness, flag anomalies
- Storage: append timestamped records to Sheets
- Enrichment: calculate moving averages, volatility, seasonality (see the sketch after this description)
- AI Analysis: forecast the next 6 months with confidence intervals
- Decision Engine: output a buy/wait/monitor recommendation
- Reporting: create a dashboard and send alerts via Telegram/email

Prerequisites
OpenAI/NVIDIA API key, Google Sheets access, Telegram bot token or Gmail, basic understanding of COE categories

Use Cases
First-time buyers timing purchases, fleet operators coordinating bulk acquisitions

Customization
Add SMS alerts via Twilio, integrate loan calculators for total cost analysis

Benefits
Saves 5+ hours monthly, captures 10–18% price dips, provides predictive insights (potential $10K–$25K savings)
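The enrichment step could be sketched as the Code node below, which computes a simple moving average and volatility (standard deviation) over recent COE premiums. The `premium` field name and the 8-round window are assumptions; adjust them to match your sheet.

```javascript
// Code node sketch: moving average and volatility over recent bidding rounds.
const history = $input.all().map((i) => Number(i.json.premium)).filter(Number.isFinite);
const window = history.slice(-8); // roughly the last two months of bidding rounds

if (window.length === 0) {
  return [{ json: { movingAverage: null, volatility: null, latest: null } }];
}

const mean = window.reduce((a, b) => a + b, 0) / window.length;
const variance = window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length;

return [
  {
    json: {
      movingAverage: Math.round(mean),
      volatility: Math.round(Math.sqrt(variance)),
      latest: history[history.length - 1],
    },
  },
];
```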
Track software vulnerability patents with ScrapeGraphAI, Matrix, and Intercom
Software Vulnerability Patent Tracker

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically tracks newly published patent filings that mention software-security vulnerabilities, buffer-overflow mitigation techniques, and related technology keywords. Every week it aggregates fresh patent data from USPTO and international patent databases, filters it by relevance, and delivers a concise JSON digest (and optional Intercom notification) to R&D teams and patent attorneys.

Pre-conditions / Requirements

Prerequisites
- n8n instance (self-hosted or n8n cloud, v1.7.0+)
- ScrapeGraphAI community node installed
- Basic understanding of patent search syntax (for customizing keyword sets)
- Optional: Intercom account for in-app alerts

Required Credentials

| Credential | Purpose |
|------------|---------|
| ScrapeGraphAI API Key | Enables ScrapeGraphAI nodes to fetch and parse patent-office webpages |
| Intercom Access Token (optional) | Sends weekly digests directly to an Intercom workspace |

Additional Setup Requirements

| Setting | Recommended Value | Notes |
|---------|-------------------|-------|
| Cron schedule | `0 9 * * 1` | Triggers every Monday at 09:00 server time |
| Patent keyword matrix | See example CSV below | List of comma-separated keywords per tech focus |

Example keyword matrix (upload as keywords.csv or paste into the "Matrix" node):

topic,keywords
Buffer Overflow,"buffer overflow, stack smashing, stack buffer"
Memory Safety,"memory safety, safe memory allocation, pointer sanitization"
Code Injection,"SQL injection, command injection, injection prevention"

How it works

Key Steps:
1. Schedule Trigger: fires weekly based on the configured cron expression.
2. Matrix (Keyword Loader): loads the CSV-based technology keyword matrix into memory.
3. Code (Build Search Queries): dynamically assembles patent-search URLs for each keyword group.
4. ScrapeGraphAI (Fetch Results): scrapes USPTO, EPO, and WIPO result pages and parses titles, abstracts, publication numbers, and dates.
5. If (Relevance Filter): removes patents older than 1 year or without vulnerability-related terms in the abstract.
6. Set (Normalize JSON): formats the remaining records into a uniform JSON schema.
7. Intercom (Notify Team): sends a summarized digest to your chosen Intercom workspace. (Skip or disable this node if you prefer to consume the raw JSON output instead.)
8. Sticky Notes: contain inline documentation and customization tips for future editors.

Set up steps
Setup Time: 10-15 minutes
1. Install Community Node: navigate to "Settings → Community Nodes", search for ScrapeGraphAI, and click "Install".
2. Create Credentials: go to "Credentials" → "New Credential" → select ScrapeGraphAI API → paste your API key. (Optional) Add an Intercom credential with a valid access token.
3. Import the Workflow: click "Import" → "Workflow JSON" and paste the template JSON, or drag-and-drop the .json file.
4. Configure Schedule: open the Schedule Trigger node and adjust the cron expression if a different frequency is required.
5. Upload / Edit Keyword Matrix: open the Matrix node, paste your custom CSV, or modify the existing topics & keywords.
6. Review Search Logic: in the Code (Build Search Queries) node, review the base URLs and adjust the patent databases as needed.
7. Define Notification Channel: if using Intercom, select your Intercom credential in the Intercom node and choose the target channel.
8. Execute & Activate: click "Execute Workflow" for a trial run and verify the output. If satisfied, switch the workflow to "Active".

Node Descriptions

Core Workflow Nodes:
- Schedule Trigger – initiates the workflow on a weekly cron schedule.
- Matrix – holds the CSV keyword table and makes each row available as an item.
- Code (Build Search Queries) – generates search URLs and attaches metadata for later nodes.
- ScrapeGraphAI – scrapes patent listings and extracts structured fields (title, abstract, publication date, link).
- If (Relevance Filter) – applies date and keyword relevance filters.
- Set (Normalize JSON) – maps scraped fields into a clean JSON schema for downstream use.
- Intercom – sends formatted patent summaries to an Intercom inbox or channel.
- Sticky Notes – provide inline documentation and edit history markers.

Data Flow:
Schedule Trigger → Matrix → Code → ScrapeGraphAI → If → Set → Intercom

Customization Examples

Change Data Source to Google Patents

```javascript
// In the Code node
const base = 'https://patents.google.com/?q=';
items.forEach(item => {
  item.json.searchUrl = `${base}${encodeURIComponent(item.json.keywords)}&oq=${encodeURIComponent(item.json.keywords)}`;
});
return items;
```

Send Digest via Slack Instead of Intercom

```javascript
// Replace the Intercom node with a Slack node and build the message text
{
  "text": `🚀 New Vulnerability-related Patents (${items.length})\n` +
    items.map(i => `• <${i.json.link}|${i.json.title}>`).join('\n')
}
```

Data Output Format
The workflow outputs structured JSON data:

```json
{
  "topic": "Memory Safety",
  "keywords": "memory safety, safe memory allocation, pointer sanitization",
  "title": "Memory protection for compiled binary code",
  "publicationNumber": "US20240123456A1",
  "publicationDate": "2024-03-21",
  "abstract": "Techniques for enforcing memory safety in compiled software...",
  "link": "https://patents.google.com/patent/US20240123456A1/en",
  "source": "USPTO"
}
```

Troubleshooting

Common Issues
- Empty Result Set – ensure that the keywords are specific but not overly narrow; test queries manually on USPTO.
- ScrapeGraphAI Timeouts – increase the timeout parameter in the ScrapeGraphAI node or reduce concurrent requests.

Performance Tips
- Limit the keyword matrix to <50 rows to keep weekly runs under 2 minutes.
- Schedule the workflow during off-peak hours to reduce load on patent-office servers.

Pro Tips:
- Combine this workflow with a vector database (e.g., Pinecone) to create a semantic patent knowledge base.
- Add a "Merge" node to correlate new patents with existing vulnerability CVE entries.
- Use a second ScrapeGraphAI node to crawl citation trees and identify emerging technology clusters.