Vigh Sandor
I'm a DevOps engineer and automation enthusiast who builds smart, practical workflows using n8n.io. I focus on creating reliable, open-source solutions that connect tools and simplify everyday operations — whether it’s infrastructure management, workflow automation, or integrating AI into existing systems.
Templates by Vigh Sandor
Network vulnerability scanner with NMAP and automated CVE reporting
Network Vulnerability Scanner (using Nmap as the engine) with Automated CVE Report Workflow Overview This n8n workflow provides comprehensive network vulnerability scanning with automated CVE enrichment and professional report generation. It performs Nmap scans, queries the National Vulnerability Database (NVD) for CVE information, generates detailed HTML/PDF reports, and distributes them via Telegram and email. Key Features Automated Network Scanning: Full Nmap service and version detection scan CVE Enrichment: Automatic vulnerability lookup using NVD API CVSS Scoring: Vulnerability severity assessment with CVSS v3.1/v3.0 scores Professional Reporting: HTML reports with detailed findings and recommendations PDF Generation: Password-protected PDF reports using Prince XML Multi-Channel Distribution: Telegram and email delivery Multiple Triggers: Webhook API, web form, manual execution, scheduled scans Rate Limiting: Respects NVD API rate limits Comprehensive Data: Service detection, CPE matching, CVE details with references Use Cases Regular security audits of network infrastructure Compliance scanning for vulnerability management Penetration testing reconnaissance phase Asset inventory with vulnerability context Continuous security monitoring Vulnerability assessment reporting for management DevSecOps integration for infrastructure testing --- Setup Instructions Prerequisites Before setting up this workflow, ensure you have: System Requirements n8n instance (self-hosted) with command execution capability Alpine Linux base image (or compatible Linux distribution) Minimum 2 GB RAM (4 GB recommended for large scans) 2 GB free disk space for dependencies Network access to scan targets Internet connectivity for NVD API Required Knowledge Basic networking concepts (IP addresses, ports, protocols) Understanding of CVE/CVSS vulnerability scoring Nmap scanning basics External Services Telegram Bot (optional, for Telegram notifications) Email server / SMTP credentials (optional, for email reports) NVD API access (public, no API key required but rate-limited) Step 1: Understanding the Workflow Components Core Dependencies Nmap: Network scanner Purpose: Port scanning, service detection, version identification Usage: Performs TCP SYN scan with service/version detection nmap-helper: JSON conversion tool Repository: https://github.com/net-shaper/nmap-helper Purpose: Converts Nmap XML output to JSON format Prince XML: HTML to PDF converter Website: https://www.princexml.com Version: 16.1 (Alpine 3.20) Purpose: Generates professional PDF reports from HTML Features: Password protection, print-optimized formatting NVD API: Vulnerability database Endpoint: https://services.nvd.nist.gov/rest/json/cves/2.0 Purpose: CVE information, CVSS scores, vulnerability descriptions Rate Limit: Public API allows limited requests per minute Documentation: https://nvd.nist.gov/developers Step 2: Telegram Bot Configuration (Optional) If you want to receive reports via Telegram: Create Telegram Bot Open Telegram and search for @BotFather Start a chat and send /newbot Follow prompts: Bot name: Network Scanner Bot (or your choice) Username: networkscannerbot (must end with 'bot') BotFather will provide: Bot token: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz (save this) Bot URL: https://t.me/yourbotusername Get Your Chat ID Start a chat with your new bot Send any message to the bot Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Find your chat ID in the response Save this chat ID (e.g., 123456789) Alternative: Group Chat ID For
sending to a group: Add bot to your group Send a message in the group Check getUpdates URL Group chat IDs are negative: -1001234567890 Add Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select Telegram API Fill in: Access Token: Your bot token from BotFather Click Save Test connection if available Step 3: Email Configuration (Optional) If you want to receive reports via email: Add SMTP Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select SMTP Fill in: Host: SMTP server address (e.g., smtp.gmail.com) Port: SMTP port (587 for TLS, 465 for SSL, 25 for unencrypted) User: Your email username Password: Your email password or app password Secure: Enable for TLS/SSL Click Save Gmail Users: Enable 2-factor authentication Generate app-specific password: https://myaccount.google.com/apppasswords Use app password in n8n credential Step 4: Import and Configure Workflow Configure Basic Parameters Locate "1. Set Parameters" Node: Click the node to open settings Default configuration: network: Input from webhook/form/manual trigger timestamp: Auto-generated (format: yyyyMMdd_HHmmss) report_password: Almafa123456 (change this!) Change Report Password: Edit report_password assignment Set strong password: 12+ characters, mixed case, numbers, symbols This password will protect the PDF report Save changes Step 5: Configure Notification Endpoints Telegram Configuration Locate "14/a. Send Report in Telegram" Node: Open node settings Update fields: Chat ID: Replace -123456789012 with your actual chat ID Credentials: Select your Telegram credential Save changes Message customization: Current: Sends PDF as document attachment Automatic filename: vulnerabilityreport<timestamp>.pdf No caption by default (add if needed) Email Configuration Locate "14/b. Send Report in Email with SMTP" Node: Open node settings Update fields: From Email: report.creator@example.com → Your sender email To Email: report.receiver@example.com → Your recipient email Subject: Customize if needed (default includes network target) Text: Email body message Credentials: Select your SMTP credential Save changes Multiple Recipients: Change toEmail field to comma-separated list: admin@example.com, security@example.com, manager@example.com Add CC/BCC: In node options, add: cc: Carbon copy recipients bcc: Blind carbon copy recipients Step 6: Configure Triggers The workflow supports 4 trigger methods: Trigger 1: Webhook API (Production) Locate "Webhook" Node: Path: /vuln-scan Method: POST Response: Immediate acknowledgment "Process started!" 
Async: Scan runs in background Trigger 2: Web Form (User-Friendly) Locate "On form submission" Node: Path: /webhook-test/form/target Method: GET (form display), POST (form submit) Form Title: "Add scan parameters" Field: network (required) Form URL: https://your-n8n-domain.com/webhook-test/form/target Users can: Open form URL in browser Enter target network/IP Click submit Receive confirmation Trigger 3: Manual Execution (Testing) Locate "Manual Trigger" Node: Click to activate Opens workflow with "Pre-Set-Target" node Default target: scanme.nmap.org (Nmap's official test server) To change default target: Open "Pre-Set-Target" node Edit network value Enter your test target Save changes Trigger 4: Scheduled Scans (Automated) Locate "Schedule Trigger" Node: Default: Daily at 1:00 AM Uses "Pre-Set-Target" for network To change schedule: Open node settings Modify trigger time: Hour: 1 (1 AM) Minute: 0 Day of week: All days (or select specific days) Save changes Schedule Examples: Every day at 3 AM: Hour: 3, Minute: 0 Weekly on Monday at 2 AM: Hour: 2, Day: Monday Twice daily (8 AM, 8 PM): Create two Schedule Trigger nodes Step 7: Test the Workflow Recommended Test Target Use Nmap's official test server for initial testing: Target: scanme.nmap.org Purpose: Official Nmap testing server Safe: Designed for scanning practice Permissions: Public permission to scan Important: Never scan targets without permission. Unauthorized scanning is illegal. Manual Test Execution Open workflow in n8n editor Click Manual Trigger node to select it Click Execute Workflow button Workflow will start with scanme.nmap.org as target Monitor Execution Watch nodes turn green as they complete: Need to Add Helper?: Checks if nmap-helper installed Add NMAP-HELPER: Installs helper (if needed, ~2-3 minutes) Optional Params Setter: Sets scan parameters 2. Execute Nmap Scan: Runs scan (5-30 minutes depending on target) 3. Parse NMAP JSON to Services: Extracts services (~1 second) 5. CVE Enrichment Loop: Queries NVD API (1 second per service) 8-10. Report Generation: Creates HTML/PDF reports (~5-10 seconds) 12. Convert to PDF: Generates password-protected PDF (~10 seconds) 14a/14b. Distribution: Sends reports Check Outputs Click nodes to view outputs: Parse NMAP JSON: View discovered services CVE Enrichment: See vulnerabilities found Prepare Report Structure: Check statistics Read Report PDF: Download report to verify Verify Distribution Telegram: Open Telegram chat with your bot Check for PDF document Download and open with password Email: Check inbox for report email Verify subject line includes target network Download PDF attachment Open with password --- How to Use Understanding the Scan Process Initiating Scans Method 1: Webhook API Use curl or any HTTP client and add "network" parameter in a POST request. Response: Process started! Scan runs asynchronously. You'll receive results via configured channels (Telegram/Email). 
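For example, a minimal curl call against the webhook trigger looks roughly like this (the hostname is a placeholder; whether the path gets the /webhook/ or /webhook-test/ prefix depends on whether the workflow is active, and sending the parameter as a JSON body is an assumption):

```bash
# Trigger a scan via Method 1 (Webhook API); hostname and URL prefix are placeholders.
curl -X POST "https://your-n8n-domain.com/webhook/vuln-scan" \
  -H "Content-Type: application/json" \
  -d '{"network": "scanme.nmap.org"}'
# Expected immediate response: "Process started!" -- the scan itself runs asynchronously.
```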
Method 2: Web Form Open form URL in browser: https://your-n8n.com/webhook-test/form/target Fill in form: network: Enter target (IP, range, domain) Click Submit Receive confirmation Wait for report delivery Advantages: No command line needed User-friendly interface Input validation Good for non-technical users Method 3: Manual Execution For testing or one-off scans: Open workflow in n8n Edit "Pre-Set-Target" node: Change network value to your target Click Manual Trigger node Click Execute Workflow Monitor progress in real-time Advantages: See execution in real-time Debug issues immediately Test configuration changes View intermediate outputs Method 4: Scheduled Scans For regular, automated security audits: Configure "Schedule Trigger" node with desired time Configure "Pre-Set-Target" node with default target Activate workflow Scans run automatically on schedule Advantages: Automated security monitoring Regular compliance scans No manual intervention needed Consistent scheduling Scan Targets Explained Supported Target Formats Single IP Address: 192.168.1.100 10.0.0.50 CIDR Notation (Subnet): 192.168.1.0/24 Scans 192.168.1.0-255 (254 hosts) 10.0.0.0/16 Scans 10.0.0.0-255.255 (65534 hosts) 172.16.0.0/12 Scans entire 172.16-31.x.x range IP Range: 192.168.1.1-50 Scans 192.168.1.1 to 192.168.1.50 10.0.0.1-10.0.0.100 Scans across range Multiple Targets: 192.168.1.1,192.168.1.2,192.168.1.3 Hostname/Domain: scanme.nmap.org example.com server.local Choosing Appropriate Targets Development/Testing: Use scanme.nmap.org (official test target) Use your own isolated lab network Never scan public internet without permission Internal Networks: Use CIDR notation for entire subnets Scan DMZ networks separately from internal Consider network segmentation in scan design Understanding Report Contents Report Structure The generated report includes: Executive Summary: Total hosts discovered Total services identified Total vulnerabilities found Severity breakdown (Critical, High, Medium, Low, Info) Scan date and time Target network Overall Statistics: Visual dashboard with key metrics Severity distribution chart Quick risk assessment Detailed Findings by Host: For each discovered host: IP address Hostname (if resolved) List of open ports and services Service details: Port number and protocol Service name (e.g., http, ssh, mysql) Product (e.g., Apache, OpenSSH, MySQL) Version (e.g., 2.4.41, 8.2p1, 5.7.33) CPE identifier Vulnerability Details: For each vulnerable service: CVE ID: Unique vulnerability identifier (e.g., CVE-2021-44228) Severity: CRITICAL / HIGH / MEDIUM / LOW / INFO CVSS Score: Numerical score (0.0-10.0) Published Date: When vulnerability was disclosed Description: Detailed vulnerability explanation References: Links to advisories, patches, exploits Recommendations: Immediate actions (patch critical/high severity) Long-term improvements (security processes) Best practices Vulnerability Severity Levels CRITICAL (CVSS 9.0-10.0): Color: Red Characteristics: Remote code execution, full system compromise Action: Immediate patching required Examples: Log4Shell, EternalBlue, Heartbleed HIGH (CVSS 7.0-8.9): Color: Orange Characteristics: Significant security impact, data exposure Action: Patch within days Examples: SQL injection, privilege escalation, authentication bypass MEDIUM (CVSS 4.0-6.9): Color: Yellow Characteristics: Moderate security impact Action: Patch within weeks Examples: Information disclosure, denial of service, XSS LOW (CVSS 0.1-3.9): Color: Green Characteristics: Minor security impact 
Action: Patch during regular maintenance Examples: Path disclosure, weak ciphers, verbose error messages INFO (CVSS 0.0): Color: Blue Characteristics: No vulnerability found or informational Action: No action required, awareness only Examples: Service version detected, no known CVEs Understanding CPE CPE (Common Platform Enumeration): Standard naming scheme for IT products Used for CVE lookup in NVD database Workflow CPE Handling: Nmap detects service and version Nmap provides CPE (if in database) Workflow uses CPE to query NVD API NVD returns CVEs associated with that CPE Special case: nginx vendor fixed from igor_sysoev to nginx Working with Reports Accessing HTML Report Location: /tmp/vulnerabilityreport<timestamp>.html Viewing: Open in web browser directly from n8n Click "11. Read Report for Output" node Download HTML file Open locally in any browser Advantages: Interactive (clickable links) Searchable text Easy to edit/customize Smaller file size Accessing PDF Report Location: /tmp/vulnerabilityreport<timestamp>.pdf Password: Default: Almafa123456 (configured in "1. Set Parameters") Change in workflow before production use Required to open PDF Opening PDF: Receive PDF via Telegram or Email Open with PDF reader (Adobe, Foxit, Browser) Enter password when prompted View, print, or share Advantages: Professional appearance Print-optimized formatting Password protection Portable (works anywhere) Preserves formatting Report Customization Change Report Title: Open "8. Prepare Report Structure" node Find metadata object Edit title and subtitle fields Customize Styling: Open "9. Generate HTML Report" node Modify CSS in <style> section Change colors, fonts, layout Add Company Logo: Edit HTML generation code Add <img> tag in header section Include base64-encoded logo or URL Modify Recommendations: Open "9. 
Generate HTML Report" node Find <h2>Recommendations</h2> section Edit recommendation text Scanning Ethics and Legality Authorization is Mandatory: Never scan networks without explicit written permission Unauthorized scanning is illegal in most jurisdictions Can result in criminal charges and civil liability Scope Definition: Document approved scan scope Exclude out-of-scope systems Maintain scan authorization documents Notification: Inform network administrators before scans Provide scan window and source IPs Have emergency contact procedures Safe Targets for Testing: scanme.nmap.org: Official Nmap test server Your own isolated lab network Cloud instances you own Explicitly authorized environments Compliance Considerations PCI DSS: Quarterly internal vulnerability scans required Scan all system components Re-scan after significant changes Document scan results HIPAA: Regular vulnerability assessments required Risk analysis and management Document remediation efforts ISO 27001: Vulnerability management process Regular technical vulnerability scans Document procedures NIST Cybersecurity Framework: Identify vulnerabilities (DE.CM-8) Maintain inventory Implement vulnerability management --- License and Credits Workflow: Created for n8n workflow automation Free for personal and commercial use Modify and distribute as needed No warranty provided Dependencies: Nmap: GPL v2 - https://nmap.org nmap-helper: Open source - https://github.com/net-shaper/nmap-helper Prince XML: Commercial license required for production use - https://www.princexml.com NVD API: Public API by NIST - https://nvd.nist.gov Third-Party Services: Telegram Bot API: https://core.telegram.org/bots/api SMTP: Standard email protocol --- Support For Nmap issues: Documentation: https://nmap.org/book/ Community: https://seclists.org/nmap-dev/ For NVD API issues: Status page: https://nvd.nist.gov Contact: https://nvd.nist.gov/general/contact For Prince XML issues: Documentation: https://www.princexml.com/doc/ Support: https://www.princexml.com/doc/help/ --- Workflow Metadata External Dependencies: Nmap, nmap-helper, Prince XML, NVD API License: Open for modification and commercial use --- Security Disclaimer This workflow is provided for legitimate security testing and vulnerability assessment purposes only. Users are solely responsible for ensuring they have proper authorization before scanning any network or system. Unauthorized network scanning is illegal and unethical. The authors assume no liability for misuse of this workflow or any damages resulting from its use. Always obtain written permission before conducting security assessments.
Proxmox system monitor - VM status, host resources & temperature alerts via Telegram
Setup Instructions Overview This n8n workflow monitors your Proxmox VE server and sends automated reports to Telegram every 15 minutes. It tracks VM status, host resource usage, temperature sensors, and detects recently stopped VMs. Prerequisites Required Software n8n instance (self-hosted or cloud) Proxmox VE server with API access Telegram account with bot created via BotFather lm-sensors package installed on Proxmox host Required Access Proxmox admin credentials (username and password) SSH access to Proxmox server Telegram Bot API token Telegram Chat ID Installation Steps Step 1: Install Temperature Sensors on Proxmox SSH into your Proxmox server and run:

```bash
apt-get update
apt-get install -y lm-sensors
sensors-detect
```

Press ENTER to accept default answers during sensors-detect setup. Test that sensors work:

```bash
sensors | grep -E 'Package|Core'
```

Step 2: Create Telegram Bot Open Telegram and search for BotFather Send /newbot command Follow prompts to create your bot Save the API token provided Get your Chat ID by sending a message to your bot, then visiting: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates Look for "chat":{"id": YOUR_CHAT_ID} in the response Step 3: Configure n8n Credentials SSH Password Credential In n8n, go to Credentials menu Create new credential: SSH Password Enter: Host: Your Proxmox IP address Port: 22 Username: root (or your admin user) Password: Your Proxmox password Telegram API Credential Create new credential: Telegram API Enter the Bot Token from BotFather Step 4: Import and Configure Workflow Import the JSON workflow into n8n Open the "Set Variables" node Update the following values: PROXMOX_IP: Your Proxmox server IP address PROXMOX_PORT: API port (default: 8006) PROXMOX_NODE: Node name (default: pve) TELEGRAM_CHAT_ID: Your Telegram chat ID PROXMOX_USER: Proxmox username with realm (e.g., root@pam) PROXMOX_PASSWORD: Proxmox password Connect credentials: SSH - Get Sensors node: Select your SSH credential Send Telegram Report node: Select your Telegram credential Save the workflow Activate the workflow Configuration Options Adjust Monitoring Interval Edit the "Schedule Every 15min" node: Change minutesInterval value to desired interval (in minutes) Recommended: 5-30 minutes Adjust Recently Stopped VM Detection Window Edit the "Process Data" node: Find line: const fifteenMinutesAgo = now - 900; Change 900 to desired seconds (900 = 15 minutes) Modify Temperature Warning Threshold The workflow uses the "high" threshold defined by sensors.
To manually set threshold, edit "Process Data" node: Modify the temperature parsing logic Change comparison: if (current >= high) to use custom value Testing Test Individual Components Execute "Set Variables" node manually - verify output Execute "Proxmox Login" node - check for valid ticket Execute "API - VM List" - confirm VM data received Execute complete workflow - check Telegram for message Troubleshooting Login fails: Verify PROXMOX_USER format includes realm (e.g., root@pam) Check password is correct Ensure allowUnauthorizedCerts is enabled for self-signed certificates No temperature data: Verify lm-sensors is installed on Proxmox Run sensors command manually via SSH Check SSH credentials are correct Recently stopped VMs not detected: Check task log API endpoint returns data Verify VM was stopped within detection window Ensure task types qmstop or qmshutdown are logged Telegram not receiving messages: Verify bot token is correct Confirm chat ID is accurate Check bot was started (send /start to bot) Verify parse_mode is set to HTML in Telegram node --- How It Works Workflow Architecture The workflow executes in a sequential chain of nodes that gather data from multiple sources, process it, and deliver a formatted report. Execution Flow Schedule Trigger (15min) Set Variables Proxmox Login (get authentication ticket) Prepare Auth (prepare credentials for API calls) API - VM List (get all VMs and their status) API - Node Tasks (get recent task log) API - Node Status (get host CPU, memory, uptime) SSH - Get Sensors (get temperature data) Process Data (analyze and structure all data) Generate Formatted Message (create Telegram message) Send Telegram Report (deliver via Telegram) Data Collection VM Information (Proxmox API) Endpoint: /api2/json/nodes/{node}/qemu Retrieves: Total VM count Running VM count Stopped VM count VM names and IDs Task Log (Proxmox API) Endpoint: /api2/json/nodes/{node}/tasks?limit=100 Retrieves recent tasks to detect: qmstop operations (VM stop commands) qmshutdown operations (VM shutdown commands) Task timestamps Task status Host Status (Proxmox API) Endpoint: /api2/json/nodes/{node}/status Retrieves: CPU usage percentage Memory total and used (in GB) System uptime (in seconds) Temperature Data (SSH) Command: sensors | grep -E 'Package|Core' Retrieves: CPU package temperature Individual core temperatures High and critical thresholds Data Processing VM Status Analysis Counts total, running, and stopped VMs Queries task log for stop/shutdown operations Filters tasks within 15-minute window Extracts VM ID from task UPID string Matches VM ID to VM name from VM list Calculates time elapsed since stop operation Temperature Intelligence The workflow implements smart temperature reporting: Normal Operation (all temps below high threshold): Calculates average temperature across all cores Displays min, max, and average values Example: "Average: 47.5 C (Min: 44.0 C, Max: 52.0 C)" Warning State (any temp at or above high threshold): Displays all temperature readings in detail Shows full sensor output with thresholds Changes section title to "Temperature Warning" Adds fire emoji indicator Resource Calculation CPU Usage: API returns decimal (0.0 to 1.0) Converted to percentage: cpu * 100 Memory: API returns bytes Converted to GB: bytes / (1024^3) Calculates percentage: (used / total) * 100 Uptime: API returns seconds Converted to days and hours: days = seconds / 86400, hours = (seconds % 86400) / 3600 Report Generation Message Structure The Telegram message uses HTML formatting 
for structure: Header Section Report title Generation timestamp Virtual Machines Section Total VM count Running VMs with checkmark Stopped VMs with stop sign Recently stopped count with warning Detailed list if VMs stopped in last 15 minutes Host Resources Section CPU usage percentage Memory used/total with percentage Host uptime in days and hours Temperature Section Smart display (summary or detailed) Warning indicator if thresholds exceeded Monospace formatting for sensor output HTML Formatting Features Bold tags for headers and labels Italic for timestamps Code blocks for temperature data Unicode separators for visual structure Emoji indicators for status (checkmark, stop, warning, fire) Security Considerations Credential Storage Passwords stored in n8n Set node (encrypted in database) Alternative: Use n8n environment variables Recommendation: Use Proxmox API tokens instead of passwords API Communication HTTPS with self-signed certificate acceptance Authentication via session tickets (15-minute validity) CSRF token validation for API requests SSH Access Password-based authentication (can use key-based) Commands limited to read-only operations No privilege escalation required Performance Impact API Load 3 API calls per execution (VM list, tasks, status) Lightweight endpoints with minimal data 15-minute interval reduces server load Execution Time Typical workflow execution: 5-10 seconds Login: 1-2 seconds API calls: 2-3 seconds SSH command: 1-2 seconds Processing: less than 1 second Resource Usage Minimal CPU impact on Proxmox Small memory footprint Negligible network bandwidth Extensibility Adding Additional Metrics To monitor additional data points: Add new API call node after "Prepare Auth" Update "Process Data" node to include new data Modify "Generate Formatted Message" for display Integration with Other Services The workflow can be extended to: Send to Discord, Slack, or email Write to database or log file Trigger alerts based on thresholds Generate charts or graphs Multi-Node Monitoring To monitor multiple Proxmox nodes: Duplicate API call nodes Update node names in URLs Merge data in processing step Generate combined report
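If you add further API call nodes as suggested above, they follow the same ticket-based pattern the workflow already uses; a minimal curl sketch of that pattern (IP address, node name, and credentials are placeholders, and -k accepts the self-signed certificate):

```bash
# Obtain a session ticket (valid ~15 minutes), then query an endpoint with it.
PROXMOX_IP=192.168.1.10
PROXMOX_NODE=pve

TICKET=$(curl -sk "https://$PROXMOX_IP:8006/api2/json/access/ticket" \
  --data-urlencode "username=root@pam" \
  --data-urlencode "password=yourpassword" | jq -r '.data.ticket')

# Authenticated read, e.g. host status (CPU, memory, uptime)
curl -sk -b "PVEAuthCookie=$TICKET" \
  "https://$PROXMOX_IP:8006/api2/json/nodes/$PROXMOX_NODE/status" | jq '.data'
```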
Generate text & image responses in Telegram channels with GPT-4 and TGPT
Telegram AI Channel Bot - Text & Image Response Generator with TGPT Overview This n8n workflow creates an automated Telegram channel bot that responds to messages with AI-generated text or images using TGPT. The bot monitors a specific Telegram channel and generates responses based on message prefixes. Features Automated text response generation using TGPT Image generation capabilities with customizable dimensions (1920x1080) Duplicate message prevention Time-window filtering (15 seconds) to process only recent messages Continuous polling with 10-second intervals Setup Instructions Prerequisites n8n Instance: Ensure you have n8n installed and running Telegram Bot: Create a new bot via @BotFather on Telegram Telegram Channel: Create or have admin access to a Telegram channel Linux Environment: The workflow requires Linux for command execution Configuration Steps Obtain Telegram Bot Token Open Telegram and search for @BotFather Send /newbot and follow the prompts Save the bot token provided (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz) Get Channel ID Add your bot as an administrator to your Telegram channel Send a test message to the channel Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Look for "chat":{"id":-100XXXXXXXXXX} - this is your channel ID Configure the Workflow Import the workflow JSON into n8n Open the Config node Replace your_telegram_token with your actual bot token Replace your_telegram_channel_id with your channel ID Save the changes Set Up Telegram Credentials in n8n Navigate to the Send Telegram Text Response node Click on the credentials field Create new Telegram credentials using your bot token Apply the same credentials to Send Telegram Image Response node System Requirements The workflow automatically installs required packages: util-linux-misc (for script command) curl (for downloading TGPT) TGPT binary (downloaded automatically from GitHub) Activation Save all configuration changes Toggle the workflow to Active status The bot will start polling every 10 seconds How to Use Sending Commands to the Bot Text Generation To generate a text response, send a message to your Telegram channel with the following format: amYour prompt here Example: amExplain quantum computing in simple terms The bot will: Remove the am prefix Send the prompt to TGPT with GPT-4 model Generate a response with temperature 0.3 (more focused/deterministic) Reply with the generated text in the channel Image Generation To generate an image, send a message with this format: amiYour image description here Example: amiA futuristic city with flying cars at sunset The bot will: Remove the ami prefix Use TGPT to generate an image (1920x1080 resolution) Save the image temporarily Send the generated image to the channel Important Usage Notes Response Time The bot checks for new messages every 10 seconds Messages older than 15 seconds are ignored Expect a delay of 10-30 seconds for responses depending on generation time Message Processing Only messages from the configured channel are processed The bot maintains a list of processed message IDs to avoid duplicates Maximum of 15 messages are retrieved per polling cycle Limitations Text generation uses temperature 0.3 (less creative, more accurate) Image generation uses temperature 0.7 (more creative) Images are generated at 1920x1080 resolution The bot requires continuous n8n execution Troubleshooting Bot Not Responding Verify the workflow is active Check that the bot is an admin in the channel Confirm the channel ID is correct (negative number for
channels) Ensure messages start with exact prefixes: am or ami Duplicate Responses The workflow includes duplicate prevention If issues persist, restart the workflow to clear the processed IDs cache Missing Dependencies The workflow automatically downloads TGPT on first run If errors occur, check the Execute nodes' output for installation issues Performance Issues Consider increasing the polling interval if server resources are limited Monitor the n8n execution logs for timeout errors Advanced Configuration Modify Polling Interval Edit the Schedule node to change the 10-second interval Adjust Time Window In the Process Offset node, modify timeWindowSeconds variable (default: 15) Change AI Model Parameters Text generation: Edit --temperature "0.3" in Execute - Text node Image generation: Edit --temperature "0.7" in Execute - Image node Both use --model "gpt-4" by default Customize Image Dimensions In Execute - Image node, modify: --height=1080 --width=1920 Security Considerations Keep your bot token private Use private channels to prevent unauthorized access Regularly monitor bot activity through Telegram's BotFather Consider implementing rate limiting for production use Maintenance Regularly check n8n logs for errors Update TGPT version URL in Execute nodes when new versions are released Clear /tmp/ directory periodically to remove temporary files Monitor disk space for image generation temporary files
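For reference, the two TGPT invocations wrapped by the Execute nodes look roughly like the sketch below; the binary location and the --img flag are assumptions, while the model, temperature, and dimension flags are the ones referenced under Advanced Configuration.

```bash
# Text generation (the "am" prefix is stripped by the workflow before this point)
tgpt --model "gpt-4" --temperature "0.3" "Explain quantum computing in simple terms"

# Image generation (the "ami" prefix is stripped); --img is an assumption,
# dimensions match the defaults described above
tgpt --img --temperature "0.7" --height=1080 --width=1920 \
  "A futuristic city with flying cars at sunset"
```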
Kubernetes deployment & pod monitoring with Telegram alerts
SETUP INSTRUCTIONS Configure Kubeconfig Open the "Kubeconfig Setup" node Paste your entire kubeconfig file content into the kubeconfigContent variable Set your target namespace in the namespace variable (default: 'production') Example kubeconfig format:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd...
    server: https://your-cluster.example.com:6443
  name: your-cluster
contexts:
- context:
    cluster: your-cluster
    user: your-user
  name: your-context
current-context: your-context
users:
- name: your-user
  user:
    token: eyJhbGciOiJSUzI1...
```

Telegram Configuration Create a Telegram bot via @BotFather Get your bot token and add it as a credential in n8n (Telegram API) Find your chat ID: Message your bot Visit: https://api.telegram.org/bot<YourBotToken>/getUpdates Look for "chat":{"id":...} Open the "Send Telegram Alert" node Replace YOUR_TELEGRAM_CHAT_ID with your actual chat ID Select your Telegram API credential Schedule Configuration Open the "Schedule Trigger" node Default: runs every 1 minute Adjust the interval based on your monitoring needs: Every 5 minutes: Change field to minutes and set minutesInterval to 5 Every hour: Change field to hours and set hoursInterval to 1 Cron expression: Use custom cron schedule kubectl Installation The workflow automatically downloads kubectl (v1.34.0) during execution No pre-installation required on the n8n host kubectl is downloaded and used temporarily for each execution HOW IT WORKS Workflow Steps Schedule Trigger Runs automatically based on configured interval Initiates the monitoring cycle Kubeconfig Setup Loads the kubeconfig and namespace configuration Passes credentials to kubectl commands Parallel Data Collection Get Pods: Fetches all pods from the specified namespace Get Deployments: Fetches all deployments from the specified namespace Both commands run in parallel for efficiency Process & Generate Report Parses pod and deployment data Groups pods by their owner (Deployment, DaemonSet, StatefulSet, or Node) Calculates readiness statistics for each workload Detects alerts: workloads with 0 ready pods Generates a comprehensive Markdown report including: Deployments with replica counts and pod details Other workloads (DaemonSets, StatefulSets, Static Pods) Standalone pods (if any) Pod-level details: status, node, restart count Has Alerts?
Checks if any workloads have 0 ready pods Routes to appropriate action Send Telegram Alert (if alerts exist) Sends formatted alert message to Telegram Includes: Namespace information List of problematic workloads Full status report Save Report Saves the Markdown report to a file Filename format: k8s-report-YYYY-MM-DD-HHmmss.md Always executes, regardless of alert status Security Features Temporary kubectl: Downloaded and used only during execution Temporary kubeconfig: Written to /tmp/kubeconfig-<random>.yaml Automatic cleanup: Kubeconfig file is deleted after each kubectl command No persistent credentials: Nothing stored on disk between executions Alert Logic Alerts are triggered when any workload has zero ready pods: Deployments with readyReplicas < 1 DaemonSets with numberReady < 1 StatefulSets with readyReplicas < 1 Static Pods (Node-owned) with no ready instances Report Sections Deployments: All Deployment-managed pods (via ReplicaSets) Other Workloads: DaemonSets, StatefulSets, and Static Pods (kube-system components) Standalone Pods: Pods without recognized owners (rare) Alerts: Summary of workloads requiring attention KEY FEATURES Automatic kubectl management - No pre-installation needed Multi-workload support - Deployments, DaemonSets, StatefulSets, Static Pods Smart pod grouping - Uses Kubernetes ownerReferences Conditional alerting - Only notifies when issues detected Detailed reporting - Pod-level status, node placement, restart counts Secure credential handling - Temporary files, automatic cleanup Markdown format - Easy to read and store TROUBLESHOOTING Issue: "Cannot read properties of undefined" Ensure both "Get Pods" and "Get Deployments" nodes execute successfully Check that kubectl can access your cluster with the provided kubeconfig Issue: No alerts when there should be Verify the namespace contains deployments or workloads Check that pods are actually not ready (use kubectl get pods -n <namespace>) Issue: Telegram message not sent Verify Telegram API credential is configured correctly Confirm chat ID is correct and the bot has permission to message you Check that the bot was started (send /start to the bot) Issue: kubectl download fails Check internet connectivity from n8n host Verify access to dl.k8s.io domain Consider pre-installing kubectl on the host and removing the download commands CUSTOMIZATION Change Alert Threshold Edit the Process & Generate Report node to change when alerts trigger: javascript // Change from "< 1" to your desired threshold if (readyReplicas < 2) { // Alert if less than 2 ready pods alerts.push({...}); } Monitor Multiple Namespaces Duplicate the workflow for each namespace Or modify "Kubeconfig Setup" to loop through multiple namespaces Custom Report Format Edit the markdown generation in Process & Generate Report node to customize: Section order Information displayed Formatting style Additional Notification Channels Add nodes after "Has Alerts?" to send notifications via: Email (SMTP node) Slack (Slack node) Discord (Discord node) Webhook (HTTP Request node)
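The execute-command pattern described above (temporary kubectl, temporary kubeconfig, immediate cleanup) boils down to something like this sketch; file names and the namespace are placeholders:

```bash
# Download a temporary kubectl (version as noted above) and make it executable
curl -LO "https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl"
chmod +x kubectl

# Write the kubeconfig from the "Kubeconfig Setup" node to a temporary file
KUBECONFIG_FILE="/tmp/kubeconfig-$RANDOM.yaml"
printf '%s\n' "$KUBECONFIG_CONTENT" > "$KUBECONFIG_FILE"

# Collect pods and deployments for the monitored namespace
./kubectl --kubeconfig "$KUBECONFIG_FILE" get pods -n production -o json
./kubectl --kubeconfig "$KUBECONFIG_FILE" get deployments -n production -o json

# Automatic cleanup: no credentials remain on disk
rm -f "$KUBECONFIG_FILE"
```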
Automated Kubernetes testing with Robot Framework, ArgoCD & with KinD lifecycle
Overview This n8n workflow provides automated CI/CD testing for Kubernetes applications using KinD (Kubernetes in Docker). It creates temporary infrastructure, runs tests, and cleans up everything automatically. --- Three-Phase Lifecycle INIT Phase - Infrastructure Setup Installs dependencies (sshpass, Docker, KinD) Creates KinD cluster Installs Helm and Nginx Ingress Installs HAProxy for port forwarding Deploys ArgoCD Applies ApplicationSet TEST Phase - Automated Testing Downloads Robot Framework test script from GitLab Installs Robot Framework and Browser library Executes automated browser tests Packages test results Sends results via Telegram DESTROY Phase - Complete Cleanup Removes HAProxy Deletes KinD cluster Uninstalls KinD Uninstalls Docker Sends completion notification --- Execution Modes Full Pipeline Mode (progress_only = false) > Automatically progresses through all phases: INIT → TEST → DESTROY Single Phase Mode (progress_only = true) > Executes only the specified phase and stops --- Prerequisites Local Environment (n8n Host) n8n instance version 1.0 or higher Community node n8n-nodes-robotframework installed Network access to target host and GitLab Minimum 4 GB RAM, 20 GB disk space Remote Target Host Linux server (Ubuntu, Debian, CentOS, Fedora, or Alpine) SSH access with sudo privileges Minimum 8 GB RAM (16 GB recommended) 20 GB free disk space Open ports: 22, 80, 60080, 60443, 56443 External Services GitLab account with OAuth2 application Repository with test files (test.robot, config.yaml, demo-applicationSet.yaml) Telegram Bot for notifications Telegram Chat ID --- Setup Instructions Step 1: Install Community Node In n8n web interface, navigate to Settings → Community Nodes Install n8n-nodes-robotframework Restart n8n if prompted Step 2: Configure GitLab OAuth2 Create GitLab OAuth2 Application Log in to GitLab Navigate to User Settings → Applications Create new application with redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback Grant scopes: read_api, read_repository, read_user Copy Application ID and Secret Configure in n8n Create new GitLab OAuth2 API credential Enter GitLab server URL, Client ID, and Secret Connect and authorize Step 3: Prepare GitLab Repository Create repository structure:

```
your-repo/
├── test.robot
├── config.yaml
├── demo-applicationSet.yaml
└── .gitlab-ci.yml
```

Upload your: Robot Framework test script KinD cluster configuration ArgoCD ApplicationSet manifest Step 4: Configure Telegram Bot Create Bot Open Telegram, search for @BotFather Send /newbot command Save the API token Get Chat ID For personal chat: Send message to your bot Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates Copy the chat ID (positive number) For group chat: Add bot to group Send message mentioning the bot Visit getUpdates endpoint Copy group chat ID (negative number) Configure in n8n Create Telegram API credential Enter bot token Save credential Step 5: Prepare Target Host Verify SSH access: Test connection: ssh -p <port> <username>@<host_ip> Verify sudo: sudo -v The workflow will automatically install dependencies.
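A quick pre-flight check of the target host before the INIT phase can save a failed run; a minimal sketch (host, port, and user are placeholders matching the parameter table in Step 6):

```bash
TARGET_HOST=192.168.1.100
TARGET_PORT=22
TARGET_USER=ubuntu

# SSH reachability and sudo rights
ssh -p "$TARGET_PORT" "$TARGET_USER@$TARGET_HOST" 'sudo -v && echo "sudo OK"'

# Free memory and disk (the workflow expects at least 8 GB RAM and 20 GB free disk)
ssh -p "$TARGET_PORT" "$TARGET_USER@$TARGET_HOST" 'free -h; df -h /'
```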
Step 6: Import and Configure Workflow Import Workflow Copy workflow JSON In n8n, click Workflows → Import from File/URL Import the JSON Configure Parameters Open Set Parameters node and update:

| Parameter | Description | Example |
|-----------|-------------|---------|
| target_host | IP address of remote host | 192.168.1.100 |
| target_port | SSH port | 22 |
| target_user | SSH username | ubuntu |
| target_password | SSH password | yourpassword |
| progress | Starting phase | INIT, TEST, or DESTROY |
| progress_only | Execution mode | true or false |
| KIND_CONFIG | Path to config.yaml | config.yaml |
| ROBOT_SCRIPT | Path to test.robot | test.robot |
| ARGOCD_APPSET | Path to ApplicationSet | demo-applicationSet.yaml |

> Security: Use n8n credentials or environment variables instead of storing passwords in the workflow.

Configure GitLab Nodes For each of the three GitLab nodes: Set Owner (username or organization) Set Repository name Set File Path (uses parameter from Set Parameters) Set Reference (branch: main or master) Select Credentials (GitLab OAuth2) Configure Telegram Nodes Send ROBOT Script Export Pack node: Set Chat ID Select Credentials Process Finish Report node: Update chat ID in command Step 7: Test and Execute Test individual components first Run full workflow Monitor execution (30-60 minutes total) --- How to Use Execution Examples Complete Testing Pipeline progress = "INIT" progress_only = "false" Flow: INIT → TEST → DESTROY Setup Infrastructure Only progress = "INIT" progress_only = "true" Flow: INIT → Stop Test Existing Infrastructure progress = "TEST" progress_only = "false" Flow: TEST → DESTROY Cleanup Only progress = "DESTROY" Flow: DESTROY → Complete Trigger Methods Manual Execution Open workflow in n8n Set parameters Click Execute Workflow Scheduled Execution Open Schedule Trigger node Configure time (default: 1 AM daily) Ensure workflow is Active Webhook Trigger Configure webhook in GitLab repository Add webhook URL to GitLab CI Monitoring Execution In n8n Interface: View progress in Executions tab Watch node-by-node execution Check output details Via Telegram: Receive test results after TEST phase Receive completion notification after DESTROY phase Execution Timeline:

| Phase | Duration |
|-------|----------|
| INIT | 15-25 minutes |
| TEST | 5-10 minutes |
| DESTROY | 5-10 minutes |

Understanding Test Results After TEST phase, receive testing-export-pack.tar.gz via Telegram containing: log.html - Detailed test execution log report.html - Test summary report output.xml - Machine-readable results screenshots/ - Browser screenshots To view: Download .tar.gz from Telegram Extract: tar -xzf testing-export-pack.tar.gz Open report.html for summary Open log.html for detailed steps Success indicators: All tests marked PASS Screenshots show expected UI states No error messages in logs Failure indicators: Tests marked FAIL Error messages in logs Unexpected UI states in screenshots --- Configuration Files test.robot Robot Framework test script structure: Uses Browser library Connects to http://autotest.innersite Logs in with autotest/autotest Takes screenshots Runs in headless Chromium config.yaml KinD cluster configuration: 1 control-plane node 1 worker node Port mappings: 60080 (HTTP), 60443 (HTTPS), 56443 (API) Kubernetes version: v1.30.2 demo-applicationSet.yaml ArgoCD Application manifest: Points to Git repository Automatic sync enabled Deploys to default namespace .gitlab-ci.yml Triggers n8n workflow on commits: Installs curl Sends POST request to webhook
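The POST that the .gitlab-ci.yml job sends can be reproduced manually with curl; a minimal sketch (the webhook URL and payload fields are placeholders for the webhook exposed by your n8n instance):

```bash
# Trigger the pipeline from a CI job or from a shell.
curl -X POST "https://your-n8n-instance.com/webhook/kind-test-pipeline" \
  -H "Content-Type: application/json" \
  -d '{"progress": "INIT", "progress_only": "false"}'
```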
--- Troubleshooting SSH Permission Denied Symptoms: Error: Permission denied (publickey,password) Solutions: Verify password is correct Check SSH authentication method Ensure user has sudo privileges Use SSH keys instead of passwords Docker Installation Fails Symptoms: Error: Package docker-ce is not available Solutions: Check OS version compatibility Verify network connectivity Manually add Docker repository KinD Cluster Creation Timeout Symptoms: Error: Failed to create cluster: timed out Solutions: Check available resources (RAM/CPU/disk) Verify Docker daemon status Pre-pull images Increase timeout ArgoCD Not Accessible Symptoms: Error: Failed to connect to autotest.innersite Solutions: Check HAProxy status: systemctl status haproxy Verify /etc/hosts entry Check Ingress: kubectl get ingress -n argocd Test port forwarding: curl http://127.0.0.1:60080 Robot Framework Tests Fail Symptoms: Error: Chrome failed to start Solutions: Verify Chromium installation Check Browser library: rfbrowser show-trace Ensure correct executablePath in test.robot Install missing dependencies Telegram Notification Not Received Symptoms: Workflow completes but no message Solutions: Verify Chat ID Test Telegram API manually Check bot status Re-add bot to group Workflow Hangs Symptoms: Node shows "Executing..." indefinitely Solutions: Check n8n logs Test SSH connection manually Verify target host status Add timeouts to commands --- Best Practices Development Workflow Test locally first Run Robot Framework tests on local machine Verify test script syntax Version control Keep all files in Git Use branches for experiments Tag stable versions Incremental changes Make small testable changes Test each change separately Backup data Export workflow regularly Save test results Store credentials securely Production Deployment Separate environments Dev: Frequent testing Staging: Pre-production validation Production: Stable scheduled runs Monitoring Set up execution alerts Monitor host resources Track success/failure rates Disaster recovery Document cleanup procedures Keep backup host ready Test restoration process Security Use SSH keys Rotate credentials quarterly Implement network segmentation Maintenance Schedule

| Frequency | Tasks |
|-----------|-------|
| Daily | Review logs, check notifications |
| Weekly | Review failures, check disk space |
| Monthly | Update dependencies, test recovery |
| Quarterly | Rotate credentials, security audit |

--- Advanced Topics Custom Configurations Multi-node clusters: Add more worker nodes for production-like environments Configure resource limits Add custom port mappings Advanced testing: Load testing with multiple iterations Integration testing for full deployment pipeline Chaos engineering with failure injection Integration with Other Tools Monitoring: Prometheus for metrics collection Grafana for visualization Logging: ELK stack for log aggregation Custom dashboards CI/CD Integration: Jenkins pipelines GitHub Actions Custom webhooks --- Resource Requirements Minimum

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 2 | 4 GB | 20 GB |
| Target Host | 4 | 8 GB | 20 GB |

Recommended

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 4 | 8 GB | 50 GB |
| Target Host | 8 | 16 GB | 50 GB |

--- Useful Commands KinD List clusters: kind get clusters Get kubeconfig: kind get kubeconfig --name automate-tst Export logs: kind export logs --name automate-tst Docker List containers: docker ps -a --filter "name=automate-tst" Enter control
plane: docker exec -it automate-tst-control-plane bash View logs: docker logs automate-tst-control-plane Kubernetes Get all resources: kubectl get all -A Describe pod: kubectl describe pod -n argocd <pod-name> View logs: kubectl logs -n argocd <pod-name> --follow Port forward: kubectl port-forward -n argocd svc/argocd-server 8080:80 Robot Framework Run tests: robot test.robot Run specific test: robot -t "Test Name" test.robot Generate report: robot --outputdir results test.robot --- Additional Resources Official Documentation n8n: https://docs.n8n.io KinD: https://kind.sigs.k8s.io ArgoCD: https://argo-cd.readthedocs.io Robot Framework: https://robotframework.org Browser Library: https://marketsquare.github.io/robotframework-browser Community n8n Community: https://community.n8n.io Kubernetes Slack: https://kubernetes.slack.com ArgoCD Slack: https://argoproj.github.io/community/join-slack Robot Framework Forum: https://forum.robotframework.org Related Projects k3s: Lightweight Kubernetes distribution minikube: Local Kubernetes alternative Flux CD: Alternative GitOps tool Playwright: Alternative browser automation
Automated rsync backup with password auth & alert system
Automated Rsync Backup with Password Auth & Alert System Overview This n8n workflow provides automated rsync backup capabilities between servers using password authentication. It automatically installs required dependencies, performs the backup operation from a source server to a target server, and sends status notifications via Telegram and SMS. Features Password-based SSH authentication (no key management required) Automatic dependency installation (sshpass, rsync) Cross-platform support (Ubuntu/Debian, RHEL/CentOS, Alpine) Source-to-target backup execution Multi-channel notifications (Telegram and SMS) Detailed success/failure reporting Manual trigger for on-demand backups Setup Instructions Prerequisites n8n Instance: Running n8n with Linux environment Server Access: SSH access to both source and target servers Telegram Bot: Created via @BotFather (optional) Textbelt API Key: For SMS notifications (optional) Network: Connectivity between n8n, source, and target servers Server Requirements Source Server: SSH access enabled User with sudo privileges (for package installation) Read access to source folder Target Server: SSH access enabled Write access to target folder Sufficient storage space Configuration Steps Server Parameters Configuration Open the Server Parameters node and configure: Source Server Settings: source_host: IP address or hostname of source server source_port: SSH port (typically 22) source_user: Username for source server source_password: Password for source user source_folder: Full path to folder to backup (e.g., /home/user/data) Target Server Settings: target_host: IP address or hostname of target server target_port: SSH port (typically 22) target_user: Username for target server target_password: Password for target user target_folder: Full path to destination folder (e.g., /backup/data) Rsync Options: rsync_options: Default is -avz --delete -a: Archive mode (preserves permissions, timestamps, etc.) 
-v: Verbose output -z: Compression during transfer --delete: Remove files from target that don't exist in source Notification Setup (Optional) Telegram Configuration: Create bot via @BotFather on Telegram Get bot token (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz) Create notification channel Add bot as administrator Get channel ID: Send test message to channel Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Find "chat":{"id":-100XXXXXXXXXX} SMS Configuration: Register at https://textbelt.com Purchase credits Obtain API key Update Notification Node: Edit Process Finish Report --- Telegram & SMS node: Replace YOUR-TELEGRAM-BOT-TOKEN with bot token Replace YOUR-TELEGRAM-CHANNEL-ID with channel ID Replace +36301234567 with target phone number(s) Replace YOUR-TEXTBELT-API-KEY with Textbelt key Security Considerations Password Storage: Consider using n8n credentials for sensitive passwords Avoid hardcoding passwords in workflow Use environment variables where possible SSH Security: Workflow uses StrictHostKeyChecking=no for automation Consider adding known hosts manually for production Review firewall rules between servers Testing Start with small test folder Verify network connectivity: ping source_host and ping target_host Test SSH access manually first Run workflow with test data Verify backup completion on target server How to Use Automatic Operation Once activated, the workflow runs automatically: Frequency: Every day at midnight Manual Execution Open the workflow in n8n Click on Manual Trigger node Click "Execute Workflow" Monitor execution progress Scheduled Execution To automate backups: Replace Manual Trigger with Schedule Trigger node Configure schedule (e.g., daily at 2 AM) Save and activate workflow Workflow Process Step 1: Dependency Check The workflow automatically: Checks if sshpass is installed locally Installs if missing (supports apt, yum, dnf, apk) Checks sshpass on source server Installs on source if needed (with sudo) Step 2: Backup Execution Connects to source server via SSH Executes rsync command from source to target Uses password authentication for both connections Transfers data directly between servers (not through n8n) Step 3: Status Reporting Success Message Format: [Timestamp] -- SUCCESS :: source_host:/path -> target_host:/path :: [rsync output] Failure Message Format: [Timestamp] -- ERROR :: source_host -> target_host :: [exit code] -- [error message] Rsync Options Guide Common Options: -a: Archive mode (recommended) -v: Verbose output for monitoring -z: Compression (useful for slow networks) --delete: Mirror source (removes extra files from target) --exclude: Skip specific files/folders --dry-run: Test without actual transfer --progress: Show transfer progress --bwlimit: Limit bandwidth usage Example Configurations:

```bash
# Basic backup
-avz
# Mirror with deletion
-avz --delete
# Exclude temporary files
-avz --exclude='*.tmp' --exclude='*.cache'
# Bandwidth limited (1MB/s)
-avz --bwlimit=1000
# Dry run test
-avzn --delete
```

Monitoring Execution Logs Check n8n Executions tab Review stdout for rsync details Check stderr for error messages Verification After backup: SSH to target server Check folder size: du -sh /target/folder Verify file count: find /target/folder -type f | wc -l Compare with source: ls -la /target/folder Troubleshooting Connection Issues "Connection refused" error: Verify SSH port is correct Check firewall rules Ensure SSH service is running "Permission denied" error: Verify username/password Check user has required permissions Ensure sudo works (for
installation) Installation Failures "Unsupported package manager": Workflow supports: apt, yum, dnf, apk Manual installation may be required for others "sudo: password required": User needs passwordless sudo or Modify installation commands Rsync Errors "rsync error: some files/attrs were not transferred": Usually permission issues Check file ownership Review excluded files "No space left on device": Check target server storage Clean up old backups Consider compression options Notification Issues No Telegram message: Verify bot token and channel ID Check bot is admin in channel Test with curl command manually SMS not received: Check Textbelt credit balance Verify phone number format Review API key validity Best Practices Backup Strategy Test First: Always test with small datasets Schedule Wisely: Run during low-traffic periods Monitor Space: Ensure adequate storage on target Verify Backups: Regularly test restore procedures Rotate Backups: Implement retention policies Security Use Strong Passwords: Complex passwords for all accounts Limit Permissions: Use dedicated backup users Network Security: Consider VPN for internet transfers Audit Access: Log all backup operations Encrypt Sensitive Data: Consider rsync with encryption Performance Compression: Use -z for slow networks Bandwidth Limits: Prevent network saturation Incremental Backups: Rsync only transfers changes Parallel Transfers: Consider multiple workflows for different folders Off-Peak Hours: Schedule during quiet periods Advanced Configuration Multiple Backup Jobs Create separate workflows for: Different server pairs Various schedules Distinct retention policies Backup Rotation Implement versioning:

```bash
# Add timestamp to target folder
target_folder="/backup/data_$(date +%Y%m%d)"
```

Pre/Post Scripts Add nodes for: Database dumps before backup Service stops/starts Cleanup operations Verification scripts Error Handling Enhance workflow with: Retry mechanisms Fallback servers Detailed error logging Escalation procedures Maintenance Regular Tasks Daily: Check backup completion Weekly: Verify backup integrity Monthly: Test restore procedure Quarterly: Review and optimize rsync options Annually: Audit security settings Monitoring Metrics Track: Backup duration Transfer size Success/failure rate Storage utilization Network bandwidth usage Recovery Procedures Restore from Backup To restore files:

```bash
# Reverse the rsync direction
rsync -avz target_server:/backup/folder/ source_server:/restore/location/
```

Disaster Recovery Document server configurations Maintain backup access credentials Test restore procedures regularly Keep workflow exports as backup Support Resources Rsync documentation: https://rsync.samba.org/ n8n community: https://community.n8n.io/ SSH troubleshooting guides Network diagnostics tools
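For reference, the transfer step the workflow executes on the source server boils down to an sshpass-wrapped rsync push to the target; a minimal sketch (all values correspond to the Server Parameters node, and the exact quoting used inside the workflow may differ):

```bash
# Run from the n8n host: connect to the source server with password auth,
# then push source_folder to the target server using password auth again.
sshpass -p "$source_password" ssh -p "$source_port" -o StrictHostKeyChecking=no \
  "$source_user@$source_host" \
  "sshpass -p '$target_password' rsync -avz --delete \
     -e 'ssh -p $target_port -o StrictHostKeyChecking=no' \
     '$source_folder/' '$target_user@$target_host:$target_folder/'"
```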
Monitor PKI certificates & CRLs for expiration with Telegram & SMS alerts
PKI Certificate & CRL Monitor - Auto Expiration Alert System Overview This n8n workflow provides automated monitoring of Public Key Infrastructure (PKI) components including CA certificates, Certificate Revocation Lists (CRLs), and associated web services. It extracts certificate information from a TSL (Trusted Service List) -- the Hungarian list is used as the default example in the workflow -- monitors expiration dates, and sends alerts via Telegram and SMS when critical thresholds are reached. Features Automated extraction of certificate URLs from TSL XML CA certificate expiration monitoring CRL expiration tracking Website availability monitoring with retry mechanism Multi-channel alerting (Telegram and SMS) Scheduled execution every 12 hours 17-hour warning threshold for expirations Setup Instructions Prerequisites n8n Instance: Running n8n installation with Linux environment Telegram Bot: Created via @BotFather Textbelt API Key: For SMS notifications (optional) Network Access: To reach TSL source and certificate URLs Linux Tools: OpenSSL, curl, libxml2-utils, jq (auto-installed) Configuration Steps Telegram Setup Create Telegram Bot: Open Telegram and search for @BotFather Send /newbot and follow prompts Save the bot token (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz) Create Alert Channel: Create a new Telegram channel for alerts Add your bot as administrator Get channel ID: Send a test message to the channel Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Find "chat":{"id":-100XXXXXXXXXX} - this is your channel ID SMS Setup (Optional) Textbelt Configuration: Register at https://textbelt.com Purchase credits and obtain API key Note: Free tier allows 1 SMS/day for testing Configure Alert Nodes Update these nodes with your credentials: CRL Alert Node: Open CRL Alert --- Telegram & SMS node Replace YOUR-TELEGRAM-BOT-TOKEN with your bot token Replace YOUR-TELEGRAM-CHANNEL-ID with your channel ID Replace +36301234567 with target phone number(s) Replace YOUR-TEXTBELT-API-KEY with your Textbelt key CA Alert Node: Open CA Alert --- Telegram & SMS node Apply same replacements as above Website Down Alert Node: Open Send Website Down - Telegram & SMS node Apply same replacements as above TSL Source Configuration The workflow defaults to the Hungarian TSL: URL: http://www.nmhh.hu/tl/pub/HU_TL.xml To change it, edit the Collect Checking URL list node Trust list references: https://ec.europa.eu/tools/lotl/eu-lotl.xml (to find other TSLs if you want to change the default), and https://www.etsi.org/deliver/etsi_ts/119600_119699/119615/01.02.01_60/ts_119615v010201p.pdf (the technical specification of the Trust Lists) Threshold Configuration Default warning threshold: 17 hours before expiration To modify CRL threshold: Edit nextUpdate - TimeFilter node To modify CA threshold: Edit nextUpdate - TimeFilter1 node Change value in condition: if (diffHours < 17) Activation Save all configuration changes Test with Execute With Manual Start trigger Verify alerts are received Toggle workflow to Active status for scheduled operation How to Use Automatic Operation Once activated, the workflow runs automatically: Frequency: Every 12 hours Process: Downloads TSL XML Extracts all certificate URLs Checks each URL type (CRL, CA, or other) Validates expiration dates Sends alerts for critical items Manual Execution For immediate checks: Open the workflow Click Execute With Manual Start node Click "Execute Node" Monitor execution progress Understanding Alerts CRL Expiration Alert Message Format: ALERT!
with [Issuer CN] !!!CRL EXPIRATION!!! Will be under 17 hour ([Next Update Time])! Last updated: [Last Update Time] Trigger Conditions: CRL expires in less than 17 hours CRL download successful but expiration imminent CA Certificate Alert Message Format: ALERT!/EXPIRED! with [Subject CN] !!!CA EXPIRATION PROBLEM!!! The expiration time: ([Not After Date]) Last updated: ([Not Before Date]) Trigger Conditions: Certificate expires in less than 17 hours (ALERT!) Certificate already expired (EXPIRED!) Website Down Alert Message Format: ALERT! The [URL] !!!NOT AVAILABLE!!! Service outage probable! Intervention required! Trigger Conditions: Initial HTTP request fails Retry after wait period also fails HTTP status code not 200 Monitoring Dashboard Execution History Navigate to n8n Executions tab Filter by workflow name Review successful/failed runs Alert History Check Telegram channel for: Alert timestamps Affected certificates/services Expiration details Troubleshooting No Alerts Received Check Telegram Bot: Verify bot is admin in channel Test with manual message via API Confirm channel ID is correct Check Workflow Execution: Review execution logs in n8n Look for error nodes (red indicators) Verify TSL URL is accessible False Positives Verify system time is correct Check timezone settings Review threshold values Missing Certificates Some certificates may not have URLs TSL may be temporarily unavailable Check XML parsing in logs Performance Issues Slow Execution: Large TSL files take time to parse Network latency affects URL checks Consider increasing timeout values Memory Issues: Workflow processes many URLs sequentially Monitor n8n server resources Consider increasing batch intervals Advanced Configuration Modify Check Frequency Edit Execute With Scheduled Start node: Change interval type (hours/days/weeks) Adjust interval value Consider peak/off-peak scheduling Add Custom TSL Sources In Collect Checking URL list node: bash URL="https://your-tsl-source.com/tsl.xml" Customize Alert Messages Edit alert nodes to modify message templates: Add organization name Include escalation contacts Add remediation instructions Filter Certificate Types Modify URL detection patterns: Is this CRL? node: Adjust CRL detection Is this CA? 
node: Adjust CA detection Add new patterns as needed Adjust Retry Logic Wait B4 Retry node: Default: Immediate retry Can add delay (seconds/minutes) Useful for transient network issues Maintenance Regular Tasks Weekly: Review alert frequency Monthly: Validate phone numbers/channels Quarterly: Update TSL source URLs Annually: Review threshold values Log Management Clear old execution logs periodically Archive alert history from Telegram Document false positives for tuning Updates Keep n8n updated for security patches Monitor OpenSSL versions for compatibility Update notification service APIs as needed Security Considerations Store API keys in n8n credentials manager Use environment variables for sensitive data Restrict workflow edit access Monitor for unauthorized changes Regularly rotate API keys Use HTTPS for TSL sources when available Compliance Notes Ensure monitoring aligns with PKI policies Document alert response procedures Maintain audit trail of certificate issues Consider regulatory requirements for uptime Integration Options Connect to ticketing systems for alert tracking Add database logging for compliance Integrate with monitoring dashboards Create escalation workflows for critical alerts Best Practices Test alerts monthly to ensure delivery Maintain multiple notification channels Document response procedures for each alert type Set up redundant monitoring if critical Review and tune thresholds based on operational needs Keep contact lists updated Consider time zones for global operations
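When investigating a false positive or a missing alert, the underlying checks can be reproduced by hand; a minimal sketch of the kind of OpenSSL calls involved (the URL is a placeholder, and real CRLs and certificates may be DER- or PEM-encoded, so the -inform value may need adjusting):

```bash
URL="http://example-ca.example/crl/example.crl"

# Download the item; a failed download corresponds to the website-down alert path
curl -fsS "$URL" -o /tmp/item || echo "NOT AVAILABLE -- service outage probable"

# CRL: read lastUpdate/nextUpdate and compare nextUpdate against the 17-hour threshold
openssl crl -in /tmp/item -inform DER -noout -lastupdate -nextupdate

# CA certificate: read subject and expiration (notAfter)
openssl x509 -in /tmp/item -inform DER -noout -subject -enddate
```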