8 templates found

Create a new card in Trello

No description available.

By tanaypant
3347

Notify on new emails with invoices in Slack

This workflow checks for new emails in a mailbox. If the email body contains the word "invoice", it sends the attachment to Mindee and then posts a message to Slack to let the team know a payment needs to be made. If the value of the invoice is over 1000, it will also email the finance manager.

To use this workflow, you will need to configure the IMAP node to select the correct mailbox, then configure the Mindee node to use your credentials. Once that is done, the Send Email node will need to be configured to use the correct mail server and to send to the correct people. The last thing to configure is the Slack node, which needs your Slack credentials and the channel you want to post the message to.
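For illustration, here is a minimal sketch of the two checks the description mentions (the "invoice" keyword match and the over-1000 routing) as they might look in an n8n Code node. The field names textPlain and totalAmount are assumptions about the IMAP and Mindee outputs, not taken from the template itself.

```javascript
// Hedged sketch: flag items for the Slack and Send Email branches.
// `textPlain` (email body) and `totalAmount` (parsed invoice total) are
// assumed field names -- check the actual IMAP and Mindee node outputs.
const results = [];
for (const item of $input.all()) {
  const body = item.json.textPlain ?? '';
  const total = Number(item.json.totalAmount ?? 0);
  results.push({
    json: {
      ...item.json,
      mentionsInvoice: body.toLowerCase().includes('invoice'), // gate for Mindee/Slack
      notifyFinanceManager: total > 1000,                      // gate for Send Email
    },
  });
}
return results;
```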

By Jonathan
2677

Send a notification to Slack when a new high-quality lead is added to Hubspot

Use Case
When tracking your contacts and leads in Hubspot CRM, every new contact might be a potential customer. To keep an overview, you would normally have to look at every incoming lead manually to identify high-quality leads, prioritize their engagement, and optimize the sales process. This workflow saves that work and does it for you.

What this workflow does
The workflow runs every 5 minutes. On every run, it checks the Hubspot CRM for contacts that were added since the last check. It then checks whether they meet certain criteria (in this case, 5m+ annual revenue) and alerts you in Slack for every match.

Setup
- Add Hubspot and Slack credentials.
- Click on Test workflow.

How to adjust this workflow to your needs
- Change the schedule interval.
- Adjust the criteria used to send alerts (see the sketch below).
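A minimal sketch of the revenue criterion referenced above, written as it might appear in an n8n Code node. The annualrevenue property name follows Hubspot's default naming and is an assumption about this template's configuration.

```javascript
// Hedged sketch of the "high-quality lead" filter. `annualrevenue` is
// Hubspot's default property name, assumed here -- verify it on your portal.
const THRESHOLD = 5_000_000; // 5m annual revenue, per the description

return $input.all().filter((item) => {
  const revenue = Number(item.json.properties?.annualrevenue ?? 0);
  return revenue > THRESHOLD; // only matching contacts trigger the Slack alert
});
```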

By Ricardo Espinozaas
2181

Website & API health monitoring system with HTTP status validation

Website & API Health Monitoring System with HTTP Status Validation

How it works
- Performs HTTP health checks on websites and APIs with automatic health status validation
- Checks HTTP status codes and analyzes JSON responses for common health indicators
- Returns detailed status information including response times and health status
- Implements conditional logic to handle different response scenarios
- Perfect for monitoring dashboards, alerts, and automated health checks

Set up steps
- Deploy the workflow and activate it
- Get the webhook URL from the trigger node
- Configure your monitoring system to call the webhook endpoint
- Send POST requests with target URLs for health monitoring
- Receive JSON responses with health status, HTTP codes, and timestamps

Usage
- Send POST requests to the webhook URL with a target URL parameter
- Optionally configure timeout and status expectations in the request body
- Use with external monitoring tools like Nagios, Zabbix, or custom dashboards
- Set up scheduled monitoring calls for continuous health validation

Example request: send POST with {"url": "https://your-site.com", "timeoutMs": 5000}
Success response returns: {"ok": true, "statusCode": 200, "healthStatus": "ok"}
Failure response returns: {"ok": false, "error": "Health check failed", "statusCode": 503}

Benefits
- Proactive monitoring to identify issues before they impact users
- Detailed diagnostics with comprehensive health data for troubleshooting
- Integration ready: works with existing monitoring and alerting systems
- Customizable timeout settings, expected status codes, and health indicators
- Scalable solution: monitor multiple services with a single workflow endpoint

Use Cases
- E-commerce platforms: monitor payment APIs, inventory systems, user authentication
- Microservices: health validation for distributed service architectures
- API gateways: endpoint monitoring with response time validation
- Database connections: track connectivity and performance metrics
- Third-party integrations: monitor external API dependencies and SLA compliance

Target Audience
- DevOps Engineers implementing production monitoring
- System Administrators managing server uptime
- Site Reliability Engineers building monitoring systems
- Development Teams tracking API health in staging/production
- IT Support Teams for proactive issue detection
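Based on the example request and responses quoted above, a minimal sketch of a monitoring client calling the webhook; the URL path is a placeholder, not the template's actual endpoint.

```javascript
// Hedged sketch of a client for the health-check webhook described above.
// The URL is a placeholder; copy the real one from the workflow's trigger node.
const WEBHOOK_URL = 'https://your-n8n-instance/webhook/health-check';

async function checkHealth(targetUrl) {
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: targetUrl, timeoutMs: 5000 }),
  });
  // Expected shape, per the description:
  // {"ok": true, "statusCode": 200, "healthStatus": "ok"} on success,
  // {"ok": false, "error": "Health check failed", "statusCode": 503} on failure.
  return res.json();
}

checkHealth('https://your-site.com').then(console.log);
```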

By Ibrahim Emre POLAT
892

🛠️ Trello tool MCP server 💪 all 41 operations

Need help? Want access to this workflow + many more paid workflows + live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing all Trello Tool operations to AI agents. Zero configuration needed: all 41 operations pre-built.

⚡ Quick Setup
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every Trello Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders (illustrated below)
• Native Integration: uses the official n8n Trello tool with full error handling

📋 Available Operations (41 total)
Every possible Trello Tool operation is included:

🔧 Attachment (4 operations)
• Create an attachment
• Delete an attachment
• Get an attachment
• Get many attachments

🔧 Board (4 operations)
• Create a board
• Delete a board
• Get a board
• Update a board

🔧 Boardmember (4 operations)
• Add a board member
• Get many board members
• Invite a board member
• Remove a board member

🔧 Card (4 operations)
• Create a card
• Delete a card
• Get a card
• Update a card

🔧 Cardcomment (3 operations)
• Create a card comment
• Delete a card comment
• Update a card comment

🔧 Checklist (9 operations)
• Create a checklist
• Create a checklist item
• Delete a checklist
• Delete a checklist item
• Get a checklist
• Get checklist items
• Get completed checklist items
• Get many checklists
• Update a checklist item

🔧 Label (7 operations)
• Add a label to a card
• Create a label
• Delete a label
• Get a label
• Get many labels
• Remove a label from a card
• Update a label

📝 List (6 operations)
• Archive/unarchive a list
• Create a list
• Get a list
• Get all cards in a list
• Get many lists
• Update a list

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: native Trello API responses with full data structure
Error Handling: built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: add the MCP server URL to its configuration
• Custom AI Apps: use the MCP URL as a tool endpoint
• Other n8n Workflows: call MCP tools from any workflow
• API Integration: direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: every Trello Tool operation available
• Zero Setup: no parameter mapping or configuration needed
• AI-Ready: built-in $fromAI() expressions for all parameters
• Production Ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
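To illustrate the $fromAI() placeholders mentioned above: in n8n, $fromAI(key, description, type) marks a tool parameter for the connected AI agent to fill in at call time. The keys below are hypothetical examples, not values copied from this workflow.

```javascript
// Illustrative n8n tool-parameter expressions (hypothetical keys, not the
// workflow's own). The AI agent supplies each value when it calls the tool.
{{ $fromAI('listId', 'ID of the Trello list to create the card in', 'string') }}
{{ $fromAI('cardName', 'Title of the new card', 'string') }}
{{ $fromAI('cardDescription', 'Longer text for the card body', 'string') }}
```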

By David Ashby
190

Scheduled n8n workflow backups to Google Drive using n8n API

Overview
This template provides an automatic backup solution for all your n8n workflows, saving them directly to Google Drive. It's designed for freelancers, agencies, and businesses that want to keep their automations safe, versioned, and always recoverable.

Why Backups Matter
- Disaster recovery: restore workflows quickly if your instance fails.
- Version control: track workflow changes over time.
- Collaboration: share workflow JSON files easily with teammates.

How it Works
1. Fetches the complete list of workflows from your n8n instance via API (sketched below).
2. Downloads each workflow in JSON format.
3. Converts the data into a file with a unique name (workflow name + ID).
4. Uploads all files to a chosen Google Drive folder.
5. Can be run manually or on an automatic schedule (daily, weekly, etc.).

Requirements
- An active n8n instance with API access enabled
- API credentials for n8n (API key or basic auth)
- A Google account with access to Google Drive
- Google Drive credentials connected in n8n

Setup Instructions
1. Connect your n8n API (authenticate your instance).
2. Connect your Google Drive account.
3. Select or create the Drive folder where backups will be stored.
4. Customize the Schedule Trigger to define backup frequency.
5. Run once to confirm files are stored correctly.

Customization Options
- Frequency → set daily, weekly, or monthly backups.
- File Naming → adjust the filename expression (e.g., {{workflowName}}-{{workflowId}}-{{date}}.json).
- Folder Location → store backups in separate Google Drive folders per project or client.

Target Audience
This template is ideal for:
- Freelancers managing multiple client automations.
- Agencies delivering automation services.
- Teams that rely on n8n for mission-critical workflows.

It reduces risk, saves time, and ensures you never lose your work.

⏱ Estimated setup time: 5–10 minutes.
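As a rough sketch of the first steps above (fetching the workflow list and preparing one JSON file per workflow), here is how the n8n public API call might look. The instance URL is a placeholder, and the file-naming line mirrors the template's name + ID scheme.

```javascript
// Hedged sketch: list workflows via the n8n public API (v1) and prepare
// one backup file per workflow. BASE_URL is a placeholder; create the API
// key in your instance under Settings -> n8n API.
const BASE_URL = 'https://your-n8n-instance';
const API_KEY = process.env.N8N_API_KEY;

async function prepareBackups() {
  const res = await fetch(`${BASE_URL}/api/v1/workflows`, {
    headers: { 'X-N8N-API-KEY': API_KEY },
  });
  if (!res.ok) throw new Error(`n8n API error: ${res.status}`);
  const { data } = await res.json(); // array of workflow objects (pagination via nextCursor omitted)
  return data.map((wf) => ({
    fileName: `${wf.name}-${wf.id}.json`, // workflow name + ID, per the template
    content: JSON.stringify(wf, null, 2),
  }));
}
```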

By Tony Ciencia
183

Automated rsync backup with password auth & alert system

Automated Rsync Backup with Password Auth & Alert System

Overview
This n8n workflow provides automated rsync backup capabilities between servers using password authentication. It automatically installs required dependencies, performs the backup operation from a source server to a target server, and sends status notifications via Telegram and SMS.

Features
- Password-based SSH authentication (no key management required)
- Automatic dependency installation (sshpass, rsync)
- Cross-platform support (Ubuntu/Debian, RHEL/CentOS, Alpine)
- Source-to-target backup execution
- Multi-channel notifications (Telegram and SMS)
- Detailed success/failure reporting
- Manual trigger for on-demand backups

Setup Instructions

Prerequisites
- n8n Instance: running n8n with a Linux environment
- Server Access: SSH access to both source and target servers
- Telegram Bot: created via @BotFather (optional)
- Textbelt API Key: for SMS notifications (optional)
- Network: connectivity between n8n, source, and target servers

Server Requirements
Source Server:
- SSH access enabled
- User with sudo privileges (for package installation)
- Read access to the source folder
Target Server:
- SSH access enabled
- Write access to the target folder
- Sufficient storage space

Configuration Steps

Server Parameters Configuration
Open the Server Parameters node and configure:

Source Server Settings:
- source_host: IP address or hostname of the source server
- source_port: SSH port (typically 22)
- source_user: username for the source server
- source_password: password for the source user
- source_folder: full path to the folder to back up (e.g., /home/user/data)

Target Server Settings:
- target_host: IP address or hostname of the target server
- target_port: SSH port (typically 22)
- target_user: username for the target server
- target_password: password for the target user
- target_folder: full path to the destination folder (e.g., /backup/data)

Rsync Options:
- rsync_options: default is -avz --delete
  - -a: archive mode (preserves permissions, timestamps, etc.)
  - -v: verbose output
  - -z: compression during transfer
  - --delete: remove files from target that don't exist in source

Notification Setup (Optional)

Telegram Configuration:
1. Create a bot via @BotFather on Telegram
2. Get the bot token (format: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz)
3. Create a notification channel
4. Add the bot as an administrator
5. Get the channel ID:
   - Send a test message to the channel
   - Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
   - Find "chat":{"id":-100XXXXXXXXXX}

SMS Configuration:
1. Register at https://textbelt.com
2. Purchase credits
3. Obtain an API key

Update Notification Node:
Edit the Process Finish Report --- Telegram & SMS node:
- Replace YOUR-TELEGRAM-BOT-TOKEN with the bot token
- Replace YOUR-TELEGRAM-CHANNEL-ID with the channel ID
- Replace +36301234567 with the target phone number(s)
- Replace YOUR-TEXTBELT-API-KEY with the Textbelt key

Security Considerations
Password Storage:
- Consider using n8n credentials for sensitive passwords
- Avoid hardcoding passwords in the workflow
- Use environment variables where possible
SSH Security:
- The workflow uses StrictHostKeyChecking=no for automation
- Consider adding known hosts manually for production
- Review firewall rules between servers

Testing
1. Start with a small test folder
2. Verify network connectivity: ping source_host and ping target_host
3. Test SSH access manually first
4. Run the workflow with test data
5. Verify backup completion on the target server

How to Use

Automatic Operation
Once activated, the workflow runs automatically. Frequency: every day at midnight.

Manual Execution
1. Open the workflow in n8n
2. Click on the Manual Trigger node
3. Click "Execute Workflow"
4. Monitor execution progress

Scheduled Execution
To automate backups:
1. Replace the Manual Trigger with a Schedule Trigger node
2. Configure the schedule (e.g., daily at 2 AM)
3. Save and activate the workflow

Workflow Process

Step 1: Dependency Check
The workflow automatically:
- Checks if sshpass is installed locally
- Installs it if missing (supports apt, yum, dnf, apk)
- Checks sshpass on the source server
- Installs it on the source if needed (with sudo)

Step 2: Backup Execution
- Connects to the source server via SSH
- Executes the rsync command from source to target (see the illustrative sketch at the end of this description)
- Uses password authentication for both connections
- Transfers data directly between servers (not through n8n)

Step 3: Status Reporting
Success message format:
[Timestamp] -- SUCCESS :: source_host:/path -> target_host:/path :: [rsync output]
Failure message format:
[Timestamp] -- ERROR :: source_host -> target_host :: [exit code] -- [error message]

Rsync Options Guide

Common Options:
- -a: archive mode (recommended)
- -v: verbose output for monitoring
- -z: compression (useful for slow networks)
- --delete: mirror source (removes extra files from target)
- --exclude: skip specific files/folders
- --dry-run: test without an actual transfer
- --progress: show transfer progress
- --bwlimit: limit bandwidth usage

Example Configurations:

```bash
# Basic backup
-avz
# Mirror with deletion
-avz --delete
# Exclude temporary files
-avz --exclude='*.tmp' --exclude='*.cache'
# Bandwidth limited (1 MB/s)
-avz --bwlimit=1000
# Dry run test
-avzn --delete
```

Monitoring

Execution Logs
- Check the n8n Executions tab
- Review stdout for rsync details
- Check stderr for error messages

Verification
After a backup:
1. SSH to the target server
2. Check folder size: du -sh /target/folder
3. Verify file count: find /target/folder -type f | wc -l
4. Compare with source: ls -la /target/folder

Troubleshooting

Connection Issues
"Connection refused" error:
- Verify the SSH port is correct
- Check firewall rules
- Ensure the SSH service is running
"Permission denied" error:
- Verify username/password
- Check the user has the required permissions
- Ensure sudo works (for installation)

Installation Failures
"Unsupported package manager":
- The workflow supports apt, yum, dnf, and apk
- Manual installation may be required for others
"sudo: password required":
- The user needs passwordless sudo, or modify the installation commands

Rsync Errors
"rsync error: some files/attrs were not transferred":
- Usually permission issues
- Check file ownership
- Review excluded files
"No space left on device":
- Check target server storage
- Clean up old backups
- Consider compression options

Notification Issues
No Telegram message:
- Verify the bot token and channel ID
- Check the bot is an admin in the channel
- Test with a curl command manually
SMS not received:
- Check the Textbelt credit balance
- Verify the phone number format
- Review API key validity

Best Practices

Backup Strategy
- Test first: always test with small datasets
- Schedule wisely: run during low-traffic periods
- Monitor space: ensure adequate storage on the target
- Verify backups: regularly test restore procedures
- Rotate backups: implement retention policies

Security
- Use strong passwords: complex passwords for all accounts
- Limit permissions: use dedicated backup users
- Network security: consider a VPN for internet transfers
- Audit access: log all backup operations
- Encrypt sensitive data: consider rsync with encryption

Performance
- Compression: use -z for slow networks
- Bandwidth limits: prevent network saturation
- Incremental backups: rsync only transfers changes
- Parallel transfers: consider multiple workflows for different folders
- Off-peak hours: schedule during quiet periods

Advanced Configuration

Multiple Backup Jobs
Create separate workflows for:
- Different server pairs
- Various schedules
- Distinct retention policies

Backup Rotation
Implement versioning:

```bash
# Add a timestamp to the target folder
target_folder="/backup/data_$(date +%Y%m%d)"
```

Pre/Post Scripts
Add nodes for:
- Database dumps before backup
- Service stops/starts
- Cleanup operations
- Verification scripts

Error Handling
Enhance the workflow with:
- Retry mechanisms
- Fallback servers
- Detailed error logging
- Escalation procedures

Maintenance

Regular Tasks
- Daily: check backup completion
- Weekly: verify backup integrity
- Monthly: test the restore procedure
- Quarterly: review and optimize rsync options
- Annually: audit security settings

Monitoring Metrics
Track:
- Backup duration
- Transfer size
- Success/failure rate
- Storage utilization
- Network bandwidth usage

Recovery Procedures

Restore from Backup
To restore files:

```bash
# Reverse the rsync direction
rsync -avz target_server:/backup/folder/ source_server:/restore/location/
```

Disaster Recovery
- Document server configurations
- Maintain backup access credentials
- Test restore procedures regularly
- Keep workflow exports as backup

Support Resources
- Rsync documentation: https://rsync.samba.org/
- n8n community: https://community.n8n.io/
- SSH troubleshooting guides
- Network diagnostics tools
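To make Step 2 (Backup Execution) concrete, here is a hedged sketch of how the rsync command could be assembled from the Server Parameters fields in an n8n Code node and handed to an Execute Command node. The workflow's actual command is not published in this description; flags and quoting here are illustrative only.

```javascript
// Hedged sketch only: build the double-sshpass command Step 2 describes.
// The real workflow's command may differ; do not treat this as its source.
const p = $('Server Parameters').first().json;

const command = [
  `sshpass -p '${p.source_password}'`,
  `ssh -o StrictHostKeyChecking=no -p ${p.source_port} ${p.source_user}@${p.source_host}`,
  `"sshpass -p '${p.target_password}' rsync ${p.rsync_options}`,
  `-e 'ssh -o StrictHostKeyChecking=no -p ${p.target_port}'`,
  `${p.source_folder}/ ${p.target_user}@${p.target_host}:${p.target_folder}/"`,
].join(' ');

return [{ json: { command } }]; // feed into an Execute Command node
```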

By Vigh Sandor
99

Healthcare policy monitoring with ScrapeGraphAI, Pipedrive and Matrix alerts

Medical Research Tracker with Matrix and Pipedrive

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically monitors selected government and healthcare-policy websites, extracts newly published or updated policy documents, logs them as deals in a Pipedrive pipeline, and announces critical changes in a Matrix room. It gives healthcare administrators and policy analysts a near real-time view of policy developments without manual web checks.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Active Pipedrive account with at least one pipeline
- Matrix account & accessible room for notifications
- Basic knowledge of n8n credential setup

Required Credentials
- ScrapeGraphAI API Key: enables the scraping engine
- Pipedrive OAuth2 / API Token: creates & updates deals
- Matrix Credentials: homeserver URL, user, access token (or password)

Specific Setup Requirements

| Variable | Description | Example |
|----------|-------------|---------|
| POLICY_SITES | Comma-separated list of URLs to scrape | https://health.gov/policies,https://who.int/proposals |
| PD_PIPELINE_ID | Pipedrive pipeline where deals are created | 5 |
| PD_STAGE_ID_ALERT | Stage ID for “Review Needed” | 17 |
| MATRIX_ROOM_ID | Room to send alerts (incl. leading !) | !policy:matrix.org |

Edit the initial Set node to provide these values before running.

How it works

Key Steps:
1. Scheduled Trigger: runs every 6 hours (configurable) to start the monitoring cycle.
2. Code (URL List Builder): generates an array from POLICY_SITES for downstream batching.
3. SplitInBatches: iterates through each policy URL individually.
4. ScrapeGraphAI: scrapes page titles, publication dates, and summary paragraphs.
5. If (New vs Existing): compares the scraped hash with the last run; continues only for fresh content.
6. Merge (Aggregate Results): collects all "new" policies into a single payload.
7. Set (Deal Formatter): maps scraped data to Pipedrive deal fields.
8. Pipedrive Node: creates or updates a deal per policy item.
9. Matrix Node: posts a formatted alert message in the specified Matrix room.

Set up steps
Setup Time: 15-20 minutes
1. Install Community Node: in n8n, go to Settings → Community Nodes → Install and search for ScrapeGraphAI.
2. Add Credentials: create new credentials for ScrapeGraphAI, Pipedrive, and Matrix under Credentials.
3. Configure Environment Variables: open the Set (Initial Config) node and replace placeholders (POLICY_SITES, PD_PIPELINE_ID, etc.) with your values.
4. Review Schedule: double-click the Schedule Trigger node to adjust the interval if needed.
5. Activate Workflow: click Activate. The workflow will run at the next scheduled interval.
6. Verify Outputs: check Pipedrive for new deals and the Matrix room for alert messages after the first run.

Node Descriptions

Core Workflow Nodes:
- stickyNote: provides an at-a-glance description of the workflow logic directly on the canvas.
- scheduleTrigger: fires the workflow periodically (default 6 hours).
- code (URL List Builder): splits the POLICY_SITES variable into an array.
- splitInBatches: ensures each URL is processed individually to avoid timeouts.
- scrapegraphAi: parses HTML and extracts policy metadata using XPath/CSS selectors.
- if (New vs Existing): uses hashing to ignore unchanged pages (sketched below).
- merge: combines all new items so they can be processed in bulk.
- set (Deal Formatter): maps scraped fields to Pipedrive deal properties.
- pipedrive: creates or updates deals representing each policy update.
- matrix: sends formatted messages to a Matrix room for team visibility.

Data Flow:
scheduleTrigger → code → splitInBatches → scrapegraphAi → if → merge → set → pipedrive → matrix

Customization Examples

Add another data field (e.g., policy author):

```javascript
// Inside ScrapeGraphAI node → Selectors
{
  "title": "//h1/text()",
  "date": "//time/@datetime",
  "summary": "//p[1]/text()",
  "author": "//span[@class='author']/text()"  // new line
}
```

Switch notifications from Matrix to Email:

```javascript
// Replace the Matrix node with "Send Email"
{
  "to": "policy-team@example.com",
  "subject": "New Healthcare Policy Detected: {{$json.title}}",
  "text": "Summary:\n{{$json.summary}}\n\nRead more at {{$json.url}}"
}
```

Data Output Format
The workflow outputs structured JSON data for each new policy article:

```json
{
  "title": "Affordable Care Expansion Act – 2024",
  "url": "https://health.gov/policies/acea-2024",
  "date": "2024-06-14T09:00:00Z",
  "summary": "Proposes expansion of coverage to rural areas...",
  "source": "health.gov",
  "hash": "2d6f1c8e3b..."
}
```

Troubleshooting

Common Issues
- ScrapeGraphAI returns empty objects: verify selectors match the current HTML structure; inspect the site with developer tools and update the node configuration.
- Duplicate deals appear in Pipedrive: ensure the "Find or Create" option is enabled in the Pipedrive node, using the page hash or URL as a unique key.

Performance Tips
- Limit POLICY_SITES to under 50 URLs per run to avoid hitting rate limits.
- Increase the Schedule Trigger interval if you notice ScrapeGraphAI rate-limiting.

Pro Tips:
- Store historical scraped data in a database node for long-term audit trails.
- Use the n8n Workflow Executions page to replay failed runs without waiting for the next schedule.
- Add an Error Trigger node to emit alerts if scraping or API calls fail.
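For the If (New vs Existing) step referenced above, a minimal sketch of a hash-based freshness check as it might look in an n8n Code node. Persisting hashes in workflow static data is an assumption about the implementation, not taken from the template.

```javascript
// Hedged sketch of the "New vs Existing" check: hash each scraped page and
// skip anything unchanged since the previous run. Workflow static data is an
// assumed cache (note: it persists only in active, production executions).
const crypto = require('crypto');

const cache = $getWorkflowStaticData('global');
cache.seenHashes = cache.seenHashes || {};

const fresh = [];
for (const item of $input.all()) {
  const hash = crypto
    .createHash('sha256')
    .update(`${item.json.title}|${item.json.date}|${item.json.summary}`)
    .digest('hex');
  if (cache.seenHashes[item.json.url] !== hash) { // new or changed page
    cache.seenHashes[item.json.url] = hash;
    fresh.push({ json: { ...item.json, hash } });
  }
}
return fresh; // only fresh policies continue to Merge, Pipedrive, and Matrix
```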

By vinci-king-01
27