Chat with internal documents using Ollama, Supabase Vector DB & Google Drive
## 📚 Chat with Internal Documents (RAG AI Agent)

### ✅ Features
- Answers are grounded only in the provided documents
- Chat interface powered by an LLM (Ollama)
- Retrieval-Augmented Generation (RAG) using Supabase Vector DB
- Multi-format file support (PDF, Excel, Google Docs, text files)
- Automated file ingestion from Google Drive
- Real-time document update handling
- Embedding generation via Ollama for semantic search
- Memory-enabled agent using PostgreSQL
- Custom tools for document lookup with context-aware chat

### ⚙️ How It Works

**📥 Document Ingestion & Vectorization**
1. Watches a Google Drive folder for new or updated files.
2. Deletes old vector entries for the file.
3. Uses conditional logic to extract content from PDFs, Excel, Docs, or text files.
4. Summarizes and preprocesses content (if needed).
5. Splits and embeds the text via Ollama.
6. Stores embeddings in Supabase Vector DB.

**💬 RAG Chat Agent**
1. Chat is initiated via Webhook or the built-in chat interface.
2. User input is passed to the RAG Agent.
3. The agent queries the User_documents tool (Supabase vector store) using the Ollama model to fetch relevant content.
4. If context is found, it answers directly; otherwise, it can call tools or request clarification.
5. Responses are returned to the user, with memory stored in PostgreSQL for continuity.

### 🛠 Supabase Database Configuration
1. Create a Supabase project at https://supabase.com and go to the SQL editor.
2. Create a `documents` table with the following schema:
   - `id` - int8
   - `content` - text
   - `metadata` - jsonb
   - `embedding` - vector
3. Generate an API key.
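In the real workflow, n8n's text-splitter and Ollama embedding nodes handle the split-and-embed step; as an illustrative sketch only, the chunking logic might look like the following (the chunk size, overlap, and the commented-out `embed()` call against Ollama's `/api/embeddings` endpoint with the `nomic-embed-text` model are assumptions, not values taken from this template):

```javascript
// Split a document into overlapping chunks before embedding.
// Overlap keeps sentence context shared between neighboring chunks.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}

// Hypothetical embedding call (assumes a local Ollama instance):
// async function embed(chunk) {
//   const res = await fetch('http://localhost:11434/api/embeddings', {
//     method: 'POST',
//     body: JSON.stringify({ model: 'nomic-embed-text', prompt: chunk }),
//   });
//   return (await res.json()).embedding; // vector stored in Supabase
// }
```

Each resulting vector is inserted into the `documents` table alongside the chunk's text and metadata.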
Monitor dynamic website changes with Firecrawl, Sheets & Gmail alerts
## 🕸️ Dynamic Website Change Monitor with Smart Email Alerts

Never miss important website updates again! This workflow automatically tracks changes on dynamic websites (think React apps, JavaScript-heavy sites) and sends you instant email notifications when something changes. Perfect for keeping tabs on competitors, monitoring product updates, or staying on top of important announcements.

### ✨ What makes this special?
- 🚀 **Handles Dynamic Websites**: Uses the Firecrawl API to scrape JavaScript-rendered content that basic scrapers can't touch
- 📧 **Smart Email Alerts**: Only sends notifications when content actually changes (no spam!)
- 📊 **Historical Tracking**: Keeps a complete log of all changes in Google Sheets
- 🛡️ **Bulletproof**: Continues working even if one part fails
- ⚡ **Ready to Deploy**: Webhook-triggered, perfect for cron jobs or external schedulers

### 🎯 Perfect for monitoring:
- Competitor pricing pages
- Job board postings
- Product availability updates
- News sites for breaking stories
- API documentation changes
- Terms of service updates

### 🛠️ What you'll need to get started:

**API Accounts & Keys:**

Firecrawl Account 🔥
- Sign up at firecrawl.dev
- Grab your API key from the dashboard
- Create a "Bearer Auth" credential in n8n

Google Cloud Setup ☁️
- Enable the Google Sheets API
- Enable the Gmail API
- Set up OAuth2 credentials
- Add both as credentials in n8n

Google Sheets Document 📋
- Create a new spreadsheet
- Add two tabs: "Log" and "comparison"
- Follow the structure outlined in the workflow notes

### 🚀 How it works:
1. **Webhook receives trigger** → starts the monitoring process
2. **Firecrawl scrapes website** → gets fresh content (even JavaScript-rendered!)
3. **Smart comparison** → checks against previously stored content
4. **Change detected?** → if yes, send email + log everything
5. **Update storage** → prepares for the next monitoring cycle

### ⚙️ Setup Steps:
1. Import this workflow into your n8n instance
2. Configure credentials for Firecrawl, Google Sheets, and Gmail
3. Update the target URL in the Firecrawl node
4. Set your email address in the Gmail node
5. Create your Google Sheets with the required structure
6. Test it manually first, then activate!

### 🎨 Customize it your way:
- Target any website by updating the URL
- Change email templates to match your style
- Adjust monitoring frequency with external cron jobs
- Switch between markdown/HTML extraction formats
- Fine-tune change detection sensitivity

### 🔧 Troubleshooting:
- **Firecrawl errors?** Check your API key and rate limits
- **Google Sheets issues?** Verify OAuth permissions and sheet structure
- **Email not sending?** Check Gmail API quotas and spam folders
- **Webhook problems?** Make sure the workflow is activated

Ready to never miss another website change? Let's get this automation running! 🎉
Proxmox system monitor - VM status, host resources & temperature alerts via Telegram
## Setup Instructions

### Overview
This n8n workflow monitors your Proxmox VE server and sends automated reports to Telegram every 15 minutes. It tracks VM status, host resource usage, temperature sensors, and detects recently stopped VMs.

### Prerequisites

**Required Software**
- n8n instance (self-hosted or cloud)
- Proxmox VE server with API access
- Telegram account with a bot created via BotFather
- lm-sensors package installed on the Proxmox host

**Required Access**
- Proxmox admin credentials (username and password)
- SSH access to the Proxmox server
- Telegram Bot API token
- Telegram chat ID

### Installation Steps

**Step 1: Install Temperature Sensors on Proxmox**

SSH into your Proxmox server and run:

```bash
apt-get update
apt-get install -y lm-sensors
sensors-detect
```

Press ENTER to accept the default answers during sensors-detect setup. Test that sensors work:

```bash
sensors | grep -E 'Package|Core'
```

**Step 2: Create Telegram Bot**
1. Open Telegram and search for BotFather
2. Send the /newbot command
3. Follow the prompts to create your bot
4. Save the API token provided
5. Get your chat ID by sending a message to your bot, then visiting: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
6. Look for `"chat":{"id": YOUR_CHAT_ID` in the response

**Step 3: Configure n8n Credentials**

SSH Password Credential:
1. In n8n, go to the Credentials menu
2. Create a new credential: SSH Password
3. Enter:
   - Host: your Proxmox IP address
   - Port: 22
   - Username: root (or your admin user)
   - Password: your Proxmox password

Telegram API Credential:
1. Create a new credential: Telegram API
2. Enter the bot token from BotFather

**Step 4: Import and Configure Workflow**
1. Import the JSON workflow into n8n
2. Open the "Set Variables" node
3. Update the following values:
   - PROXMOX_IP: your Proxmox server IP address
   - PROXMOX_PORT: API port (default: 8006)
   - PROXMOX_NODE: node name (default: pve)
   - TELEGRAM_CHAT_ID: your Telegram chat ID
   - PROXMOX_USER: Proxmox username with realm (e.g., root@pam)
   - PROXMOX_PASSWORD: Proxmox password
4. Connect credentials:
   - SSH - Get Sensors node: select your SSH credential
   - Send Telegram Report node: select your Telegram credential
5. Save the workflow
6. Activate the workflow

### Configuration Options

**Adjust Monitoring Interval**
Edit the "Schedule Every 15min" node:
- Change the minutesInterval value to the desired interval (in minutes)
- Recommended: 5-30 minutes

**Adjust Recently Stopped VM Detection Window**
Edit the "Process Data" node:
- Find the line: `const fifteenMinutesAgo = now - 900;`
- Change 900 to the desired number of seconds (900 = 15 minutes)

**Modify Temperature Warning Threshold**
The workflow uses the "high" threshold reported by sensors. To set a threshold manually, edit the "Process Data" node:
- Modify the temperature parsing logic
- Change the comparison `if (current >= high)` to use a custom value

### Testing

Test individual components:
1. Execute the "Set Variables" node manually and verify the output
2. Execute the "Proxmox Login" node and check for a valid ticket
3. Execute "API - VM List" and confirm VM data is received
4. Execute the complete workflow and check Telegram for the message

### Troubleshooting

**Login fails:**
- Verify the PROXMOX_USER format includes the realm (e.g., root@pam)
- Check that the password is correct
- Ensure allowUnauthorizedCerts is enabled for self-signed certificates

**No temperature data:**
- Verify lm-sensors is installed on Proxmox
- Run the sensors command manually via SSH
- Check that the SSH credentials are correct

**Recently stopped VMs not detected:**
- Check that the task log API endpoint returns data
- Verify the VM was stopped within the detection window
- Ensure task types qmstop or qmshutdown are logged

**Telegram not receiving messages:**
- Verify the bot token is correct
- Confirm the chat ID is accurate
- Check that the bot was started (send /start to the bot)
- Verify parse_mode is set to HTML in the Telegram node

---

## How It Works

### Workflow Architecture
The workflow executes a sequential chain of nodes that gather data from multiple sources, process it, and deliver a formatted report.
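To make the `current >= high` threshold comparison from the configuration options concrete, here is a sketch of how the Process Data node can parse the `sensors` output and flag a warning. The exact `sensors` output varies by CPU; this assumes the common coretemp layout (`+47.0°C (high = +80.0°C, ...)`):

```javascript
// Parse `sensors | grep -E 'Package|Core'` output into readings,
// then summarize or warn depending on the "high" threshold.
function parseSensors(output) {
  const readings = [];
  for (const line of output.split('\n')) {
    const m = line.match(/^(.+?):\s+\+([\d.]+)°C\s+\(high = \+([\d.]+)°C/);
    if (m) {
      readings.push({
        label: m[1].trim(),        // e.g. "Package id 0", "Core 0"
        current: parseFloat(m[2]), // current temperature in °C
        high: parseFloat(m[3]),    // "high" threshold reported by sensors
      });
    }
  }
  return readings;
}

function summarize(readings) {
  const temps = readings.map(r => r.current);
  return {
    // Warning state: any core at or above its high threshold.
    warning: readings.some(r => r.current >= r.high),
    min: Math.min(...temps),
    max: Math.max(...temps),
    avg: temps.reduce((a, b) => a + b, 0) / temps.length,
  };
}
```

To use a custom threshold instead of the sensor-reported one, replace `r.high` in the `warning` check with a fixed value.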
### Execution Flow
1. Schedule Trigger (15min)
2. Set Variables
3. Proxmox Login (get authentication ticket)
4. Prepare Auth (prepare credentials for API calls)
5. API - VM List (get all VMs and their status)
6. API - Node Tasks (get recent task log)
7. API - Node Status (get host CPU, memory, uptime)
8. SSH - Get Sensors (get temperature data)
9. Process Data (analyze and structure all data)
10. Generate Formatted Message (create Telegram message)
11. Send Telegram Report (deliver via Telegram)

### Data Collection

**VM Information (Proxmox API)**
Endpoint: `/api2/json/nodes/{node}/qemu`
Retrieves:
- Total VM count
- Running VM count
- Stopped VM count
- VM names and IDs

**Task Log (Proxmox API)**
Endpoint: `/api2/json/nodes/{node}/tasks?limit=100`
Retrieves recent tasks to detect:
- qmstop operations (VM stop commands)
- qmshutdown operations (VM shutdown commands)
- Task timestamps
- Task status

**Host Status (Proxmox API)**
Endpoint: `/api2/json/nodes/{node}/status`
Retrieves:
- CPU usage percentage
- Memory total and used (in GB)
- System uptime (in seconds)

**Temperature Data (SSH)**
Command: `sensors | grep -E 'Package|Core'`
Retrieves:
- CPU package temperature
- Individual core temperatures
- High and critical thresholds

### Data Processing

**VM Status Analysis**
- Counts total, running, and stopped VMs
- Queries the task log for stop/shutdown operations
- Filters tasks within the 15-minute window
- Extracts the VM ID from the task UPID string
- Matches the VM ID to the VM name from the VM list
- Calculates the time elapsed since the stop operation

**Temperature Intelligence**
The workflow implements smart temperature reporting.

Normal operation (all temps below the high threshold):
- Calculates the average temperature across all cores
- Displays min, max, and average values
- Example: "Average: 47.5 C (Min: 44.0 C, Max: 52.0 C)"

Warning state (any temp at or above the high threshold):
- Displays all temperature readings in detail
- Shows the full sensor output with thresholds
- Changes the section title to "Temperature Warning"
- Adds a fire emoji indicator

**Resource Calculation**
- CPU usage: the API returns a decimal (0.0 to 1.0), converted to a percentage: `cpu * 100`
- Memory: the API returns bytes, converted to GB: `bytes / (1024^3)`; percentage: `(used / total) * 100`
- Uptime: the API returns seconds, converted to days and hours: `days = seconds / 86400`, `hours = (seconds % 86400) / 3600`

### Report Generation

**Message Structure**
The Telegram message uses HTML formatting for structure:
- Header section: report title, generation timestamp
- Virtual Machines section: total VM count; running VMs with a checkmark; stopped VMs with a stop sign; recently stopped count with a warning; detailed list if VMs stopped in the last 15 minutes
- Host Resources section: CPU usage percentage; memory used/total with percentage; host uptime in days and hours
- Temperature section: smart display (summary or detailed); warning indicator if thresholds are exceeded; monospace formatting for sensor output

**HTML Formatting Features**
- Bold tags for headers and labels
- Italic for timestamps
- Code blocks for temperature data
- Unicode separators for visual structure
- Emoji indicators for status (checkmark, stop, warning, fire)

### Security Considerations

**Credential Storage**
- Passwords are stored in the n8n Set node (encrypted in the database)
- Alternative: use n8n environment variables
- Recommendation: use Proxmox API tokens instead of passwords

**API Communication**
- HTTPS with self-signed certificate acceptance
- Authentication via session tickets (15-minute validity)
- CSRF token validation for API requests

**SSH Access**
- Password-based authentication (key-based is also possible)
- Commands limited to read-only operations
- No privilege escalation required

### Performance Impact

**API Load**
- 3 API calls per execution (VM list, tasks, status)
- Lightweight endpoints with minimal data
- The 15-minute interval reduces server load

**Execution Time**
Typical workflow execution: 5-10 seconds
- Login: 1-2 seconds
- API calls: 2-3 seconds
- SSH command: 1-2 seconds
- Processing: less than 1 second

**Resource Usage**
- Minimal CPU impact on Proxmox
- Small memory footprint
- Negligible network bandwidth

### Extensibility

**Adding Additional Metrics**
To monitor additional data points:
1. Add a new API call node after "Prepare Auth"
2. Update the "Process Data" node to include the new data
3. Modify "Generate Formatted Message" for display

**Integration with Other Services**
The workflow can be extended to:
- Send to Discord, Slack, or email
- Write to a database or log file
- Trigger alerts based on thresholds
- Generate charts or graphs

**Multi-Node Monitoring**
To monitor multiple Proxmox nodes:
1. Duplicate the API call nodes
2. Update the node names in the URLs
3. Merge the data in the processing step
4. Generate a combined report
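The recently-stopped detection described under Data Processing can be sketched as follows. It assumes the usual Proxmox task UPID layout `UPID:node:pid:pstart:starttime:type:id:user@realm:` with hex-encoded timestamps; the sample values in the comments are hypothetical, and the exact fields your node sees should be verified against your task log:

```javascript
// Extract the fields the workflow needs from a Proxmox task UPID string.
function parseUpid(upid) {
  const parts = upid.split(':');
  return {
    starttime: parseInt(parts[4], 16), // task start, Unix seconds (hex)
    type: parts[5],                    // e.g. "qmstop", "qmshutdown"
    vmid: parts[6],                    // VM ID, matched to a name later
  };
}

// Keep only stop/shutdown tasks that started inside the detection window.
function recentlyStopped(tasks, now, windowSeconds = 900) {
  const cutoff = now - windowSeconds; // 900 s = 15 minutes
  return tasks
    .map(t => parseUpid(t.upid))
    .filter(t => (t.type === 'qmstop' || t.type === 'qmshutdown') && t.starttime >= cutoff);
}

// Unit conversions used for the host-resources section of the report:
const cpuPercent = cpu => cpu * 100;           // API returns 0.0-1.0
const bytesToGb = b => b / (1024 ** 3);        // API returns bytes
const uptimeParts = s => ({                    // API returns seconds
  days: Math.floor(s / 86400),
  hours: Math.floor((s % 86400) / 3600),
});
```

Changing `windowSeconds` here corresponds to editing the `const fifteenMinutesAgo = now - 900;` line described in the configuration options.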
Clone and change your voice 🤖🎙️with Elevenlabs and Telegram
This workflow creates a voice AI assistant accessible via Telegram that leverages ElevenLabs'* powerful voice synthesis technology. Users can either clone their own voice or transform their voice using pre-existing voice models, all through simple voice messages sent to a Telegram bot.

*Voice cloning is only available on the ElevenLabs Starter, Creator, and Pro plans.

This workflow allows users to:
- Clone their voice by sending a voice message to a Telegram bot (creates a new voice profile on ElevenLabs)
- Change their voice to a cloned voice and save the output to Google Drive

---

### For Best Results

Important considerations for optimal voice cloning via Telegram voice messages:

**Recording Quality & Environment**
- Record in a quiet room with minimal echo and background noise
- Use a consistent microphone position (10-15 cm from the mouth)
- Ensure clear audio without distortion or clipping

**Content Selection & Variety**
- Send one or more voice messages totaling 5-10 minutes of speech
- Include diverse vocal sounds, tones, and a natural speaking cadence
- Use complete sentences rather than isolated words

**Audio Consistency**
- Maintain consistent volume, tone, and distance from the microphone
- Avoid interruptions, laughter, coughs, or background voices
- Speak naturally without artificial effects or filters

**Technical Preparation**
- Ensure Telegram isn't overly compressing audio (use HQ recording)
- Record all messages in the same session under the same conditions
- Include both neutral speech and varied emotional expressions

---

### How it works

**Trigger**
The workflow starts with a Telegram trigger that listens for incoming messages (text, voice notes, or photos).

**Authorization check**
A Code node checks whether the sender's Telegram user ID matches your predefined ID. If not, the process stops.

**Message routing**
A Switch node routes the message based on its type:
- Text → not processed further in this flow
- Voice message → sent to the "Get audio" node to retrieve the audio file from Telegram
- Photo → not processed further in this flow
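The authorization check is a small piece of Code-node logic; a minimal sketch, where the allow-listed ID is a hypothetical placeholder for the XXX value you replace in the "Sanitaze" node:

```javascript
// Allow only one Telegram user to trigger the workflow.
// 123456789 is a hypothetical placeholder; replace with your own user ID.
const AUTHORIZED_USER_ID = 123456789;

function isAuthorized(message, authorizedId = AUTHORIZED_USER_ID) {
  // Telegram updates carry the sender under message.from.id.
  return Boolean(message && message.from && message.from.id === authorizedId);
}
```

If the check fails, the workflow simply stops, so strangers who find your bot cannot create voices against your ElevenLabs account.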
**Two main options**
From the "Get audio" node, the workflow splits into two possible paths:
- **Option 1 – Clone voice**: the audio file is sent to ElevenLabs via an HTTP request to create a new cloned voice. The voice ID is returned and can be saved for later use.
- **Option 2 – Voice changer**: the audio is sent to ElevenLabs for speech-to-speech conversion using a pre-existing cloned voice (the voice ID must be set in the node parameters). The resulting audio is saved to Google Drive.

**Output**
- Cloned voice ID (for Option 1)
- Converted audio file uploaded to Google Drive (for Option 2)

---

### Set up steps

1. **Telegram bot setup**
   - Create a bot via BotFather and obtain the API token.
   - Set up the Telegram Trigger node with your bot credentials.
2. **Authorization configuration**
   - In the "Sanitaze" Code node, replace XXX with your Telegram user ID to restrict access.
3. **ElevenLabs API setup**
   - Get an API key from ElevenLabs.
   - Configure the HTTP Request nodes ("Create Cloned Voice" and "Generate cloned audio") with the API key in the Xi-Api-Key header and the appropriate endpoint URLs (including the voice ID for speech-to-speech).
4. **Google Drive setup (for Option 2)**
   - Set up Google Drive OAuth2 credentials in n8n.
   - Specify the target folder ID in the "Upload file" node.
5. **Voice ID configuration**
   - For voice cloning: the voice name can be customized in the "Create Cloned Voice" node.
   - For voice changing: replace XXX in the "Generate cloned audio" node URL with your ElevenLabs voice ID.
6. **Test the workflow**
   - Activate the workflow.
   - Send a voice note from your authorized Telegram account to trigger cloning or voice conversion.

---

👉 Subscribe to my new [YouTube channel](https://youtube.com/@n3witalia). Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

---

Need help customizing? Contact me for consulting and support or add me on LinkedIn.