Real-time stock monitor with smart alerts for Indian & US markets
Monitor Indian (NSE/BSE) and US stock markets with intelligent price alerts, cooldown periods, and multi-channel notifications (Email + Telegram). The workflow automatically tracks price movements and sends alerts when stocks cross predefined upper/lower limits. Perfect for day traders, investors, and portfolio managers who need instant notifications for price breakouts and breakdowns.

How It Works
- Market Hours Trigger - Runs every 2 minutes during market hours
- Read Stock Watchlist - Fetches your stock list from Google Sheets
- Parse Watchlist Data - Processes stock symbols and alert parameters
- Fetch Live Stock Price - Gets real-time prices from the Twelve Data API
- Smart Alert Logic - Intelligent price checking with cooldown periods
- Check Alert Conditions - Validates whether alerts should be triggered
- Send Email Alert - Sends detailed email notifications
- Send Telegram Alert - Instant mobile notifications
- Update Alert History - Records alert timestamps in Google Sheets
- Alert Status Check - Monitors workflow success/failure
- Success/Error Notifications - Admin notifications for monitoring

Key Features
- Smart Cooldown: Prevents alert spam
- Multi-Market: Supports Indian & US stocks
- Dual Alerts: Email + Telegram notifications
- Auto-Update: Tracks last alert times
- Error Handling: Built-in failure notifications

Setup Requirements

Google Sheets Setup: Create a Google Sheet with these columns (in exact order):
- A: symbol (e.g., TCS, AAPL, RELIANCE.BSE)
- B: upper_limit (e.g., 4000)
- C: lower_limit (e.g., 3600)
- D: direction (both/above/below)
- E: cooldown_minutes (e.g., 15)
- F: last_alert_price (auto-updated)
- G: last_alert_time (auto-updated)

API Keys & IDs to Replace:
- YOUR_GOOGLE_SHEET_ID_HERE - Replace with your Google Sheet ID
- YOUR_TWELVE_DATA_API_KEY - Get a free API key from twelvedata.com
- YOUR_TELEGRAM_CHAT_ID - Your Telegram chat ID (optional)
- your-email@gmail.com - Your sender email
- alert-recipient@gmail.com - Alert recipient email

Stock Symbol Format:
- US stocks: Use simple symbols like AAPL, TSLA, MSFT
- Indian stocks: Use a .BSE or .NSE suffix, e.g., TCS.NSE, RELIANCE.BSE

Credentials Setup in n8n:
- Google Sheets: Service Account credentials
- Email: SMTP credentials
- Telegram: Bot token (optional)

Example Google Sheet Data:

| symbol | upper_limit | lower_limit | direction | cooldown_minutes |
| --- | --- | --- | --- | --- |
| TCS.NSE | 4000 | 3600 | both | 15 |
| AAPL | 180 | 160 | both | 10 |
| RELIANCE.BSE | 2800 | 2600 | above | 20 |

Output Example:
Alert: TCS crossed the upper limit. Current Price: ₹4100, Upper Limit: ₹4000.
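As a rough illustration of the Smart Alert Logic step, here is a minimal Code-node style sketch of how the limit check could be combined with the cooldown period. The field names mirror the sheet columns above, but the code itself is an assumption about the implementation, not the template's actual node.

```javascript
// Minimal sketch of the alert decision logic (illustrative, not the template's exact code).
// Input item: one watchlist row merged with the latest price from Twelve Data.
const row = $input.item.json;

const price = Number(row.price);            // live price from the API
const upper = Number(row.upper_limit);
const lower = Number(row.lower_limit);
const cooldownMs = Number(row.cooldown_minutes) * 60 * 1000;
const lastAlert = row.last_alert_time ? new Date(row.last_alert_time).getTime() : 0;

// Respect the cooldown period: skip if the last alert fired too recently.
const coolingDown = Date.now() - lastAlert < cooldownMs;

// Check the configured direction against the limits.
const crossedAbove = price >= upper && (row.direction === 'both' || row.direction === 'above');
const crossedBelow = price <= lower && (row.direction === 'both' || row.direction === 'below');

return {
  json: {
    ...row,
    shouldAlert: !coolingDown && (crossedAbove || crossedBelow),
    alertType: crossedAbove ? 'upper' : crossedBelow ? 'lower' : null,
  },
};
```

Downstream nodes can then branch on shouldAlert to send the email and Telegram messages and update last_alert_price and last_alert_time in the sheet.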
Export n8n Cloud execution data to CSV
Overview
This template helps n8n Cloud plan users export all executions to a CSV file for easy data analysis, so you can identify which workflows generate the most executions or could be optimized.

How this workflow works
- Click "Test Workflow" to manually execute the workflow
- Open the "Convert to CSV" node to access the binary data of the CSV file
- Download the CSV file

Nodes included:
- n8n node
- Convert to File
- No Operation, do nothing (replace with another node if needed)

Set up steps
- Import the workflow to your workspace
- Add your n8n API credential

Benefits of Exporting n8n Cloud Executions to CSV
Exporting n8n Cloud executions to CSV offers significant advantages for workflow management and data analysis. Here are three key benefits:

Enhanced Data Analysis
- Comprehensive insights: Execution data allows in-depth analysis of workflow performance, helping identify bottlenecks and optimize processes.
- Custom reporting: CSV files can be imported into data analysis tools (e.g., Excel, Google Sheets, or BI software) to create custom reports and visualizations tailored to specific business needs.

Improved Workflow Monitoring
- Historical data review: Accessing historical execution data lets users track workflow changes and their impact over time, supporting better decision-making.
- Error tracking and debugging: Reviewing execution logs helps users quickly identify and address errors or failures, ensuring smoother and more reliable workflow operations.

Regulatory Compliance and Auditing
- Audit trails: Keeping a record of all executions provides a clear audit trail, essential for regulatory compliance and internal audits.
- Data retention: Exported data ensures execution records are preserved according to organizational data retention policies, safeguarding against data loss.

By leveraging CSV exports, users can gain valuable insights, streamline workflow management, and ensure robust data handling, ultimately driving better performance and efficiency in their n8n Cloud operations.
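For readers who want to see what kind of data ends up in the CSV, here is a standalone Node.js (18+, run as an ES module) sketch that pulls executions from the n8n public API and flattens a few fields. The base URL, API key placeholder, and the exact execution fields (status, startedAt, stoppedAt) are assumptions to verify against your instance; the template itself does this with the built-in n8n and Convert to File nodes instead.

```javascript
// Standalone sketch (not part of the template) of fetching executions and building CSV rows.
const BASE_URL = 'https://your-instance.app.n8n.cloud/api/v1'; // placeholder instance URL
const API_KEY = 'YOUR_N8N_API_KEY';                            // placeholder API key

const res = await fetch(`${BASE_URL}/executions?limit=100`, {
  headers: { 'X-N8N-API-KEY': API_KEY },
});
const { data } = await res.json();

// Keep the columns that matter for spotting busy or failing workflows.
const rows = data.map((e) => ({
  id: e.id,
  workflowId: e.workflowId,
  status: e.status,
  startedAt: e.startedAt,
  stoppedAt: e.stoppedAt,
}));

const header = 'id,workflowId,status,startedAt,stoppedAt';
const csv = [header, ...rows.map((r) => Object.values(r).join(','))].join('\n');
console.log(csv);
```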
Detect hallucinations using specialised Ollama model bespoke-minicheck
Fact-Checking Workflow Documentation

Overview
This workflow is designed for automated fact-checking of texts. It uses AI models to compare a given text with a list of facts and identify potential discrepancies or hallucinations.

Components

Input
The workflow can be initiated in two ways:
a) Manually via the "When clicking 'Test workflow'" trigger
b) By calling from another workflow via the "When Executed by Another Workflow" trigger

Required inputs:
- facts: A list of verified facts
- text: The text to be checked

Text Preparation
- The "Code" node splits the input text into individual sentences
- Date specifications and list elements are taken into account

Fact Checking
- Each sentence is individually compared with the given facts
- Uses the "bespoke-minicheck" Ollama model for verification
- The model responds with "Yes" or "No" for each sentence

Filtering and Aggregation
- Sentences marked "No" (not fact-based) are filtered out
- The filtered results are aggregated

Summary
A larger language model (Qwen2.5) creates a summary of the results. The summary contains:
- Number of incorrect factual statements
- List of incorrect statements
- Final assessment of the article's accuracy

Usage
- Ensure the "bespoke-minicheck" model is installed in Ollama (ollama pull bespoke-minicheck)
- Prepare a list of verified facts
- Enter the text to be checked
- Start the workflow
- The results are output as a structured summary

Notes
- The workflow ignores small talk and focuses on verifiable factual statements
- Accuracy depends on the quality of the provided facts and the performance of the AI models

Customization Options
- The summarization step can be adjusted or removed to return only the raw data of the issues found
- The AI models used can be swapped if needed

This workflow provides an efficient method for automated fact-checking and can easily be integrated into larger systems or editorial workflows.
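To make the per-sentence check concrete, below is a small sketch of a single verification call against a local Ollama instance. The Document/Claim prompt layout is an assumption based on how bespoke-minicheck is typically prompted, and the sample facts and sentence are invented for illustration; the workflow itself performs this step through the Ollama model node.

```javascript
// Sketch of one fact-check call to a local Ollama server (default port 11434).
const facts = 'The Eiffel Tower is in Paris. It was completed in 1889.';
const sentence = 'The Eiffel Tower was completed in 1901.';

const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'bespoke-minicheck',
    prompt: `Document: ${facts}\nClaim: ${sentence}`,
    stream: false,
  }),
});

const { response } = await res.json();
// "Yes" means the claim is supported by the facts; "No" flags a potential hallucination.
const supported = response.trim().startsWith('Yes');
console.log({ sentence, supported });
```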
Manage Appian tasks with Ollama Qwen LLM and Postgres memory
This workflow is a simple example of using n8n as an AI chat interface into Appian. It connects a local LLM, persistent memory, and API tools to demonstrate how an agent can interact with Appian tasks.

What this workflow does
- Chat interface: Accepts user input through a webhook or chat trigger
- Local LLM (Ollama): Runs on qwen2.5:7b with an 8k context window
- Conversation memory: Stores chat history in Postgres, keyed by sessionId
- AI Agent node: Handles reasoning, follows system rules (helpful assistant persona, date formatting, iteration limits), and decides when to call tools
- Appian integration tools:
  - List Tasks: Fetches a user's tasks from Appian
  - Create Task: Submits data for a new task in Appian (title, description, hours, cost)

How it works
- A user sends a chat message
- The workflow normalizes fields such as text, username, and sessionId (see the sketch below)
- The AI Agent processes the message using Ollama and Postgres memory
- If the user asks about tasks, the agent calls the Appian APIs
- The result, either a task list or confirmation of a new task, is returned through the webhook

Why this is useful
- Demonstrates how to build a basic Appian connector in n8n with an AI chat front end
- Shows how an LLM can decide when to call Appian APIs to list or create tasks
- Provides a pattern that can be extended with more Appian endpoints, different models, or custom system prompts
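The normalization step referenced above might look roughly like the following Code-node sketch. The incoming field names (chatInput, user, session_id) are assumptions about what the chat trigger or webhook delivers, not the workflow's exact code.

```javascript
// Sketch: normalize chat-trigger and webhook payloads into a common shape
// before handing them to the AI Agent node.
const body = $input.item.json;

return {
  json: {
    text: body.chatInput ?? body.text ?? '',
    username: body.username ?? body.user ?? 'anonymous',
    sessionId: body.sessionId ?? body.session_id ?? `session-${Date.now()}`,
  },
};
```

Keying the Postgres memory on this normalized sessionId is what lets the agent keep separate conversation histories per user.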
AI agent managed tables and views with 🛠️ Coda tool MCP server 💪 18 operations
Need help? Want access to this workflow + many more paid workflows + live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing all Coda Tool operations to AI agents. Zero configuration needed - all 18 operations pre-built.

⚡ Quick Setup
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Coda Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Coda Tool with full error handling

📋 Available Operations (18 total)
Every possible Coda Tool operation is included:

🔧 Control (2 operations)
• Get a control
• Get many controls

🔧 Formula (2 operations)
• Get a formula
• Get many formulas

🔧 Table (7 operations)
• Create a row
• Delete a row
• Get all columns
• Get all rows
• Get a column
• Get a row
• Push a button

🔧 View (7 operations)
• Delete a view row
• Get a view
• Get all view columns
• Get many views
• Get a view row
• Push a view button
• Update a view row

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Coda Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Coda Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
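As an illustration of the $fromAI() placeholders mentioned under How it Works, a tool-node parameter might be filled in along these lines. The parameter names (docId, rowId) are made up for the example, and the exact $fromAI() signature should be checked against your n8n version.

```
Document ID field on a "Get a row" tool node:
{{ $fromAI('docId', 'The Coda document that contains the table', 'string') }}

Row ID field on the same node:
{{ $fromAI('rowId', 'The ID of the Coda row to fetch', 'string') }}
```

At run time the connected AI agent supplies these values itself, which is why no manual parameter mapping is needed.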
AI-powered n8n release notes summary notifications via Gmail with GPT-5-Mini
Who's it for
This workflow is perfect for n8n users and teams who want to stay up-to-date with the latest n8n releases without manually checking GitHub. Get AI-powered summaries of new features and bug fixes delivered straight to your inbox.

What it does
This workflow automatically monitors the n8n GitHub releases page and sends you smart email notifications when new updates are published. It fetches release notes, filters them based on your schedule (daily, weekly, etc.), and uses OpenAI to generate concise summaries highlighting the most important bug fixes and features. The summaries are then formatted into a clean HTML email and sent via Gmail.

How to set up
1. Configure the Schedule Trigger - Set how often you want to check for updates (daily, weekly, etc.)
2. Add OpenAI credentials - Connect your OpenAI API key or use a different LLM
3. Add Gmail credentials - Connect your Google account
4. Set the recipient email - Update the "To" email address in the Gmail node
5. Activate the workflow and you're done!

Requirements
- OpenAI API account (or alternative LLM)
- Gmail account with n8n credentials configured

How to customize
- Adjust the schedule trigger to match your preferred notification frequency
- The filtering logic automatically adapts to your schedule (24 hours for daily, 7 days for weekly, etc.)
- Modify the AI prompt to focus on different aspects of the release notes
- Customize the HTML email template to match your preferences
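To show how the schedule-aware filtering could work, here is a small sketch that pulls the latest releases from the public GitHub API and keeps only those published within the current window. The 24-hour window, per_page value, and field selection are illustrative assumptions, not the template's exact configuration.

```javascript
// Sketch of the filtering step: keep only releases published within the schedule window.
const res = await fetch('https://api.github.com/repos/n8n-io/n8n/releases?per_page=20', {
  headers: { Accept: 'application/vnd.github+json' },
});
const releases = await res.json();

const windowHours = 24; // daily trigger; a weekly trigger would use 7 * 24
const cutoff = Date.now() - windowHours * 60 * 60 * 1000;

const recent = releases
  .filter((r) => new Date(r.published_at).getTime() >= cutoff)
  .map((r) => ({ tag: r.tag_name, name: r.name, notes: r.body }));

// "recent" is what gets handed to the OpenAI node for summarization.
console.log(recent);
```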