Remote IoT Sensor Monitoring via MQTT and InfluxDB
Read and store IoT sensor data with the MQTT Trigger and InfluxDB
tonyduffy@protonmail.com
This workflow is for users who want a practical example of how to obtain data from remote IoT systems over the MQTT protocol in an n8n environment.
The template provides the typical n8n node implementations and configuration settings needed to read and store IoT data.
The workflow reads temperature and humidity data from a remote IoT system, in this case a DHT22 sensor connected to an ESP32 microcontroller. The data is parsed into the correct JSON format and then ingested into an InfluxDB bucket, from where the stored temperature and humidity values can be displayed in real time.
The workflow can easily be modified to read data from any MQTT-driven device.
Remote IoT Sensor Setup
The ESP32 controller and the DHT22 sensor run on a Wokwi simulator. The simulator uses MicroPython to publish an MQTT "wokwi-weather" topic with the temperature and humidity payloads to an online Mosquitto MQTT broker. The n8n MQTT Trigger node subscribes to the topic on the broker and reads the payload values whenever changes are published. The Code node then reshapes the payload into JSON, and the HTTP Request node ingests the data into an InfluxDB bucket.
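As a rough illustration, the data takes three shapes on its way to InfluxDB. The field and measurement names below are examples only; the exact names depend on the Wokwi project and your node configuration.

```javascript
// Illustrative data shapes at each hop (names and values are examples, not the
// exact output of the Wokwi project):
// 1. MQTT payload published by the simulator:  '{"temp": 24.5, "humidity": 40.2}'
// 2. JSON item produced by the n8n Code node:  { temperature: 24.5, humidity: 40.2 }
// 3. Line protocol written to the InfluxDB bucket:
//    'wokwi_weather temperature=24.5,humidity=40.2'
```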
How to customise this workflow to your needs
Wokwi IoT ESP32 simulator
- You will need to set up a free account at Wokwi.com
- Once created, search for the project "Micro-Python MQTT Weather Logger (ESP32)"
- With the MQTT weather logger project open, change lines 28 and 29 to the following
- 28 MQTT_CLIENT_ID = ""
- 29 MQTT_BROKER = "test.mosquitto.org"
- You can then start the simulation by clicking the green arrow. The simulator connects to the Mosquitto broker and publishes the "wokwi-weather" topic.
- Clicking on the DHT22 sensor brings up the temperature and humidity sliders, which you can change to send updated payload values to the broker. A quick way to verify the published messages is sketched below.
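Before wiring up n8n, it can help to confirm the simulator is actually publishing. Here is a minimal sketch using the MQTT.js client (`npm install mqtt`), assuming the "wokwi-weather" topic described above; adjust the topic filter if your project publishes subtopics:

```javascript
// Subscribe to the public Mosquitto test broker and print incoming messages.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://test.mosquitto.org');

client.on('connect', () => {
  // Cover both the bare topic and any subtopics the project may use.
  client.subscribe(['wokwi-weather', 'wokwi-weather/#']);
});

client.on('message', (topic, message) => {
  console.log(topic, message.toString());
});
```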
InfluxDB
You will require access to a functioning InfluxDB database to use this workflow.
Note: you will have to provide the following for the HTTP Request node to connect to InfluxDB.
- The URL and port of the target InfluxDB instance (in this case InfluxDB is running locally on port 8086, i.e. http://localhost:8086)
- The InfluxDB bucket for the data (in this case the bucket is named "wokwi-data")
- The organization ID of the InfluxDB instance, which can be obtained from the InfluxDB admin page
- An API token with read and write access to the bucket, generated from the InfluxDB admin page
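For orientation, here is a minimal sketch of the write request the HTTP Request node ends up making, assuming an InfluxDB v2 instance. `YOUR_ORG_ID` and `YOUR_API_TOKEN` are placeholders, and the measurement and field names are illustrative:

```javascript
// Sketch of an InfluxDB v2 write (Node 18+, which provides global fetch).
const url = 'http://localhost:8086/api/v2/write?org=YOUR_ORG_ID&bucket=wokwi-data&precision=s';

const response = await fetch(url, {
  method: 'POST',
  headers: {
    Authorization: 'Token YOUR_API_TOKEN',
    'Content-Type': 'text/plain; charset=utf-8',
  },
  // The v2 write API expects line protocol in the body, not JSON.
  body: 'wokwi_weather temperature=24.5,humidity=40.2',
});

console.log(response.status); // 204 indicates a successful write
```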
n8n workflow
- The MQTT Trigger node is configured to subscribe to the "wokwi-weather" topic on the test Mosquitto MQTT broker. It reads the temperature and humidity data sent by the ESP32.
- The Code node uses JavaScript to map the temperature and humidity payloads into JSON format. This logic is flexible and can easily be modified.
- The HTTP Request node posts the prepared payloads to the InfluxDB bucket (InfluxDB's v2 write API expects the request body in line protocol format).
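As a reference, the Code node logic might look something like the sketch below. It assumes the trigger delivers the payload as a JSON string in a `message` field; inspect your MQTT Trigger output and adjust the field names to match.

```javascript
// Minimal Code node sketch ("Run Once for All Items" mode).
// `message`, `temp`, and `humidity` are assumed field names.
const results = [];

for (const item of $input.all()) {
  const payload = typeof item.json.message === 'string'
    ? JSON.parse(item.json.message)
    : item.json;

  results.push({
    json: {
      temperature: Number(payload.temp),
      humidity: Number(payload.humidity),
    },
  });
}

return results;
```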
Once the above is configured, the workflow should function correctly.
Thanks to the many who have downloaded this template. Let me know what you would like to build. Contact me at tonyduffy@protonmail.com
Remote IoT Sensor Monitoring via MQTT and InfluxDB (Placeholder)
NOTE: The provided JSON workflow is incomplete. It contains an MQTT Trigger, a Code node, an HTTP Request node, and a Sticky Note, but lacks any connections between them. This README describes the potential purpose based on the node types, but the actual functionality is not fully defined by the JSON.
This n8n workflow is designed to receive data from IoT sensors via MQTT, process it, and then potentially send it to an external service (like InfluxDB for time-series data storage) using an HTTP request. It aims to simplify the ingestion and handling of real-time sensor data.
What it does
Based on the available nodes, this workflow could perform the following steps:
- Listens for MQTT Messages: The workflow starts by subscribing to an MQTT topic, waiting for incoming messages from IoT sensors.
- Processes Data (Code Node): It includes a Code node, indicating that the received MQTT payload would likely be transformed, parsed, or enriched using custom JavaScript logic. This could involve extracting specific sensor readings, adding timestamps, or reformatting the data.
- Sends Data via HTTP Request: An HTTP Request node is present, suggesting that the processed sensor data would then be sent to an external API endpoint. This is commonly used to push data to databases like InfluxDB, data warehouses, or other monitoring platforms.
- Provides Documentation (Sticky Note): A Sticky Note is included, likely for internal documentation within the workflow, explaining its purpose or specific configuration details.
Prerequisites/Requirements
- MQTT Broker: An accessible MQTT broker where your IoT sensors publish data.
- External API/Service: An endpoint for an external service (e.g., InfluxDB, a custom API) configured to receive the processed sensor data via HTTP.
- n8n Instance: A running instance of n8n.
Setup/Usage
- Import the workflow: Import the provided JSON into your n8n instance.
- Configure the MQTT Trigger:
- Set up your MQTT credentials if not already configured.
- Specify the MQTT topic(s) you wish to subscribe to for sensor data.
- Configure the Code Node:
- Edit the Code node to implement the necessary JavaScript logic to parse and transform your incoming MQTT messages into the desired format for your external service.
- Configure the HTTP Request Node:
  - Set the `URL` to the endpoint of your external service (e.g., the InfluxDB write API).
  - Configure the `Method` (e.g., `POST`) and `Headers` (e.g., `Authorization`, `Content-Type`) as required by your external service.
  - Map the data from the previous Code node as the `Body` of the HTTP request. A hedged example configuration is sketched after this section.
- Activate the workflow: Once configured, activate the workflow to start processing incoming MQTT messages.
Note: As the workflow JSON is incomplete (no connections), you will need to manually connect the nodes in n8n after importing to establish the data flow as described above.
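For reference, the HTTP Request node fields might be filled in as follows for an InfluxDB v2 write endpoint. All UPPERCASE values are placeholders, the measurement and field names are assumptions, and the `{{ ... }}` expressions map fields produced by the preceding Code node:

```
URL:     http://localhost:8086/api/v2/write?org=YOUR_ORG_ID&bucket=YOUR_BUCKET&precision=s
Method:  POST
Headers: Authorization: Token YOUR_API_TOKEN
         Content-Type: text/plain; charset=utf-8
Body:    wokwi_weather temperature={{ $json.temperature }},humidity={{ $json.humidity }}
```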