
Track expenses automatically with a Telegram bot using GPT-4o, OCR, and voice recognition

Dmitrij Zykovic
2/3/2026

Personal Expense Tracker Bot πŸ’°

AI-powered Telegram bot for effortless expense tracking. Send receipts, voice messages, or text - the bot automatically extracts and categorizes your expenses.

✨ Key Features

  • πŸ“Έ Receipt & Invoice OCR - Send photos of receipts or PDF invoices, AI extracts expense data automatically
  • 🎀 Voice Messages - Speak your expenses naturally, audio is transcribed and processed
  • πŸ’¬ Natural Language - Just type "spent 50 on groceries" or any text format
  • 🌍 Multilingual - Processes documents in any language (EN, DE, PT, etc.)
  • πŸ“Š Smart Statistics - Get monthly totals, category breakdowns, multi-month comparisons
  • πŸ”’ Private & Secure - Single-user authorization, only you can access your data
  • ⚑ Zero Confirmation - Expenses are added instantly, no annoying "confirm?" prompts

🎯 How It Works

  1. Send expense data via Telegram:

    • Photo of receipt
    • PDF invoice
    • Voice message
    • Text message
  2. AI processes automatically:

    • Extracts amount, date, vendor
    • Categorizes expense
    • Stores in organized format
  3. Query your expenses:

    • "Show my expenses for November"
    • "How much did I spend on groceries?"
    • "Compare last 3 months"

πŸ“‹ Expense Categories

Groceries, Transportation, Housing, Utilities, Healthcare, Entertainment, Dining Out, Clothing, Education, Subscriptions, Personal Care, Gifts, Travel, Sports, Other
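
To make the extraction contract concrete, here is a minimal sketch of the structured result the AI step (step 2 of "How It Works") is expected to produce, using the categories above and the field names from the storage schema shown later. The type and constant names are assumptions for illustration, not the workflow's exact internal format.

```typescript
// Hypothetical shape of an extracted expense. Field names mirror the JSON
// storage schema shown in the "Data Storage" section below.
type Category =
  | "Groceries" | "Transportation" | "Housing" | "Utilities" | "Healthcare"
  | "Entertainment" | "Dining Out" | "Clothing" | "Education" | "Subscriptions"
  | "Personal Care" | "Gifts" | "Travel" | "Sports" | "Other";

interface ExtractedExpense {
  amount: number;      // e.g. 45.50
  currency: string;    // defaults to "EUR" unless another currency is detected
  category: Category;  // one of the categories listed above
  description: string; // vendor or short note, e.g. "Lidl"
  date: string;        // ISO 8601 date taken from the receipt or message
}

// Example: roughly what the AI might return for the receipt photo used in the
// usage examples further down.
const example: ExtractedExpense = {
  amount: 45.5,
  currency: "EUR",
  category: "Groceries",
  description: "Lidl",
  date: "2025-11-10T14:30:00Z",
};
```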

πŸ”§ Setup Requirements

1. Telegram Bot

Create a Telegram bot via @BotFather and get your API token.

Configure credentials for nodes:

  • Input, WelcomeMessage
  • GetAudioFile, GetAttachedFile, GetAttachedPhoto
  • ReplyText, NotAuthorizedMessage, DeleteProcessing

2. OpenRouter API

Get API key from OpenRouter for AI processing.

Configure credentials for:

  • Gpt4o (main processing)
  • Sonnet45 (expense assistant)

3. Ainoflow API

Get API key from Ainoflow for storage and OCR.

Configure Bearer credentials for:

  • GetConfig, SaveConfig
  • ExtractFileText, ExtractImageText
  • TranscribeRecording
  • JsonStorageMcp (MCP tool)

πŸ—οΈ Workflow Architecture

| Section | Description |
|---------|-------------|
| Message Trigger | Receives all Telegram messages |
| Bot Privacy | Locks bot to first user, rejects unauthorized access |
| Chat Message / Audio | Routes text and voice messages to AI |
| Document / Photo | Extracts text from files via OCR and forwards to AI |
| Root Agent | Routes messages to Expense Assistant, validates responses |
| Expense Assistant | Core logic: stores expenses, calculates statistics |
| Result / Reply | Sends formatted response back to Telegram |
| Cleanup / Reset | Manual trigger to delete all data (⚠️ use with caution) |

πŸ’¬ Usage Examples

Adding Expenses

πŸ“Έ [Send receipt photo]
β†’ Added: 45.50 EUR - Groceries (Lidl)

🎀 "Bought coffee for five euros"  
β†’ Added: 5.00 EUR - Dining Out (coffee)

πŸ’¬ "50 uber"
β†’ Added: 50.00 EUR - Transportation (uber)

Querying Expenses

"Show my expenses"
β†’ November 2025: 1,250.50 EUR (23 expenses)
   Top: Groceries 450€, Transportation 280€, Dining 220€

"How much on entertainment this month?"
β†’ Entertainment: 85.00 EUR (3 expenses)

"Compare October and November"  
β†’ Oct: 980€ | Nov: 1,250€ (+27%)
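
The statistics shown above come down to simple aggregations over the stored records. The sketch below shows how a monthly total, a per-category breakdown, and a month-over-month comparison could be computed; it assumes records grouped by month as in the "Data Storage" section and is only illustrative, since the Expense Assistant performs these calculations itself.

```typescript
// Minimal record shape needed for these statistics.
interface Expense {
  amount: number;
  category: string;
}

// Records grouped by month key ("YYYY-MM"), matching the storage layout below.
type MonthlyStore = Record<string, Expense[]>;

function monthlyTotal(store: MonthlyStore, month: string): number {
  return (store[month] ?? []).reduce((sum, e) => sum + e.amount, 0);
}

function categoryBreakdown(store: MonthlyStore, month: string): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const e of store[month] ?? []) {
    totals[e.category] = (totals[e.category] ?? 0) + e.amount;
  }
  return totals;
}

// Month-over-month comparison, e.g. "2025-10: 980.00 | 2025-11: 1250.00 (+28%)".
function compareMonths(store: MonthlyStore, a: string, b: string): string {
  const totalA = monthlyTotal(store, a);
  const totalB = monthlyTotal(store, b);
  const change = totalA === 0 ? 0 : Math.round(((totalB - totalA) / totalA) * 100);
  return `${a}: ${totalA.toFixed(2)} | ${b}: ${totalB.toFixed(2)} (${change >= 0 ? "+" : ""}${change}%)`;
}
```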

πŸ“¦ Data Storage

Expenses are stored in JSON format organized by month (YYYY-MM):

{
  "id": "uuid",
  "amount": 45.50,
  "currency": "EUR",
  "category": "Groceries",
  "description": "Store name",
  "date": "2025-11-10T14:30:00Z",
  "created_at": "2025-11-10T14:35:22Z"
}
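
Purely to illustrate the month-keyed layout, the sketch below shows how a record like the one above could be appended to its monthly bucket. In the template itself this is handled by the JsonStorageMcp tool; the helper and its names here are assumptions for the example.

```typescript
import { randomUUID } from "node:crypto";

interface ExpenseRecord {
  id: string;
  amount: number;
  currency: string;
  category: string;
  description: string;
  date: string;       // ISO 8601: when the expense happened
  created_at: string; // ISO 8601: when the record was stored
}

// Buckets keyed by month, e.g. "2025-11" -> all records for November 2025.
type Store = Record<string, ExpenseRecord[]>;

function addExpense(
  store: Store,
  expense: Omit<ExpenseRecord, "id" | "created_at">,
): ExpenseRecord {
  const record: ExpenseRecord = {
    ...expense,
    id: randomUUID(),
    created_at: new Date().toISOString(),
  };
  const monthKey = expense.date.slice(0, 7); // "YYYY-MM"
  (store[monthKey] ??= []).push(record);
  return record;
}
```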

⚠️ Important Notes

  • First user locks the bot - Run /start to claim ownership (see the authorization sketch after this list)
  • Default currency is EUR - AI auto-detects other currencies
  • Cleanup deletes ALL data - Use manual trigger with caution
  • No confirmation for adding - Only delete operations ask for confirmation
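
As a minimal sketch of what the first-user lock amounts to, assuming the owner's chat ID is kept in the saved configuration (GetConfig/SaveConfig); the function and field names are illustrative, not the template's exact implementation.

```typescript
// Single-user authorization, mirroring the "Bot Privacy" section: the first
// chat ID seen on /start is saved as the owner, everyone else is rejected.
interface BotConfig {
  ownerChatId?: number; // unset until the first user runs /start
}

function authorize(config: BotConfig, incomingChatId: number): "owner" | "claimed" | "rejected" {
  if (config.ownerChatId === undefined) {
    config.ownerChatId = incomingChatId; // first user claims ownership
    return "claimed";
  }
  return config.ownerChatId === incomingChatId ? "owner" : "rejected";
}
```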

πŸ› οΈ Customization

  • Change default currency in agent prompts
  • Add/modify expense categories in ExpenseAssistant
  • Extend Root Agent with additional assistants
  • Adjust AI models (swap GPT-4o/Sonnet as needed)


πŸ’Ό Need Customization?

Want to adapt this template for your specific needs? Custom integrations, additional features, or enterprise deployment?

Contact us at Ainova Systems - We build AI automation solutions for businesses.


Tags: telegram, expense-tracker, ai-agent, ocr, voice-to-text, openrouter, mcp-tools, personal-finance

n8n Telegram Expense Tracker with GPT-4o OCR and Voice Recognition

This n8n workflow automates expense tracking through a Telegram bot. It leverages GPT-4o, OCR (Optical Character Recognition), and voice recognition to extract expense details from receipt images, PDF invoices, or voice messages, then records the expenses and confirms each entry back to the user.

What it does

This workflow simplifies expense tracking by:

  1. Listening for Telegram Messages: It acts as a Telegram bot, listening for incoming messages, which can be receipt images, PDF invoices, voice notes, or plain text.
  2. Categorizing Input: It determines whether the incoming message is a photo, a voice message, a document, or a text message (see the routing sketch after this list).
  3. Processing Photos with GPT-4o OCR: If an image (receipt) is sent, it uses a custom AI agent (powered by GPT-4o) and a Model Context Protocol (MCP) client tool to perform OCR and extract relevant expense details (e.g., amount, vendor, date).
  4. Processing Voice Messages with GPT-4o Voice Recognition: If a voice message is sent, it uses the AI agent to transcribe the voice message and extract expense details.
  5. Handling Text Commands: If a text message is sent, it checks for specific commands or queries.
  6. Confirming Expenses: After extracting the details, it replies via Telegram with the recorded expense (amount, category, vendor) so the user can verify what was extracted; no separate confirmation prompt is required.
  7. Recording Expenses: Extracted expenses are stored immediately via the JSON storage MCP tool described above, organized by month.
  8. Providing AI Chat/Thought Capabilities: The workflow includes an AI Agent with a "Think" tool and a "Calculator" tool, suggesting it can handle more complex queries or perform calculations related to expenses if prompted.
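
As a rough sketch of the input categorization step (item 2 above): each Telegram message carries different fields depending on its type, and routing follows from which field is present. The template does this with n8n nodes; the field names below follow the Telegram Bot API message object, and the route labels are illustrative.

```typescript
// Simplified Telegram message payload: only the fields relevant for routing.
interface TelegramMessage {
  text?: string;                                      // plain text / commands
  photo?: unknown[];                                  // present when a photo (e.g. a receipt) is sent
  voice?: { file_id: string };                        // present for voice notes
  document?: { file_id: string; mime_type?: string }; // e.g. a PDF invoice
}

type Route = "photo-ocr" | "document-ocr" | "voice-transcription" | "text";

function routeMessage(msg: TelegramMessage): Route {
  if (msg.photo?.length) return "photo-ocr";
  if (msg.document) return "document-ocr";
  if (msg.voice) return "voice-transcription";
  return "text";
}
```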

Prerequisites/Requirements

To use this workflow, you will need:

  • n8n Instance: A running instance of n8n.
  • Telegram Bot: A Telegram bot token and a chat ID where the bot will operate.
  • OpenRouter Account/API Key: For the OpenRouter Chat Model nodes that power the GPT-4o and Claude Sonnet processing used to extract and categorize expenses.
  • Model Context Protocol (MCP) Client: The JsonStorageMcp tool and the OCR/transcription nodes use the Ainoflow API (Bearer credentials), as described in the setup requirements above.
  • Langchain Nodes: The workflow uses several Langchain nodes (AI Agent, Simple Memory, Calculator, Think Tool, MCP Client Tool, OpenRouter Chat Model, AI Agent Tool), indicating a need for the @n8n/n8n-nodes-langchain package to be installed in your n8n instance.

Setup/Usage

  1. Import the Workflow: Download the workflow JSON and import it into your n8n instance.
  2. Configure Telegram Credentials:
    • Set up a Telegram credential in n8n with your bot token.
    • Ensure the Telegram Trigger node is configured to listen for messages in your desired chat.
  3. Configure OpenRouter Credentials:
    • Set up an OpenRouter credential in n8n with your API key.
    • Ensure the OpenRouter Chat Model node is using the correct credential.
  4. Configure MCP Client: If you are using the Ainoflow JsonStorageMcp tool (as in this template), set up its Bearer credential with your Ainoflow API key.
  5. Activate the Workflow: Once all credentials are set and nodes are configured, activate the workflow.
  6. Interact with the Bot: Send receipt images, PDF invoices, or voice messages to your Telegram bot. The bot will process them and reply with the extracted expense details. You can also send text messages to interact with the AI agent.
