Thiago Vazzoler Loureiro

Building Custom Automations with n8n | Node.js Developer | Integration Specialist | Open Source Contributor. If you need help with n8n workflows, API integrations, or custom nodes, feel free to connect on LinkedIn: http://linkedin.com/in/thiago-vazzoler-loureiro-24056227

Total Views: 3,344
Templates: 3

Templates by Thiago Vazzoler Loureiro

WhatsApp to Chatwoot message forwarder with media support

Description
Automates the forwarding of messages from WhatsApp (via the Evolution API) to Chatwoot, enabling seamless integration between external WhatsApp users and internal Chatwoot agents. It supports both text and media messages, ensuring that customer conversations are centralized and accessible to support teams.

What Problem Does This Solve?
Managing conversations across multiple platforms can lead to fragmented support and lost context. This subworkflow bridges the gap between WhatsApp and Chatwoot, automatically forwarding messages received via the Evolution API to a Chatwoot inbox. It simplifies the communication flow, centralizes conversations, and enhances the support team's productivity.

Features
- Support for plain text messages
- Support for media messages: images, videos, documents, and audio
- Automatic media upload to Chatwoot with proper attachment rendering
- Automatic contact association using the WhatsApp number and the Chatwoot API
- Designed to work with Evolution API webhooks or any message source

Prerequisites
Before using this automation, make sure you have:
- Evolution API credentials with the incoming message webhook configured
- A Chatwoot instance with an access token and API endpoint
- An existing Chatwoot inbox (preferably an API channel)
- A configured HTTP Request node in n8n for Chatwoot API calls

Suggested Usage
This subworkflow should be attached to a parent workflow that receives WhatsApp messages via the Evolution API webhook. Ideal for:
- Centralized customer service operations
- WhatsApp-to-CRM/chat routing
- Hybrid automation workflows where human agents need to reply from Chatwoot

It ensures that all incoming WhatsApp messages are properly converted and forwarded to Chatwoot, preserving message content and structure.
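The forwarding step boils down to one Chatwoot API call per message. The sketch below is a simplified illustration, not the template's exact implementation: it assumes Node.js 18+ (for example inside an n8n Code node), and CHATWOOT_URL, ACCOUNT_ID, CONVERSATION_ID, and the token are placeholders; verify the message and attachment endpoints against your Chatwoot version's API documentation.

```typescript
// Minimal sketch (assumptions noted above): forwarding one Evolution API
// message to an existing Chatwoot conversation.

interface EvolutionMessage {
  text?: string;          // plain-text body, if any
  mediaUrl?: string;      // URL of the image/video/document/audio, if any
}

const CHATWOOT_URL = 'https://chatwoot.example.com';    // placeholder
const ACCOUNT_ID = 1;                                    // placeholder
const CONVERSATION_ID = 42;                              // placeholder
const API_TOKEN = process.env.CHATWOOT_API_TOKEN ?? '';

async function forwardToChatwoot(msg: EvolutionMessage): Promise<void> {
  const endpoint =
    `${CHATWOOT_URL}/api/v1/accounts/${ACCOUNT_ID}/conversations/${CONVERSATION_ID}/messages`;

  if (msg.mediaUrl) {
    // Media: download the file and re-upload it as a multipart attachment so
    // Chatwoot renders it inline instead of as a bare link.
    const media = await fetch(msg.mediaUrl);
    const blob = await media.blob();

    const form = new FormData();
    form.append('content', msg.text ?? '');
    form.append('message_type', 'incoming');
    form.append('attachments[]', blob, 'attachment');

    await fetch(endpoint, {
      method: 'POST',
      headers: { api_access_token: API_TOKEN },
      body: form,
    });
  } else {
    // Plain text: a simple JSON payload is enough.
    await fetch(endpoint, {
      method: 'POST',
      headers: {
        api_access_token: API_TOKEN,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ content: msg.text ?? '', message_type: 'incoming' }),
    });
  }
}
```

In the workflow itself these calls are made with n8n HTTP Request nodes, with the conversation and contact lookups handled upstream.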

By Thiago Vazzoler Loureiro
2,581 views

Vectorize Medical Procedures for Semantic Search with TUSS, Gemini & pgVector

Description
This workflow vectorizes the TUSS (Terminologia Unificada da Saúde Suplementar) table by transforming medical procedures into vector embeddings ready for semantic search. It automates the import of TUSS data, performs text preprocessing, and uses Google Gemini to generate vector embeddings. The resulting vectors can be stored in a vector database, such as PostgreSQL with pgvector, enabling efficient semantic queries across healthcare data.

What Problem Does This Solve?
Searching for medical procedures using traditional keyword matching is often imprecise. This workflow enhances the search experience by enabling semantic similarity search, which can retrieve more relevant results based on the meaning of the query instead of exact word matches.

How It Works
1. Import TUSS data: load medical procedure entries from the TUSS table.
2. Preprocess text: clean and prepare the text for embedding.
3. Generate embeddings: use Google Gemini to convert each procedure into a semantic vector.
4. Store vectors: save the output in a PostgreSQL database with the pgvector extension.

Prerequisites
- An n8n instance (self-hosted)
- A PostgreSQL database with the pgvector extension enabled
- Access to the Google Gemini API
- TUSS data in a structured format (CSV, database, or API source)

Customization Tips
- Adapt the preprocessing logic to your own language or domain-specific terms.
- Swap Google Gemini for another embedding model, such as OpenAI or Cohere.
- Adjust the chunking logic to control the granularity of the semantic representation.

Setup Instructions
1. Prepare a source (database or CSV) with TUSS data. You need at least two fields: CD_ITEM (medical procedure code) and DS_ITEM (medical procedure description).
2. Configure your Oracle or PostgreSQL database credentials in the Credentials section of n8n.
3. Make sure your PostgreSQL database has pgvector installed.
4. Replace the placeholder table and column names with your actual TUSS table.
5. Connect your Google Gemini credentials (via OpenAI proxy or official connector).
6. Run the workflow to vectorize all medical procedure descriptions.
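As a rough illustration of steps 3 and 4 above, the sketch below generates a Gemini embedding for each procedure description and writes it to a pgvector column. It is a hedged example under assumptions, not the workflow itself: the table names tuss_items and tuss_embeddings, the text-embedding-004 model, and the DATABASE_URL and GEMINI_API_KEY environment variables are placeholders to adapt to your own schema and credentials.

```typescript
// Minimal sketch: Node 18+ with the 'pg' package; pgvector accepts vectors as
// '[v1,v2,...]' string literals on insert.
import { Client } from 'pg';

const GEMINI_KEY = process.env.GEMINI_API_KEY ?? '';
const EMBED_URL =
  `https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent?key=${GEMINI_KEY}`;

async function embed(text: string): Promise<number[]> {
  const res = await fetch(EMBED_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'models/text-embedding-004',
      content: { parts: [{ text }] },
    }),
  });
  const json = await res.json();
  return json.embedding.values as number[];
}

async function vectorizeTuss(): Promise<void> {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();

  // Placeholder source table holding the TUSS rows (CD_ITEM, DS_ITEM).
  const { rows } = await db.query('SELECT cd_item, ds_item FROM tuss_items');

  for (const row of rows) {
    // Light preprocessing: collapse whitespace before embedding.
    const cleaned = String(row.ds_item).replace(/\s+/g, ' ').trim();
    const vector = await embed(cleaned);

    // Placeholder target table with a pgvector column, e.g.:
    //   CREATE TABLE tuss_embeddings (cd_item text PRIMARY KEY, embedding vector(768));
    await db.query(
      'INSERT INTO tuss_embeddings (cd_item, embedding) VALUES ($1, $2) ' +
      'ON CONFLICT (cd_item) DO UPDATE SET embedding = EXCLUDED.embedding',
      [row.cd_item, `[${vector.join(',')}]`],
    );
  }

  await db.end();
}
```

The same logic maps onto the workflow's nodes: a database or CSV read, a Code node for cleaning, the Gemini embedding step, and a PostgreSQL insert.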

By Thiago Vazzoler Loureiro
504 views

Build a GLPI knowledge base RAG pipeline with Google Gemini and PostgreSQL

Description
This workflow automates the creation of a Retrieval-Augmented Generation (RAG) pipeline using content from the GLPI knowledge base. It retrieves and processes FAQ articles directly via the GLPI API, cleans and vectorizes the content using pgvector in PostgreSQL, and prepares the data for use by LLM-powered AI agents.

What Problem Does This Solve?
Manually building a RAG pipeline from a GLPI knowledge base requires integrating multiple tools, cleaning data, and managing embeddings, tasks that are often complex and repetitive. This subworkflow simplifies the entire process by automating data retrieval, transformation, and vector storage, allowing you to focus on building intelligent support agents or chatbots powered by your internal documentation.

Features
- Connects to GLPI via API to fetch FAQ articles
- Cleans and normalizes content for better embedding quality
- Generates vector embeddings using Google Gemini (or another model)
- Stores embeddings in a PostgreSQL database with pgvector
- Fully modular: easily integrates with any RAG-ready LLM pipeline

Prerequisites
Before using this subworkflow, make sure you have:
- A GLPI instance installed on a Linux server with API access enabled
- A PostgreSQL database with the pgvector extension installed
- An OpenAI API key (or alternative embedding provider)
- An n8n instance (self-hosted or cloud)

Suggested Usage
This subworkflow is intended to be part of a larger AI pipeline. Attach it to a scheduled workflow (e.g., a daily sync) or run it in response to updates in your GLPI base. It is ideal for internal support bots, IT documentation assistants, and help desk AI agents that rely on up-to-date knowledge.
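For orientation, the sketch below shows the kind of GLPI API calls the retrieval step relies on: open a session, list knowledge base items, and strip HTML before embedding. It is an illustrative example under assumptions, not the subworkflow itself: GLPI_URL, both tokens, the range size, and the field names are placeholders to verify against your GLPI instance, and the workflow performs the equivalent calls with n8n HTTP Request nodes.

```typescript
// Minimal sketch: fetching GLPI FAQ articles over the GLPI REST API (Node 18+).
const GLPI_URL = 'https://glpi.example.com/apirest.php';   // placeholder
const APP_TOKEN = process.env.GLPI_APP_TOKEN ?? '';
const USER_TOKEN = process.env.GLPI_USER_TOKEN ?? '';

interface KbItem {
  id: number;
  name: string;     // article title
  answer: string;   // HTML body of the FAQ article
}

async function fetchFaqArticles(): Promise<KbItem[]> {
  // 1. Open a session to obtain a session_token.
  const init = await fetch(`${GLPI_URL}/initSession`, {
    headers: {
      'App-Token': APP_TOKEN,
      Authorization: `user_token ${USER_TOKEN}`,
    },
  });
  const { session_token } = await init.json();
  const authHeaders = { 'App-Token': APP_TOKEN, 'Session-Token': session_token };

  try {
    // 2. List knowledge base items (adjust the range for larger bases).
    const res = await fetch(`${GLPI_URL}/KnowledgeBaseItem?range=0-199`, {
      headers: authHeaders,
    });
    const items: KbItem[] = await res.json();

    // 3. Strip HTML tags and collapse whitespace so only clean text reaches
    //    the embedding step downstream.
    return items.map((it) => ({
      ...it,
      answer: it.answer.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim(),
    }));
  } finally {
    // 4. Always close the session.
    await fetch(`${GLPI_URL}/killSession`, { headers: authHeaders });
  }
}
```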

By Thiago Vazzoler Loureiro
259 views