NovaNode
NovaNode is a software factory building scalable AI for modern businesses. We specialize in omni-channel AI agents that automate customer support, sales, and operations across voice, WhatsApp, email, and chat. We serve both enterprise clients and SMEs, balancing deep integrations with speed and cost-efficiency. By combining automation with adaptability, we help companies scale faster and operate smarter. NovaNode is where AI meets execution.
Templates by NovaNode
AI-powered WhatsApp chatbot for text, voice, images, and PDF with RAG
Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate the ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly over WhatsApp.

What these workflows do

Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and their metadata into a MongoDB Atlas vector store, enabling fast semantic search.

Workflow 2: AI-Powered Query & Response via WhatsApp
- Listens for incoming WhatsApp user messages, supporting several types:
  - Text messages: plain text queries from users.
  - Audio messages: voice notes transcribed into text for processing.
  - Image messages: photos or screenshots analyzed to provide contextual answers.
  - Document messages: PDFs, spreadsheets, or other files parsed for relevant content.
- Converts incoming queries to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI’s GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers.
- Maintains conversation context across multiple turns using a memory buffer node.
- Routes each message type to the appropriate processing nodes to maximize answer quality.

Setup

Setting up vector embeddings
- Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
- Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
- Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat
- Authenticate the WhatsApp node with your Meta account credentials to enable receiving and sending messages.
- Connect the MongoDB collection containing the embedded product documentation to the MongoDB Vector Search node used for similarity queries.
- Set up the system prompt in the Knowledge Base Agent node to reflect your company’s tone, answering style, and any business rules, and make sure it references the connected MongoDB collection for context retrieval.

Make sure
Both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing the vector data,
- the relevant metadata fields (e.g. document ID, source), and
- the same vector index name configured (e.g. data_index).

Search Index Example:

    {
      "mappings": {
        "dynamic": false,
        "fields": {
          "_id": { "type": "string" },
          "text": { "type": "string" },
          "embedding": {
            "type": "knnVector",
            "dimensions": 1536,
            "similarity": "cosine"
          },
          "source": { "type": "string" },
          "doc_id": { "type": "string" }
        }
      }
    }
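For reference, the ingestion flow above (import, chunk, embed, insert) can be sketched outside n8n in a few lines of Python. This is a minimal sketch rather than the n8n workflow itself: it assumes the openai and pymongo packages, a 1536-dimension embedding model such as text-embedding-3-small, and placeholder connection, database, and collection names.

```python
from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
mongo = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
collection = mongo["support"]["product_docs"]  # hypothetical database and collection names


def chunk_text(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a long document into overlapping chunks, as the n8n text splitter does."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]


def ingest(doc_id: str, source: str, text: str) -> None:
    """Chunk one document, embed each chunk, and store chunk + metadata in MongoDB."""
    chunks = chunk_text(text)
    # text-embedding-3-small returns 1536-dimension vectors, matching the index above.
    response = openai_client.embeddings.create(model="text-embedding-3-small", input=chunks)
    records = [
        {"doc_id": doc_id, "source": source, "text": chunk, "embedding": item.embedding}
        for chunk, item in zip(chunks, response.data)
    ]
    collection.insert_many(records)  # one MongoDB document per chunk
```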
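The query side of Workflow 2 follows the usual RAG pattern: embed the question, retrieve the nearest chunks, and let the model answer from them. Continuing the sketch above (it reuses openai_client and collection), and assuming the knnVector index named data_index shown above, which is queried through Atlas Search’s legacy knnBeta operator:

```python
def answer(question: str, k: int = 4) -> str:
    """Embed the question, retrieve the k nearest chunks, and answer with GPT-4o-mini."""
    query_vec = openai_client.embeddings.create(
        model="text-embedding-3-small", input=[question]
    ).data[0].embedding

    # Similarity search against the knnVector index defined above (name: data_index).
    hits = collection.aggregate([
        {"$search": {"index": "data_index",
                     "knnBeta": {"vector": query_vec, "path": "embedding", "k": k}}},
        {"$project": {"_id": 0, "text": 1}},
    ])
    context = "\n\n".join(hit["text"] for hit in hits)

    chat = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided documentation."},
            {"role": "user", "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```

Newer Atlas Vector Search indexes use the $vectorSearch stage instead of knnBeta; the n8n MongoDB Vector Search node handles this for you, so the sketch is only meant to show the shape of the data flow.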
Build a chatbot with Reinforcement Learning from Human Feedback (RLHF) and RAG
Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers who want to build an AI-powered knowledge assistant with retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF) via Telegram.

What problem is this workflow solving?
Manual knowledge management and answering support queries can be time-consuming and error-prone. This solution automates importing and indexing official documentation into MongoDB vector search and enhances AI responses with Telegram-based user feedback to continuously improve answer quality.

What these workflows do

Workflow 1: Document ingestion & indexing
- Manually triggered workflow imports product documentation from Google Docs.
- Documents are split into manageable chunks and embedded using OpenAI embeddings.
- Embedded document chunks are stored in a MongoDB Atlas vector store to enable semantic search.

Workflow 2: Telegram chat with RLHF feedback loop
- Listens for user messages via the Telegram bot integration.
- Uses vector similarity search on MongoDB to retrieve relevant documentation chunks.
- Generates answers with the OpenAI GPT-4o-mini model using retrieval-augmented generation.
- Sends answers back via Telegram and waits for user feedback (approval or disapproval).
- Captures the feedback, maps it as positive or negative, and stores it with the conversation data for future model improvement.

Setup

Setting up vector embeddings
- Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
- Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
- Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index templates below for reference.

Setting up chat with Telegram RLHF
- Create a bot in Telegram with @botFather using the /newbot command.
- Connect the MongoDB database and search index used for vector search in the previous workflow. Also create two new collections in MongoDB Atlas: one for feedback and one for chat history.
- Create a search index for the feedback collection, copying the provided template.
- Configure the AI system prompt in the “Knowledge Base Agent” node, making sure it references all three connected tools (productDocs, feedbackPositive, feedbackNegative) as provided in the template prompt.

Make sure
- The product documentation and feedback collections are connected to the same MongoDB database.
- There are three distinct MongoDB collections: one for product documentation, one for feedback, and one for chat history (the chat history collection can be kept separate).
- The Telegram API credentials are valid and the webhook URLs are correctly set up.

MongoDB Search Index Templates

Documentation Collection Index

    {
      "mappings": {
        "dynamic": false,
        "fields": {
          "_id": { "type": "string" },
          "text": { "type": "string" },
          "embedding": {
            "type": "knnVector",
            "dimensions": 1536,
            "similarity": "cosine"
          },
          "source": { "type": "string" },
          "doc_id": { "type": "string" }
        }
      }
    }

Feedback Collection Index

    {
      "mappings": {
        "dynamic": false,
        "fields": {
          "prompt": { "type": "string" },
          "response": { "type": "string" },
          "text": { "type": "string" },
          "embedding": {
            "type": "knnVector",
            "dimensions": 1536,
            "similarity": "cosine"
          },
          "feedback": { "type": "token" }
        }
      }
    }
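If you prefer to create these search indexes from code rather than in the Atlas UI, PyMongo 4.5+ exposes create_search_index. A minimal sketch, assuming placeholder connection, database, and collection names, with the definition copied from the Documentation Collection Index template above:

```python
from pymongo import MongoClient

mongo = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
docs = mongo["support"]["product_docs"]  # hypothetical database and collection names

# Definition copied from the Documentation Collection Index template above.
definition = {
    "mappings": {
        "dynamic": False,
        "fields": {
            "_id": {"type": "string"},
            "text": {"type": "string"},
            "embedding": {"type": "knnVector", "dimensions": 1536, "similarity": "cosine"},
            "source": {"type": "string"},
            "doc_id": {"type": "string"},
        },
    }
}

# The index name must match the one configured in the n8n nodes (data_index).
docs.create_search_index({"name": "data_index", "definition": definition})
```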
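The RLHF part of Workflow 2 boils down to writing one record per user reaction into the feedback collection, in the shape the feedback index above expects (prompt, response, text, embedding, feedback). A minimal, self-contained sketch of that step, assuming the openai and pymongo packages and placeholder names (the actual workflow does this through n8n’s MongoDB and Telegram nodes):

```python
from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
mongo = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
feedback = mongo["support"]["feedback"]  # hypothetical database and collection names


def record_feedback(prompt: str, response: str, approved: bool) -> None:
    """Store one feedback record, mapping the Telegram reply to positive/negative."""
    text = f"Q: {prompt}\nA: {response}"
    # Embedding the exchange lets similar past answers be retrieved later as
    # positive or negative examples (the feedbackPositive / feedbackNegative tools).
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=[text]
    ).data[0].embedding

    feedback.insert_one({
        "prompt": prompt,
        "response": response,
        "text": text,
        "embedding": embedding,
        "feedback": "positive" if approved else "negative",
    })
```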