Templates by Anderson Adelino
Create voice assistant interface with OpenAI GPT-4o-mini and text-to-speech
Voice Assistant Interface with n8n and OpenAI

This workflow creates a voice-activated AI assistant interface that runs directly in your browser. Users can click on a glowing orb to speak with the AI, which responds with voice using OpenAI's text-to-speech capabilities.

Who is it for?
This template is perfect for:
- Developers looking to add voice interfaces to their applications
- Customer service teams wanting to create voice-enabled support systems
- Content creators building interactive voice experiences
- Anyone interested in creating their own "Alexa-like" assistant

How it works
The workflow consists of two main parts:
- Frontend Interface: a beautiful animated orb that users click to activate voice recording
- Backend Processing: receives the audio transcription, processes it through an AI agent with memory, and returns voice responses

The system uses:
- Web Speech API for voice recognition (browser-based)
- OpenAI GPT-4o-mini for intelligent responses
- OpenAI Text-to-Speech for voice synthesis
- Session memory to maintain conversation context

Setup requirements
- n8n instance (self-hosted or cloud)
- OpenAI API key with access to the GPT-4o-mini model and the Text-to-Speech API
- Modern web browser with Web Speech API support (Chrome, Edge, Safari)

How to set up
1. Import the workflow into your n8n instance
2. Add your OpenAI credentials to both OpenAI nodes
3. Copy the webhook URL from the "Audio Processing Endpoint" node
4. Edit the "Voice Assistant UI" node and replace YOUR_WEBHOOK_URL_HERE with your webhook URL
5. Access the "Voice Interface Endpoint" webhook URL in your browser
6. Click the orb and start talking!
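The "Generate Voice Response" step relies on OpenAI's Text-to-Speech API. A minimal sketch of an equivalent direct HTTP call, assuming the `tts-1` model and the `alloy` voice (the template's node may use different settings):

```javascript
// Build the JSON body for OpenAI's POST /v1/audio/speech endpoint.
// Model and voice below are illustrative defaults, not the template's
// guaranteed configuration.
function buildTtsRequest(text, voice = "alloy") {
  return {
    model: "tts-1", // OpenAI text-to-speech model
    input: text,    // text to synthesize
    voice,          // one of OpenAI's preset voices
  };
}

// Node.js usage sketch; OPENAI_API_KEY is assumed to be set in the env.
async function synthesize(text) {
  const res = await fetch("https://api.openai.com/v1/audio/speech", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildTtsRequest(text)),
  });
  return Buffer.from(await res.arrayBuffer()); // raw audio bytes (mp3 by default)
}
```

In the workflow itself this call is made by the OpenAI node, so no custom code is required; the sketch only shows the shape of the request.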
How to customize the workflow
- Change the AI personality: edit the system message in the "Process User Query" node
- Modify the visual style: customize the CSS in the "Voice Assistant UI" node
- Add more capabilities: connect additional tools to the AI Agent
- Change the voice: select a different voice in the "Generate Voice Response" node
- Adjust memory: modify the context window length in the "Conversation Memory" node

Demo
Watch the template in action: https://youtu.be/0bMdJcRMnZY
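The browser side of this template boils down to: recognize speech with the Web Speech API, POST the transcript to the n8n webhook, and play back the audio the workflow returns. A minimal sketch, in which the webhook URL, element id, and payload field names are illustrative assumptions rather than the template's exact contract:

```javascript
// Placeholder webhook URL — replace with the "Audio Processing Endpoint" URL.
const WEBHOOK_URL = "https://your-n8n-instance/webhook/voice";

// Build the request body sent to the webhook (field names are assumed).
function buildPayload(transcript, sessionId) {
  return JSON.stringify({ transcript, sessionId });
}

// Browser-only wiring: speech recognition -> webhook -> audio playback.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.lang = "en-US";
  recognition.onresult = async (event) => {
    const transcript = event.results[0][0].transcript;
    const response = await fetch(WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: buildPayload(transcript, "session-1"),
    });
    // The workflow answers with synthesized speech; play it back.
    const audio = new Audio(URL.createObjectURL(await response.blob()));
    audio.play();
  };
  // "orb" is a hypothetical element id for the clickable orb.
  document.getElementById("orb").addEventListener("click", () => recognition.start());
}
```

The template ships its own UI in the "Voice Assistant UI" node; this sketch is only useful if you want to embed the voice interface in your own page.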
Send private welcome messages to new WhatsApp group members with Evolution API
Who's it for
This template is perfect for community managers, business owners, and WhatsApp group administrators who want to create a welcoming experience for new members. Whether you're running a support group, managing a business community, or moderating a hobby group, this automation ensures every new member feels valued from the moment they join.

How it works
The workflow automatically detects when someone joins your WhatsApp group and sends them a personalized welcome message directly to their private chat. It uses Evolution API to interface with WhatsApp Business and includes a natural delay to make the interaction feel more human. The entire process is hands-off once configured, ensuring consistent engagement with new members 24/7.

What it does
- Monitors group activity: receives real-time notifications when members join or leave
- Filters for your specific group: ensures messages are only sent for your designated group
- Validates new joins: confirms the event is a member joining (not leaving)
- Adds natural timing: waits a customizable period before sending the message
- Sends private welcome: delivers your welcome message directly to the new member's chat

Requirements
- Evolution API instance (self-hosted or cloud service)
- WhatsApp Business account connected to Evolution API
- Group admin permissions for the WhatsApp group
- n8n instance (self-hosted or cloud)

How to set up
1. Import the workflow into your n8n instance
2. Configure the Set Variables node with:
   - Your WhatsApp group ID (format: xxxxxxxxxxxxx@g.us)
   - Evolution API key
   - Instance name from Evolution API
   - Evolution API URL
   - Custom welcome message
   - Delay time in minutes
3. Copy the webhook URL from the Webhook node
4. Configure Evolution API to send group notifications to your webhook URL
5. Test the workflow by having someone join your group
6. Activate the workflow for continuous operation

For a detailed video tutorial on setting up this workflow, check out: https://youtu.be/WO2MJoQqLvo

How to customize the workflow
- Welcome message: edit the message in the Set Variables node to match your group's tone
- Timing: adjust the wait time to send messages immediately or after several minutes
- Multiple groups: duplicate the workflow and change the group ID for each group
- Rich media: extend the HTTP Request node to send images or documents with the welcome
- Conditional messages: add IF nodes to send different messages based on time of day or member count
- Follow-up sequence: chain additional HTTP Request nodes to create a welcome series
Document-based AI chatbot with RAG, OpenAI and Cohere reranker
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Build an intelligent AI chatbot with RAG and Cohere Reranker

Who is it for?
This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that can answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

How it works
This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:
- Chat Interface: users interact with the AI through a chat interface
- Document Processing: PDFs from Google Drive are automatically extracted and converted into searchable vectors
- Smart Search: when users ask questions, the system searches through vectorized documents using semantic search
- Reranking: Cohere's reranker ensures the most relevant information is prioritized
- AI Response: OpenAI generates contextual answers based on the retrieved information
- Memory: conversation history is maintained for context-aware interactions

Setup steps

Prerequisites
- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with the vector extension enabled
- Google Drive access
- Cohere API key

Configure Supabase Vector Store
First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

Set up credentials
Add the following credentials in n8n:
- OpenAI: add your OpenAI API key
- Supabase: add your Supabase URL and service role key
- Google Drive: connect your Google account
- Cohere: add your Cohere API key

Configure the workflow
1. In the "Download file" node, replace URL DO ARQUIVO with your Google Drive file URL
2. Adjust the table name in both Supabase Vector Store nodes if needed
3. Customize the agent's tool description in the "searchCafeina" node

Load your documents
- Execute the bottom workflow (starting with "When clicking 'Execute workflow'")
- This will download your PDF, extract its text, and store it in Supabase
- You can repeat this process for multiple documents

Start chatting
Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.

How to customize
- Different document types: replace the Google Drive node with other sources (Dropbox, S3, local files)
- Multiple knowledge bases: create separate vector stores for different topics
- Custom prompts: modify the agent's system message for specific use cases
- Language models: switch between different OpenAI models or use other LLM providers
- Reranking settings: adjust the top-k parameter for more or fewer search results
- Memory window: configure the conversation memory buffer size

Tips for best results
- Use high-quality, well-structured documents for better search accuracy
- Keep document chunks reasonably sized for optimal retrieval
- Regularly update your vector store with new information
- Monitor token usage to optimize costs
- Test different reranking thresholds for your use case

Common use cases
- Customer Support: create bots that answer questions from product documentation
- HR Assistant: build assistants that help employees find information in company policies
- Educational Tutor: develop tutors that answer questions from course materials
- Research Assistant: create tools that help researchers find relevant information in papers
- Legal Helper: build assistants that search through legal documents and contracts
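Under the hood, the match_cafeina function in this template scores rows by cosine distance (pgvector's <=> operator) and reports 1 minus that distance as similarity. A small sketch of the same computation, useful for sanity-checking what the vector store returns:

```javascript
// Cosine similarity, mirroring 1 - (embedding <=> query_embedding)
// in the match_cafeina SQL function (pgvector's <=> is cosine distance).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored rows against a query embedding, like the ORDER BY ... LIMIT
// in match_cafeina. Each row is assumed to look like { embedding: number[] }.
function rank(rows, queryEmbedding, matchCount = 10) {
  return rows
    .map((row) => ({ ...row, similarity: cosineSimilarity(row.embedding, queryEmbedding) }))
    .sort((x, y) => y.similarity - x.similarity)
    .slice(0, matchCount);
}
```

In production the ranking happens inside Postgres (and is then refined by the Cohere reranker); this JavaScript version exists only to make the similarity math transparent.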