Davide
Full-stack Web Developer based in Italy specialising in Marketing & AI-powered automations. For business enquiries, send me an email at info@n3w.it or add me on Linkedin.com/in/davideboizza
Templates by Davide
AI-powered WhatsApp chatbot 🤖📲 for text, voice, images & PDFs with memory 🧠
This workflow is an advanced multimodal AI assistant for WhatsApp. It can understand and respond to text, images, voice messages, and PDF documents by combining OpenAI models with routing logic that adapts to the content received.

---

🎯 Core Features

📥 1. Automatic Message Type Detection
Using the Input type node, the bot detects whether the user has sent text, a voice message, an image, a file (PDF), or other unsupported content.

💬 2. Smart Text Message Handling
Text messages are processed by an OpenAI GPT-4o-mini agent with a customized system prompt. Replies are concise, accurate, and formatted for mobile readability.

🖼️ 3. Image Analysis & Description
Images are downloaded, converted to base64, and analyzed by an image-aware AI model. The output is a rich, structured description, useful for visually impaired users or for visual content interpretation.

🎙️ 4. Voice Message Transcription & Reply
Audio messages are downloaded and transcribed with OpenAI Whisper. The transcribed text is then answered by the AI. Optionally, the reply can be converted back to voice with OpenAI's text-to-speech and sent as an audio message.

📄 5. PDF Document Extraction & Summary
Only PDFs are allowed (filtered by MIME type). The document's content is extracted and combined with the user's message, and the AI provides a relevant summary or answer.

🧠 6. Contextual Memory
Each user has a personalized session ID with a memory window of 10 interactions, which keeps the conversation natural and contextual.

---

How It Works

This workflow handles incoming WhatsApp messages and processes the different input types (text, audio, images, and PDF documents) with AI-powered analysis:

1. Trigger: The WhatsApp Trigger node listens for incoming messages (text, audio, images, or documents).
2. Input Routing: The Input type (Switch) node checks the message type and routes it to the appropriate branch:
   - Text: forwarded directly to the AI agent for response generation.
   - Audio: the audio file is downloaded, transcribed with OpenAI, and the transcription is sent to the AI agent.
   - Image: the image is downloaded, analyzed with OpenAI's GPT-4 model, and turned into a detailed description.
   - PDF document: the file is downloaded, its text extracted, and processed by the AI agent.
   - Unsupported formats: an error message is sent back.
3. AI Processing: The AI Agent1 node, powered by OpenAI, processes the input (text, transcribed audio, image description, or PDF content) and generates a response.
4. Response Handling: For audio inputs, the AI's response is converted back into speech (OpenAI TTS) and sent as a voice message; for other inputs, the response is sent as a WhatsApp text message.
5. Memory: The Simple Memory node maintains conversation context for follow-up interactions.

---

Setup Steps

1. Configure WhatsApp API credentials: Set up WhatsApp Business API credentials (Meta Developer Account) and add them to the WhatsApp Trigger, Get Image/Audio/File URL, and Send Message nodes.
2. Set up the OpenAI integration: Provide an OpenAI API key in the Analyze Image, Transcribe Audio, Generate Audio Response, and AI Agent1 nodes.
3. Adjust input handling (optional): Modify the Input type Switch node to handle additional message types, and update the "Only PDF File" IF node to support other document formats.
4. Test & deploy: Activate the workflow and test with different message types (text, audio, image, PDF), making sure responses are generated and sent back via WhatsApp correctly. A hedged sketch of the underlying media-download call follows at the end of this section.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
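For reference, here is a minimal TypeScript sketch of what the media-download step does outside n8n: resolving a WhatsApp media ID through the Meta Graph API and base64-encoding the file for the image model. The Graph API version, token, and media ID are placeholders, not values taken from the workflow.

```typescript
// Sketch: fetch a WhatsApp media item by ID and return it as base64,
// mirroring the "Get Image/Audio/File URL" + download steps in the workflow.
// Assumes the Meta Graph API media endpoints; token and mediaId are placeholders.
const GRAPH_BASE = "https://graph.facebook.com/v19.0";

async function downloadMediaAsBase64(mediaId: string, token: string): Promise<string> {
  // Step 1: resolve the media ID to a short-lived download URL.
  const meta = await fetch(`${GRAPH_BASE}/${mediaId}`, {
    headers: { Authorization: `Bearer ${token}` },
  }).then((r) => r.json() as Promise<{ url: string; mime_type: string }>);

  // Step 2: download the binary (the URL also requires the bearer token).
  const bin = await fetch(meta.url, {
    headers: { Authorization: `Bearer ${token}` },
  }).then((r) => r.arrayBuffer());

  // Step 3: base64-encode so it can be passed to an image-aware model.
  return Buffer.from(bin).toString("base64");
}

// Usage (IDs are placeholders):
// const b64 = await downloadMediaAsBase64("MEDIA_ID", process.env.WHATSAPP_TOKEN!);
```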
AI voice chatbot with ElevenLabs & OpenAI for customer service and restaurants
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Create a FREE account on ElevenLabs Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'testchatbotelevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies. ---- Need help customizing? Contact me for consulting and support or add me on Linkedin.
Automate SEO-Optimized WordPress posts with AI & Google Sheets
This workflow automates the creation of a complete SEO-optimized blog post — content, title, image, and meta tags — and publishes it on WordPress. It uses AI models (DeepSeek and OpenRouter) for content generation and SEO optimization, and integrates with Google Sheets, WordPress, and OpenAI (for image generation), saving time while keeping the output high-quality and SEO-friendly.

---

How It Works

1. Trigger: A Manual Trigger node starts the process when you click "Test workflow".
2. Fetch context: The Google Sheets node retrieves the blog post context (e.g., topic, keywords) from a predefined Google Sheet.
3. Generate content: The Generate Article node uses an AI model (DeepSeek) to create an SEO-friendly blog post from the fetched context, and the Generate Title node creates a compelling, keyword-rich title.
4. Publish to WordPress: The Add Draft to WP node creates a draft post in WordPress with the generated content and title.
5. Generate and upload the image: The Generate Image node uses OpenAI to create a realistic, blog-appropriate image; the Upload Image node uploads it to the WordPress media library; and the Set Image node sets it as the post's featured image.
6. SEO optimization: The SEO Expert node analyzes the post and generates optimized meta titles and descriptions with an AI model (OpenRouter), and the Set Metatag node updates the WordPress post with them.
7. Update Google Sheets: The Update Sheet and Finish Work nodes write the post's details — URL, title, and metadata — back to the Google Sheet.

---

Setup Steps

1. Google Sheets: Create a Google Sheet with columns for ID POST, PROMPT, TITLE, URL, METATITLE, and METADESCRIPTION, then link it to the Get Context node by providing the Document ID and Sheet Name.
2. WordPress integration: Set up WordPress credentials in n8n for the Add Draft to WP, Upload Image, and Set Image nodes, and make sure the WordPress site is reachable via its REST API (a hedged sketch of the underlying REST calls follows this section).
3. AI model configuration: Configure DeepSeek and OpenRouter credentials for the Generate Article, Generate Title, and SEO Expert nodes, and check that the models are set up to generate content and meta tags.
4. Image generation: Set up OpenAI credentials for the Generate Image node.
5. Meta tag optimization: The SEO Expert node uses OpenRouter to generate meta titles and descriptions; make sure it is configured to analyze the blog post content and produce SEO-friendly tags.
6. Workflow execution: Click "Test workflow". The workflow fetches the post context from Google Sheets, generates the article, title, and image, publishes the draft to WordPress, uploads and sets the featured image, generates and applies the meta tags, and updates the Google Sheet with the post details.
7. Optional customization: Extend the workflow with further SEO optimizations such as internal linking, keyword density analysis, or social media sharing.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
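For orientation, this is a minimal sketch of the WordPress REST API calls that the Add Draft to WP and Set Image nodes wrap: creating a draft and assigning a featured image. The site URL and credentials are placeholders, and meta-tag fields are omitted because they depend on your SEO plugin.

```typescript
// Sketch: the WordPress REST calls behind "Add Draft to WP" and "Set Image".
// Site URL and credentials are placeholders; uses Basic auth with an application password.
const WP = "https://example.com/wp-json/wp/v2";
const auth = "Basic " + Buffer.from("USER:APP_PASSWORD").toString("base64");

async function createDraft(title: string, content: string): Promise<number> {
  const res = await fetch(`${WP}/posts`, {
    method: "POST",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify({ title, content, status: "draft" }),
  });
  const post = (await res.json()) as { id: number };
  return post.id;
}

async function setFeaturedImage(postId: number, mediaId: number): Promise<void> {
  // mediaId comes from a prior upload to /wp-json/wp/v2/media.
  await fetch(`${WP}/posts/${postId}`, {
    method: "POST",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify({ featured_media: mediaId }),
  });
}
```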
Build a PDF Document RAG System with Mistral OCR, Qdrant and Gemini AI
This workflow processes PDF documents with Mistral's OCR, stores the extracted text in a Qdrant vector database, and enables Retrieval-Augmented Generation (RAG) for answering questions. Once configured, it automates document ingestion, vectorization, and intelligent querying, enabling powerful RAG applications.

---

Benefits

- End-to-end automation: No manual interaction is needed — documents are read, processed, and made queryable with minimal setup.
- Scalable and modular: The workflow uses sub-workflows and batching, making it easy to scale and customize.
- Multi-model support: Combines Mistral for OCR, OpenAI for embeddings, and Gemini for answering, taking advantage of the strengths of each.
- Real-time Q&A: With RAG integration, users can query document content in natural language and receive accurate responses grounded in the PDF data.
- Light or full mode: Users can index either the full page content or only summarized text, optimizing for performance or richness.

---

How It Works

1. PDF processing with Mistral OCR: The workflow uploads a PDF to Mistral's API, which performs OCR to extract text and metadata. The extracted content is split into manageable chunks (e.g., pages or sections) for further processing.
2. Vector storage in Qdrant: The extracted text is converted into embeddings with OpenAI's embedding model and stored in a Qdrant vector database, enabling efficient similarity search for RAG.
3. Question answering with RAG: When a user submits a question via the chat interface, the workflow retrieves the relevant text chunks from Qdrant by vector similarity, and a language model (Google Gemini) generates an answer grounded in the retrieved context.
4. Optional summarization: An optional summarization step uses Google Gemini to condense the extracted text for faster processing or lighter RAG usage.

---

Setup Steps

1. Configure the Qdrant database: Replace QDRANTURL and COLLECTION in the "Create collection" and "Refresh collection" nodes with your Qdrant instance details, and make sure the collection uses the correct vector size (e.g., 1536 for OpenAI embeddings) and distance metric (e.g., Cosine) — see the sketch after this section.
2. Set up credentials: Add credentials for Mistral Cloud (OCR), OpenAI (embeddings), Google Gemini (chat and summarization), Google Drive (if sourcing PDFs from Drive), and Qdrant (vector storage).
3. PDF source configuration: If using Google Drive, set the folder ID in the "Search PDFs" node, or modify the workflow to accept PDFs from other sources (e.g., direct uploads or external APIs).
4. Customize text processing: Adjust chunk size and overlap in the "Token Splitter" node for your document type, and choose between raw text or summarized content for RAG by toggling between the "Set page" and "Summarization Chain" nodes.
5. Test the RAG: Trigger the workflow manually or via a chat message to verify OCR, embedding, and Qdrant storage, then use the "Question and Answer Chain" node to test query responses.
6. Optional sub-workflows: The workflow can run as a sub-workflow for batch processing (e.g., handling multiple PDFs).

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
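Below is a minimal sketch of the request the "Create collection" node sends to Qdrant, assuming a 1536-dimension OpenAI embedding space and Cosine distance as described above; the host, collection name, and API key are placeholders.

```typescript
// Sketch: create a Qdrant collection sized for OpenAI embeddings.
// QDRANT_URL, COLLECTION, and the API key are placeholders.
const QDRANT_URL = "https://your-qdrant-host:6333";
const COLLECTION = "pdf_rag";

async function createCollection(apiKey: string): Promise<void> {
  const res = await fetch(`${QDRANT_URL}/collections/${COLLECTION}`, {
    method: "PUT",
    headers: { "api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({
      vectors: { size: 1536, distance: "Cosine" }, // match your embedding model
    }),
  });
  if (!res.ok) throw new Error(`Qdrant error: ${res.status} ${await res.text()}`);
}
```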
Build a chatbot, voice and phone agent with Voiceflow, Google Calendar and RAG
Voiceflow is a no-code platform for designing, prototyping, and deploying conversational assistants across multiple channels — chat, voice, and phone — with advanced logic and natural language understanding. It supports integration with APIs, webhooks, and tools like Twilio for phone agents, which makes it well suited to customer support agents, voice bots, and intelligent assistants.

This workflow connects n8n and Voiceflow with Google Calendar, Qdrant (vector database), OpenAI, and an order-tracking API to power a smart, multi-channel conversational agent. Voiceflow interacts with three webhook endpoints in n8n:

- n8n_order – receives user input related to order tracking, queries an API, and responds with the tracking status.
- n8n_appointment – processes appointment bookings, reformats the date input with OpenAI, and creates a Google Calendar event.
- n8n_rag – handles general product/service questions through a RAG (Retrieval-Augmented Generation) system backed by Google Drive document ingestion, a Qdrant vector store for search, and OpenAI models for context-based answers.

Each webhook is connected to a corresponding "Capture" block inside Voiceflow, which sends data to n8n and waits for the response (a hedged sketch of such a call follows this section).

---

How It Works

This workflow integrates Voiceflow for chatbot/voice interactions, Google Calendar for appointment scheduling, and RAG for knowledge-based responses:

1. Trigger: Three webhooks (n8n_order, n8n_appointment, n8n_rag) receive inputs from Voiceflow (chat, voice, or phone calls). Each webhook routes requests to a specific function:
   - Order tracking: fetches the order status from an external API.
   - Appointment scheduling: uses OpenAI to parse dates, creates Google Calendar events, and confirms via WhatsApp.
   - RAG system: queries a Qdrant vector store (populated from Google Drive documents) to answer customer questions using GPT-4.
2. AI processing: OpenAI chains convert natural-language dates to Google Calendar formats and generate responses, while the RAG pipeline embeds documents (via OpenAI), stores them in Qdrant, and retrieves context-aware answers.
3. Voiceflow integration: Responses are routed back to Voiceflow for multi-channel delivery (chat, voice, or phone).
4. Outputs: Confirmation messages (e.g., "Event created successfully") and dynamic responses for orders, appointments, or product support.

---

Setup Steps

Prerequisites:
- Google Calendar & Drive OAuth credentials.
- A Qdrant vector database (self-hosted or cloud).
- An OpenAI API key (for GPT-4 and embeddings).

Configuration:
1. Qdrant setup: Run the "Create collection" and "Refresh collection" nodes to initialize the vector store, then populate it with documents using the Google Drive → Qdrant pipeline (embeddings generated via OpenAI).
2. Voiceflow webhooks: Link Voiceflow's "Capture" blocks to the n8n webhook URLs (n8n_order, n8n_appointment, n8n_rag).
3. Google Calendar: Authenticate the Google Calendar node and set the event templates (e.g., summary, description).
4. RAG system: Configure the Qdrant vector store and OpenAI embeddings nodes, and adjust the Retrieve Agent's system prompt for domain-specific queries (e.g., electronics store support).
5. Optional: Add Twilio for phone-agent capabilities and customize the OpenAI prompts for tone and accuracy. You can import a Twilio number and assign it to your Voiceflow agent to turn it into a phone agent.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin
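To illustrate the Capture-to-webhook hand-off, here is a hedged sketch of the kind of request Voiceflow could send to the n8n_appointment endpoint. The webhook path and field names are assumptions — align them with your Webhook node and the variables captured in Voiceflow.

```typescript
// Sketch: a Voiceflow "Capture" block posting appointment details to n8n.
// Path and field names are assumptions; adapt them to the actual Webhook node.
const BASE = "https://your-n8n-host/webhook";

async function bookAppointment(name: string, phone: string, requestedDate: string) {
  const res = await fetch(`${BASE}/n8n_appointment`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, phone, date: requestedDate }),
  });
  // n8n replies after OpenAI has normalised the date and the Calendar event exists.
  return res.json() as Promise<{ message?: string }>;
}

// Usage:
// await bookAppointment("Mario Rossi", "+39333000000", "next Tuesday at 3pm");
```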
Publish image & video to multiple social media (X, Instagram, Facebook and more)
This workflow streamlines publishing posts (image or video) to multiple social media platforms using a unified form and the third-party API service Upload-Post. A form trigger lets users submit content (text and media) through a simple frontend: they select the platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), choose the profile name, write a caption, and upload a photo or video.

---

How It Works

The workflow automates cross-platform social media posting via Upload-Post, handling both images (JPEG) and videos (MP4):

1. Trigger: A form submission captures the user inputs: platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), account (a pre-configured profile name), caption and file (image or video), and an optional Facebook Page ID for targeted posting.
2. Routing: The "Video or Photo?" Switch node checks the file's MIME type — images are routed to the "Post photo" HTTP node (upload_photos API), videos to the "Post video" HTTP node (upload API).
3. API integration: Both nodes send the caption, account name, platform, file binary, and Facebook ID (if provided) to Upload-Post.com's API.
4. Success/failure handling: The "Result Photo/Video" nodes parse the API response.

---

Setup Steps

Prerequisites:
- An Upload-Post.com API key, available from the API Keys dashboard. The free tier allows 10 uploads/month.

Configuration:
1. API authentication: In the HTTP Request nodes (Post photo / Post video), set the Authorization header — Name: Authorization, Value: Apikey YOURAPIKEY_HERE (a hedged request sketch follows this section).
2. Form customization: Adjust the "On form submission" node to add or remove platforms (e.g., YouTube when approved) and to modify the file type restrictions (default: .jpg, .mp4).
3. Account mapping: Make sure the "Account" field matches the profiles configured in Upload-Post.com (e.g., test1, test2).
4. Facebook Page integration (optional): Add a Facebook Page ID field for page-specific posts.
5. Testing: Submit test forms with images and videos, then verify the API responses and success/failure messages.
6. Optional enhancements: Add error logging (e.g., save failed attempts to a database) and extend to YouTube once API support is confirmed.

---

Key Features

- Multi-platform: Post to 6+ social networks simultaneously.
- User-friendly: Simple form interface for non-technical users.
- Error handling: Clear feedback for success and failure cases.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
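As a reference for the API authentication step, this sketch approximates what the "Post photo" HTTP Request node sends to Upload-Post. The endpoint path and form-field names are assumptions inferred from the node names above — verify them against Upload-Post's API documentation before relying on them.

```typescript
// Sketch: an image upload similar to the "Post photo" HTTP Request node.
// Endpoint and form-field names are assumptions; only the "Apikey ..." header
// format comes from the setup notes above.
import { readFile } from "node:fs/promises";

async function postPhoto(
  apiKey: string,
  account: string,
  platform: string,
  caption: string,
  filePath: string,
) {
  const form = new FormData();
  form.append("user", account);        // profile name configured in Upload-Post (assumed field)
  form.append("platform[]", platform); // e.g. "instagram", "x", "tiktok" (assumed field)
  form.append("title", caption);       // caption text (assumed field)
  form.append("photos[]", new Blob([await readFile(filePath)]), "photo.jpg");

  const res = await fetch("https://api.upload-post.com/api/upload_photos", {
    method: "POST",
    headers: { Authorization: `Apikey ${apiKey}` },
    body: form,
  });
  return res.json();
}
```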
A very simple "Human in the Loop" email response system using AI and IMAP
Functionality

This workflow automates the handling of incoming emails by summarizing their content, generating appropriate responses, and validating those responses through a "Human-in-the-Loop" review. It integrates with IMAP email services (e.g., Gmail, Outlook) and uses AI models to streamline the email response process. Every AI-generated response is reviewed by a human before being sent, maintaining professionalism and accuracy — particularly useful for businesses that receive a high volume of email and need to respond quickly without sacrificing quality control.

---

How It Works

1. Email trigger: The Email Trigger (IMAP) node monitors an inbox and starts the workflow when a new message arrives.
2. Email preprocessing: The Markdown node converts the email's HTML content into plain text for easier processing by the AI models.
3. Email summarization: The Email Summarization Chain node uses an AI model (OpenAI) to generate a concise summary of the email, limited to 100 words and written in a professional tone.
4. Response generation: The Write email node uses an AI model (OpenAI) to draft a professional response based on the email content, also limited to 100 words.
5. Human-in-the-loop approval: The Set Email text node prepares the drafted response for approval, and the Approve Email node sends it to a human approver (e.g., an internal email address) for review. The approval email includes the original message and the AI-generated response (a sketch of how such an email might be assembled follows this section). The Approved? node checks whether the reviewer approved the response; if so, the workflow proceeds, otherwise it stops.
6. Sending the response: The Send Email node sends the approved response back to the original sender.

---

Key Features

- Automated email summarization: Summarizes incoming emails for a quick overview of their content.
- AI-powered response generation: Drafts professional responses using AI.
- Human-in-the-loop approval: Ensures every AI-generated response is reviewed and approved by a human before sending.
- IMAP integration: Works with IMAP email services like Gmail and Outlook.
- Efficient email management: Reduces the time and effort required to handle incoming email while maintaining high-quality responses.

This workflow is ideal for businesses that want to automate their email responses while keeping control over the quality of outgoing communication: AI handles the repetitive work, and a human reviews every reply.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
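As a small illustration of the approval step, here is a sketch of how the review email body could be assembled from the original message and the AI draft; the field names are placeholders for the values produced by the earlier nodes, not the workflow's exact expressions.

```typescript
// Sketch: build the body of the approval email sent to the internal reviewer.
// subject, originalText, and aiDraft stand in for values from earlier nodes.
function buildApprovalEmail(subject: string, originalText: string, aiDraft: string): string {
  return [
    `A reply is waiting for approval — "${subject}"`,
    "",
    "--- Original message ---",
    originalText,
    "",
    "--- Proposed AI reply ---",
    aiDraft,
    "",
    "Approve to send it unchanged, or edit before sending.",
  ].join("\n");
}
```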
Generate Funny AI Videos with Sora 2 and Auto-Publish to TikTok
This automation creates a fully integrated pipeline to generate AI-powered videos, store them, and publish them on TikTok — all automatically. It connects OpenAI Sora 2 and Postiz (for TikTok publishing) to streamline content creation.

---

Key Benefits

✅ Full automation – From text prompt to TikTok upload, everything happens automatically once set up, with no manual intervention.
✅ Centralized control – Google Sheets acts as a simple dashboard to manage prompts, durations, and generated results.
✅ AI-powered creativity – Uses OpenAI Sora 2 for realistic video generation and GPT-5 for optimized titles.
✅ Social media integration – Seamlessly posts videos to TikTok via Postiz, ready for your audience.
✅ Scalable & customizable – Easily extended to other platforms such as YouTube, Instagram, or LinkedIn.
✅ Time-saving – Eliminates repetitive steps like manual video uploads or caption writing.

---

How It Works

The workflow automates the end-to-end process of generating AI videos and publishing them to TikTok. It is triggered either manually or on a recurring schedule.

1. Trigger & data fetch: The workflow checks the specified form/sheet for new entries, looking for rows where a video has been requested (the "PROMPT" field is filled) but not yet generated (the "VIDEO" column is empty).
2. AI video generation: For each new prompt, the workflow sends a request to the Fal.ai Sora 2 model to generate a video, then enters a polling loop that checks the status of the generation request every 60 seconds until the video is "COMPLETED" (see the sketch after this section).
3. Post-processing & upload: Once the video is ready, the workflow performs several actions in parallel:
   - Fetch video & store: retrieves the final video URL and downloads the video file.
   - Generate title: uses the OpenAI GPT-4o-mini model to analyze the original prompt and generate an optimized, engaging title.
   - Publish to TikTok: uploads the video file to Postiz, a social media scheduling tool, which automatically publishes it to the connected TikTok channel using the AI-generated title as the post's caption.

---

Setup Steps

To make this workflow functional, complete the following configuration:

1. Prepare the data source: Create a form (or Google Sheet) with at least "PROMPT", "DURATION", and "VIDEO" fields.
2. Configure Fal.ai for video generation: Create an account at Fal.ai and obtain your API key. In both the "Create Video" and "Get status" HTTP Request nodes, set up the Header Auth credential with the Name set to Authorization and the Value set to Key YOURAPIKEY.
3. Set up TikTok publishing via Postiz: Create an account on Postiz, connect your TikTok account to get a Channel ID, and obtain your Postiz API key. Configure the API credentials in the "Upload Video to Postiz" and "TikTok" (Postiz) nodes, and in the "TikTok" node replace the "XXX" placeholder in the integrationId field with your actual TikTok Channel ID from Postiz.
4. (Optional) Configure AI title generation: The "Generate title" node uses OpenAI; make sure valid OpenAI API credentials are configured in n8n for this node.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
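For the video-generation step, the sketch below shows the submit-and-poll pattern that the "Create Video" and "Get status" nodes implement against Fal.ai's queue API. The exact model path and response fields are assumptions — copy the real endpoints from your Fal.ai dashboard.

```typescript
// Sketch: submit a Sora 2 job to Fal.ai and poll until completion.
// The model path and response shapes are assumptions; auth header format
// ("Key ...") matches the setup notes above.
const FAL = "https://queue.fal.run/fal-ai/sora-2"; // assumed model path
const headers = {
  Authorization: `Key ${process.env.FAL_KEY}`,
  "Content-Type": "application/json",
};

async function generateVideo(prompt: string, durationSeconds: number): Promise<string> {
  const submit = await fetch(FAL, {
    method: "POST",
    headers,
    body: JSON.stringify({ prompt, duration: durationSeconds }),
  }).then((r) => r.json() as Promise<{ request_id: string }>);

  // Poll every 60 seconds until the job reports COMPLETED, as the workflow does.
  while (true) {
    const status = await fetch(`${FAL}/requests/${submit.request_id}/status`, { headers })
      .then((r) => r.json() as Promise<{ status: string }>);
    if (status.status === "COMPLETED") break;
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }

  const result = await fetch(`${FAL}/requests/${submit.request_id}`, { headers })
    .then((r) => r.json() as Promise<{ video?: { url: string } }>);
  return result.video?.url ?? "";
}
```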
Build an OpenAI assistant with Google Drive integration
Workflow Overview

This workflow automates the creation and management of a custom OpenAI Assistant for a travel agency ("Travel with us"), using Google Drive for document storage.

---

How It Works

1. Create the OpenAI Assistant: The OpenAI node creates a custom assistant named "Travel with us" Assistant using the gpt-4o-mini model. Its instructions: respond only using the provided document (agency-specific info); stay friendly, brief, and focused on travel-related queries; politely ignore irrelevant questions. Requires an OpenAI API key.
2. Upload the agency document: The Google Drive node downloads a Google Doc as a PDF, and the OpenAI2 node uploads the PDF to OpenAI with purpose "assistants", producing a file_id.
3. Update the assistant with the document: The OpenAI node updates the assistant to include the uploaded file.
4. Chat interaction: The Chat Trigger activates when a message is received ("When chat message received"), and the OpenAI Assistant node answers user queries with the updated assistant. A Window Buffer Memory retains chat context for coherent conversations.

---

Setup Steps

1. Prepare the document: Store your travel agency guide in Google Drive (e.g., as a Google Doc) and update the Google Drive node with your document's ID.
2. Configure credentials: Connect Google Drive via OAuth2 (googleDriveOAuth2Api) and add your OpenAI API key to all OpenAI nodes.
3. Customize the assistant: Modify the instructions in the OpenAI node to reflect your agency's needs, and make sure the document includes FAQs, policies, and travel info.
4. Test the workflow: Trigger it manually ("Test workflow") to create the assistant and upload the file, then send a chat message (e.g., "What are your travel packages?") to test the responses. A hedged sketch of the underlying API calls follows this section.

---

Dependencies

- Google Drive account: to store and retrieve the agency document.
- OpenAI API access: for assistant creation and file uploads.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
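For context, here is a hedged sketch of the OpenAI SDK calls behind the assistant and file-upload nodes. It assumes the Assistants API v2, where documents are attached through a file_search vector store; the file name and instructions are placeholders.

```typescript
// Sketch: create the travel-agency assistant with an attached document,
// assuming Assistants API v2 semantics. File name and instructions are placeholders.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function buildTravelAssistant() {
  // Upload the PDF exported from Google Drive.
  const file = await client.files.create({
    file: fs.createReadStream("travel-guide.pdf"),
    purpose: "assistants",
  });

  // Put the file into a vector store so the file_search tool can use it.
  const store = await client.beta.vectorStores.create({
    name: "travel-agency-docs",
    file_ids: [file.id],
  });

  return client.beta.assistants.create({
    name: '"Travel with us" Assistant',
    model: "gpt-4o-mini",
    instructions: "Answer only from the attached agency document; stay brief and friendly.",
    tools: [{ type: "file_search" }],
    tool_resources: { file_search: { vector_store_ids: [store.id] } },
  });
}
```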
Personal shopper chatbot for WooCommerce with RAG using Google Drive and OpenAI
This workflow combines OpenAI, Retrieval-Augmented Generation (RAG), and WooCommerce to create an intelligent personal shopping assistant. It handles two scenarios:

- Product search: extracts the user's intent (keywords, price ranges, SKUs) and fetches matching products from WooCommerce.
- General inquiries: answers store-related questions (e.g., opening hours, policies) using RAG over documents stored in Google Drive.

---

How It Works

Chat interaction & intent detection
- Chat Trigger: starts when a user sends a message ("When chat message received").
- Information Extractor: uses OpenAI to analyze the message and determine whether the user is searching for a product or asking a general question. It extracts search (true/false) and, if product-related, keyword, priceRange, SKU, and category. Example:

```json
{
  "search": true,
  "keyword": "red handbags",
  "priceRange": { "min": 50, "max": 100 },
  "SKU": "BAG123",
  "category": "women's accessories"
}
```

Product search (WooCommerce integration)
- AI Agent: if search is true, routes the request to the personal_shopper tool.
- WooCommerce node: queries the WooCommerce store using the extracted parameters (keyword, priceRange, SKU), filters products in stock (stockStatus: "instock"), and returns matching products (e.g., "red handbags under €100") — see the sketch after this section.

General inquiries (RAG system)
- RAG tool: if search is false, uses the Qdrant Vector Store to retrieve store information from documents.
- Google Drive integration: documents (e.g., store policies, FAQs) are stored in Google Drive, downloaded, split into chunks, and embedded into Qdrant for semantic search.
- OpenAI Chat Model: generates answers based on the retrieved documents (e.g., "Our store opens at 9 AM").

---

Setup Steps

1. Configure the RAG system:
   - Google Drive: upload the store documents and update the Google Drive2 node with your folder ID.
   - Qdrant vector database: clean the collection (update the Qdrant Vector Store node with your URL) and use Embeddings OpenAI to convert the documents into vectors.
2. Configure OpenAI & WooCommerce:
   - OpenAI credentials: add your API key to all OpenAI nodes (OpenAI Chat Model, Embeddings OpenAI, etc.).
   - WooCommerce integration: connect your WooCommerce store (credentials in the personal_shopper node) and make sure product data is synced and accessible.
3. Customize the AI Agent:
   - Intent detection: modify the Information Extractor's system prompt to align with your store's terminology.
   - RAG responses: update the tool description to reflect your store's documents.

---

Notes

This template is ideal for e-commerce businesses that need a hybrid assistant for product discovery and customer support.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
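To make the personal_shopper step concrete, this sketch expresses the product lookup as a direct WooCommerce REST API call built from the extracted intent. The store URL and consumer key/secret are placeholders; note that the live API expects a category ID rather than a category name, so the category field is omitted here.

```typescript
// Sketch: query WooCommerce products from the extracted intent, filtering to in-stock items.
// Store URL and ck_/cs_ credentials are placeholders.
interface Intent {
  keyword?: string;
  SKU?: string;
  priceRange?: { min?: number; max?: number };
}

async function searchProducts(intent: Intent) {
  const params = new URLSearchParams({ stock_status: "instock", per_page: "10" });
  if (intent.keyword) params.set("search", intent.keyword);
  if (intent.SKU) params.set("sku", intent.SKU);
  if (intent.priceRange?.min != null) params.set("min_price", String(intent.priceRange.min));
  if (intent.priceRange?.max != null) params.set("max_price", String(intent.priceRange.max));

  const auth = Buffer.from("ck_xxx:cs_xxx").toString("base64");
  const res = await fetch(`https://your-store.com/wp-json/wc/v3/products?${params}`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  return res.json();
}

// Usage:
// await searchProducts({ keyword: "red handbags", priceRange: { min: 50, max: 100 } });
```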
Scrape Trustpilot reviews with DeepSeek, analyze sentiment with OpenAI
Workflow Overview

This workflow automates scraping Trustpilot reviews, extracting key details, analyzing sentiment, and saving the results to Google Sheets. It uses DeepSeek for structured extraction, OpenAI for sentiment analysis, and HTML parsing for review extraction.

---

How It Works

1. Scrape Trustpilot reviews:
   - HTTP Request: fetches review pages from Trustpilot (https://it.trustpilot.com/review/{{company_id}}) and paginates through pages up to the max_page limit.
   - HTML parsing: extracts review URLs using CSS selectors and splits them into individual review links (see the sketch after this section).
2. Extract review details: The Information Extractor uses DeepSeek to pull structured data from each review:
   - Author: name of the reviewer.
   - Rating: numeric rating (1–5).
   - Date: review date in YYYY-MM-DD format.
   - Title: review title.
   - Text: full review text.
   - Total Reviews: number of reviews by the user.
   - Country: reviewer's country (2-letter code).
3. Sentiment analysis: The Sentiment Analysis node uses OpenAI to classify the review text as Positive, Neutral, or Negative. Example output:

```json
{
  "category": "Positive",
  "confidence": 0.95
}
```

4. Save to Google Sheets: The Google Sheets node appends or updates the extracted data in a Google Sheet.

---

Setup Steps

1. Configure Trustpilot scraping: In the Edit Fields1 node, set company_id to the Trustpilot company name and max_page to limit the number of pages scraped.
2. Configure Google Sheets: In the Google Sheets node, update documentId with your Google Sheet ID and make sure the sheet has the required columns (Id, Data, Nome, etc.).
3. Configure OpenAI: Add your OpenAI API key to the OpenAI Chat Model node, and make sure the Sentiment Analysis node's categories match your desired labels (Positive, Neutral, Negative).

---

Key Components

Nodes:
- HTTP Request / HTML: scrape and parse Trustpilot reviews.
- Information Extractor: extract structured review data using DeepSeek.
- Sentiment Analysis: classify review sentiment.
- Google Sheets: save and update review data.

Credentials:
- OpenAI API key.
- DeepSeek API key.
- Google Sheets OAuth2.
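As a companion to the scraping steps, here is a sketch of the pagination and link-extraction logic. The CSS selector is an assumption — Trustpilot's markup changes over time, so mirror whatever selector the HTML node actually uses.

```typescript
// Sketch: paginate a Trustpilot company page and collect individual review links.
// The CSS selector below is an assumption; adjust it to the current markup.
import * as cheerio from "cheerio";

async function collectReviewLinks(companyId: string, maxPage: number): Promise<string[]> {
  const links: string[] = [];
  for (let page = 1; page <= maxPage; page++) {
    const html = await fetch(
      `https://it.trustpilot.com/review/${companyId}?page=${page}`,
    ).then((r) => r.text());

    const $ = cheerio.load(html);
    $('a[href^="/reviews/"]').each((_, el) => {
      const href = $(el).attr("href");
      if (href) links.push(`https://it.trustpilot.com${href}`);
    });
  }
  return [...new Set(links)]; // de-duplicate before splitting into items
}
```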
AI orchestrator: dynamically selects models based on input type
This workflow intelligently routes user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to optimize both response quality and cost-efficiency in AI-driven conversations.

---

Benefits

- Smart model routing: reduces costs by using lighter models for general tasks and reserving heavier models for complex needs.
- Scalability: easily expandable by adding more request types or LLMs.
- Maintainability: clear separation between classification, model routing, and execution.
- Personalization: can be integrated with session IDs for per-user memory, enabling personalized conversations.
- Speed optimization: fast models such as GPT-4.1 mini or Gemini Flash are chosen for tasks where speed matters.

---

How It Works

1. Input handling: The "When chat message received" node triggers the process when a chat message arrives. The input includes the chat message (chatInput) and a session ID (sessionId).
2. Request classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories — general (general queries), reasoning (reasoning-based questions), coding (code-related requests), or search (queries requiring search tools). The "Structured Output Parser" node enforces a consistent output format.
3. Model selection: The "Model Selector" node routes the request to one of four AI models based on the classification (see the sketch after this section):
   - Opus 4 (Claude 4 Sonnet): coding requests.
   - Gemini Thinking Pro: reasoning requests.
   - GPT 4.1 mini: general requests.
   - Perplexity: search (Google-related) requests.
4. AI processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations.
5. Output: The final response is generated by the chosen model and returned to the user.

---

Setup Steps

1. Configure the trigger: Make sure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs.
2. Define classification logic: Adjust the prompt in the "Request Type" node to refine classification accuracy, and verify that the output schema in the "Structured Output Parser" node matches the expected categories (general, reasoning, coding, search).
3. Connect the AI models: Link each model node (Opus 4, Gemini Thinking Pro, GPT 4.1 mini, Perplexity) to the "Model Selector" node and make sure each node's credentials (API keys) are correctly configured.
4. Set up memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention.
5. Test the workflow: Send test inputs to verify classification and model routing, and check intermediate outputs (e.g., request_type) to ensure correct model selection.
6. Activate: Toggle the workflow to "Active" in n8n after testing.

---

Need help customizing? Contact me for consulting and support or add me on Linkedin.
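To summarize the routing logic, the sketch below reduces the classifier output plus the Model Selector to a plain function. The model identifiers are the labels used above, not exact API model strings.

```typescript
// Sketch: the routing decision made by the "Request Type" classifier and the
// "Model Selector" node, expressed as a simple switch over the four categories.
type RequestType = "general" | "reasoning" | "coding" | "search";

function selectModel(requestType: RequestType): string {
  switch (requestType) {
    case "coding":
      return "Opus 4 (Claude 4 Sonnet)"; // code-related requests
    case "reasoning":
      return "Gemini Thinking Pro";       // multi-step reasoning
    case "search":
      return "Perplexity";                // queries needing live search
    default:
      return "GPT 4.1 mini";              // fast, low-cost general chat
  }
}

// Usage: selectModel("coding") -> "Opus 4 (Claude 4 Sonnet)"
```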