
AI voice chatbot with ElevenLabs & OpenAI for customer service and restaurants

The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Create a FREE account on ElevenLabs Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'testchatbotelevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies. ---- Need help customizing? Contact me for consulting and support or add me on Linkedin.

By Davide
65772

Analyze any website with OpenAI and get on-page SEO audit

Instantly Find & Fix What's Holding Your Page Back

You've put in the work. Your content is strong. Your design is polished. But…
❌ Your page isn't ranking where it should.
❌ Your competitors are outranking you, even with weaker content.
❌ You have no idea what's wrong, or how to fix it.
The truth? SEO isn't just about keywords. Your technical setup, content structure, and on-page elements must work together seamlessly. And if anything is off, Google won't rank your page.

Who Is This For?
- SaaS Founders & Startups: get higher rankings and organic traffic that converts.
- Marketing Teams & Agencies: audit and optimize pages in seconds.
- E-commerce & Content Sites: improve rankings for product pages, blogs, and landing pages.

How It Works
1. Paste your URL.
2. Get an instant audit plus a list of recommendations.
3. Implement the changes and watch your rankings jump.
The workflow scrapes the URL you input, gets the HTML source code of the landing page, and sends it to an OpenAI AI Agent. The agent performs a deep analysis, audits the technical and content SEO of the page, and provides 10 recommendations to improve your SEO.

Setup Guide
You will need OpenAI credentials with an API key to run the workflow. The workflow uses the OpenAI o1 model to deliver the best results; it costs between $0.20 and $0.30 per run. You can adjust the prompt to your needs in the AI Agent parameters. Once the audit has been completed, the workflow sends an email (don't forget to add your email address here). Below is an example of what you can expect.
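The scrape-then-audit step reduces to fetching the page HTML and passing it to the model with an audit prompt. A minimal Python sketch, assuming the workflow's actual prompt lives in the AI Agent node; the prompt wording and the HTML length cap here are illustrative:

```python
import requests
from openai import OpenAI

PAGE_URL = "https://example.com/landing-page"  # the URL you paste in
client = OpenAI()  # reads OPENAI_API_KEY from the environment

html = requests.get(PAGE_URL, timeout=30).text

audit = client.chat.completions.create(
    model="o1",  # the template uses OpenAI's o1 model
    messages=[{
        "role": "user",
        "content": (
            "Audit the technical and content SEO of this page's HTML. "
            "Return exactly 10 prioritized recommendations.\n\n" + html[:100_000]
        ),
    }],
)
print(audit.choices[0].message.content)
```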

By Not Another Marketer
57740

Building RAG chatbot for movie recommendations with Qdrant and OpenAI

Create a recommendation tool without hallucinations, based on RAG with the Qdrant vector database. This example uses movie recommendations on the IMDB-top1000 dataset. You can give the chatbot your wishes and your "big no's", for example: "A movie about wizards but not Harry Potter", and get the top 3 recommendations.

How it works (a video of the full design process is available):
- Upload the IMDB-1000 dataset to the Qdrant Vector Store, embedding movie descriptions with OpenAI.
- Set up an AI agent with a chat. This agent calls a workflow tool to get movie recommendations based on a request written in the chat.
- Create a workflow which calls Qdrant's Recommendation API to retrieve the top 3 movie recommendations based on your positive and negative examples (sketched below).

Set Up Steps
- You'll need to create a free-tier Qdrant cluster (Qdrant can also be used locally; it's open source) and set up API credentials.
- You'll need OpenAI credentials.
- You'll need GitHub credentials, and to upload the IMDB Kaggle dataset to your GitHub.
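Qdrant's Recommendation API takes positive and negative examples and returns the nearest matches. A minimal Python sketch of the call the sub-workflow makes; the collection name is a placeholder, and passing raw embedding vectors as examples (rather than stored point IDs) is an assumption that holds on recent Qdrant versions:

```python
import requests

QDRANT_URL = "https://your-qdrant-cluster:6333"  # placeholder
HEADERS = {"api-key": "YOUR_QDRANT_API_KEY"}

def recommend(positive_vec: list[float], negative_vec: list[float], limit: int = 3):
    """Top-N movies near the 'wishes' vector and away from the 'big no' vector."""
    resp = requests.post(
        f"{QDRANT_URL}/collections/imdb_top1000/points/recommend",
        headers=HEADERS,
        json={
            "positive": [positive_vec],  # e.g. embedding of "a movie about wizards"
            "negative": [negative_vec],  # e.g. embedding of "Harry Potter"
            "limit": limit,
            "with_payload": True,        # return movie titles/descriptions too
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```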

By Jenny
35109

Google Maps business scraper with contact extraction via Apify and Firecrawl

Who is this for?
Marketing agencies, sales teams, lead generation specialists, and business development professionals who need to build comprehensive business databases with contact information for outreach campaigns across any industry.

What problem is this workflow solving?
Finding businesses and their contact details manually is time-consuming and inefficient. This workflow automates the entire process of discovering businesses through Google Maps and extracting their digital contact information from websites, saving hours of manual research.

What this workflow does
This automated workflow runs every 30 minutes to:
- Scrape business data from Google Maps using Apify's Google Places crawler
- Save basic business information (name, address, phone, website) to Google Sheets
- Filter businesses that have websites
- Scrape each business's website content using Firecrawl
- Extract contact information including emails, LinkedIn, Facebook, Instagram, and Twitter profiles (see the sketch after this description)
- Store all extracted data in organized Google Sheets for easy access and follow-up

Setup
Required Services:
- Google Sheets account with OAuth2 setup
- Apify account with API access for Google Places scraping
- Firecrawl account with API access for website scraping
Pre-setup:
- Copy this Google Sheet
- Configure your Apify and Firecrawl API credentials in n8n
- Set up the Google Sheets OAuth2 connection
- Update the Google Sheet ID in all Google Sheets nodes
Quick Start: the workflow includes detailed sticky notes explaining each phase. Simply configure your API credentials and Google Sheet, then activate the workflow.

How to customize this workflow to your needs
- Change search criteria: modify the Apify scraping parameters to target different business types (restaurants, gyms, salons, etc.) or locations
- Adjust schedule: change the trigger interval from 30 minutes to your preferred frequency
- Add more contact fields: extend the extraction code to find additional contact information like WhatsApp or Telegram
- Filter criteria: modify the filter conditions to target businesses with specific characteristics
- Batch size: adjust the batch processing to handle more or fewer websites simultaneously

Perfect for lead generation, competitor research, and building targeted marketing lists across any industry or business type.
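The contact-extraction step boils down to pattern matching over the scraped page text. A rough Python sketch of the kind of extraction the workflow's code performs; the patterns are simplified for illustration:

```python
import re

def extract_contacts(page_text: str) -> dict:
    """Pull emails and social profile links out of scraped website content."""
    return {
        "emails": sorted(set(re.findall(
            r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", page_text))),
        "linkedin": re.findall(r"https?://(?:www\.)?linkedin\.com/[^\s\"'<>]+", page_text),
        "facebook": re.findall(r"https?://(?:www\.)?facebook\.com/[^\s\"'<>]+", page_text),
        "instagram": re.findall(r"https?://(?:www\.)?instagram\.com/[^\s\"'<>]+", page_text),
        "twitter": re.findall(r"https?://(?:www\.)?(?:twitter|x)\.com/[^\s\"'<>]+", page_text),
    }

# Example: extract_contacts(firecrawl_markdown) -> dict ready for the Sheets node
```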

By Naveen Choudhary
24942

WordPress - AI chatbot to enhance user experience - with Supabase and OpenAI

This is the first version of a template for a RAG/GenAI app using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How It Works
This template includes three workflows:
- Workflow 1: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- Workflow 2: Handle upserts for WordPress content when edits are made.
- Workflow 3: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template?
This template can be applied to various use cases:
- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance the user experience as visitors search for information.
- Gain insights into the types of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking whether similar content already exists. Useful for internal linking.

Prerequisites
- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.
- Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial Embedding
This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables
Nodes:
- Postgres - Create Documents Table: this table is structured to support OpenAI embedding models with 1536 dimensions.
- Postgres - Create Workflow Execution History Table
These two nodes create tables in Supabase: the documents table, which stores embeddings of your website content, and the n8nwebsiteembedding_histories table, which logs workflow executions for efficient management of upserts by tracking the workflow execution ID and execution timestamp. (A sketch of the documents table appears after this description.)

Step 1: Retrieve and Merge WordPress Pages and Posts
Nodes:
- WordPress - Get All Posts
- WordPress - Get All Pages
- Merge WordPress Posts and Pages
These three nodes retrieve all content and metadata from your posts and pages and merge them. Important: apply filters to avoid generating embeddings for all site content.

Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown
Nodes:
- Set Fields
- Filter - Only Published & Unprotected Content
- HTML to Markdown
These three nodes prepare the content for embedding by:
- Setting up the necessary fields for content embeddings and document metadata.
- Filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application.
- Converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution
Nodes:
- Supabase Vector Store (sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter)
- Aggregate
- Supabase - Store Workflow Execution
This step generates embeddings for the content, stores them in Supabase, and then logs the workflow execution details.
- Generate Embeddings: the Embeddings OpenAI node generates vector embeddings for the content.
- Load Data: the Default Data Loader prepares the content for embedding storage.
The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important note: be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
- Token Management: the Token Splitter ensures that content is segmented into manageable sizes to comply with token limits.
- Aggregate: ensure the last node runs only once (for a single item).
- Store Execution Details: the Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.
This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once, for the initial embedding. Workflow 2, described below, handles all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle Document Upserts
Content on a website follows a lifecycle: it may be updated, new content might be added, or content may be deleted. In this first version of the template, the upsert workflow manages newly added content and updated content.

Step 1: Retrieve WordPress Content with a Regular CRON
Nodes:
- CRON - Every 30 Seconds
- Postgres - Get Last Workflow Execution
- WordPress - Get Posts Modified After Last Workflow Execution
- WordPress - Get Pages Modified After Last Workflow Execution
- Merge Retrieved WordPress Posts and Pages
A CRON job (set to run every 30 seconds in this template, adjustable as needed) initiates the workflow. A Postgres SQL query on the n8nwebsiteembedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.

Step 2: Set Fields, Apply Filter
Nodes:
- Set Fields2
- Filter - Only Published and Unprotected Content
The same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.

Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content
Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)
Nodes:
- Loop Over Items
- Postgres - Filter on Existing Documents
- Switch
Route existing_documents (if documents with matching IDs are found in metadata):
- Supabase - Delete Row if Document Exists: removes any existing entry for the document, preparing for an update.
- Aggregate2: aggregates documents on Supabase by ID to ensure that Set Fields3 is executed only once per WordPress content item, avoiding duplicate execution.
- Set Fields3: sets the fields required for embedding updates.
Route new_documents (if no matching documents are found with IDs in metadata):
- Set Fields4: configures fields for embedding newly added content.
In this step, a loop processes each item, directing it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per WordPress content item, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table
The HTML to Markdown node mirrors Workflow 1, Step 2; refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1, Step 3 nodes.

Workflow 3: An Example GenAI App with WordPress Content - a Chatbot to Embed on Your Website

Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input
Nodes:
- When Chat Message Received
- Supabase - Retrieve Documents from Chat Input
- Embeddings OpenAI1
- Aggregate Documents
- Set Fields
When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1, Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response. In this step:
- The Supabase vector store retriever fetches documents that match the user's question, including metadata.
- The Aggregate Documents node consolidates the retrieved data.
- Finally, Set Fields organizes the data to create a more readable input for the AI agent.
Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call the AI Agent, Respond to the User, and Store Chat Conversation History
Nodes:
- AI Agent (sub-nodes: OpenAI Chat Model, Postgres Chat Memories)
- Respond to Webhook
This step calls the AI agent to generate an answer, responds to the user, and stores the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
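For reference, the documents table created in Step 0 typically follows the standard Supabase/pgvector layout for 1536-dimension OpenAI embeddings. A sketch via Python and psycopg; the exact column names are an assumption based on the common Supabase vector-store convention, not a dump of this template's SQL:

```python
import psycopg  # psycopg 3

DSN = "postgresql://user:pass@your-supabase-host:5432/postgres"  # placeholder

# Assumed schema, following the common Supabase vector-store convention
DDL = """
create table if not exists documents (
    id bigserial primary key,
    content text,           -- the Markdown-converted post/page body
    metadata jsonb,          -- title, publication/modification dates, URL, WordPress ID
    embedding vector(1536)   -- matches OpenAI embedding dimensions
)
"""

with psycopg.connect(DSN) as conn:
    conn.execute("create extension if not exists vector")  # enable pgvector
    conn.execute(DDL)
    conn.commit()
```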

By Dataki
12661

Enrich LinkedIn profiles from Google Sheets via RapidAPI

LinkedIn Profile Enrichment Workflow

Who is this for?
This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

What problem is this workflow solving?
Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

What this workflow does
- Input: reads LinkedIn profile URLs from a Google Sheet.
- Validation: filters out already-enriched profiles to avoid redundant processing.
- Data Enrichment: uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information (see the sketch after this description).
- Output: updates the Google Sheet with enriched profile data, appending new information efficiently.

Setup
- Google Sheet: create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
- RapidAPI Account: sign up at RapidAPI and subscribe to the Real-Time Data Enrichment API.
- API Integration: replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI.
- Run the Workflow: trigger the workflow and monitor the updates to your Google Sheet.

How to customize this workflow
- Filter Criteria: modify the filter step to include additional conditions for processing profiles.
- API Configuration: adjust API parameters to retrieve specific fields or extend usage.
- Output Format: customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- Error Handling: add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
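The enrichment call is a single HTTP request with RapidAPI's standard headers. A minimal Python sketch; the host and endpoint path are illustrative placeholders, so copy the exact values shown on the API's RapidAPI page:

```python
import requests

def enrich_profile(linkedin_url: str) -> dict:
    """Fetch enriched data for one LinkedIn profile URL via RapidAPI."""
    # Host and path are placeholders; use the values from the API's RapidAPI page
    host = "fresh-linkedin-profile-data.p.rapidapi.com"
    resp = requests.get(
        f"https://{host}/get-linkedin-profile",
        headers={
            "x-rapidapi-key": "YOUR_RAPIDAPI_KEY",
            "x-rapidapi-host": host,
        },
        params={"linkedin_url": linkedin_url},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```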

By PollupAI
11961

AI YouTube analytics agent: Comment analyzer & insights reporter

Transform YouTube comments into actionable insights with automated AI analysis and professional email reports. This intelligent workflow monitors your Google Sheets for YouTube video IDs, fetches comments using the YouTube API, performs comprehensive AI sentiment analysis, and delivers formatted email reports with viewer insights, helping content creators understand their audience and improve engagement.

🚀 What It Does
- Smart Video Monitoring: watches Google Sheets for new YouTube video IDs marked as "Pending" and triggers automated analysis
- Complete Comment Collection: fetches up to 100 top comments per video using the YouTube API with relevance-based ordering
- AI-Powered Analysis: uses GPT-4 to analyze comments for sentiment, themes, questions, feedback, and actionable insights
- Professional Email Reports: generates detailed HTML reports with statistics, sentiment breakdown, and improvement recommendations
- Automated Status Tracking: updates spreadsheet status to prevent duplicate processing and maintain an organized workflow

🎯 Key Benefits
✅ Deep Audience Insights: understand what viewers really think about your content
✅ Save Hours of Manual Work: automated comment analysis vs. reading hundreds of comments
✅ Improve Content Strategy: get actionable feedback for better video performance
✅ Track Sentiment Trends: monitor positive/negative feedback patterns
✅ Professional Reporting: receive formatted analysis reports via email
✅ Scalable Analysis: process multiple videos automatically

🏢 Perfect For
Content Creators & YouTubers
- Individual creators tracking audience engagement
- Educational channels analyzing learning feedback
- Entertainment creators understanding viewer preferences
- Business channels monitoring brand sentiment
Marketing & Business Applications
- Brand Monitoring: track sentiment on branded content and partnerships
- Audience Research: understand viewer demographics and preferences
- Content Optimization: identify what resonates with your audience
- Competitor Analysis: analyze comments on competitor videos (where allowed)

⚙️ What's Included
- Complete Analytics Workflow: ready-to-deploy YouTube comment analysis system
- Google Sheets Integration: simple spreadsheet-based video management
- YouTube API Integration: automated comment fetching with proper authentication
- AI Analysis Engine: GPT-4-powered sentiment and insight generation
- Email Reporting System: professional HTML-formatted reports
- Status Management: automatic processing tracking and duplicate prevention

🔧 Setup Requirements
- n8n Platform: cloud or self-hosted instance
- YouTube API Credentials: Google Cloud Console API access
- OpenAI API: GPT-4 access for comment analysis
- Google Sheets: video ID management and status tracking
- Gmail Account: for receiving analysis reports

📊 Required Google Sheets Structure

| ID | Video Title  | YouTube Video ID | Status    |
|----|--------------|------------------|-----------|
| 1  | My Tutorial  | dQw4w9WgXcQ      | Pending   |
| 2  | Product Demo | abc123def456     | Mail Sent |
| 3  | Weekly Vlog  | xyz789uvw012     | Draft     |

Status Options: Draft → Pending → Mail Sent

📧 Sample Analysis Report

📺 YouTube Comments Analysis Report
Video: "How to Build Your First Website"

📊 Quick Statistics:
• Total Comments Analyzed: 87
• Average Likes per Comment: 3.2
• Total Replies: 156
• Sentiment Summary: Positive: 65%, Negative: 10%, Neutral: 25%

❓ Common Questions:
• "What hosting service do you recommend?"
• "Can I do this without coding experience?"
• "How much does domain registration cost?"
💡 Key Feedback Points:
• Tutorial pace is perfect for beginners
• More examples of finished websites requested
• Viewers want a follow-up video on advanced features

🎯 Actionable Insights:
• Create a hosting comparison video
• Add timestamps for different skill levels
• Consider a beginner-friendly series expansion

🎨 Customization Options
- Analysis Depth: adjust AI prompts for different analysis focuses (engagement, education, entertainment)
- Comment Limits: modify the maximum comments processed (default: 100; AI analysis: 50)
- Report Recipients: send reports to multiple team members or clients
- Custom Metrics: add specific analysis criteria for your content niche
- Multi-Channel: process videos from multiple YouTube channels
- Scheduling: set up regular analysis of your latest videos

🏷️ Tags & Categories
youtube-analytics comment-analysis content-creator-tools ai-sentiment-analysis video-insights audience-research youtube-api content-optimization social-media-analytics creator-economy video-marketing engagement-analysis content-strategy ai-reporting youtube-automation

💡 Use Case Examples
- Educational Channel: analyze tutorial comments to identify confusing concepts and improve teaching methods
- Product Reviews: monitor sentiment on review videos to understand customer satisfaction trends
- Entertainment Creator: track audience reactions to different content formats and optimize future videos
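The comment-collection step maps to one call against the YouTube Data API v3 commentThreads endpoint, ordered by relevance. A minimal Python sketch; API-key auth is shown for brevity, whereas the workflow itself uses n8n's Google credentials:

```python
import requests

def fetch_top_comments(video_id: str, api_key: str, max_results: int = 100) -> list[str]:
    """Fetch up to `max_results` top-level comments, ordered by relevance."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/commentThreads",
        params={
            "part": "snippet",
            "videoId": video_id,
            "order": "relevance",
            "maxResults": min(max_results, 100),  # API cap per page
            "key": api_key,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [
        item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
        for item in resp.json().get("items", [])
    ]

# Example: fetch_top_comments("dQw4w9WgXcQ", API_KEY) -> list of comment strings
```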

By Yaron Been
10591

Control your n8n instance remotely with Telegram bot commands

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Your n8n Command Center in a Telegram Chat
Remotely manage and operate your n8n instance from Telegram with powerful admin commands. This workflow connects your n8n instance with a Telegram bot, giving you remote control over key admin operations through simple chat commands. 📱

You can:
- List your workflows (workflows)
- Execute a workflow (execute [name])
- Activate/deactivate workflows (activate [name], deactivate [name])
- List past executions (executions [name])
- Permanently delete archived workflows (cleanup)
- Create backups of all your workflows and credentials (backup)
- Get help (help)
- Get notified when a workflow fails and when the n8n instance starts

This is especially useful for self-hosted instances when you want quick access to your automation environment from your mobile device.

📌 Notes
- backup only works on self-hosted setups.
- execute, activate, deactivate, and executions require the workflow name as an argument.
- Workflows must contain the appropriate trigger nodes to be executed or activated.
- Commands and arguments are not case-sensitive; there is no need to prefix them with a slash, and spaces in the argument name are supported.

⚙️ Setup
- Create your credentials for the Telegram API and the n8n API.
- Edit all Telegram and n8n nodes and select your credentials on them.
- On the Telegram nodes, provide your chat ID.
- Detailed step-by-step instructions are available in the workflow notes.
- In each workflow that might fail and for which you want a warning, configure this workflow as the Error Workflow in its settings.
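Under the hood, a command like workflows resolves to a call against n8n's public REST API. A minimal Python sketch of the listing call; the base URL is a placeholder, and the API key comes from your n8n instance's settings:

```python
import requests

N8N_URL = "https://your-n8n-instance.com"  # placeholder
HEADERS = {"X-N8N-API-KEY": "YOUR_N8N_API_KEY"}

def list_workflows() -> list[dict]:
    """Return id, name, and active state for every workflow on the instance."""
    resp = requests.get(f"{N8N_URL}/api/v1/workflows", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return [
        {"id": wf["id"], "name": wf["name"], "active": wf["active"]}
        for wf in resp.json()["data"]
    ]

# The bot would format this list and send it back as a Telegram message.
```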

By Arthur Braghetto
8436

Turn emails into AI-enhanced tasks in Notion (multi-user support) with Gmail, Airtable and Softr

Purpose
This workflow automatically creates tasks from forwarded emails, similar to Asana, but better. Emails are processed by AI and converted into actionable tasks. In addition, this workflow is built so that multiple users can share this single process by setting up their individual configuration through a user-friendly portal (internal tool), instead of having to manage their own workflows.

Demo: https://youtu.be/7cIvSqJAY0E

How it works
- One Gmail account is used to process inbound mail from different users.
- A custom web portal enables users to define "routes". That's where the mapping between an automatically generated Gmail alias and a Notion database URL, including the personal API token, happens.
- Using a Gmail trigger, new entries are split by the email alias, so the corresponding route can be retrieved from the database connected to the portal (see the sketch after this description).
- Every email then gets processed by AI to generate an actionable task, along with a short summary of the original email and some metadata.
- Based on a predefined structure, a new page is created in the corresponding Notion database.
- Finally, the email is marked as "processed" in Gmail. If an error happens, the route gets paused to prevent a possible overflow, and the user is notified by email.

Setup
- Create a new Google account (alternatively, use an existing one and set up rules to keep your inbox organized).
- Create two labels in Gmail: "Processed" and "Error".
- Clone this Softr template, including the Airtable dataset, and publish the application.
- Clone this workflow and choose credentials (Gmail, Airtable).
- Follow the additional instructions provided in the workflow notes.
- Enable the workflow so it runs automatically in the background.

How to use
- Open the published Softr application.
- Register as a new user.
- Create a new route containing the Notion API key and the Notion database URL.
- Expand the new entry to copy the email address.
- Save the address as a new contact in your email provider of choice.
- Forward an email to it and watch it get converted into an actionable task.

Disclaimer
Airtable was chosen so you can set up this template fairly quickly. It is advisable to replace the persistence layer with something you own, like a self-hosted SQL server, since we are dealing with sensitive information from multiple users. This solution is only meant for building internal tools, unless you own an embed license for n8n.
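The routing trick relies on Gmail plus-addressing: every route gets its own alias of the shared inbox, and the alias keys the lookup. A small Python sketch of that parsing step; the alias convention shown is an assumption for illustration:

```python
def route_key(to_address: str) -> str | None:
    """Extract the route identifier from a Gmail plus-alias.

    e.g. "tasks+client-acme@gmail.com" -> "client-acme"
    """
    local_part = to_address.split("@", 1)[0]
    if "+" not in local_part:
        return None  # no alias: not a routed message
    return local_part.split("+", 1)[1].lower()

# The workflow would then look up this key in the routes table (Airtable)
# to find the matching Notion database URL and API token.
assert route_key("tasks+client-acme@gmail.com") == "client-acme"
```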

By Mario
8397

Create multilingual voice calling bot with GPT-4o, ElevenLabs & Twilio

AI Voice Calling Bot: OpenAI GPT-4o + ElevenLabs + Twilio Integration for Multilingual Appointment Booking & Service Orders

Overview
Transform your business with an intelligent voice calling bot that handles customer calls automatically in 25+ languages. This n8n workflow integrates OpenAI GPT-4o, ElevenLabs text-to-speech, and Twilio for seamless appointment scheduling, pizza orders, and service bookings.

Key Features
- Multilingual Support: conversations in English, Spanish, French, German, Italian, Portuguese, Chinese, Japanese, Arabic, and 20+ more languages
- Natural AI Conversations: GPT-4o-powered responses with realistic ElevenLabs voice synthesis
- Multi-Service Handling: appointments, orders, and service requests with automatic logging
- Real-time Processing: instant speech-to-text and audio response generation

Prerequisites
- n8n instance (self-hosted or cloud)
- Twilio account with a phone number
- OpenAI API key (GPT-4o access)
- ElevenLabs API credentials
- Google Sheets access
- Cloud storage for audio files

Setup Instructions
Step 1: Configure Credentials. Add API keys for OpenAI, ElevenLabs, Twilio, and Google Sheets in the n8n credentials manager.
Step 2: Prepare Data Storage. Create Google Sheets for call logs and appointments with columns: timestamp, callerid, speechinput, airesponse, language, callsid.
Step 3: Configure Twilio. Set the webhook URL to your n8n endpoint: https://your-n8n-instance.com/webhook/voice-webhook
Step 4: Update Sheet IDs. Replace the placeholder Google Sheet IDs in the workflow nodes with your actual sheet IDs.

Customization Options
- Voice Settings: adjust ElevenLabs multilingual voice models and parameters
- AI Behavior: modify system prompts for specific business needs and languages
- Service Types: add custom service handling logic
- Business Hours: implement language-specific operating hours

Monitoring
Track call analytics, language preferences, conversion rates, and customer satisfaction across all supported languages through automated Google Sheets logging. Ready for production use with comprehensive error handling and scalability for global businesses.
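On the Twilio side, each turn of the call is a webhook POST (carrying fields like SpeechResult and CallSid) answered with TwiML. A minimal Flask sketch of that exchange, assuming the ElevenLabs audio has already been generated and uploaded somewhere publicly reachable; the route and storage URL are placeholders:

```python
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

@app.route("/webhook/voice-webhook", methods=["POST"])
def voice_webhook():
    speech = request.form.get("SpeechResult", "")  # caller's transcribed speech
    call_sid = request.form.get("CallSid", "")

    # ... here the workflow would call GPT-4o with `speech`, synthesize the
    # reply with ElevenLabs, upload the audio, and log the turn to Sheets ...
    audio_url = "https://your-storage/reply.mp3"   # placeholder

    response = VoiceResponse()
    response.play(audio_url)                       # speak the AI's reply
    gather = Gather(input="speech", action="/webhook/voice-webhook")
    response.append(gather)                        # listen for the next turn
    return str(response), 200, {"Content-Type": "text/xml"}
```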

By Shiva
6399

Backup your workflows to GitHub (in subfolders)

Based on Jonathan & Solomon's work.
> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags.

How it works
This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows, then loops over the data and checks in GitHub whether a file for that workflow's ID already exists. Once checked, it will:
- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- skip it if the content is the same.

Who is this for?
People wanting to back up their workflows outside the server for safety purposes, or to migrate to another server.
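The exists/update/skip decision maps onto GitHub's contents API: a GET checks for the file (and grabs its sha), and a PUT updates when a sha is supplied or creates otherwise. A minimal Python sketch; the repo, tag-based path convention, and commit message are placeholders:

```python
import base64
import requests

API = "https://api.github.com"
HEADERS = {"Authorization": "Bearer YOUR_GITHUB_TOKEN"}

def backup_workflow(repo: str, path: str, workflow_json: str) -> None:
    """Create or update one workflow file, e.g. path='tag-name/My Workflow.json'."""
    url = f"{API}/repos/{repo}/contents/{path}"
    existing = requests.get(url, headers=HEADERS)

    body = {
        "message": f"Backup {path}",
        "content": base64.b64encode(workflow_json.encode()).decode(),
    }
    if existing.status_code == 200:
        old = existing.json()
        if base64.b64decode(old["content"]).decode() == workflow_json:
            return  # unchanged: skip
        body["sha"] = old["sha"]  # required by GitHub to update an existing file

    requests.put(url, headers=HEADERS, json=body).raise_for_status()
```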

By Nazmy
5851

AI news research team: 24/7 newsletter automation with citations via Perplexity

Video: https://youtu.be/sKJAypXDTLA

Purpose of workflow
This AI-powered workflow automatically generates detailed, well-researched newsletters by monitoring and analyzing specified news topics (like Bitcoin, Nvidia, etc.). It uses a team of AI research agents to gather, analyze, and compile information with automatic citations, saving significant time in newsletter creation.

How it works
Multi-agent system:
- Research Leader: analyzes topics and creates the content outline
- Project Planner: breaks down research into specific tasks
- Research Assistants: conduct detailed research on assigned subtopics
- Editor: combines the research and polishes the final output

Key features:
- Automated daily monitoring of specified news topics
- Real-time information gathering using Perplexity AI
- Auto-citation functionality for source verification
- Flexible time-window filtering (day/week/month)
- Options for detailed or simple reports
- Direct email delivery of completed newsletters

Step-by-step setup
1. Perplexity API setup:
   - Create an account at perplexity.ai
   - Navigate to the API tab
   - Generate an API key
   - Set up credentials with 'Bearer' authentication
2. Workflow configuration:
   - Connect the Google Sheet containing the news monitoring topics
   - Configure the schedule trigger for daily execution
   - Set up email delivery settings
   - Define report type preferences (detailed/simple)
   - Specify the time window for news gathering
3. Integration:
   - Connect with newsletter tools like Kit
   - Import the generated content as a starting point
   - Edit and customize as needed before publishing
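Perplexity's API is an OpenAI-style chat-completions endpoint with Bearer authentication, which is what each research agent calls. A minimal Python sketch; the model name and prompt are illustrative, so check Perplexity's current docs for available models:

```python
import requests

def research(topic: str, api_key: str) -> dict:
    """Ask Perplexity for recent, citable findings on a news topic."""
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "sonar",  # illustrative; pick a current Perplexity model
            "messages": [{
                "role": "user",
                "content": f"Summarize this week's key developments on {topic}, with sources.",
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "text": data["choices"][0]["message"]["content"],
        "citations": data.get("citations", []),  # source URLs for auto-citation
    }
```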

By Derek Cheung
5813