AI voice chatbot with ElevenLabs & OpenAI for customer service and restaurants
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Create a FREE account on ElevenLabs Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'testchatbotelevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies. ---- Need help customizing? Contact me for consulting and support or add me on Linkedin.
Phishing analysis - URLScan.io and VirusTotal
This n8n workflow automates the analysis of email messages received in a Microsoft Outlook inbox to identify indicators of compromise (IOCs), specifically suspicious URLs. It can be triggered manually or scheduled to run daily at midnight.

The workflow begins by retrieving up to 100 read email messages from the Outlook inbox. However, there seems to be a configuration issue, as it should retrieve unread messages, not read ones. It then marks these messages as read to avoid processing them again in the future. The messages are then split into individual items using the Split In Batches node for sequential processing.

For each email, the workflow analyzes its content to find URLs, which are considered potential IOCs. If URLs are found, the workflow checks these URLs for potential threats using two services, URLScan.io and VirusTotal, in parallel. In the first path, URLScan.io scans each URL, and if there are no errors, the results from URLScan.io and VirusTotal are merged. If there are errors, the workflow waits 1 minute before attempting to retrieve the URLScan results again. The loop then continues with the next email. In the second path, VirusTotal scans the URLs and the results are retrieved.

Finally, the workflow checks that the data field is not empty, filtering out items where no data was found. It then sends a summarized Slack message reporting details about the analyzed email, including the subject, sender, date, URLScan report URL, and VirusTotal verdict for URLs that were reported as malicious.

Potential issues during setup include configuring the Outlook node to retrieve unread messages, resolving a configuration issue in the VirusTotal node, and handling authentication and API keys for both the URLScan.io and VirusTotal nodes. Proper error handling and testing with various email content types and URLs are also essential to ensure the workflow accurately identifies IOCs and reports them to the Slack channel.
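For context, the two URL-enrichment branches map onto the public APIs of both services. Below is a condensed TypeScript sketch, assuming URLScan.io's scan-submission endpoint and VirusTotal's v3 URL lookups; the API keys and the fixed one-minute wait are placeholders, and in the workflow these calls live in HTTP Request nodes.

```typescript
// Condensed sketch of the two URL-enrichment branches (URLScan.io + VirusTotal v3).
// Keys and delays are placeholders; the workflow itself uses HTTP Request nodes.
const URLSCAN_KEY = process.env.URLSCAN_API_KEY ?? "";
const VT_KEY = process.env.VIRUSTOTAL_API_KEY ?? "";

async function scanWithUrlscan(url: string) {
  // Submit the URL for scanning ...
  const submit = await fetch("https://urlscan.io/api/v1/scan/", {
    method: "POST",
    headers: { "API-Key": URLSCAN_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ url, visibility: "public" }),
  });
  const { uuid } = await submit.json();

  // ... then wait (the workflow uses a 1-minute Wait node) and fetch the result.
  await new Promise((r) => setTimeout(r, 60_000));
  const result = await fetch(`https://urlscan.io/api/v1/result/${uuid}/`);
  return result.json();
}

async function lookupWithVirusTotal(url: string) {
  // VirusTotal v3 identifies URLs by an unpadded base64url encoding of the URL.
  const id = Buffer.from(url).toString("base64url");
  const res = await fetch(`https://www.virustotal.com/api/v3/urls/${id}`, {
    headers: { "x-apikey": VT_KEY },
  });
  const body = await res.json();
  // last_analysis_stats contains counts such as malicious, suspicious, harmless.
  return body.data?.attributes?.last_analysis_stats;
}
```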
Read in an Excel spreadsheet file
How to take the path of a local Excel file and read its contents into n8n.
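The template itself uses n8n's built-in file and spreadsheet nodes; for context, the equivalent logic in plain code might look like the sketch below, assuming the community xlsx (SheetJS) package is installed. The file path is a placeholder.

```typescript
// Standalone sketch of the same idea using the xlsx (SheetJS) package.
// The path is a placeholder; in n8n this is handled by built-in nodes instead.
import * as XLSX from "xlsx";

const workbook = XLSX.readFile("/data/spreadsheet.xlsx"); // placeholder path
const firstSheet = workbook.Sheets[workbook.SheetNames[0]];

// Convert the first sheet into an array of row objects keyed by column headers.
const rows = XLSX.utils.sheet_to_json(firstSheet);
console.log(rows);
```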
Generate YouTube video metadata (timestamps, tags, description, ...)
For who?
Content creators, YouTube automation, marketing teams

---

How it works?
1 - Enter the ID of the YouTube channel to trigger the workflow when a new video is posted
2 - Apify scrapes the latest video of the channel
3 - Wait until the dataset is completed in Apify, then retrieve it (see the sketch below for the underlying API calls)
4 - Check whether metadata has already been generated; if not, generate it with the LLM
5 - Format all the generated data and update the YouTube video

📺 YouTube Video Tutorial: https://www.youtube.com/watch?v=HaQPAa6l5bU

---

SETUP

Setup input (YouTube channel):
Go to the channel's page on YouTube and look at the URL of the page. The channel ID is the value that comes after channel/ in the URL. Add it after "?channel_id=". You can also use free online tools to retrieve the channel ID.

Setup output (YouTube video update):
Connect your YouTube account to your n8n instance via the Google Cloud Console. You can find tutorials by searching "youtube api OAuth" on Google.

APIs:
For the following third-party integrations, replace ==[YOURAPITOKEN]== with your API token or connect your account via Client ID / Secret to your n8n instance:
Apify: https://docs.apify.com/api/v2/getting-started
YouTube: https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.youtube/

---

👨💻 More Workflows: https://n8n.io/creators/nasser/
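Step 3 ("Wait until the dataset is completed in Apify and get it") is essentially a poll-then-fetch against the Apify API. Below is a minimal TypeScript sketch, assuming Apify's v2 actor-run and dataset endpoints; the run ID, token, and polling interval are placeholders.

```typescript
// Sketch of step 3: poll an Apify run until it finishes, then fetch its dataset items.
// APIFY_TOKEN, the run ID, and the 15-second polling interval are placeholders.
const APIFY_TOKEN = process.env.APIFY_TOKEN ?? "";

async function waitForDataset(runId: string) {
  let status = "RUNNING";
  let datasetId = "";

  // Poll the run status until it is no longer RUNNING or READY.
  while (status === "RUNNING" || status === "READY") {
    await new Promise((r) => setTimeout(r, 15_000));
    const res = await fetch(
      `https://api.apify.com/v2/actor-runs/${runId}?token=${APIFY_TOKEN}`
    );
    const run = (await res.json()).data;
    status = run.status;
    datasetId = run.defaultDatasetId;
  }
  if (status !== "SUCCEEDED") throw new Error(`Run ended with status ${status}`);

  // Fetch the scraped video metadata from the run's default dataset.
  const items = await fetch(
    `https://api.apify.com/v2/datasets/${datasetId}/items?token=${APIFY_TOKEN}`
  );
  return items.json();
}
```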
WordPress - AI chatbot to enhance user experience - with Supabase and OpenAI
This is the first version of a template for a RAG/GenAI App using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How It Works
This template includes three workflows:
- Workflow 1: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- Workflow 2: Handle upserts for WordPress content when edits are made.
- Workflow 3: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template?
This template can be applied to various use cases:
- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance user experience as visitors search for information.
- Gain insights into the types of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking if similar content already exists. Useful for internal linking.

Prerequisites
- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.
- Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial Embedding
This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables
Nodes:
- Postgres - Create Documents Table: This table is structured to support OpenAI embedding models with 1536 dimensions.
- Postgres - Create Workflow Execution History Table
These two nodes create tables in Supabase:
- The documents table, which stores embeddings of your website content.
- The n8nwebsiteembedding_histories table, which logs workflow executions for efficient management of upserts. This table tracks the workflow execution ID and execution timestamp.

Step 1: Retrieve and Merge WordPress Pages and Posts
Nodes:
- WordPress - Get All Posts
- WordPress - Get All Pages
- Merge WordPress Posts and Pages
These three nodes retrieve all content and metadata from your posts and pages and merge them. Important: apply filters to avoid generating embeddings for all site content.

Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown
Nodes:
- Set Fields
- Filter - Only Published & Unprotected Content
- HTML to Markdown
These three nodes prepare the content for embedding by:
- Setting up the necessary fields for content embeddings and document metadata.
- Filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application.
- Converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution
Nodes:
- Supabase Vector Store (sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter)
- Aggregate
- Supabase - Store Workflow Execution
This step involves generating embeddings for the content and storing it in Supabase, followed by logging the workflow execution details.
- Generate Embeddings: The Embeddings OpenAI node generates vector embeddings for the content.
- Load Data: The Default Data Loader prepares the content for embedding storage.
The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important note: be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers.
- Token Management: The Token Splitter ensures that content is segmented into manageable sizes to comply with token limits.
- Aggregate: Ensures the last node is run only for 1 item.
- Store Execution Details: The Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.

This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle Document Upserts
Content on a website follows a lifecycle: it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages:
- Newly added content
- Updated content

Step 1: Retrieve WordPress Content with a Regular CRON
Nodes:
- CRON - Every 30 Seconds
- Postgres - Get Last Workflow Execution
- WordPress - Get Posts Modified After Last Workflow Execution
- WordPress - Get Pages Modified After Last Workflow Execution
- Merge Retrieved WordPress Posts and Pages
A CRON job (set to run every 30 seconds in this template, but you can adjust it as needed) initiates the workflow. A Postgres SQL query on the n8nwebsiteembedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.

Step 2: Set Fields, Apply Filter
Nodes:
- Set Fields2
- Filter - Only Published and Unprotected Content
Same as Step 2 of Workflow 1, except that the HTML to Markdown conversion happens in a later step.

Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content
Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :)
Nodes:
- Loop Over Items
- Postgres - Filter on Existing Documents
- Switch
Route existing_documents (if documents with matching IDs are found in metadata):
- Supabase - Delete Row if Document Exists: Removes any existing entry for the document, preparing for an update.
- Aggregate2: Aggregates documents on Supabase by ID to ensure that Set Fields3 is executed only once for each piece of WordPress content, avoiding duplicate execution.
- Set Fields3: Sets fields required for embedding updates.
Route new_documents (if no matching documents are found with IDs in metadata):
- Set Fields4: Configures fields for embedding newly added content.
In this step, a loop processes each item, directing it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per piece of WordPress content, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table
The HTML to Markdown node mirrors Workflow 1 - Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1 - Step 3 nodes.

Workflow 3: An Example GenAI App with WordPress Content: a Chatbot to Embed on Your Website

Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input
Nodes:
- When Chat Message Received
- Supabase - Retrieve Documents from Chat Input
- Embeddings OpenAI1
- Aggregate Documents
- Set Fields
When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1 - Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response. In this step:
- The Supabase vector store retriever fetches documents that match the user's question, including metadata.
- The Aggregate Documents node consolidates the retrieved data.
- Finally, Set Fields organizes the data to create a more readable input for the AI agent.
Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call the AI Agent, Respond to the User, and Store Chat Conversation History
Nodes:
- AI Agent (sub-nodes: OpenAI Chat Model, Postgres Chat Memories)
- Respond to Webhook
This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
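For reference, the retrieval behind Workflow 3's Supabase vector store node corresponds to calling the match_documents RPC created in Workflow 1 - Step 0. Below is a minimal TypeScript sketch with supabase-js, assuming the standard LangChain-style function signature (query_embedding, match_count, filter) and a 1536-dimension OpenAI embedding model; adapt names and parameters to your own setup.

```typescript
// Sketch of Workflow 3, Step 1: embed the user question and call the
// match_documents RPC in Supabase. Assumes the standard LangChain-style
// function signature created in Workflow 1 - Step 0.
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI();

async function retrieveDocuments(question: string) {
  // 1536-dimension embedding, matching the documents table definition.
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });

  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: embedding.data[0].embedding,
    match_count: 5, // number of chunks passed to the AI agent
    filter: {},     // optional metadata filter
  });
  if (error) throw error;

  // Each row carries content plus metadata (title, URL, dates, ID).
  return data;
}
```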
Monitor competitors' websites for changes with OpenAI and Firecrawl
Who is this template for?
This workflow template is designed for anyone who wants to be alerted when specific changes are made to a web page. Leveraging agentic AI, it analyzes the page every day and autonomously decides whether to send you an e-mail notification.

Example use cases
- Track price changes on [competitor's website]. Notify me when the price drops below €50.
- Monitor new blog posts on [industry leader's website] and summarize key insights.
- Check [competitor's job page] for new job postings related to software development.
- Watch for new product launches on [e-commerce site] and send me a summary.
- Detect any changes in the terms and conditions of [specific website].
- Track customer reviews for [specific product] on [review site] and extract key themes.

How it works
When you click 'Test workflow' in the editor, a new browser tab opens where you can fill in the details of your espionage assignment. Be as specific and concise as possible when instructing the AI (see the examples above). After submission, the flow starts by extracting both the relevant website URL and an optimized prompt. OpenAI's structured outputs feature is used for this, followed by a code node that parses the results for further use (a sketch of this call is shown below). From there, the endless loop of daily checks begins:
1. Initial scrape
2. 1-day delay
3. Second scrape
4. The AI agent decides whether or not to notify you
5. Back to step 1
You can cancel an espionage assignment at any time in the executions tab.

Set up steps
- Insert your OpenAI API key in the structured outputs node (the second one).
- Create a Firecrawl account and connect your Firecrawl API key in both 'Scrape page' nodes.
- Connect your OpenAI account in the AI agent's model node.
- Connect your Gmail account in the AI agent's Gmail tool node.
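The structured-outputs step that turns your assignment into a target URL and an optimized prompt can be reproduced with a single chat completion call. Below is a hedged TypeScript sketch using OpenAI's json_schema response format; the model name, schema name, and field names (url, prompt) are illustrative assumptions rather than the template's exact configuration.

```typescript
// Sketch of the structured-outputs extraction step: turn the user's assignment
// into a { url, prompt } object that the rest of the workflow can rely on.
import OpenAI from "openai";

const openai = new OpenAI();

async function extractAssignment(assignment: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content: "Extract the website URL to monitor and a concise monitoring instruction.",
      },
      { role: "user", content: assignment },
    ],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "espionage_assignment",
        strict: true,
        schema: {
          type: "object",
          properties: {
            url: { type: "string" },
            prompt: { type: "string" },
          },
          required: ["url", "prompt"],
          additionalProperties: false,
        },
      },
    },
  });

  // The model returns a JSON string conforming to the schema above.
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```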
Sync new data between two apps
This template shows how to sync data from one service to another. Specifically, in this example we're saving a new qualified lead from a Postgres database to a Google Sheets file. Setup instructions are located inside the workflow template.
Voiceflow demo support chatbot
Submission Overview for Voiceflow Demo Workflow
View the YouTube video for this workflow here.

Who is this for?
This workflow is ideal for businesses and developers using Voiceflow to power AI voice chatbots. It benefits teams that want to enhance chatbot functionality through integrations with platforms like Zendesk, Google Calendar, and Airtable.

What problem is this workflow solving?
The workflow addresses the need for seamless integration of chatbot interactions with backend systems. It automates customer service tasks such as ticket creation, meeting scheduling, and data reporting, reducing manual effort and enhancing efficiency.

What does this workflow do?
- Customer Lookup: Checks the database for existing customers and returns relevant details or a "NOT_FOUND" status (an illustrative response shape is sketched below).
- Zendesk Ticket Creation: Automates the creation of support tickets for customer issues.
- Meeting Scheduling: Integrates with Google Calendar to provide availability and schedule meetings.
- Transcript Reporting: Aggregates interaction data and sends it to Airtable for analysis by the product team.

Setup
- Configure your Voiceflow chatbot to connect to this workflow via a webhook.
- Set up the required integrations: Zendesk API (for ticket creation), Google Calendar API (for scheduling), Airtable API (for storing transcripts).
- Customize the workflow's nodes to match your use case, such as database fields or API endpoints.
- Deploy the workflow on your n8n instance and test the integrations.

How to customize this workflow to your needs
- Adjust database queries to match your customer data schema.
- Modify the Zendesk ticket payload to include additional fields or custom formats.
- Update Google Calendar configurations for different scheduling requirements.
- Add or remove Airtable fields based on the product team's analysis needs.

This template adheres to n8n's submission guidelines, ensuring clarity, relevance, and broad applicability for users in customer service, product development, and automation.
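As an illustration of the Customer Lookup step, the webhook branch effectively returns either the matched customer record or a "NOT_FOUND" status for Voiceflow to branch on. The payload shape below is hypothetical; adapt the field names to your own customer schema.

```typescript
// Illustrative only: the response shape Voiceflow branches on is hypothetical.
// Adapt field names to your own customer schema and database.
interface Customer {
  id: string;
  name: string;
  email: string;
}

function buildLookupResponse(customer: Customer | null) {
  if (!customer) {
    return { status: "NOT_FOUND" };
  }
  return { status: "FOUND", customer };
}

// Example: no match in the database means Voiceflow receives a NOT_FOUND status.
console.log(buildLookupResponse(null)); // { status: "NOT_FOUND" }
```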
Build your own PostgreSQL MCP server
This n8n template demonstrates how to build a simple PostgreSQL MCP server to manage a PostgreSQL database such as HR, payroll, sales, inventory, and more! This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres

How it works
- An MCP server trigger is used and connected to 5 tools: 2 PostgreSQL and 3 custom workflow tools.
- The 2 PostgreSQL tools are simple read-only queries, so the PostgreSQL tool can be used directly.
- The 3 custom workflow tools are used for select, insert and update queries, as these are operations which require a bit more discretion. Whilst it may be easier to allow the agent to use raw SQL queries, it is a little safer to only allow parameters instead. The custom workflow tool lets us define this restricted schema for tool input, which we then use to construct the SQL statement ourselves (see the sketch below).
- All 3 custom workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, the standard PostgreSQL node handles the select, insert and update operations. The responses are then sent back to the MCP client.

How to use
This PostgreSQL MCP server allows any compatible MCP client to manage a PostgreSQL database by supporting select, create and update operations. You will need to have a database available before you can use this server.
Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/integrating-with-claude-desktop
Try the following queries in your MCP client:
- "Please help me check if Alex has an entry in the users table. If not, please help me create a record for her."
- "What was the top selling product in the last week?"
- "How many high priority support tickets are still open this morning?"

Requirements
- PostgreSQL for the database. This can be an external database such as Supabase or one you host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow
If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific business purpose, e.g. confine the querying and editing to HR-only tables before providing access to people in that department.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
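To illustrate why the custom workflow tools accept structured parameters instead of raw SQL: the handler can whitelist table and column names and bind values as query parameters. Below is a minimal TypeScript sketch using node-postgres; the whitelist and table names are assumptions you would adapt to your own schema, and the template itself does this with n8n nodes rather than code.

```typescript
// Sketch of the "safer than raw SQL" idea behind the custom workflow tools:
// whitelist identifiers, bind values as parameters, let pg handle escaping.
// The whitelists below are assumptions; adapt them to your schema.
import { Client } from "pg";

const ALLOWED_TABLES: Record<string, string[]> = {
  users: ["name", "email", "department"],
  tickets: ["title", "priority", "status"],
};

async function insertRow(table: string, values: Record<string, unknown>) {
  const allowedColumns = ALLOWED_TABLES[table];
  if (!allowedColumns) throw new Error(`Table not allowed: ${table}`);

  const columns = Object.keys(values).filter((c) => allowedColumns.includes(c));
  if (columns.length === 0) throw new Error("No allowed columns provided");

  const placeholders = columns.map((_, i) => `$${i + 1}`).join(", ");
  const sql = `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${placeholders})`;

  const client = new Client(); // connection settings come from PG* env vars
  await client.connect();
  try {
    await client.query(sql, columns.map((c) => values[c]));
  } finally {
    await client.end();
  }
}
```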
Create faceless videos with Gemini, ElevenLabs, Leonardo AI & Shotstack
This n8n template walks you through a fully automated process to generate faceless videos - from script creation to final download - using AI-generated voice, images, and smart video editing.

Use cases are many: this tool is perfect for YouTube and Shorts creators who want to publish daily content without showing their face, TikTok and Reels marketers automating voice-over-driven videos, and solopreneurs scaling up their content without hiring a team. It's also ideal for agencies producing batches of faceless video ads, automation enthusiasts building smart media workflows in n8n, and anyone who's rich in ideas but tired of spending hours editing.

How It Works

Phase 1: Provide Topic Input
A short topic and idea should be entered into the Idea field in Node Fields - Set Idea inside the workflow in n8n. Trigger the process manually by clicking Test Workflow or Execute Workflow.

Phase 2: Script Generation
Your idea is passed to Google Gemini's chat model. The model returns a concise, 60-second faceless video script. The script is then reformatted into a structured layout optimized for voice generation and visual synchronization.

Phase 3: Audio Generation
The formatted script is passed to ElevenLabs, which turns the text into a high-quality voiceover. The generated audio is uploaded to Google Drive and made publicly accessible. At the same time, the audio is sent to OpenAI Whisper via a POST request to generate a transcription of the voiceover (see the sketch after this section for what these two calls look like).

Phase 4: Timestamps Generation
The tool merges the original script and the OpenAI Whisper-generated transcription. The merged data is passed to Google Gemini's chat model to generate image prompts with precise timestamps. The output is parsed and cleaned using a structured parser to ensure it's in ready-to-use JSON format for image generation.

Phase 5: Images Generation
The full list of timestamped prompts is split into individual entries. Each prompt is sent to Leonardo's API, which turns text descriptions into visuals. A delay of 30 seconds is added to give the image generation engine enough time to complete rendering. Once completed, the workflow retrieves all final images for the next stage.

Phase 6: Images To Video Conversion
All generated images are sent to Leonardo's API, which stitches them together based on the structured prompts and timing. A 5-minute wait allows time for rendering. After the wait, the workflow retrieves the generated short clips and makes them downloadable. Then, the tool aggregates all downloaded clips into a single unified structure, preparing them for the final editing.

Phase 7: Video Editing and Downloading
The raw video, along with timestamps or subtitles, is sent to Shotstack, a video editing tool that supports advanced edits. A delay of 1 minute allows Shotstack to process the edit. Then, the tool checks whether the edited video has been finished by Shotstack and is ready to be downloaded. Once completed, you can download the final polished video to your local storage for later use.
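Phase 3's two API calls, text-to-speech with ElevenLabs and transcription with OpenAI Whisper, look roughly like the sketch below (Node 18+, so fetch, FormData, and Blob are built in). The voice ID and model names are placeholders; in the workflow these calls live in HTTP Request nodes.

```typescript
// Rough sketch of Phase 3: ElevenLabs text-to-speech, then Whisper transcription.
// The voice ID and model names are placeholders; the workflow uses HTTP Request nodes.

async function generateVoiceover(script: string): Promise<Buffer> {
  const voiceId = "YOUR_VOICE_ID"; // placeholder
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: {
      "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: script, model_id: "eleven_multilingual_v2" }),
  });
  return Buffer.from(await res.arrayBuffer()); // MP3 audio bytes
}

async function transcribeVoiceover(audio: Buffer): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([audio], { type: "audio/mpeg" }), "voiceover.mp3");
  form.append("model", "whisper-1");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form,
  });
  const body = await res.json();
  return body.text; // plain transcription used for timestamp generation
}
```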
How To Use
1. Download the workflow package.
2. Import the package into your n8n interface.
3. Set up the necessary credentials for tool access and usability:
   - For Google Gemini access, connect your API in the nodes Google Gemini Chat Model 1 and Google Gemini Chat Model 2.
   - For Google Drive access, ensure the connection in the nodes Upload Audio to Drive and Make Audio File Public.
   - For ElevenLabs access, connect your API in the node Generate Voice.
   - For OpenAI Whisper access, connect your API in the node Transcribe Audio with OpenAI Whisper.
   - For Leonardo access, allow connection to its API in the nodes Generate Images and Generate Videos/Scenes.
   - For Shotstack access, connect your API in the nodes Edit with Shotstack and Render Final Video with Shotstack.
4. Input your video idea or short description as a string in Node Fields - Set Idea in n8n.
5. Run the workflow by clicking Execute Workflow or Test Workflow.
6. Wait for the process to run and finish.
7. View the result in Node Download Final Video and download it to your local storage for later use.

Requirements
- Basic setup in Google Cloud Console (OAuth or API Key method enabled) with access to Google Drive enabled.
- Google Gemini API access with permission to use chat-based large language models.
- ElevenLabs API access for generating high-quality voiceovers from scripts.
- OpenAI Whisper API access to transcribe voiceovers into clean text.
- Leonardo API access for both image and video generation tasks.
- Shotstack API access for editing and rendering the final video with enhanced visuals and timing.

How To Customize
- You can input your requested video topic or description directly in Node Fields - Set Idea.
- By default, the script length is set to around 60 seconds in Node 60 Second Script Writer. You can easily change this in the prompt to create shorter or longer videos based on your needs.
- While the default setup uses Google Gemini for script and prompt generation, you can replace it with OpenAI ChatGPT, Claude, or any other compatible chat-based model you prefer.
- The voiceover is currently created using ElevenLabs, but you're free to substitute it with other text-to-speech engines like Google Cloud Text-to-Speech, HeyGen, etc.
- OpenAI Whisper is used to transcribe the voiceover into text. You can switch to alternatives such as AssemblyAI, Deepgram, or other compatible providers depending on your preference.
- This workflow uses Leonardo for both image and video generation. You can swap it out for other compatible providers based on availability or style preference.
- Video editing is handled by Shotstack by default. You can plug in alternatives like Runway, FFmpeg, or other API-based editors depending on your editing needs or desired effects.

If you'd like this workflow customized to fit your tools and platforms, or if you're looking to build a tailored AI agent for your own business, please feel free to reach out to Agent Circle. We're always here to support you and help bring automation ideas to life.

Need Help?
Join our community on different platforms for support, inspiration and tips from others.
Website: https://www.agentcircle.ai/ Etsy: https://www.etsy.com/shop/AgentCircle Gumroad: http://agentcircle.gumroad.com/ Discord Global: https://discord.gg/d8SkCzKwnP FB Page Global: https://www.facebook.com/agentcircle/ FB Group Global: https://www.facebook.com/groups/aiagentcircle/ X: https://x.com/agent_circle YouTube: https://www.youtube.com/@agentcircle LinkedIn: https://www.linkedin.com/company/agentcircle
Generate and insert data into a Postgres database
This is Workflow 1 in the blog tutorial Database activity monitoring and alerting.

Prerequisites
- A Postgres database set up and credentials.
- Basic knowledge of JavaScript and SQL.

Nodes
- Cron node: starts the workflow every minute.
- Function node: generates sensor data (a preset sensor id, a randomly generated value, a timestamp, and a notification flag preset to false).
- Postgres node: inserts the data into a Postgres database.

You can create the table for this workflow with the following SQL statement:

```sql
CREATE TABLE n8n (id SERIAL, sensorid VARCHAR, value INT, timestamp TIMESTAMP, notification BOOLEAN);
```
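For reference, the Function node's data generation can be as simple as the Code-node-style sketch below, to be pasted into an n8n Function/Code node. The sensor ID value is a placeholder preset, and the field names mirror the table definition above.

```typescript
// n8n Code node sketch: emit one item with the fields expected by the n8n table.
// The sensor id is a placeholder preset; notification starts as false.
return [
  {
    json: {
      sensorid: "sensor-001",                  // preset sensor id (placeholder)
      value: Math.round(Math.random() * 100),  // randomly generated reading
      timestamp: new Date().toISOString(),     // insertion timestamp
      notification: false,                     // preset to false
    },
  },
];
```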
Automate company research using ProspectLens and Google Sheets
This n8n workflow automates the process of researching companies by gathering relevant data such as traffic volume, foundation details, funding information, founders, and more. The workflow leverages the ProspectLens API, which is particularly useful for researching companies commonly found on Crunchbase and LinkedIn. ProspectLens is an API that provides very detailed company data. All you need to do is supply the company's domain name. You can obtain your ProspectLens API key here: https://apiroad.net/marketplace/apis/prospectlens In n8n, create a new "HTTP Header" credential. Set x-apiroad-key as the "Name" and enter your APIRoad API key as the "Value". Use this credential in the HTTP Request node of the workflow.
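For context, the HTTP Request node boils down to a request that passes the company's domain name and the x-apiroad-key header. The endpoint path and query parameter below are placeholders, not the documented API; check the ProspectLens page on APIRoad for the exact URL and parameters.

```typescript
// Sketch of the ProspectLens HTTP Request node. The endpoint URL and query
// parameter are placeholders; only the x-apiroad-key header is taken from the
// setup instructions above.
async function lookupCompany(domain: string) {
  const res = await fetch(
    `https://api.example-apiroad-endpoint.com/prospectlens?domain=${encodeURIComponent(domain)}`, // placeholder URL
    {
      headers: { "x-apiroad-key": process.env.APIROAD_API_KEY ?? "" },
    }
  );
  if (!res.ok) throw new Error(`ProspectLens returned ${res.status}`);
  return res.json(); // traffic, funding, founders, and other company data
}
```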