Gmail to vector embeddings with PGVector and Ollama
Gmail to Vector Embeddings with PGVector and Ollama

Who is this for?
Everyone! Did you ever dream of asking an AI "what hotel did I stay in on holiday last summer?" or "what were my marks like last semester?" Dream no more: vector similarity search and this workflow are the foundation that makes it possible (as long as the information appears in your e-mails 😅).

100% local
This workflow is designed to run on locally hosted open-source software: Ollama as the LLM provider, nomic-embed-text as the embeddings model, and pgvector as the vector database engine, on top of Postgres.

But... how?!
First, specify the date you created your Gmail account, then run the workflow manually to bulk-read all your e-mail in monthly batches. Your database is now populated! Querying the vector database is the job of other workflows. Activate this workflow so that the Gmail Trigger continuously adds new e-mail as it arrives.

Structured AND Vectorized
This workflow stores your e-mail activity in two ways:
- In a structured table
- In a vector embeddings table

The information in both can be correlated via Gmail's message id, which is stored in the vectors table as the metadata property emails_metadata.id. That way consumers benefit from both worlds! ✨ Vector similarity enables semantic searches, while structured queries retrieve more factual data such as the message id, its date, or who it came from (see the query sketch at the end of this section).

Other useful templates
My template "Chat with Your Email History using Telegram, Mistral and Pgvector for RAG" is a ready-made solution that consumes this workflow. You may also pair this workflow with my other template, "Email Assistant: Convert Natural Language to SQL Queries with Phi4-mini and PostgreSQL", to enable RAG workflows that use both the structured and the vectorized database.

Customizations
The e-mail provider could presumably be changed, but then you'd have to identify an alternative id field; Message-ID would be a more standard option. There are a few opinionated choices about what metadata to store, but those shouldn't need adjustment.
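As referenced above, here is a minimal sketch of how a consumer could join the two stores from Node.js. The table and column names (emails, n8n_vectors, embedding, message_id) are assumptions for illustration; only the metadata path emails_metadata.id comes from the workflow description.

```javascript
// Minimal sketch, assuming a structured table `emails` and an n8n pgvector
// table `n8n_vectors` with a jsonb `metadata` column; every name except the
// metadata property `emails_metadata.id` is an illustrative assumption.
const { Client } = require('pg');

async function semanticSearchWithFacts(queryEmbedding) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query(
    `SELECT e.message_id, e.received_at, e.from_address
       FROM n8n_vectors v
       JOIN emails e
         ON e.message_id = v.metadata #>> '{emails_metadata,id}'
      ORDER BY v.embedding <=> $1::vector
      LIMIT 5`,
    [JSON.stringify(queryEmbedding)] // e.g. an embedding from nomic-embed-text
  );
  await client.end();
  return rows;
}
```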
Generate multi-speaker podcasts 🎙️ with natural-sounding AI voices 🤖🧠 & Google Sheets
This workflow automates the generation of multi-speaker podcasts using AI-powered text-to-speech. It starts by retrieving a podcast script from a Google Sheets document, where each speaker's lines are clearly defined. The workflow then processes the script, generates a natural-sounding audio file with a different voice for each speaker, and stores the final audio file in Google Drive. It is designed to save time and resources by automating the podcast production process, making it ideal for content creators, marketers, and businesses that need to produce high-quality audio content regularly.

How It Works
- Triggering the Workflow: The workflow starts with the "When clicking 'Test workflow'" node, which can be triggered manually to begin the process.
- Data Retrieval: The "Get Podcast text" node retrieves data from a Google Sheets document containing the podcast script. The document includes columns for the speaker's name and the corresponding text.
- Data Aggregation: The "Get all rows" node aggregates the data from the Google Sheets document, combining the speaker names and their respective text into a single dataset.
- Text Formatting: The "Full Podcast Text" node formats the aggregated data into a single string where each speaker's text is prefixed with their name (see the sketch below).
- Audio Generation: The "Create Audio" node sends a request to the API to generate a multi-speaker podcast audio file. The request includes the formatted text and specifies the voices for each speaker. When you register for the API service you get $1 for free; for continuous use, add API credits to your account.
- Status Check: The "Get status" node checks the status of the audio generation request. If the status is "COMPLETED", the workflow proceeds; otherwise it waits and checks again.
- Audio Retrieval: The "Get Url Audio" node retrieves the URL of the generated audio file, and the "Get File Audio" node downloads the audio file from that URL.
- Audio Storage: The "Upload Audio" node uploads the generated audio file to a specified Google Drive folder for storage.

---

Key Features
- Multi-Speaker Support: Generates podcasts with different voices for each speaker, creating a more dynamic and engaging listening experience.
- Google Sheets Integration: Retrieves podcast scripts from a Google Sheets document, making it easy to manage and update content.
- AI-Powered Text-to-Speech: Uses advanced AI models to generate natural-sounding audio from text.
- Automated Audio Generation: Streamlines the creation of podcast audio files, reducing the need for manual recording and editing.
- Google Drive Storage: Automatically uploads the generated audio files to Google Drive for easy access and sharing.

This workflow is ideal for businesses and content creators looking to automate the production of multi-speaker podcasts. It leverages AI to handle the complex task of generating natural-sounding audio, allowing users to focus on creating compelling content.

---

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
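The text-formatting step referenced above can be pictured as a small n8n Code node; a minimal sketch, assuming the sheet columns are named speaker and text (adjust to your actual column names):

```javascript
// Sketch of the "Full Podcast Text" step as an n8n Code node: prefix each
// line with the speaker's name and join everything into one string.
// The `speaker` and `text` field names are assumptions about your sheet.
const lines = $input.all().map(item =>
  `${item.json.speaker}: ${item.json.text}`
);
return [{ json: { fullPodcastText: lines.join('\n') } }];
```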
Provide real-time updates for Notion databases via webhooks with Supabase
Purpose
This enables webhook-like, near-realtime updates (every 5 seconds) from Notion databases.

Problem
Notion does not offer webhooks. Even worse, the "Last edited time" property we could use for polling only updates once a minute. This gives us a polling interval no lower than 2 minutes, and we would still need to implement a comparison mechanism to detect changes.

Solution
This workflow caches states in between runs while doing efficient polling & comparing. It brings the update latency down from 2 minutes to 5 seconds and outputs only the changes.

Demo
[](https://youtu.be/BROsXafy9Uw)

How it works
- Database pages are polled frequently, filtered by a last-modified timestamp for efficiency
- Retrieved pages are compared with previously cached versions in Supabase
- Only new and changed pages are pushed to a registered webhook

Setup
- Create a new project in Supabase and import the DB schema (provided through Gumroad)
- Add a "Last edited time" property to your Notion database, if it has none yet
- Define the dynamically generated settings_id from the settings table (Supabase) in the Globals node
- Define the Notion database URL in the Globals node
- Define your custom webhook URL in the last node, where the results should be pushed to
- It is recommended to call this workflow using this template to prevent simultaneous workflow executions
- Set the Schedule Trigger to every 5 seconds, or less frequent

More detailed instructions are provided within the workflow file, along with the illustrated instructions provided during the download.

Example output
```json
[
  {
    "action": "changed",
    "changes": {
      "property_modified_at": "2024-06-04T17:59:00.000Z",
      "property_priority": "important"
    },
    "data": {
      "id": "ba761e03-7d6d-44c2-8e8d-c8a4fb930d0f",
      "name": "Try out n8n",
      "url": "https://www.notion.so/Try-out-n8n-ba761e037d6d44c28e8dc8a4fb930d0f",
      "property_todoist_id": "",
      "property_id": "ba761e037d6d44c28e8dc8a4fb930d0f",
      "property_modified_at": "2024-06-04T17:59:00.000Z",
      "property_status": "Backlog",
      "property_priority": "important",
      "property_due": {
        "start": "2024-06-05",
        "end": null,
        "time_zone": null
      },
      "property_focus": false,
      "property_name": "Try out n8n"
    },
    "updated_at": "2024-06-04T17:59:42.144+00:00"
  }
]
```
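The compare step can be pictured as a simple property diff between the cached Supabase row and the freshly fetched page; a minimal sketch that produces the shape of the example output above (the function and argument names are illustrative, not the template's actual code):

```javascript
// Illustrative sketch of the compare step: diff a freshly fetched page
// against its cached version and emit only what changed.
function diffPage(cached, fresh) {
  if (!cached) return { action: 'added', data: fresh };
  const changes = {};
  for (const key of Object.keys(fresh)) {
    if (JSON.stringify(fresh[key]) !== JSON.stringify(cached[key])) {
      changes[key] = fresh[key];
    }
  }
  if (Object.keys(changes).length === 0) return null; // nothing to push
  return { action: 'changed', changes, data: fresh };
}
```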
Public webhook relay
Disclaimer
This template only works on local n8n instances!

How it Works
This workflow allows you to receive webhooks from the public web and have your local workflow catch them, without any remote proxy. It is very useful for running quick tests without exposing your dev server. All you have to do is activate the workflow and use the public address as defined below.

Set up steps
If you use the default key-value storage, there are only three steps:
- Install the @horka.tv/n8n-nodes-storage-kv community node
- Put your n8n workflow address in Local Webhook Address
- Activate the workflow and, from Executions, note down your public webhook token from the inputs to Get Latest Requests

You can now use https://webhook.site/[YOUR TOKEN] as a webhook destination to receive webhook requests from the public web.
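To verify the relay end to end, you can fire a test request at the public address; a minimal sketch (the token and payload are placeholders):

```javascript
// Send a test webhook to the public relay address. Replace YOUR_TOKEN with
// the token noted down from the Get Latest Requests inputs.
const token = 'YOUR_TOKEN';
fetch(`https://webhook.site/${token}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hello: 'from the public web' }),
}).then(res => console.log('relay responded with', res.status));
```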
Gather tasks in Typeform and send them to ClickUp
Using Typeform to push task requests to an n8n webhook, which then categorizes each request and assigns it in ClickUp accordingly. To get this workflow working for yourself, you will need:
- A ClickUp account
- A Typeform account
- Credentials for these services
- ClickUp configured with the appropriate lists
- Typeform set up with options that correspond to the ClickUp lists

You may modify this workflow to meet your specific needs and configuration. This is a very simple version of the workflow, and you can make it as complicated as you wish to meet your requirements.
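The categorization step boils down to mapping the selected Typeform option onto a ClickUp list; a minimal sketch, with entirely hypothetical option labels and list IDs, and an assumed standard Typeform webhook payload shape:

```javascript
// Hypothetical mapping from a Typeform choice to a ClickUp list ID.
// Replace the labels and IDs with your own lists; the payload path below
// assumes Typeform's standard webhook shape.
const listByCategory = {
  'Bug report': '901100000001',
  'Feature request': '901100000002',
  'General task': '901100000003',
};
const category = $json.body?.form_response?.answers?.[0]?.choice?.label;
const listId = listByCategory[category] ?? listByCategory['General task'];
return [{ json: { listId } }];
```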
Sync new Shopify customers to Odoo contacts
This workflow syncs new Shopify customers into Odoo contacts.

- Trigger: Shopify – New Customer Created. The workflow starts when a new customer is added in Shopify.
- Action: Odoo – Search Contact by Email. It checks Odoo for an existing contact with the same email address as the Shopify customer.
- Condition: Email Match Check. If a contact with the same email is found, the workflow ends (no duplicate contact is created). If no match is found, the workflow proceeds to the next step.
- Action: Odoo – Create New Contact. A new contact is created in Odoo using the customer's full name, email address, phone number, and full address (whichever are available).
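The dedup-then-create logic at the core of this flow, as a minimal sketch; searchOdooContact and createOdooContact are placeholders standing in for the two Odoo nodes, not real library calls:

```javascript
// Illustrative sketch only: the two helpers stand in for the Odoo search
// and create nodes. Shopify field names follow its standard customer shape.
async function syncCustomer(customer) {
  const matches = await searchOdooContact({ email: customer.email });
  if (matches.length > 0) return null; // contact exists: skip, no duplicate
  return createOdooContact({
    name: `${customer.first_name} ${customer.last_name}`,
    email: customer.email,
    phone: customer.phone,
    street: customer.default_address?.address1, // whichever fields exist
  });
}
```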
Receive a Mattermost message when a new record gets added to Notion
This workflow sends you a Mattermost message when meeting notes get added to Notion.

Prerequisites
- Create a table in Notion similar to this: Meeting Notes
- Follow the steps mentioned in the documentation to create credentials for the Notion Trigger node.
- Create credentials for Mattermost.

Notion Trigger: The Notion Trigger node will trigger the workflow when new data gets added to Notion.
IF node: This node checks whether the notes belong to the Marketing team. If the team is Marketing the node returns true, otherwise false.
Mattermost node: This node sends a message about the new data to the 'Marketing' channel in Mattermost. If you have a different channel, use that instead. You can even replace the Mattermost node with a node for another messaging platform, like Slack, Telegram, or Discord.
NoOp node: Adding this node is optional; its absence won't make a difference to the functioning of the workflow.
Create custom reasoning patterns for AI agents with GraphRAG & knowledge ontology
Teach your AI agent HOW to think, not WHAT to think

[](https://www.youtube.com/watch?v=jhqBb3nuyAY)

This workflow demonstrates how to build an AI agent in n8n that uses reasoning logic you define. The LLM learns a way of thinking, which you can then apply to multiple problems:
- Make an AI chatbot that knows how to convince anybody using the "Getting to Yes" method
- Build an LLM workflow that uses Ray Dalio's principles to spot investment opportunities
- Create an AI agent crew of interdisciplinary thinkers: e.g. a specialist in psychology who gives advice on education programmes

How it works
This template uses the n8n AI Agent node as an orchestrating agent with access to reasoning logic defined by an InfraNodus knowledge graph. The graph contains a list of reasoning rules (an ontology), which is extracted to provide advice relevant to the original prompt; GraphRAG is used under the hood to traverse the parts of the graph relevant to the query. The extracted advice and reasoning logic are then used by the AI agent to generate a response that answers the user's query while following the reasoning logic provided through the graph.

Step by step:
- The user submits a question via the AI chatbot (the n8n interface; here, a web form that can be embedded on any website, or a webhook that can be connected to a Telegram / WhatsApp bot)
- The AI Agent node accesses the Reasoning Logic HTTP InfraNodus nodes. The descriptions of the AI agent and of the reasoning InfraNodus node tell the agent how to rephrase the original question to retrieve the relevant reasoning logic.
- The request is sent to the InfraNodus node, which responds with the reasoning logic needed to answer the question.
- This reasoning logic is sent back to an LLM along with the original query to produce the response.

InfraNodus uses GraphRAG under the hood:
- convert the user query into a graph
- find the overlap with the reasoning graph (using n=1 or more hops to include more relations)
- use similarity search to get additional parts of the graph
- generate a response based on this intersection as well as the context provided
- provide information about the underlying structure

How to use
You need an InfraNodus account to use this workflow.
- Create an InfraNodus account
- Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes (a request sketch follows below)
- Create a separate knowledge graph for the reasoning logic. Use the AI ontology creator to generate an ontology for a certain topic or text using AI, then augment it with your own data. See our help article on creating ontologies for detailed instructions.
- For each graph, go to the workflow and paste the name of the graph into the "name" field of the request JSON body.
- Change the system prompt in the AI Agent node to reflect the nature of your reasoning logic. For instance, if it's an expert in interactions, you specify that; if it's a psychology expert, you specify that instead.
- Change the description of the reasoning node (HTTP tool). Use the InfraNodus summary and Project Notes > RAG prompt buttons to generate a description of the reasoning logic, which you can then reuse in your workflow.
- Add the LLM key to the OpenAI node (or the model of your choice) and launch the workflow

Requirements
- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

Customizing this workflow
You can use this same workflow with a Telegram bot, so you can interact with it via Telegram. Many more customizations are available: check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/21429518472988-Using-Knowledge-Graphs-as-Reasoning-Experts

Also check out the video tutorial with a demo: [](https://www.youtube.com/watch?v=jhqBb3nuyAY)
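For orientation, the HTTP tool call referenced in the setup steps can be pictured as below. Only the Bearer authorization and the "name" field holding the graph name come from the steps above; the endpoint path and the other field names are placeholder assumptions, so consult the InfraNodus API docs before copying anything:

```javascript
// Illustrative only: the endpoint path and request/response shape are
// assumptions, not the documented InfraNodus API. The Bearer key and the
// graph "name" field are the parts described in the setup steps above.
const INFRANODUS_ENDPOINT = 'https://infranodus.com/api/YOUR_ENDPOINT'; // hypothetical

const response = await fetch(INFRANODUS_ENDPOINT, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.INFRANODUS_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'my_reasoning_graph',      // your reasoning-logic graph
    text: 'rephrased user question', // hypothetical field name
  }),
});
const reasoning = await response.json();
```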
Track Amazon prices & monitor competitors with Apify and Google Sheets
Amazon Price Tracker & Competitor Monitoring Workflow (Apify + Google Sheets)

This n8n workflow automates Amazon price tracking and competitor monitoring by scraping product pricing via Apify and updating your Google Sheet every day. It removes manual price checks, keeps your pricing data fresh, and helps Amazon sellers stay ahead in competitive pricing, Buy Box preparation, and daily audits.

💡 Use Cases
- Automatically track prices of your Amazon products
- Monitor competitor seller prices across multiple URLs
- Maintain a daily pricing database for reporting and insights
- Catch sudden competitor undercutting or pricing changes
- Support Buy Box analysis by comparing seller prices
- Scale from 10 to 1000+ product URLs without manual effort

🧠 How It Works
- A Schedule Trigger runs the workflow every morning
- The Google Sheets node loads all product rows with seller URLs
- A Loop node processes each item one by one
- The Apify Actor node triggers the Amazon scraper
- An HTTP Request node fetches the scraped result from Apify
- A JavaScript node extracts, cleans, and formats the price data
- The Update Sheet node writes the fresh prices back to the right row
- Additional price columns are supported for more sellers or metrics

➕ Adding New Competitor Columns (Step-by-Step)

Add new columns in Google Sheets
Add two new columns:
- competitorurl3
- pricecomp3

---

Update the Apify Actor (inside n8n)
In the Apify Actor node, pass the new competitor URL:

```
"competitorurl3": {{$json.competitorurl3}}
```

This ensures Apify scrapes the additional competitor product page.

---

Update your Code (JavaScript) node
Inside the Code node, extract the new competitor's price from the Apify JSON and attach it to the output:

```javascript
const pricecomp3 = item?.offers?.[2]?.price || null;
item.pricecomp3 = pricecomp3;
return item;
```

(Adjust the index [2] based on the Apify output structure.)

---

Update the Google Sheets "Update Row" node
To save the new values into your Sheet:
- Open your Google Sheets Update Row node
- Scroll to Field Mapping
- Map the columns to the new data
- Hit the "Save & Execute" button 🚀

⚡ Requirements
- Apify account (the free tier is enough)
- Apify "Amazon Product Scraper" API (costs $40/month, with a 14-day free trial)
- A Google Sheet containing product URLs
- Basic credentials setup inside n8n

🙌 Want me to set it up for you?
I'll configure the full automation: Apify scraper, n8n workflow, Sheets mapping, and error handling. Email me at: imarunavadas@gmail.com

Automate the boring work and focus on smarter selling. 🚀
Restaurant lead generation from Google Maps with Apify, Airtable & AI newsletter
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🚀 Automating Google Maps Lead Generation with n8n + Apify

Finding quality leads can be time-consuming. What if you could scrape restaurant data from Google Maps, filter the best ones, and email a Morning Brew–style newsletter automatically? That's exactly what this n8n workflow does.

🔎 What This Workflow Does
- Takes a location input (Bangkok or Bareilly in this case)
- Runs a Google Maps scraper via an Apify Actor
- Extracts restaurant essentials (name, category, rating, reviews, address, phone, Google Maps link)
- Sorts & filters the results (only high-review, highly rated places)
- Saves the data to Airtable for lead management
- Uses AI to generate a newsletter as a Morning Brew–style HTML email
- Emails the newsletter automatically to your chosen recipients

🛠️ Workflow Breakdown

Form Trigger
- User selects a location from a dropdown (Bangkok or Bareilly)
- Submits the form to kickstart the process

Google Maps Scraper
- Powered by Apify
- Collects up to 1,000 restaurants with details: name, category, price range, rating, reviews, address, phone, Google Maps URL
- Skips closed places and pulls detailed contact data

Extract & Transform Data
- An n8n Set node extracts only the essentials
- Formats them into a clean text block (Restaurant_Data)

Sort & Filter
- Sorted by: review count (descending), rating (descending)
- Filter: only restaurants with 500+ reviews
- (see the sketch after this section)

Airtable Lead Storage
- Each record is saved to the "Google Map Leads - Restaurants" Airtable table
- Fields include: Title, Category, Price Range, Rating, Review Count, Address, Phone, Location

AI-Powered Newsletter
- n8n's LangChain + OpenAI node generates an HTML newsletter
- Tone: breezy and witty, like Morning Brew
- Content: sorted restaurant picks with ratings, reviews, and contact links
- Output is JSON with "Subject" and "Body"

Automatic Email
- A Gmail node sends the newsletter directly to your inbox
- Example recipient: prath002@gmail.com

🎯 Why This Workflow Rocks
- End-to-End Automation: from scraping → filtering → emailing, no manual effort
- Lead Enrichment: only keeps high-quality restaurants with strong social proof
- Scalable: works for any city you plug into the form
- Engaging Output: AI crafts the results into a ready-to-send newsletter

🔮 Next Steps
- Add more cities to the dropdown for multi-location scraping
- Customize the email template for branding
- Integrate with CRM tools for automated outreach

👉 With just a few clicks, you can go from raw Google Maps data → polished newsletter → fresh leads in Airtable. That's the power of n8n + Apify + AI.
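The Sort & Filter step from the breakdown above could look like this in an n8n Code node; a minimal sketch, assuming fields named reviewCount and rating (the Apify output may name them differently):

```javascript
// Sketch of the Sort & Filter step: keep restaurants with 500+ reviews,
// then order by review count and rating, both descending. The field names
// (`reviewCount`, `rating`) are assumptions about the scraper output.
const kept = $input.all()
  .filter(item => (item.json.reviewCount ?? 0) >= 500)
  .sort((a, b) =>
    (b.json.reviewCount - a.json.reviewCount) ||
    (b.json.rating - a.json.rating)
  );
return kept;
```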
Generate professional emails with custom tones using OpenAI GPT
AI Email Generator with Tone Selection
Made by Biznova on TikTok

📧 What This Does
This workflow creates a professional email generator that allows users to:
- Choose from multiple tones (Professional, Friendly, Formal, Casual)
- Input recipient details, subject, and context
- Generate a complete, well-formatted email using AI

👥 Who's It For
- Business professionals who need to write emails quickly
- Customer support teams responding to inquiries
- Sales teams crafting outreach messages
- Anyone who wants help writing professional emails

🎯 How It Works
1. A user fills out a form with email details and selects a tone
2. The workflow processes the input and creates an AI prompt (see the sketch after this section)
3. OpenAI generates a complete email based on the tone
4. The formatted email is displayed for the user to copy

⚙️ Setup Requirements
- OpenAI API key (get one at https://platform.openai.com)
- n8n instance (cloud or self-hosted)

🚀 How to Use
- Set up your OpenAI credentials in the "OpenAI Chat Model" node
- Activate the workflow
- Share the form URL with users
- Users fill out the form and receive a generated email instantly

🔧 Setup Steps

OpenAI API Key
- Go to https://platform.openai.com/api-keys
- Create a new API key
- Add it to the "OpenAI Chat Model" node credentials

Customize Tones (Optional)
- Edit the "Build AI Prompt" node
- Modify the tone instructions to match your needs
- Add new tones to the form dropdown

Adjust AI Settings (Optional)
In the "OpenAI Chat Model" node:
- Change the model (gpt-4 for better quality)
- Adjust the temperature (0.5-0.9)
- Modify max tokens for longer/shorter emails

Test the Workflow
- Click the "Test workflow" button
- Fill out the form
- Check the generated email

Share the Form
- Activate the workflow
- Copy the form URL
- Share it with your team or customers
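The "Build AI Prompt" step can be pictured as a small Code node that stitches the form fields and the tone instruction together; a minimal sketch, with assumed form field names (recipient, subject, context, tone):

```javascript
// Sketch of the "Build AI Prompt" node. The form field names and tone
// instructions are assumptions; match them to your actual form and needs.
const toneInstructions = {
  Professional: 'Use a polished, businesslike voice.',
  Friendly: 'Keep it warm and approachable.',
  Formal: 'Use a formal register and complete sentences.',
  Casual: 'Keep it relaxed and conversational.',
};
const { recipient, subject, context, tone } = $json;
const prompt =
  `Write a complete email to ${recipient} with the subject "${subject}".\n` +
  `Context: ${context}\n` +
  `Tone: ${tone}. ${toneInstructions[tone] ?? ''}`;
return [{ json: { prompt } }];
```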
Automate personalized cold email sequences with GPT-4, Mailgun and Supabase
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

AI-Powered Cold Email Sequence Workflow for n8n

Unlock advanced cold email automation and boost your outbound results with this robust n8n workflow, engineered for scale, personalization, and flexibility. Harness AI-driven email content, dynamic lead handling, and intelligent scheduling, without writing code.

What's Inside

Intelligent Multi-Step Email Outreach
Automate 3-step cold email sequences for every prospect, fully personalized and contextually adapted through AI research and copy generation. Each contact receives tailored, timely emails designed to maximize engagement and reply rates.

Automated Personalization at Scale
For every new lead, the workflow:
- Researches company and role context using AI
- Identifies key pain points and crafts custom hooks
- Builds multi-language, well-formatted HTML emails with a consistent, brand-aligned tone
This produces authentic, individualized communication, far more effective than generic mail merges.

Advanced Scheduling & Delivery Logic
- Smart scheduling: sends are distributed across optimal days and hours (configurable for your market)
- Throttled delivery: drip batching and dynamic waits preserve deliverability
- Automated follow-ups: gentle, contextual nudges at precise intervals if there's no reply
(a scheduling sketch follows at the end of this section)

Lead Management & Expansion
- Seamless database integration: email history, logic, and lead data fully synchronized with your backend (Supabase support included)
- Integrated lead generation: the suite includes a companion workflow for sourcing, deduplicating, and enriching leads using Apollo, GPT-4, and AI, which feeds directly into your campaign pipeline

Built for Reliability and Scale
- Resilient against errors and duplicate sends
- Multi-sender rotation for reputation management
- Easily customizable scheduling, content, languages, and batch size
- Tracks all critical data fields, such as send history and reply status

Use Cases
- B2B Sales Development
- Automated Candidate Outreach (Recruitment)
- Newsletter or Event Drip Campaigns
- Startup Go-to-Market Sequences
- Agency Lead Generation

Template Highlights
- AI-Powered Personalization: cold emails crafted by GPT-4 and prompt engineering
- Omnichannel Scheduling: dynamic batching, throttling, sender rotation
- Works Out of the Box: connects to Mailgun, OpenAI, and Supabase; simply insert credentials and leads
- Companion Lead Gen Workflow: includes an Apollo–AI–database pipeline for continuous sourcing, which you can access for free in my profile
- Flexible & Modular: adapt language, schedule, templates, or trigger events as your strategy evolves

Best Practice Features
- No PII or sensitive data embedded; safe for corporate teams
- Modular zones for sequence creation, delivery, and tracking, for clarity and easy expansion
- Clearly named nodes and logical flows, following n8n community standards
- Robust error handling for high deliverability and low maintenance

Experience end-to-end intelligent email automation, powered by n8n, trusted integrations, and state-of-the-art AI. Both the cold outreach workflow and the lead generation template are included. Discover, engage, and convert, at scale.
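The follow-up scheduling described above can be pictured as computing the next send time from the sequence step and reply status; a minimal sketch with illustrative intervals and field names (the template's actual intervals, fields, and sending windows are configurable and may differ):

```javascript
// Illustrative sketch of follow-up scheduling: pick the next send time for
// a 3-step sequence, skipping leads that already replied. The intervals
// and field names are assumptions, not the template's actual settings.
const FOLLOW_UP_DAYS = [0, 3, 7]; // step 1 now, nudges after 3 and 7 days

function nextSend(lead) {
  if (lead.replied || lead.step >= FOLLOW_UP_DAYS.length) return null;
  const next = new Date(lead.firstSentAt ?? Date.now());
  next.setDate(next.getDate() + FOLLOW_UP_DAYS[lead.step]);
  next.setHours(9, 0, 0, 0); // keep sends inside business hours
  return next;
}
```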