Write JSON to disk (binary)
The “Write Binary File” node expects binary data. The JSON data is, however, JSON ;) There should really be a node that converts between the two; until a proper one is in place, it can be done with a Function node. The first node provides example data, which you can customize or replace. The second node, named “Make Binary”, is the important one: its custom code converts the JSON data to binary so it can be written to the correct location.
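A minimal sketch of what the “Make Binary” code can look like, using n8n's classic Function node API (items, Buffer); the property and file names below are illustrative, not taken from the template:

```javascript
// n8n Function node: convert the incoming item's JSON into binary data
// so a following "Write Binary File" node can write it to disk.
const jsonString = JSON.stringify(items[0].json, null, 2);

items[0].binary = {
  // "data" is the binary property the Write Binary File node reads by default
  data: {
    data: Buffer.from(jsonString, 'utf8').toString('base64'),
    mimeType: 'application/json',
    fileName: 'data.json', // illustrative file name
  },
};

return items;
```

On newer n8n versions the same conversion can be done in a Code node (for example via this.helpers.prepareBinaryData), but the idea is identical.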
Adaptive RAG with Google Gemini & Qdrant: context-aware query answering
Description
This workflow automatically classifies user queries and retrieves the most relevant information based on the query type. 🌟 It applies one of four adaptive strategies (Factual, Analytical, Opinion, or Contextual) to deliver more precise and meaningful responses by leveraging n8n's flexibility. Integrated with the Qdrant vector store and Google Gemini, it processes each query faster and more effectively. 🚀

How It Works
Query Reception: A user query is triggered (e.g., through a chatbot interface). 💬
Classification: The query is classified into one of four categories:
- Factual: Queries seeking verifiable information.
- Analytical: Queries that require in-depth analysis or explanation.
- Opinion: Queries looking for different perspectives or subjective viewpoints.
- Contextual: Queries specific to the user or certain contextual conditions.
Adaptive Strategy Application: Based on the classification, the query is restructured using the relevant strategy for better results (see the sketch below).
Response Generation: The most relevant documents and context are used to generate a tailored response. 🎯

Set Up Steps
Estimated Time: ⏳ 10-15 minutes
Prerequisites: You need an n8n account and a Qdrant vector store connection.
Steps:
1. Import the n8n workflow: Load the workflow into your n8n instance.
2. Connect Google Gemini and Qdrant: Link these tools for query processing and data retrieval.
3. Connect the trigger interface: Integrate with a chatbot or API to trigger the workflow.
4. Customize: Adjust settings based on the query types you want to handle and the output format. 🔧

For more detailed instructions, please check the sticky notes inside the workflow. 📌
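To make the strategy step concrete, here is a minimal sketch of how a Code node could map the category returned by the Gemini classification step to a strategy-specific instruction; the four category names come from the description above, while the instruction wording and the field names (category, query) are illustrative assumptions:

```javascript
// Map each query category to a retrieval/rewriting strategy.
// Field names ("category", "query") are assumed for illustration.
const strategies = {
  Factual: 'Rewrite the query to target verifiable facts and precise entities.',
  Analytical: 'Break the query into sub-questions that cover the topic in depth.',
  Opinion: 'Reformulate the query to surface multiple perspectives and viewpoints.',
  Contextual: 'Incorporate the known user context before retrieving documents.',
};

const category = $json.category; // output of the Gemini classification step
const userQuery = $json.query;   // the original user query

return [{
  json: {
    category,
    strategyInstruction: strategies[category] ?? strategies.Factual, // safe fallback
    userQuery,
  },
}];
```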
Search LinkedIn companies and add them to Airtable CRM
Search LinkedIn companies and add them to Airtable CRM

Who is this for?
This template is ideal for sales teams, business development professionals, and marketers looking to build a robust prospect database without manual LinkedIn research. Perfect for agencies, consultants, and B2B companies targeting specific business profiles.

What problem does this workflow solve?
Manually researching companies on LinkedIn and adding them to your CRM is time-consuming and error-prone. This automation eliminates the tedious process of finding, qualifying, and importing prospects into your database.

What this workflow does
This workflow automatically searches for companies on LinkedIn based on your criteria (keywords, size, location), retrieves detailed information about each company, filters them based on quality indicators (follower count and website availability), and adds new companies to your Airtable CRM while preventing duplicates.

Setup
1. Create a Ghost Genius API account and get your API key.
2. Configure the HTTP Request nodes with Header Auth credentials (Name: "Authorization", Value: "Bearer yourapikey").
3. Create an Airtable base named "CRM" with columns: name, website, LinkedIn, id, etc.
4. Set up your Airtable credentials following the n8n documentation.
5. Add your company search criteria to the “Set Variables” node.

How to customize this workflow
- Modify the search parameters in the "Set Variables" node to target different industries, locations, or company sizes.
- Adjust the follower count threshold in the "Filter Valid Companies" node based on your qualification criteria (see the sketch below).
- Customize the Airtable field mapping in the "Add Company to CRM" node to match your database structure.
- Add notification nodes (Slack, Email) to alert you when new companies are added.
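As referenced above, a minimal sketch of the kind of check the "Filter Valid Companies" step performs, written as an n8n Code node; the 1,000-follower threshold and the field names (followers, website) are assumptions to adapt to the actual Ghost Genius response:

```javascript
// Keep only companies that look like real, qualified prospects:
// enough LinkedIn followers and a website we can research further.
const MIN_FOLLOWERS = 1000; // adjust to your own qualification criteria

const validCompanies = $input.all().filter(item => {
  const company = item.json;
  const hasFollowers = (company.followers ?? 0) >= MIN_FOLLOWERS; // field name assumed
  const hasWebsite = Boolean(company.website);                    // field name assumed
  return hasFollowers && hasWebsite;
});

return validCompanies;
```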
Auto-generate social media videos with GPT-5 and publish via Blotato
Auto-Generate Social Media Videos with GPT-5 and Publish via Blotato

> ⚠️ Disclaimer: This workflow uses Community Nodes (Blotato) and requires a self-hosted n8n instance with "Verified Community Nodes" enabled.

---

👥 Who is this for?
This workflow is perfect for:
- Content creators and influencers who post regularly on social media
- Marketing teams that want to scale branded video production
- Solo entrepreneurs looking to automate their video marketing
- Agencies managing multi-client social media publishing

---

💡 What problem is this workflow solving?
Creating high-quality video content and publishing consistently on multiple platforms is time-consuming. You often need to:
- Write compelling captions and titles
- Adapt content to fit each platform's requirements
- Publish manually or across disconnected tools
This workflow automates the entire process, from idea to publishing, so you can focus on growth and creativity, not logistics.

---

⚙️ What this workflow does
1. Receives a video idea via Telegram
2. Saves metadata to Google Sheets
3. Transcribes the video using OpenAI Whisper
4. Generates a catchy title and caption using GPT-5 (see the prompt sketch below)
5. Uploads the final media to Blotato
6. Publishes the video automatically to: TikTok, Instagram, YouTube Shorts, Facebook, X (Twitter), Threads, LinkedIn, Pinterest, and Bluesky
7. Updates the post status in Google Sheets
8. Sends confirmation via Telegram

---

🧰 Setup
Before launching the workflow, make sure to:
- Create a Blotato Pro account and generate your API key
- Enable Verified Community Nodes in the n8n Admin Panel
- Install the Blotato community node in n8n
- Create your Blotato credential using the API key
- Make a copy of this Google Sheet template
- Ensure your Google Drive folder with videos is shared publicly (viewable by anyone with the link)
- Link your Telegram bot and configure the trigger node
- Follow the sticky note instructions inside the workflow

---

🛠️ How to customize this workflow
- Modify the GPT-5 prompt to reflect your brand voice or campaign tone
- Add or remove social platforms depending on your strategy
- Include additional AI modules (e.g., for voiceover or thumbnails)
- Insert review/approval steps (via Slack, email, or Telegram)
- Connect Airtable, Notion, or your CRM to track results

---

This is your all-in-one AI video publishing engine, built for automation, scale, and growth across the social web.

---

📄 Documentation: Notion Guide

---

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
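For the title-and-caption step referenced above, here is a rough sketch of the kind of chat prompt payload a Code node could build from the Whisper transcript; the model name follows the workflow's GPT-5 setup, and the prompt wording, transcript field, and JSON response format are illustrative assumptions to adapt to your brand voice:

```javascript
// Build a chat completion request body that turns a Whisper transcript
// into a platform-ready title and caption. Field names are assumptions.
const transcript = $json.transcript; // output of the transcription step (assumed field)

const requestBody = {
  model: 'gpt-5', // per the workflow description; swap for the model you actually use
  messages: [
    {
      role: 'system',
      content: 'You write short, catchy social media titles and captions. ' +
               'Return JSON with "title" and "caption" fields. Keep the caption under 2200 characters.',
    },
    { role: 'user', content: `Video transcript:\n${transcript}` },
  ],
  response_format: { type: 'json_object' },
};

return [{ json: { requestBody } }];
```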
Generate 🤖🧠 AI-powered video 🎥 from image and upload it to Google Drive
This workflow automates the creation of AI-generated videos from images and uploads the result to Google Drive, saving time and ensuring a seamless process from input to final output. Below is a breakdown of the workflow.

---

How It Works
The workflow creates videos from images using AI and manages the generated content:

1. Form Submission: The workflow starts with a Form Trigger node, where users submit a form with the following fields:
   - Description: The text description for the video.
   - Duration (in seconds): The length of the video.
   - Aspect Ratio: The aspect ratio of the video (e.g., 16:9, 9:16, 1:1).
   - Image URL: The URL of the image to be used for video generation.
2. Set Data: The Set Data node organizes the form inputs into a structured format for further processing.
3. Create Video: The Create Video node sends a POST request to generate the video. The request includes the description, image URL, duration, and aspect ratio (see the sketch below).
4. Wait and Check Status: The Wait 60 sec. node pauses the workflow for 60 seconds to allow the video generation process to complete, then the Get Status node checks the status of the video generation by querying the API.
5. Completed?: The Completed? node checks whether the video generation is complete. If not, the workflow loops back to wait and check again.
6. Retrieve and Upload Video: Once the video is generated, the Get Url Video node retrieves the video URL, the Get File Video node downloads the video file, and the Upload Video node uploads it to a specified folder in Google Drive.

Watch the resulting video.

---

Set Up Steps
To set up and use this workflow in n8n, follow these steps:

1. API Key: Create an account with the video generation API provider and obtain your API key. In the Create Video node, set up HTTP Header Authentication:
   - Name: Authorization
   - Value: Key YOURAPIKEY
2. Google Drive Integration: Set up Google Drive credentials in n8n for the Upload Video node and specify the folder ID in Google Drive where the generated videos will be uploaded.
3. Form Configuration: The Form Trigger node is pre-configured with fields for Description, Duration (in seconds), Aspect Ratio (16:9, 9:16, or 1:1), and Image URL. Customize the form fields if needed.
4. Test the Workflow: Submit the form with the required details (description, duration, aspect ratio, and image URL). The workflow will generate the video using the API, check the status until the video is ready, and upload it to Google Drive.
5. Optional Customization: Modify the workflow to include additional features, such as more aspect ratio options, notifications when the video is ready, or integration with other storage services (e.g., Dropbox, AWS S3).
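Since the video generation service is not named here, the exact API schema will differ, but a hedged sketch of the Create Video request body and the Completed? check could look like this in a Code node; every field name and status value below is an assumption to map onto the real API:

```javascript
// Sketch of the JSON body the "Create Video" HTTP Request node could send.
// Field names (prompt, image_url, duration, aspect_ratio) are assumptions.
const createVideoBody = {
  prompt: $json.description,        // text description from the form
  image_url: $json.imageUrl,        // source image URL
  duration: Number($json.duration), // length in seconds
  aspect_ratio: $json.aspectRatio,  // "16:9", "9:16", or "1:1"
};

// Sketch of the "Completed?" check on the Get Status response.
// The status values are assumptions; check your provider's documentation.
const status = $json.status;
const isCompleted = status === 'completed' || status === 'succeeded';

return [{ json: { createVideoBody, isCompleted } }];
```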
Manage users using the G Suite Admin node
No description available.
Telegram user registration workflow
Telegram User Registration module

Offers an efficient way to manage new and returning users in your Telegram bot workflow. It checks user data against a Google Sheets database, saving essential user information and ensuring personalized interactions.

Key Features:
- User Lookup: Searches for users in a Google Sheets database based on their unique Telegram ID.
- New User Handling: Automatically registers new users, capturing details such as first name, last name, username, and language code (see the sketch below).
- Returning User Recognition: Detects when an existing user returns and updates their status.
- Data Storage: Safely stores user information in a structured format, with fields for status and date of registration.
- Personalized Greetings: Delivers customized welcome messages for both new and returning users, promoting engagement.

Setup Instructions:
1. Copy this Telegram User Registration Workflow.
2. Follow the instructions inside:
   - Use the Google Sheet template
   - Set up your credentials
   - Use the example data to test the workflow

Customization:
Adjust the statuses and messages for users based on your bot's needs.

Connecting to the bot workflow:
1. Copy my Telegram bot starter template.
2. Copy the Workflow ID of your Telegram User Registration Workflow.
3. Find the Register module in the Telegram bot starter template and paste your Workflow ID.
Now the user's data is entered into the Register workflow.

This module provides a scalable foundation for managing user registration, whether your bot is for meal planning, customer support, or other interactive services.

My easy to set up Telegram bot modules:
🏁 Telegram bot starter template
📝 Telegram callback menu (soon)

Please reach out to Victor if you need further assistance with your n8n workflows and automations!
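A minimal sketch of how a Code node could shape the Telegram update into the Google Sheets row described above; the Telegram fields come from the Bot API's message.from object, while the column names, "new" status value, and date format are assumptions to match your own sheet:

```javascript
// Build the row that gets appended to the Google Sheets user database.
// Telegram exposes these fields on message.from in its Bot API payload.
const from = $json.message.from;

return [{
  json: {
    telegram_id: from.id,                 // unique Telegram ID used for lookup
    first_name: from.first_name ?? '',
    last_name: from.last_name ?? '',
    username: from.username ?? '',
    language_code: from.language_code ?? '',
    status: 'new',                        // assumed status value for first-time users
    registered_at: new Date().toISOString().slice(0, 10), // YYYY-MM-DD
  },
}];
```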
Create a company policy chatbot with RAG, Pinecone vector database, and OpenAI
A RAG Chatbot with n8n and Pinecone Vector Database

Retrieval-Augmented Generation (RAG) allows Large Language Models (LLMs) to provide context-aware answers by retrieving information from an external vector database. In this post, we'll walk through a complete n8n workflow that builds a chatbot capable of answering company policy questions using the Pinecone vector database and OpenAI models.

Our setup has two main parts:
1. Data Loading to RAG: documents (company policies) are ingested from Google Drive, processed, embedded, and stored in Pinecone.
2. Data Retrieval using RAG: user queries are routed through an AI Agent that uses Pinecone to retrieve relevant information and generate precise answers.

---

Data Loading to RAG
This workflow section handles document ingestion. Whenever a new policy file is uploaded to Google Drive, it is automatically processed and indexed in Pinecone.

Nodes involved:
- Google Drive Trigger: Watches a specific folder in Google Drive. Any new or updated file triggers the workflow.
- Google Drive (Download): Fetches the file (e.g., a PDF policy document) from Google Drive for processing.
- Recursive Character Text Splitter: Splits long documents into smaller chunks (with a defined overlap). This ensures embeddings remain context-rich and retrieval works effectively (a simplified chunking sketch follows below).
- Default Data Loader: Reads the binary document (PDF in this setup) and extracts the text.
- OpenAI Embeddings: Generates high-dimensional vector representations of each text chunk using OpenAI's embedding models.
- Pinecone Vector Store (Insert Mode): Stores the embeddings in a Pinecone index (n8ntest), under a chosen namespace. This step makes the policy data searchable by semantic similarity.

👉 Example flow: When HR uploads a new Work From Home Policy PDF to Google Drive, it is automatically split, embedded, and indexed in Pinecone.

---

Data Retrieval using RAG
Once documents are loaded into Pinecone, the chatbot is ready to handle user queries. This section of the workflow connects the chat interface, AI Agent, and retrieval pipeline.

Nodes involved:
- When Chat Message Received: Acts as the webhook entry point when a user sends a question to the chatbot.
- AI Agent: The core reasoning engine. It is configured with a system message instructing it to only use Pinecone-backed knowledge when answering.
- Simple Memory: Keeps track of the conversation context, so the bot can handle multi-turn queries.
- Vector Store QnA Tool: Queries Pinecone for the most relevant chunks related to the user's question. In this workflow, it is configured to fetch company policy documents.
- Pinecone Vector Store (Query Mode): Acts as the connection to Pinecone, fetching embeddings that best match the query.
- OpenAI Chat Model: Refines the retrieved chunks into a natural and concise answer. The model ensures answers remain grounded in the source material.
- Calculator Tool: Optional helper if the query involves numerical reasoning (e.g., leave calculations or benefit amounts).

👉 Example flow: A user asks "How many work-from-home days are allowed per month?" The AI Agent queries Pinecone through the Vector Store QnA tool, retrieves the relevant section of the HR policy, and returns a concise answer grounded in the actual document.

---

Wrapping Up
By combining n8n automation, Pinecone for vector storage, and OpenAI for embeddings and LLM reasoning, we have created a self-updating RAG chatbot.
- The Data Loading pipeline ensures that every new company policy document uploaded to Google Drive is immediately available for semantic search.
- The Data Retrieval pipeline allows employees to ask natural language questions and get document-backed answers.
This setup can easily be adapted for other domains such as compliance manuals, tax regulations, legal contracts, or even product documentation.
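To illustrate the chunk-and-overlap idea behind the Recursive Character Text Splitter mentioned above, here is a simplified Code node sketch; the real splitter node is smarter about separators such as paragraphs and sentences, and the chunk size, overlap, and text field name are illustrative assumptions:

```javascript
// Simplified fixed-size chunking with overlap, to show why overlap keeps
// context intact across chunk boundaries. The n8n splitter node does this
// more carefully (recursively, on separators like paragraphs and sentences).
const CHUNK_SIZE = 1000; // characters per chunk (illustrative)
const OVERLAP = 200;     // characters shared between consecutive chunks (illustrative)

const text = $json.text; // extracted policy document text (field name assumed)
const chunks = [];

for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

return chunks.map((chunk, index) => ({ json: { chunk, index } }));
```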
Automate cold outreach with Apollo, LinkedIn & Gmail using GPT-4
Cold Outreach Automation by Rysysth Technologies

This n8n workflow automates the complete cold outreach process by combining Apollo.io lead generation, LinkedIn networking, and personalized email outreach into one streamlined system.

---

How It Works
1. Prospect Definition (Form Input): The user enters job titles, company size, keywords, and locations.
2. Apollo Search URL Generation: OpenAI converts the form inputs into a precise Apollo.io search URL.
3. Lead Scraping (Apify): The Apollo.io scraper collects contact details, emails, LinkedIn profiles, and company data (limited to 10 leads per run).
4. LinkedIn & Company Data Enrichment (Unipile): Extracts LinkedIn profile and company details for each lead.
5. CRM Sync (HubSpot): Automatically creates or updates lead records in HubSpot CRM.
6. Personalized Outreach (AI-Powered): OpenAI generates a custom LinkedIn connection request (under 300 characters) plus an email subject and body (personalized with a soft CTA).
7. LinkedIn Connect: If not already connected, the workflow sends LinkedIn invites via Unipile.
8. Email Validation (ZeroBounce): Ensures emails are valid or catch-all before outreach (see the sketch below).
9. Email Outreach (Gmail API): If verified, sends the AI-personalized outreach email directly from Gmail.

---

Tools and APIs Integrated
- Apify → Apollo.io scraper (lead extraction)
- Unipile → LinkedIn profile enrichment and connection requests
- ZeroBounce → Email verification
- OpenAI → Apollo URL creation and outreach copy generation
- HubSpot → CRM sync
- Gmail → Automated outreach emails

---

Key Benefits
- Saves time by automating manual prospecting and email writing
- Delivers personalized, multi-channel outreach at scale
- Ensures accurate CRM updates with HubSpot integration
- Improves email deliverability with ZeroBounce validation
- Designed for founders, sales teams, and agencies seeking efficient lead generation

---

Connect with Rysysth Technologies
At Rysysth Technologies, we build custom n8n workflows that go far beyond standard templates. From AI-powered prospecting to CRM automation and advanced outreach pipelines, we tailor automation solutions that align perfectly with your business goals. Let's create your custom workflow together. Partner with Rysysth Technologies to transform your outreach process today.

Website: rysysthtechnologies.com
Email: getstarted@rysysthtechnologies.com
LinkedIn: linkedin.com/company/rysysth
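As referenced in the email validation step, a sketch of the check an IF or Code node could run on the ZeroBounce result before sending; ZeroBounce does report a status field with values such as "valid" and "catch-all", but treat the exact response shape used here as an assumption to verify against its documentation:

```javascript
// Only allow outreach when ZeroBounce says the address is safe enough to mail.
const ALLOWED_STATUSES = ['valid', 'catch-all'];

const validation = $json;                 // ZeroBounce validation response for one lead
const status = (validation.status || '').toLowerCase();

return [{
  json: {
    email: validation.address,            // field name per ZeroBounce's validate response (verify)
    canSend: ALLOWED_STATUSES.includes(status),
  },
}];
```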
Build a ServiceNow knowledge chatbot with OpenAI and Qdrant RAG
Data Ingestion Workflow (Left Panel – Pink Section)
This part collects data from the ServiceNow Knowledge Article table, processes it into embeddings, and stores them in Qdrant.

Steps:
1. Trigger: When clicking ‘Execute workflow’. The workflow starts manually when you click Execute workflow in n8n.
2. Get Many Table Records: Fetches multiple records from the ServiceNow Knowledge Article table. Each record typically contains knowledge article content that needs to be indexed.
3. Default Data Loader: Takes the fetched data and structures it into a format suitable for text splitting and embedding generation.
4. Recursive Character Text Splitter: Splits large text (e.g., long knowledge articles) into smaller, manageable chunks for embeddings. This step ensures that each text chunk can be properly processed by the embedding model.
5. Embeddings OpenAI: Uses OpenAI's Embeddings API to convert each text chunk into a high-dimensional vector representation. These embeddings are essential for semantic search in the vector database.
6. Qdrant Vector Store: Stores the generated embeddings along with metadata (e.g., article ID, title) in the Qdrant vector database (see the sketch below). This database will later be used for similarity searches during chatbot interactions.

---

RAG Chatbot Workflow (Right Panel – Green Section)
This section powers the Retrieval-Augmented Generation (RAG) chatbot that retrieves relevant information from Qdrant and responds intelligently.

Steps:
1. Trigger: When chat message received. Starts when a user sends a chat message to the system.
2. AI Agent: Acts as the orchestrator, combining memory, tools, and LLM reasoning. Connects to the OpenAI Chat Model and the Qdrant Vector Store.
3. OpenAI Chat Model: Processes user messages and generates responses, enriched with context retrieved from Qdrant.
4. Simple Memory: Stores conversational history or context to ensure continuity in multi-turn conversations.
5. Qdrant Vector Store1: Performs a similarity search on the stored embeddings using the user's query and retrieves the most relevant knowledge article chunks for the chatbot.
6. Embeddings OpenAI: Converts the user query into embeddings for the vector search in Qdrant.
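A hedged sketch of the kind of point the ingestion side stores in Qdrant: one vector per chunk plus the metadata the description mentions (article ID, title). The payload keys and ID handling below are illustrative, since the Qdrant Vector Store node manages this structure for you:

```javascript
// Shape of a single Qdrant point built from one knowledge article chunk.
// The embedding comes from the Embeddings OpenAI step; payload keys are assumptions.
const point = {
  id: $json.chunkId,             // unique ID for this chunk (assumed field)
  vector: $json.embedding,       // e.g., a 1536-dimension array from OpenAI embeddings
  payload: {
    sys_id: $json.articleSysId,  // ServiceNow article ID (assumed field)
    title: $json.articleTitle,   // article title
    text: $json.chunkText,       // the raw chunk, returned to the chatbot on retrieval
  },
};

return [{ json: { point } }];
```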
Analyze YouTube videos for viral content with engagement scoring and Google Sheets
🚀 Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics, all in one streamlined process.

✨ Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

How It Works 🎯
1️⃣ Input Your Search Term: Simply enter any keyword or topic you want to research.
2️⃣ Select Video Format: Choose between short, medium, or long videos.
3️⃣ Choose Number of Videos: Define how many videos to analyze in bulk.
4️⃣ Automatic Data Fetch: The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5️⃣ Performance Scoring: Videos are scored based on engagement rates, with easy-to-understand labels like 🚀 HOLY HELL (viral) or 💀 Dead (see the scoring sketch below).
6️⃣ Export to Google Sheets: All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

Setup Instructions 🛠️
1. Google API Key: Get your YouTube Data API key from the Google Developers Console and add it securely in the n8n credentials manager (do not hardcode it).
2. Google Sheets Setup: Create a Google Sheet to store your results (a template link is provided), share the sheet with the Google account used in n8n, and update the workflow with your sheet's Document ID and Sheet Name if needed.
3. Run the Workflow: Trigger the form webhook via browser or POST call, enter the search term, format, and number of videos, then let it process and check your Google Sheet for insights!

Features ✨
- Bulk fetches the latest and top-viewed YouTube videos
- Intelligent video performance scoring with emojis for quick insights 🔥🎬
- Organizes data into Google Sheets with thumbnail previews 🖼️
- Easy to customize search parameters via an intuitive form
- Fully automated, no manual API calls needed

Get Started Today! 🌟
Boost your YouTube content strategy and stay ahead with this powerful viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro 🎥💡
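To show the idea behind the performance scoring step referenced above, here is a sketch of a Code node that turns view, like, and comment counts into an engagement rate and a label; the formula, thresholds, and label set are assumptions for illustration, not the template's exact scoring:

```javascript
// Score each video by engagement rate: (likes + comments) relative to views.
// Thresholds and labels below are illustrative, not the workflow's exact values.
return $input.all().map(item => {
  const { views = 0, likes = 0, comments = 0 } = item.json;
  const engagementRate = views > 0 ? (likes + comments) / views : 0;

  let label = '💀 Dead';
  if (engagementRate >= 0.08) label = '🚀 HOLY HELL';      // viral outlier
  else if (engagementRate >= 0.04) label = '🔥 Hot';
  else if (engagementRate >= 0.02) label = '👍 Decent';

  return { json: { ...item.json, engagementRate, label } };
});
```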
Catch MailChimp subscribe events
Companion workflow for Mailchimp Trigger node docs