Auto workflow backup to Google Drive – automated export of all your workflows
This workflow automatically creates backups of all your workflows in n8n and stores them as individual .json files in Google Drive. It's a fully automated system that helps developers, agencies, and automation teams ensure their automation logic is always safe, versioned, and ready to restore or share.

What is this for?
If you're building and managing multiple automations inside n8n, losing a workflow to accidental deletion or misconfiguration can cost you hours of work. This template solves that by exporting all your workflows into separate files and storing them in a dated Google Drive folder. It helps with disaster recovery, version tracking, and team collaboration — without any manual exporting.

How this works:
Once triggered (manually or via a schedule), the workflow performs the following steps:
- Creates a new folder in your Google Drive, named with today's date (e.g. "Workflow Backups Monday 16-05-2025").
- Connects to your n8n instance using the internal API and retrieves a list of all existing workflows (see the sketch after this section).
- Iterates over each workflow and converts it into a .json file using the built-in file conversion node.
- Uploads each individual .json file to the newly created folder in Google Drive.
- Optionally finds and deletes old backup folders to keep your Google Drive clean and avoid clutter.

You get a clean, timestamped folder with all your flows — ready to restore, send, or store securely. You can trigger the workflow manually or schedule it (e.g., to run weekly on Monday mornings).

How to set it up:
Import the provided workflow JSON into your n8n instance, then set up your credentials:
- Replace the placeholder "Google demo" with your actual Google Drive OAuth2 credentials in all Google Drive nodes.
- Replace the placeholder "n8n demo" with your n8n API credentials so the workflow can fetch your flows.
- In the "Create new folder" node, replace the folder ID with your own destination folder in Google Drive where backups should be stored.
- (Optional) Enable the "Schedule Trigger" to run the backup automatically once a week or on your preferred interval.

You're ready to go — test it with the Manual Trigger first and check your Google Drive for results.
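A minimal sketch of the "retrieve all workflows" step, using n8n's public REST API. `N8N_URL` and the API key are placeholders; inside the template this call is handled by an n8n API credential on the node rather than by hand-written code.

```typescript
// Sketch: list all workflows from an n8n instance via its REST API.
// N8N_URL and N8N_API_KEY are placeholder values, not from the template.
const N8N_URL = process.env.N8N_URL ?? "http://localhost:5678";
const N8N_API_KEY = process.env.N8N_API_KEY ?? "<your-api-key>";

async function listWorkflows() {
  const res = await fetch(`${N8N_URL}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": N8N_API_KEY },
  });
  if (!res.ok) throw new Error(`n8n API returned ${res.status}`);
  const { data } = (await res.json()) as { data: { id: string; name: string }[] };
  return data; // each entry becomes one <name>.json file in the backup folder
}

listWorkflows().then((flows) => console.log(`${flows.length} workflows to back up`));
```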
Convert PostgreSQL table to CSV
CSV is a super useful and universal way to transfer data between different tools. This workflow gives an example of how to take data from PostgreSQL and easily convert it into a CSV file.

What you need
Before running the workflow, please make sure you have access to a remote PostgreSQL server and a table with data such as:

booktitle,bookauthor,read_date
Demons,Fyodor Dostoyevsky,2022-09-08
Ulysses,James Joyce,2022-05-06
Catch-22,Joseph Heller,2023-01-04
The Bell Jar,Sylvia Plath,2023-01-21
Frankenstein,Mary Shelley,2023-02-14

How it works
- Trigger the workflow on click.
- Declare the name of the spreadsheet file and its sheet names.
- Remotely connect to the PostgreSQL database and specify the query to execute (a standalone sketch of this step follows below).
- Write the query results to CSV.

The detailed process is explained further in the tutorial: https://blog.n8n.io/postgres-export-to-csv/
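For illustration, here is a minimal standalone sketch of the same Postgres-to-CSV idea using the `pg` client. The connection string and the table name (`books`) are assumptions for this example; inside n8n, the Postgres and file-conversion nodes do this work for you.

```typescript
// Sketch: query a Postgres table and write the rows out as CSV.
import { writeFileSync } from "node:fs";
import { Client } from "pg";

async function exportBooksToCsv(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query(
      "SELECT booktitle, bookauthor, read_date FROM books ORDER BY read_date"
    );
    // Header row followed by one comma-separated line per record.
    // Note: this naive join is fine for the sample data above; real CSV
    // output needs quoting for values that contain commas or newlines.
    const header = "booktitle,bookauthor,read_date";
    const lines = rows.map((r) =>
      [r.booktitle, r.bookauthor, r.read_date].map(String).join(",")
    );
    writeFileSync("books.csv", [header, ...lines].join("\n"));
  } finally {
    await client.end();
  }
}

exportBooksToCsv().catch(console.error);
```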
YouTube advanced RSS generator with Telegram formation
[Video demo](https://youtu.be/EtzJmrmCiUY)

Overview
The [n8n] YouTube Channel Advanced RSS Feeds Generator workflow generates a variety of RSS feed formats for YouTube channels without requiring API access or administrative permissions. It relies on third-party services to extract the data, making it extremely user-friendly and accessible.

Key Use Cases and Benefits
- Content Aggregation: Easily gather and syndicate content from any public YouTube channel.
- No API Key Required: Avoid the complexities and limitations of Google's API.
- Multiple Formats: Supports ATOM, JSON, MRSS, Plaintext, Sfeed, and direct YouTube XML feeds.
- Flexibility: Input can be a YouTube channel or video URL, ID, or username.

Services/APIs Utilized
This workflow integrates with:
- commentpicker.com: for retrieving YouTube channel IDs.
- rss-bridge.org: for generating the various RSS formats (see the URL-building sketch below).

Configuration Instructions
1. Start the Workflow: Activate the workflow in your n8n instance.
2. Input Details: Enter the YouTube channel or video URL, ID, or username via the provided form trigger.
3. Run the Workflow: Execute the workflow to receive links to 13 different RSS feeds, including community and video content feeds.

Additional Notes
Customization: You can modify the RSS feed formats or integrate additional services as needed.

Support and Contributions
For support, questions, or contributions, please visit the n8n community forum or the GitHub repository. We welcome contributions from the community!
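A minimal sketch of how the feed links can be assembled once a channel ID is known. The rss-bridge query parameters shown here are an assumption and may differ between rss-bridge instances and versions; the direct YouTube XML feed URL is the standard public one.

```typescript
// Sketch: build feed URLs for one channel. The channel ID is a
// hypothetical example, not a value from the template.
const channelId = "UC_x5XG1OV2P6uZZ5FSM9Ttw";

// Direct YouTube XML feed (no third-party service involved).
const youtubeXml = `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;

// rss-bridge feeds in several formats (assumed parameter names).
const formats = ["Atom", "Json", "Mrss", "Plaintext", "Sfeed"] as const;

const bridgeFeeds = formats.map((format) => {
  const params = new URLSearchParams({
    action: "display",
    bridge: "Youtube",
    c: channelId,
    format,
  });
  return `https://rss-bridge.org/bridge01/?${params.toString()}`;
});

console.log([youtubeXml, ...bridgeFeeds].join("\n"));
```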
Build persistent chat memory with GPT-4o-mini and Qdrant vector database
🧠 Long-Term Memory System for AI Agents with Vector Database

Transform your AI assistants into intelligent agents with persistent memory capabilities. This production-ready workflow implements a sophisticated long-term memory system using vector databases, enabling AI agents to remember conversations, user preferences, and contextual information across unlimited sessions.

🎯 What This Template Does
This workflow creates an AI assistant that never forgets. Unlike traditional chatbots that lose context after each session, this implementation uses vector database technology to store and retrieve conversation history semantically, providing truly persistent memory for your AI agents.

🔑 Key Features
- Persistent Context Storage: Automatically stores all conversations in a vector database for permanent retrieval.
- Semantic Memory Search: Uses advanced embedding models to find relevant past interactions based on meaning, not just keywords.
- Intelligent Reranking: Employs Cohere's reranking model to ensure the most relevant memories are used for context.
- Structured Data Management: Formats and stores conversations with metadata for optimal retrieval.
- Scalable Architecture: Handles unlimited conversations and users with consistent performance.
- No Context Window Limitations: Effectively bypasses LLM token limits through intelligent retrieval.

💡 Use Cases
- Customer Support Bots: Remember customer history, preferences, and previous issues.
- Personal AI Assistants: Maintain user preferences and conversation continuity over months or years.
- Knowledge Management Systems: Build accumulated knowledge bases from user interactions.
- Educational Tutors: Track student progress and adapt teaching based on history.
- Enterprise Chatbots: Maintain context across departments and long-term projects.

🛠️ How It Works
1. User Input: Receives messages through n8n's chat interface.
2. Memory Retrieval: Searches the vector database for relevant past conversations.
3. Context Integration: The AI agent uses retrieved memories to generate contextual responses.
4. Response Generation: Creates informed responses based on historical context.
5. Memory Storage: Stores the new conversation data for future retrieval.

📋 Requirements
- OpenAI API Key: for embeddings and chat completions
- Qdrant Instance: cloud or self-hosted vector database
- Cohere API Key: optional, for enhanced retrieval accuracy
- n8n Instance: version 1.0+ with LangChain nodes

🚀 Quick Setup
1. Import this workflow into your n8n instance.
2. Configure credentials for OpenAI, Qdrant, and Cohere.
3. Create a Qdrant collection named 'ltm' with 1024 dimensions (a sketch of this step follows below).
4. Activate the workflow and start chatting!
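A minimal sketch of setup step 3, assuming a Qdrant instance reachable at `QDRANT_URL` (plus an API key if you use Qdrant Cloud). It creates the 'ltm' collection with 1024-dimensional vectors; the dimension must match the embedding model configured in the workflow.

```typescript
// Sketch: create the 'ltm' collection via Qdrant's REST API.
// QDRANT_URL and QDRANT_API_KEY are placeholder environment values.
const QDRANT_URL = process.env.QDRANT_URL ?? "http://localhost:6333";
const QDRANT_API_KEY = process.env.QDRANT_API_KEY ?? "";

async function createLtmCollection(): Promise<void> {
  const res = await fetch(`${QDRANT_URL}/collections/ltm`, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      ...(QDRANT_API_KEY ? { "api-key": QDRANT_API_KEY } : {}),
    },
    // Cosine distance is a common default for text embeddings.
    body: JSON.stringify({ vectors: { size: 1024, distance: "Cosine" } }),
  });
  if (!res.ok) throw new Error(`Qdrant returned ${res.status}: ${await res.text()}`);
  console.log("Collection 'ltm' created");
}

createLtmCollection().catch(console.error);
```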
📊 Performance Metrics
- Response Time: 2-3 seconds average
- Memory Recall Accuracy: 95%+
- Token Usage: 50-70% reduction compared to full context inclusion
- Scalability: Tested with 100k+ stored conversations

💰 Cost Optimization
- Uses GPT-4o-mini for an optimal cost/performance balance
- Implements efficient chunking strategies to minimize embedding costs
- Reranking can be disabled to save on Cohere API costs
- Average cost: ~$0.01 per conversation

📖 Learn More
For a detailed explanation of the architecture and implementation details, check out the comprehensive guide: Long-Term Memory for LLMs using Vector Store - A Practical Approach with n8n and Qdrant

🤝 Support
- Documentation: Full setup guide in the article above
- Community: Share your experiences and get help in the n8n community forums
- Issues: Report bugs or request features on the workflow page

---

Tags: AI LangChain VectorDatabase LongTermMemory RAG OpenAI Qdrant ChatBot MemorySystem ArtificialIntelligence
Send a private message on Zulip
No description available.
Daily news digest & weekly trends with AI filtering, Slack & Google Sheets
Who is this for
This template is perfect for:
- Market Researchers tracking industry trends.
- Tech Teams wanting to stay updated on specific technologies (e.g., "AI", "Cybersecurity").
- Content Creators looking for curated news topics.
- Busy Professionals who need a high-signal, low-noise news digest.

What it does
1. Fetches News: Pulls daily articles via NewsAPI based on your chosen keyword (default: "technology"); see the request sketch below.
2. AI Filtering: Uses an AI Agent (via OpenRouter) to filter out low-quality or irrelevant clickbait.
3. Daily Digest (Slack): Summarizes the top 3 articles in English, translates the summaries to Japanese using DeepL (optional), and posts both versions to a Slack channel.
4. Data Archiving (Sheets): Extracts structured data (Title, Author, Summary, URL) and saves it to Google Sheets.
5. Weekly Trend Report: Every Monday, reads the past week's data from Google Sheets and uses AI to generate a high-level trend report and strategic insights.

How to set up
1. Configure Credentials: You will need API keys/auth for NewsAPI, OpenRouter (or OpenAI), DeepL, Google Sheets, and Slack.
2. Set up the Google Sheet: Create a sheet with the following headers in the first row: title, author, summary, url.
3. Map the Sheet: In the "Append row in sheet" and "Read sheet (weekly)" nodes, select your file and map the columns.
4. Define the Keyword: Open the "Set Keyword" node and change chatInput to the topic you want to track (e.g., "Crypto", "SaaS", "Climate Change").
5. Slack Setup: Select your desired channel in the Slack nodes.

Requirements
- n8n (self-hosted or Cloud)
- NewsAPI key (free tier available)
- OpenRouter (or any LangChain-compatible chat model like OpenAI)
- DeepL API key (for translation)
- Google Sheets account
- Slack workspace

How to customize
- Change the Language: Remove the DeepL node if you only want English, or change the target language code.
- Adjust the Prompt: Modify the "AI Agent (Filter)" system message to change how strict the news filtering is.
- Change the Schedule: Adjust the Cron nodes to run at your preferred time (currently set to daily 8 AM and weekly Monday 9 AM).
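A minimal sketch of the NewsAPI request behind the "Fetches News" step, assuming the /v2/everything endpoint. `NEWSAPI_KEY` is a placeholder, and the exact parameters the node uses may differ.

```typescript
// Sketch: fetch recent articles for a keyword from NewsAPI.
const NEWSAPI_KEY = process.env.NEWSAPI_KEY ?? "<your-key>";
const keyword = "technology"; // matches the template's default keyword

async function fetchDailyArticles() {
  const params = new URLSearchParams({
    q: keyword,
    sortBy: "publishedAt",
    language: "en",
    pageSize: "20",
    apiKey: NEWSAPI_KEY,
  });
  const res = await fetch(`https://newsapi.org/v2/everything?${params}`);
  if (!res.ok) throw new Error(`NewsAPI returned ${res.status}`);
  const { articles } = (await res.json()) as {
    articles: { title: string; author: string | null; url: string; description: string | null }[];
  };
  return articles; // handed to the AI filter, then summarized and archived
}

fetchDailyArticles().then((a) => console.log(`${a.length} articles fetched`));
```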
Bidirectional ClickUp task & Google Calendar sync with multi-calendar routing
Who’s it for
Teams that manage tasks in ClickUp and want those tasks reflected—and kept in sync—in Google Calendar automatically.

How it works
- A ClickUp Trigger captures task events (create, update, delete).
- For new tasks, the workflow creates a Google Calendar event with the correct start/end times.
- It stores a mapping between clickupTaskId and calendarEventId in a Google Sheet so later updates and deletions can target the right event (see the routing sketch below).
- Multiple lanes (personal/school/tech/internship) let you route tasks to different calendars.

How to set up
1. Assign ClickUp OAuth, Google Calendar, and Google Sheets credentials to the nodes.
2. Open the Configuration node and fill in:
   - calendarId_* for each lane
   - sheetId and sheetTabName for the mapping sheet
   - (optional) clickupTeamId
3. Enable the ClickUp Trigger and run a manual test to validate mapping creation and event syncing.

Requirements
- ClickUp workspace with OAuth permissions
- Google Calendar & Sheets access
- A Google Sheet for the event↔task mapping

How to customize the workflow
- Edit the calendar routing in the Edit Fields nodes or point them to different calendarId_* values.
- Adjust event colors/fields in the Google Calendar nodes.
- Extend the mapping sheet with extra columns (e.g., status, labels) as needed.
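A minimal sketch, not the workflow itself, of the lane-routing and mapping-row logic described above. The `MappingRow` shape and the lane names are taken from the description; the placeholder calendar IDs are assumptions, and the template keeps this data in the Google Sheet rather than in code.

```typescript
// Sketch: resolve which calendar event mirrors a given ClickUp task.
type Lane = "personal" | "school" | "tech" | "internship";

// One row in the mapping sheet: which calendar event mirrors which task.
interface MappingRow {
  clickupTaskId: string;
  calendarEventId: string;
  lane: Lane;
}

// Hypothetical lane -> calendarId_* configuration, mirroring the
// Configuration node's fields.
const calendarIdByLane: Record<Lane, string> = {
  personal: "<calendarId_personal>",
  school: "<calendarId_school>",
  tech: "<calendarId_tech>",
  internship: "<calendarId_internship>",
};

// On a task update or delete event, the stored mapping tells us exactly
// which event in which calendar to patch or remove.
function resolveEvent(mappings: MappingRow[], clickupTaskId: string) {
  const row = mappings.find((m) => m.clickupTaskId === clickupTaskId);
  if (!row) return undefined; // no event created yet for this task
  return { calendarId: calendarIdByLane[row.lane], eventId: row.calendarEventId };
}
```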
Automated resume tailoring with Telegram Bot, LinkedIn & OpenRouter AI
This n8n workflow lets you effortlessly tailor your resume for any job using Telegram and LinkedIn. Simply send a LinkedIn job URL or paste a job description to the Telegram bot, and the workflow will:
- Extract the job information (using an optional proxy if needed).
- Fetch your resume in JSON Resume format (hosted on a GitHub Gist or elsewhere).
- Use an OpenRouter-powered LLM agent to automatically adapt your resume to match the job requirements (see the sketch below).
- Generate both HTML and PDF versions of your tailored resume.
- Return the PDF file and shareable download links directly in Telegram.

The workflow is open source and designed with privacy in mind: you can host the backend yourself to keep your data entirely under your control. It requires a Telegram Bot, a public JSON Resume, and an OpenRouter account. Proxy support is available for LinkedIn scraping.

Perfect for anyone looking to quickly customize their resume for multiple roles with minimal manual effort!
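A minimal sketch of the tailoring step, assuming OpenRouter's OpenAI-compatible chat endpoint: fetch the JSON Resume, send it with the job description to an LLM, and get back an adapted resume. `RESUME_GIST_URL`, the model name, and the prompt wording are placeholders, not values from the template.

```typescript
// Sketch: adapt a JSON Resume to a job description via OpenRouter.
const OPENROUTER_KEY = process.env.OPENROUTER_API_KEY ?? "<your-key>";
const RESUME_GIST_URL = "https://gist.githubusercontent.com/<user>/<id>/raw/resume.json";

async function tailorResume(jobDescription: string): Promise<string> {
  const resume = await (await fetch(RESUME_GIST_URL)).text();

  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENROUTER_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o-mini", // placeholder; any OpenRouter model works
      messages: [
        {
          role: "system",
          content: "Adapt the following JSON Resume to the job description. Return valid JSON Resume only.",
        },
        { role: "user", content: `Resume:\n${resume}\n\nJob description:\n${jobDescription}` },
      ],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter returned ${res.status}`);
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content; // tailored JSON Resume, ready for HTML/PDF rendering
}
```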