5 templates found

Bidirectional GitHub workflow sync & version control for n8n workflows

Who is this for?
This template is ideal for n8n administrators, automation engineers, and DevOps teams who want to maintain bidirectional synchronization between their n8n workflows and GitHub repositories. It helps teams keep their workflow backups up to date and ensures consistency between their n8n instance and version control system.

What problem is this workflow solving?
Managing workflow versions across n8n and GitHub can become complex when changes happen in both places. This workflow solves that by automatically synchronizing workflows bidirectionally, ensuring that the most recent version is always available in both systems without manual intervention or version conflicts.

What this workflow does:
- Runs on a weekly schedule (every Monday) to check for synchronization needs.
- Fetches all workflows from your n8n instance and compares them with GitHub repository files.
- Identifies workflows that exist only in n8n and uploads them to GitHub as JSON backups.
- Identifies workflows that exist only in GitHub and creates them in your n8n instance.
- For workflows that exist in both places, compares timestamps and syncs the most recent version (see the sketch after this description): if the n8n version is newer → updates GitHub with the latest workflow; if the GitHub version is newer → updates n8n with the latest workflow.
- Automatically handles file naming, encoding/decoding, and commit messages with timestamps.

Setup:
1. Connect GitHub: configure GitHub API credentials in the GitHub nodes. Note: use a GitHub Personal Access Token (classic) with repo permissions to read and write workflow files.
2. Connect n8n API: provide your n8n API credentials in the n8n nodes. Check this doc.
3. Configure GitHub details in the Set GitHub Details node:
   - githubaccountname: your GitHub username or organization
   - githubreponame: the repository name where workflows should be stored
   - repoworkflowspath: the folder path in your repo (e.g., workflows or n8n-workflows)
4. Adjust the schedule: modify the Schedule Trigger if you want a different sync frequency (currently set to weekly on Mondays).
5. Test the workflow: run it manually first to ensure all connections and permissions are working correctly.

How to customize this workflow to your needs:
- Change sync frequency: modify the Schedule Trigger to run daily, hourly, or on demand.
- Add filtering: extend the Filter node to exclude certain workflows (e.g., test workflows, templates).
- Add notifications: insert Slack, email, or webhook notifications to report sync results.
- Implement conflict resolution: add custom logic for handling workflows with the same timestamp.
- Add workflow validation: include checks to validate workflow JSON before syncing.
- Branch management: sync to different branches or create pull requests instead of committing directly.
- Backup retention: add logic to maintain multiple versions or archive old workflows.

Key features:
- Bidirectional sync: handles changes from both n8n and GitHub.
- Timestamp-based conflict resolution: always keeps the most recent version.
- Automatic file naming: converts workflow names to valid filenames.
- Base64 encoding/decoding: properly handles JSON workflow data.
- Comprehensive comparison: uses dataset comparison to identify differences.
- Automated commits: includes timestamps in commit messages for traceability.

This automated synchronization workflow provides a robust backup and version-control solution for n8n workflows, ensuring your automation assets are always safely stored and consistently available across environments.
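To make the core sync decision concrete, here is a minimal TypeScript sketch of the timestamp comparison, file naming, and Base64 handling described above, written as it might appear in an n8n Code node. The interface, function names, and sample values are illustrative assumptions, not the template's actual node code:

```typescript
// Sketch of the sync decision for a workflow present in both n8n and GitHub.
// Assumes `n8nUpdatedAt` comes from the n8n API's `updatedAt` field and
// `githubCommitDate` from the GitHub commit history (both ISO 8601 strings).

interface WorkflowPair {
  name: string;
  n8nUpdatedAt: string;     // e.g. "2024-05-06T09:30:00.000Z"
  githubCommitDate: string; // last commit touching the backup file
  workflowJson: object;     // full workflow definition from n8n
}

// Convert a workflow name into a safe repository filename.
function toFileName(name: string): string {
  return name.trim().toLowerCase().replace(/[^a-z0-9-_]+/g, "-") + ".json";
}

function planSync(pair: WorkflowPair): "update-github" | "update-n8n" | "skip" {
  const n8nTime = Date.parse(pair.n8nUpdatedAt);
  const gitTime = Date.parse(pair.githubCommitDate);
  if (n8nTime > gitTime) return "update-github"; // n8n copy is newer
  if (gitTime > n8nTime) return "update-n8n";    // GitHub copy is newer
  return "skip";                                  // identical timestamps
}

// GitHub's Contents API expects file content to be Base64-encoded.
function encodeForGitHub(workflowJson: object): string {
  return Buffer.from(JSON.stringify(workflowJson, null, 2)).toString("base64");
}

const pair: WorkflowPair = {
  name: "Invoice Processing",
  n8nUpdatedAt: "2024-05-06T09:30:00.000Z",
  githubCommitDate: "2024-04-29T12:00:00.000Z",
  workflowJson: { name: "Invoice Processing", nodes: [], connections: {} },
};

console.log(toFileName(pair.name)); // invoice-processing.json
console.log(planSync(pair));        // update-github
const commitMessage = `Sync ${pair.name} at ${new Date().toISOString()}`;
```

Keeping the decision to a pure timestamp comparison is what makes the "same timestamp" case a genuine tie, which is why the customization list above suggests adding your own conflict-resolution logic there.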

By Wikus Bergh

Generate SEO-optimized blog content with Gemini, Scrapeless and Pinecone RAG

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
This advanced automation builds a fully autonomous SEO blog writer using n8n, Scrapeless, LLMs, and the Pinecone vector database. It is powered by a Retrieval-Augmented Generation (RAG) system that collects high-performing blog content, stores it in a vector store, and then generates new blog posts based on that knowledge—endlessly.

Part 1: Build a knowledge base from popular blogs
- Scrape existing articles from a well-established writer (in this case, Mark Manson) using the Scrapeless node.
- Extract content from blog pages and store it in Pinecone, a powerful vector database that supports similarity search.
- Use Gemini Embedding 001 or any other supported embedding model to encode blog content into vectors (see the sketch after this description).

Result: you'll have a searchable vector store of expert-level content, ready to be used for content generation and intelligent search.

Part 2: SERP analysis & AI blog generation
- Use Scrapeless' SERP node to fetch search results based on your keyword and search intent.
- Send the results to an LLM (like Gemini, OpenRouter, or OpenAI) to generate a keyword analysis report in Markdown, which is then converted to HTML.
- Extract long-tail keywords, search-intent insights, and content angles from this report.
- Feed everything into another LLM with access to your Pinecone-stored knowledge base, and generate a fully SEO-optimized blog post.

Setup steps

Prerequisites:
- Scrapeless API key
- Pinecone account and index setup
- An embedding model (Gemini, OpenAI, etc.)
- n8n instance with the community node n8n-nodes-scrapeless installed

Credential configuration:
- Add your Scrapeless and Pinecone credentials in n8n under the "Credentials" tab.
- Choose embedding dimensions according to the model you use (e.g., 768 for Gemini Embedding 001).

Key highlights
- Clones a real content creator: replicates knowledge and writing style from top-performing blog authors.
- Auto-scrapes hundreds of blog posts without being blocked.
- Stores expert content in a vector DB to build a reusable knowledge base.
- Performs real-time SERP analysis using Scrapeless to fetch and analyze search data.
- Generates SEO blog drafts using RAG with detailed keyword intelligence.
- Output includes: blog title, HTML summary report, long-tail keywords, and an AI-written article body.

RAG + SEO: the future of content creation
This template combines:
- AI reasoning from large language models
- Reliable data scraping from Scrapeless
- Scalable storage via the Pinecone vector DB
- Flexible orchestration using n8n nodes

This is not just an automation—it's a full-stack SEO content machine that enables you to:
- Build a domain-specific knowledge base
- Run intelligent keyword research
- Generate traffic-ready content on autopilot

💡 Use cases
- SaaS content teams cloning competitor success
- Affiliate marketers scaling high-traffic blog production
- Agencies offering automated SEO content services
- AI researchers building personal knowledge bots
- Writers automating first-draft generation with real-world tone
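To make the embed-and-store step concrete, here is a minimal TypeScript sketch of encoding text with Gemini Embedding 001 and upserting it into a Pinecone index, then querying it back for RAG context. The index name "blog-knowledge", the environment-variable names, and the metadata fields are assumptions for illustration; the template itself does this through n8n's embedding and Pinecone nodes rather than custom code:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";
import { Pinecone } from "@pinecone-database/pinecone";

// Embed a scraped blog post with Gemini Embedding 001 (768 dimensions)
// and upsert it into Pinecone. Create the index with dimension 768
// so it matches the embedding model, as the setup notes advise.
async function storePost(id: string, text: string, url: string) {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const embedder = genAI.getGenerativeModel({ model: "embedding-001" });
  const { embedding } = await embedder.embedContent(text);

  const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pc.index("blog-knowledge");
  await index.upsert([
    { id, values: embedding.values, metadata: { url, text } },
  ]);
}

// At generation time, retrieve the most similar stored posts so the
// writing LLM can ground its draft in the knowledge base.
async function similarPosts(query: string, topK = 5) {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const embedder = genAI.getGenerativeModel({ model: "embedding-001" });
  const { embedding } = await embedder.embedContent(query);

  const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pc.index("blog-knowledge");
  return index.query({
    vector: embedding.values,
    topK,
    includeMetadata: true, // return stored text for the LLM prompt
  });
}
```

The similarity query is what turns the vector store into a reusable knowledge base: each new keyword report pulls back the closest expert-level passages as context for the draft.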

By scrapeless official

Close tickets faster 🛠️ Zammad tool MCP server 💪 all 20 operations

Need help? Want access to this workflow plus many more paid workflows and live Q&A sessions with a top verified n8n creator? Join the community.

Complete MCP server exposing all Zammad Tool operations to AI agents. Zero configuration needed—all 20 operations are pre-built.

⚡ Quick Setup
1. Import this workflow into your n8n instance.
2. Activate the workflow to start your MCP server.
3. Copy the webhook URL from the MCP trigger node.
4. Connect AI agents using the MCP URL.

🔧 How it Works
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every Zammad Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders
• Native Integration: uses the official n8n Zammad Tool node with full error handling

📋 Available Operations (20 total)
Every possible Zammad Tool operation is included:

🔧 Group (5 operations)
• Create a group
• Delete a group
• Get a group
• Get many groups
• Update a group

🏢 Organization (5 operations)
• Create an organization
• Delete an organization
• Get an organization
• Get many organizations
• Update an organization

🔧 Ticket (4 operations)
• Create a ticket
• Delete a ticket
• Get a ticket
• Get many tickets

👤 User (6 operations)
• Create a user
• Delete a user
• Get a user
• Get many users
• Get currently logged-in user
• Update a user

🤖 AI Integration
Parameter handling: AI agents automatically provide values for
• resource IDs and identifiers
• search queries and filters
• content and data payloads
• configuration options

Response format: native Zammad Tool API responses with full data structure.
Error handling: built-in n8n error management and retry logic.

💡 Usage Examples
Connect this MCP server to any AI agent or workflow (see the call sketch after this description):
• Claude Desktop: add the MCP server URL to its configuration
• Custom AI apps: use the MCP URL as a tool endpoint
• Other n8n workflows: call MCP tools from any workflow
• API integration: direct HTTP calls to MCP endpoints

✨ Benefits
• Complete coverage: every Zammad Tool operation available
• Zero setup: no parameter mapping or configuration needed
• AI-ready: built-in $fromAI() expressions for all parameters
• Production-ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
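For the "direct HTTP calls" option: MCP is a JSON-RPC 2.0 protocol, and the standard method for invoking a tool is tools/call. The TypeScript sketch below is a hypothetical request — the webhook URL, the tool name ticket_create, and the argument fields are placeholder assumptions, not values taken from this workflow, and your n8n MCP trigger may use a streaming transport rather than a plain POST. Inside the workflow itself, each tool parameter is pre-wired with an expression such as $fromAI('title', 'Ticket title', 'string'), which lets the connected agent supply the value at call time.

```typescript
// Hypothetical direct call to the MCP endpoint over JSON-RPC 2.0.
// The URL below is a placeholder; copy the real one from the MCP trigger node.
const MCP_URL = "https://your-n8n-instance/webhook/zammad-mcp"; // assumption

const response = await fetch(MCP_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",       // standard MCP method for tool invocation
    params: {
      name: "ticket_create",    // assumed tool name for "Create a ticket"
      arguments: {              // values an agent would fill via $fromAI()
        title: "Printer not working",
        group: "Support",
        customer: "jane@example.com",
      },
    },
  }),
});
console.log(await response.json());
```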

By David Ashby

Automate Singapore COE price analysis & purchase timing with GLM-4.5 AI predictions

Introduction
Automates Singapore COE price tracking, predicts trends using AI, and recommends optimal car purchase timing. Scrapes LTA data biweekly, analyzes historical trends, forecasts the next 6 bidding rounds, and sends alerts when buying windows appear—saving time and identifying cost-saving opportunities.

How it Works
Biweekly trigger scrapes LTA COE data → processes historical trends → AI predicts 6-month prices → compares current vs. forecast → generates buy/wait recommendations → alerts sent via Gmail or Telegram.

Setup Steps
1. Add NVIDIA/OpenAI API credentials in n8n.
2. Connect Google Sheets for data storage.
3. Authenticate Gmail/Telegram for notifications.
4. Schedule the trigger for Wednesdays 8 PM SGT.
5. Configure alert thresholds in the conditional nodes.

Workflow
Schedule Trigger → HTTP Request (Scrape LTA) → Data Processing → Google Sheets (Store) → AI Prediction → Analysis Engine → Conditional Logic → Gmail/Telegram Notification

Workflow Steps
- Scraping: extract COE prices from OneMotoring
- Processing: calculate moving averages, volatility, and seasonal trends (see the sketch after this description)
- Storage: save to Google Sheets with timestamps
- Prediction: AI forecasts the next 6 bidding rounds
- Analysis: compare current vs. predicted prices and generate a recommendation
- Notification: alerts via email/Telegram

Prerequisites
NVIDIA/OpenAI API key, Google account (Sheets), Gmail/Telegram for notifications, basic knowledge of COE categories

Use Cases
First-time buyers monitoring price dips; fleet managers timing bulk purchases

Customization
Add economic indicators, integrate car loan calculators, track parallel-import car prices

Benefits
Saves hours of manual monitoring, captures 10–15% price dips, and provides data-driven purchase timing (potential $5K–$15K savings)
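The processing and analysis steps reduce to simple statistics over the stored price history. Here is a minimal TypeScript sketch, assuming premiums are plain numbers in SGD; the 4-round window, the 5% buy margin, and the sample prices are illustrative assumptions, not the template's actual thresholds:

```typescript
// Simple moving average over the most recent `window` premiums.
function movingAverage(prices: number[], window: number): number {
  const slice = prices.slice(-window);
  return slice.reduce((sum, p) => sum + p, 0) / slice.length;
}

// Volatility as the standard deviation over the same window.
function volatility(prices: number[], window: number): number {
  const slice = prices.slice(-window);
  const mean = movingAverage(prices, window);
  const variance =
    slice.reduce((sum, p) => sum + (p - mean) ** 2, 0) / slice.length;
  return Math.sqrt(variance);
}

// "Buy" when the current premium sits meaningfully below the forecast;
// the 5% margin is an assumed alert threshold.
function recommend(current: number, forecastAvg: number): "BUY" | "WAIT" {
  return current < forecastAvg * 0.95 ? "BUY" : "WAIT";
}

// Example with made-up Category A premiums (SGD):
const history = [95000, 96500, 94000, 92500, 93000, 91000];
const forecastNext6 = [94000, 95000, 96000, 95500, 97000, 96500];

const sma = movingAverage(history, 4);               // recent trend
const vol = volatility(history, 4);                  // price swings
const forecastAvg = movingAverage(forecastNext6, 6); // predicted level
console.log({ sma, vol, advice: recommend(history.at(-1)!, forecastAvg) });
```

In the workflow, a conditional node applying a comparison like this is what gates the Gmail/Telegram alert, so tightening or loosening the margin directly controls how often you are notified.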

By Cheng Siong Chin

Digitize business cards to Notion database with Gemini Vision OCR

🧩 Summary
Easily digitize and organize your business cards! This workflow allows you to upload a business card image, automatically extract contact information using Google Gemini's OCR and vision model, and save the structured data into a Notion database—no manual typing required.

Perfect for teams or individuals who want to centralize client contact info in Notion after networking events or meetings.

---

⚙️ How it works
1. Form Submission: Upload a business card image (.jpg, .png, or .jpeg) through an n8n form. Optionally select a category (e.g., Partner, Client, Vendor).
2. AI-Powered OCR (Google Gemini): The uploaded image is sent to Google Gemini Vision for intelligent text recognition and entity extraction. Gemini returns structured text data such as:

```json
{
  "Name": "Jung Hyun Park",
  "Position": "Head of Development",
  "Phone": "021231234",
  "Mobile": "0101231234",
  "Email": "abc@dc.com",
  "Company": "TOV",
  "Address": "6F, Donga Building, 212, Yeoksam-ro, Gangnam-gu, Seoul",
  "Website": "www.tov.com"
}
```

3. JSON Parsing & Cleanup: The text response from Gemini is cleaned and parsed into a valid JSON object using a Code node (see the sketch after this description).
4. Save to Notion: The parsed data is automatically inserted into your Notion database (Customer Business Cards). Fields such as Name, Email, Phone, Address, and Company are mapped to Notion properties.

---

🧠 Used Nodes
- Form Trigger – captures the uploaded business card and category input
- Google Gemini (Vision) – extracts contact details from the image
- Code – parses Gemini's output into structured JSON
- Notion – saves extracted contact info to your Notion database

---

📦 Integrations

| Service | Purpose | Node Type |
|----------|----------|-----------|
| Google Gemini (PaLM) | Image-to-text extraction (OCR + structured entity parsing) | @n8n/n8n-nodes-langchain.googleGemini |
| Notion | Contact data storage | n8n-nodes-base.notion |

---

🧰 Requirements
- A connected Google Gemini (PaLM) API credential
- A Notion integration with edit access to your database

---

🚀 Example Use Cases
- Digitize stacks of collected business cards after a conference
- Auto-save new partner contacts to your CRM database in Notion
- Build a searchable Notion-based contact directory
- Combine with Notion filters or rollups to manage client relationships

---

💡 Tips
- You can easily extend this workflow by adding an email notification node to confirm successful uploads.
- For multilingual cards, Gemini Vision handles mixed-language text recognition well.
- Adjust the Gemini model (gemini-1.5-flash or gemini-1.5-pro) based on your accuracy vs. speed needs.

---

🧾 Template Metadata

| Field | Value |
|-------|--------|
| Category | AI + Notion + OCR |
| Difficulty | Beginner–Intermediate |
| Trigger Type | Form Submission |
| Use Case | Automate business card digitization |
| Works with | Google Gemini, Notion |
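The "JSON Parsing & Cleanup" step typically has to strip the markdown code fences Gemini wraps around its answer before parsing. A minimal TypeScript sketch of that cleanup, assuming the field names from the example output above; the exact Code-node wiring and how Gemini labels the fields can vary:

```typescript
// Gemini often wraps JSON output in markdown code fences with a "json"
// language tag. Strip the fences, then parse into a typed object.
interface BusinessCard {
  Name?: string;
  Position?: string;
  Phone?: string;
  Mobile?: string;
  Email?: string;
  Company?: string;
  Address?: string;
  Website?: string;
}

function parseGeminiResponse(raw: string): BusinessCard {
  // Remove a leading fence (with optional "json" tag) and a trailing fence.
  const cleaned = raw
    .replace(/^`{3}(?:json)?\s*/i, "")
    .replace(/`{3}\s*$/, "")
    .trim();
  try {
    return JSON.parse(cleaned) as BusinessCard;
  } catch {
    // Fall back to grabbing the first {...} block if prose surrounds it.
    const match = cleaned.match(/\{[\s\S]*\}/);
    if (!match) throw new Error("No JSON object found in Gemini response");
    return JSON.parse(match[0]) as BusinessCard;
  }
}

// Example (the fence is built at runtime to keep this sketch self-contained):
const fence = "`".repeat(3);
const raw = fence + 'json\n{ "Name": "Jung Hyun Park", "Company": "TOV" }\n' + fence;
console.log(parseGeminiResponse(raw).Name); // "Jung Hyun Park"
```

The fallback branch matters in practice: vision models occasionally add a sentence of commentary around the JSON, and extracting the first brace-delimited block keeps the Notion insert from failing on those runs.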

By JinPark