Stephan Koning

Account Executive by day, Noco builder for fun at night, and always a proud dad of Togo the Samoyed.

Total Views: 2,760
Templates: 7

Templates by Stephan Koning

Email parser for RAG agent powered by Gmail and Mem0

This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Alternatively, you can delete the community node and use the HTTP node instead.

Most email agent templates are fundamentally broken. They're stateless: they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system.

This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email, then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

---

How It Works
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.

Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's good practice for keeping the data clean and orderly for the AI.

AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to extract the core message, determine the sentiment (Positive, Negative, Neutral), identify potential red flags, and pull out key topics and keywords. The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.

Quality Control (The Parser): We don't blindly trust the AI's first draft. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable. (An illustrative example of the parsed output follows below.)

Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0, and the analysis is logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

---

Setup Requirements
To deploy this system, you'll need the following:
An active n8n instance.
Gmail API credentials.
An API key for your primary LLM (e.g., OpenAI).
An API key for your parsing LLM (e.g., Mistral AI).
An account with mem0.ai for the memory layer.
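As a rough illustration only, the structured analysis that the output parser enforces might look like the object below. The field names are assumptions for the sake of the example, not the exact schema shipped in the template.

```javascript
// Illustrative shape of the parsed email analysis (assumed field names).
// This is the kind of object the Auto-fixing Output Parser would coerce the LLM output into.
const analysis = {
  sender: "client@example.com",          // taken from the Set Target Email node
  subject: "Re: Q3 proposal",
  summary: "Client asks for a revised quote and a call next week.",
  sentiment: "Positive",                  // Positive | Negative | Neutral
  redFlags: ["Budget concerns mentioned"],
  topics: ["pricing", "follow-up call"],
};

// The mem0 step then logs this object against the sender's address,
// so later lookups by email return the full history for that contact.
console.log(JSON.stringify(analysis, null, 2));
```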

By Stephan Koning
648 views

Automate meeting intelligence with VEXA, OpenAI & Mem0 for conversation insights

VEXA: AI-Powered Meeting Intelligence

I'll be honest, I built this because I was getting lazy in meetings and missing key details. I started with a simple VEXA integration for transcripts, then added AI to pull out summaries and tasks. But that only solved part of the problem. The real breakthrough came when we integrated Mem0, creating a persistent memory of every conversation. Now you can stop taking notes and actually focus on the person you're talking to, knowing a system is tracking everything that matters. This is the playbook for how we built it.

How It Works
This isn't just one workflow; it's a two-part system designed to manage the entire meeting lifecycle from start to finish.

Bot Management: It starts when you flick a switch in your CRM (Baserow). A command deploys or removes an AI bot from Google Meet. No fluff: it's there when you need it, gone when you don't. The workflow uses a quick "digital sticky note" in Redis to remember who the meeting is with and instantly updates the status in your Baserow table (a rough sketch of that sticky note follows below).

AI Analysis & Memory: Once the meeting ends, VEXA sends the transcript over. Using the client ID (thank god for Redis), we feed the conversation to an AI model (OpenAI). It doesn't just summarize; it extracts actionable next steps and potential risks. All this structured data is then logged into a memory layer (Mem0), creating a permanent, searchable record of every client conversation.

Setup Steps: Your Action Plan
This is designed for rapid deployment. Here's what you do:
Register Webhook: Run the manual trigger in the workflow once. This sends your n8n webhook URL to VEXA, telling it where to dump transcripts after a call.
Connect Your CRM: Copy the vexa-start webhook URL from n8n. Paste it into your Baserow automation so it triggers when you set the "Send Bot" field to Start_Bot.
Integrate Your Tools: Plug your VEXA, Mem0, Redis, and OpenAI API credentials into n8n.
Use the Baserow Template: I've created a free Baserow template to act as your control panel. Grab it here: https://baserow.io/public/grid/t5kYjovKEHjNix2-6Rijk99y4SDeyQY4rmQISciC14w. It has all the fields you need to command the bot.

Requirements
An active n8n instance or cloud account.
Accounts for VEXA.ai, Mem0.ai, Baserow, and OpenAI.
A Redis database.
Your Baserow table must have these fields: Meeting Link, Bot Name, Send Bot, and Status.

Next Steps: Getting More ROI
This workflow is the foundation. The real value comes from what you build on top of it.
Automate Follow-ups: Use the AI-identified next steps to automatically trigger follow-up emails or create tasks in your project management tool.
Create a Unified Client Memory: Connect your email and other communication platforms. Use Mem0 to parse and store every engagement, building a complete, holistic view of every client relationship.
Build a Headless CRM: Combine these workflows to build a fully AI-powered system that handles everything from lead capture to client management without any manual data entry.

Copy the workflow and stop taking notes.
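To make the Redis "sticky note" concrete, here is a minimal sketch of what that step could do. The key name, stored fields, and expiry are illustrative assumptions, not the template's exact values.

```javascript
// Hedged sketch: when the bot is deployed, remember which Baserow row / client the
// meeting belongs to, so the transcript webhook can be matched back later.
const { createClient } = require("redis");

async function rememberMeeting(meetingId, baserowRowId, clientName) {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();

  // e.g. key "vexa:meeting:<meetingId>" -> row to update when the transcript arrives
  await redis.set(
    `vexa:meeting:${meetingId}`,
    JSON.stringify({ baserowRowId, clientName }),
    { EX: 60 * 60 * 6 } // expire after a few hours so stale notes clean themselves up
  );

  await redis.quit();
}
```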

By Stephan Koning
470 views

WhatsApp outbound messaging with Baserow & WasenderAPI

Master Outbound WhatsApp: Baserow & WasenderAPI

This workflow integrates with your Baserow 'Messages' table, triggering on 'Sent' status. Messages fire via WasenderAPI and are rigorously logged as 'Outbound' in Baserow. Gain total control; drive results.

How it works
Monitors the Baserow 'Messages' table for 'Sent' status.
Sends messages via WasenderAPI (a rough mapping sketch follows below).
Logs outbound details in Baserow.

Who's it for
Teams dominating outbound WhatsApp and centralizing logging in Baserow. Demand communication efficiency? This is your solution.

Setup Steps
Rapid implementation. Action plan:
Activate all critical workflow nodes.
Copy the Sent_whatsapp webhook URL.
Configure a Baserow automation (on 'Sent' status) to trigger the webhook.
Ensure the Baserow 'Messages' table includes a 'Status' field (with a 'Sent' option), a linked 'WhatsApp Number' field, and a 'Message Content' field. (Optional: a Baserow Message Form for input.)
Store your WasenderAPI and Baserow API tokens in n8n Credentials. Security is non-negotiable.

Requirements
Active n8n instance (self-hosted/cloud).
WasenderAPI.com trial/subscription.
Baserow account with pre-configured 'Contacts' and 'Messages' tables (linked).
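For orientation only, the hand-off from the Baserow webhook to the send step might look like the sketch below. The field names are assumptions, and the endpoint URL is a placeholder; the real request shape must come from the WasenderAPI documentation.

```javascript
// Hedged sketch of the mapping done between the Baserow webhook and the HTTP send.
// "WhatsApp Number" / "Message Content" are assumed field names; the URL is NOT the
// real WasenderAPI endpoint, just a stand-in to show where the call happens.
const row = $json.items ? $json.items[0] : $json;   // Baserow webhook body

const outbound = {
  to: row["WhatsApp Number"],
  text: row["Message Content"],
};

const response = await fetch("https://example-wasender-endpoint/send", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${$env.WASENDER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(outbound),
});

// A Baserow node then writes the log record with direction "Outbound".
return [{ json: { ...outbound, status: response.status, direction: "Outbound" } }];
```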

By Stephan Koning
272 views

Generate pro construction quotes from jotform to email with Supabase CRM

Who it's for
Construction and renovation businesses that need to generate detailed quotes from customer inquiries: plasterers, painters, contractors, renovation specialists, or any construction service provider handling quote requests through online forms.

What it does
Automatically transforms JotForm submissions into professional, itemized construction quotes with complete CRM tracking, with no subscription needed (saving €200-500/year). When a customer fills in your project request form (specifying wall/ceiling areas, finish types, ceiling heights, wet areas, prep work), the workflow extracts measurements, normalizes service selections, applies intelligent pricing rules from your Supabase catalog, calculates line items with material and labor costs plus proper VAT handling, stores everything in a structured CRM pipeline (customer → project deal → estimate), and generates a branded HTML email ready for delivery. This self-hosted pricing engine replaces paid invoicing software for quote generation, saving thousands over time while eliminating manual takeoffs and quote preparation: from 30-60 minutes down to under 30 seconds.

How it works
Stage 1: JotForm webhook triggers → Parser extracts project data (m² measurements, service types, property details) → Normalize Dutch construction terms to database values → Save the raw submission for an audit trail.
Stage 2: Upsert the customer record (idempotent on email) → Create a project deal → Link it to the form submission.
Stage 3: Fetch active pricing rules → Calculate line items based on square meters, service type (smooth plaster vs decorative), ceiling height premiums, property status (new build vs renovation), and wet area requirements → Apply conditional logic (high ceilings = price multiplier, prep work charges, finish level) → Group duplicate items → Save the estimate header plus individual lines. (See the pricing sketch after this section.)
Stage 4: Query an optimized view (single call, all data) → Generate a professional HTML email with logo, itemized services table (description, m², unit price, totals), VAT breakdown, CTA buttons, and legal disclaimer.

Setup requirements
Supabase account (free tier sufficient): database for CRM + pricing catalog.
JotForm account (free tier works): form builder with webhook support.
Email service: Gmail, SendGrid, or similar (add your own email node).

How to set up
Database setup (2 minutes): Run this workflow's "SQL Generator" node to output the complete schema. Copy the output, paste it into the Supabase SQL Editor, and click Run. This creates 9 tables + 1 optimized view + sample construction services (plastering €21-32/m², painting €12-15/m², ornamental work, ceiling finishes).
Credentials: Add Supabase credentials to n8n (Project URL + Service Role Key from Supabase Settings → API). No JotForm credentials are needed (uses a webhook).
JotForm webhook: Clone the demo construction form: https://form.jotform.com/252844786304060. Form fields: property type, postcode, services needed, wall/ceiling m², finish level, ornament quantities, molding meters, wet areas, ceiling heights, prep removal, start date, customer contact. Then go to Settings → Integrations → Webhooks, add your n8n webhook URL, and test with a preview submission.
Customize email: Update the company info in the "Generate Email HTML" node (logo, business address, contact details, Chamber of Commerce number, VAT number) and adjust colors/branding in the HTML template. Available in Dutch and English versions.

How to customize
Add your own construction services by editing the price_catalog table in Supabase (no code changes):

```sql
INSERT INTO price_catalog (item_code, name, unit_price, vat_rate, unit_type)
VALUES ('DRYWALL_INSTALL', 'Drywall Installation', 18.50, 9, 'm²');
```
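As a rough illustration of the Stage 3 logic, the line-item calculation could look something like the sketch below. The rule names, multipliers, surcharge values, and field names are assumptions chosen for the example, not the template's exact pricing rules.

```javascript
// Hedged sketch of the Stage 3 pricing step: turn measured areas plus catalog rules
// into line items with VAT. Multipliers and field names are illustrative assumptions.
function buildLineItems(project, catalog) {
  const items = [];

  for (const service of project.services) {
    const rule = catalog.find((r) => r.item_code === service.item_code);
    if (!rule) continue;

    let unitPrice = rule.unit_price;
    // Conditional logic, e.g. a premium for high ceilings or renovation prep work.
    if (project.ceiling_height_m > 3) unitPrice *= 1.15;
    if (project.property_status === "renovation" && service.needs_prep) unitPrice += 4.0;

    const net = unitPrice * service.area_m2;
    items.push({
      description: rule.name,
      quantity: service.area_m2,
      unit: rule.unit_type,
      unit_price: Number(unitPrice.toFixed(2)),
      vat_rate: rule.vat_rate,
      total_excl_vat: Number(net.toFixed(2)),
      total_incl_vat: Number((net * (1 + rule.vat_rate / 100)).toFixed(2)),
    });
  }
  return items;
}

// Example: 40 m² of smooth plaster at €24/m² with a 9% VAT rate.
console.log(buildLineItems(
  { ceiling_height_m: 2.6, property_status: "renovation",
    services: [{ item_code: "PLASTER_SMOOTH", area_m2: 40, needs_prep: false }] },
  [{ item_code: "PLASTER_SMOOTH", name: "Smooth plaster", unit_price: 24, vat_rate: 9, unit_type: "m²" }]
));
```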

By Stephan Koning
221 views

Classify & extract data from floorplans with Mistral AI OCR & JigsawStack

🌊 What it Does
This workflow automatically classifies uploaded files (PDFs or images) as floorplans or non-floorplans. It filters out junk files, then analyzes valid floorplans to extract room sizes and measurements.

👥 Who it's For
Built for real estate platforms, property managers, and automation builders who need a trustworthy way to detect invalid uploads while quickly turning true floorplans into structured, reusable data.

⚙️ How it Works
1. User uploads a file (PDF, JPG, PNG, etc.).
2. The workflow routes the file based on type for specialized processing.
3. A two-layer quality check is applied using heuristics and AI classification.
4. A confidence score determines if the file is a valid floorplan.
5. Valid floorplans are passed to a powerful OCR/AI for deep analysis.
6. Results are returned as JSON and a user-friendly HTML table.

🧠 The Technology Behind the Demo
This MVP is a glimpse into a more advanced commercial system. It runs on a custom n8n workflow that leverages Mistral AI's latest OCR technology. Here's what makes it powerful:
Structured Data Extraction: The AI is forced to return data in a clean, predictable JSON Schema. This isn't just text scraping; it's a reliable data pipeline.
Intelligent Data Enrichment: The workflow doesn't just extract data, it enriches it. A custom script automatically calculates crucial metrics like wall surface area from the floor dimensions, even using fallback estimates if needed. (A sketch of this enrichment step follows below.)
Automated Aggregation: It goes beyond individual rooms by automatically calculating totals per floor level and per room type, providing immediate, actionable insights.

While this demo shows the core classification and measurement (Step 1), the full commercial version includes Step 2 & 3 (Automated Offer Generation), currently in use by a client in the construction industry.

Test the Live MVP: https://form0.app/forms/drTI6g

📋 Requirements
Jigsaw Stack API Key
n8n Instance
Webhook Endpoint

🎨 Customization
Adjust thresholds, fine-tune heuristics, or swap OCR providers to better match your business needs and downstream integrations.
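To make the enrichment idea concrete, here is a minimal sketch of a wall-surface calculation. The field names and the 2.6 m fallback ceiling height are assumptions for illustration, not the exact values used in the commercial system.

```javascript
// Hedged sketch of the enrichment step: estimate wall surface area per room from the
// extracted floor dimensions, falling back to a default height when OCR finds none.
function enrichRoom(room, defaultCeilingHeight = 2.6) {
  const { lengthM, widthM } = room;
  const height = room.ceilingHeightM ?? defaultCeilingHeight; // fallback estimate

  const floorArea = lengthM * widthM;
  const perimeter = 2 * (lengthM + widthM);
  const wallArea = perimeter * height; // gross wall surface, ignoring doors/windows

  return { ...room, floorAreaM2: +floorArea.toFixed(2), wallAreaM2: +wallArea.toFixed(2) };
}

// Example: a 4 x 3 m room with no height in the OCR output falls back to 2.6 m.
console.log(enrichRoom({ name: "Bedroom 1", lengthM: 4, widthM: 3 }));
// -> { ..., floorAreaM2: 12, wallAreaM2: 36.4 }
```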

By Stephan Koning
179 views

Compare LinkedIn profiles against job descriptions with Groq AI & GhostGenius

Recruiter Mirror is a proof-of-concept ATS analysis tool for SDRs/BDRs. Compare your LinkedIn profile or CV to job descriptions and get recruiter-ready insights. By comparing candidate profiles against job descriptions, it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn LinkedIn into a recruiter-ready magnet.

The workflow runs Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook. Here is the list of tools & APIs required to set it up:

---

🔧 Tools & APIs Required

n8n (Automation Platform): Either n8n Cloud or a self-hosted n8n instance. Used to orchestrate the workflow, manage nodes, and handle credentials securely.

Webhook Node (Form Intake): Captures the LinkedIn profile (LinkedInCV) and job posting (LinkedInJD) links submitted by the user. Acts as the starting point for the workflow.

GhostGenius API: Endpoints used: /v2/profile (scrapes and returns structured CV/LinkedIn data) and /v2/job (scrapes and returns structured job description data). Auth: requires valid credentials (e.g., API key / header auth).

Groq LLM API (via n8n node): Model used: moonshotai/kimi-k2-instruct (via the Groq Chat Model node). Purpose: runs the ATS Recruiter Check, comparing the CV JSON against the JD JSON, then outputs structured JSON per the ATS schema. Auth: Groq account + saved API credentials in n8n.

Code Node (JavaScript Transformation): Parses Groq's JSON output safely (JSON.parse) and generates clean, recruiter-ready HTML summaries with structured sections: status, reasoning, recommendation, matched/missing keywords, and optimization tips. (A hedged sketch of this step follows below.)

n8n Native Nodes: Set & Aggregate nodes rebuild the structured CV & JD objects; a Merge node combines CV data with the job description for comparison; an If node validates the LinkedIn URL before processing (with fallback error messaging); a Respond to Webhook node sends back the final recruiter-ready insights as JSON (or HTML).

---

⚠️ Important Notes
Credentials: Store API keys & auth headers securely inside the n8n Credentials Manager (never hardcode them inside nodes).
Proof of Concept: This workflow demonstrates feasibility but is not production-ready (scraping stability, LinkedIn terms of use, and API limits should be considered before real deployments).
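For illustration, the Code node step could be sketched roughly as below. The input property and the field names mirror the sections listed above, but the exact schema in the template may differ.

```javascript
// Hedged sketch of the Code node: parse the LLM's JSON output defensively and render
// a small HTML summary. Field names are assumptions based on the sections above.
const raw = $json.output ?? $json.text ?? "";   // wherever the Groq node puts its reply

let result;
try {
  // Strip accidental code fences before parsing, then JSON.parse safely.
  result = JSON.parse(raw.replace(/^```json\s*|```$/g, "").trim());
} catch (err) {
  return [{ json: { error: "Model did not return valid JSON", detail: err.message } }];
}

const html = `
  <h3>ATS Recruiter Check: ${result.status}</h3>
  <p>${result.reasoning}</p>
  <p><strong>Recommendation:</strong> ${result.recommendation}</p>
  <p><strong>Matched keywords:</strong> ${(result.matched_keywords || []).join(", ")}</p>
  <p><strong>Missing keywords:</strong> ${(result.missing_keywords || []).join(", ")}</p>`;

return [{ json: { ...result, html } }];
```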

By Stephan Koning
170 views

Prevent duplicate processing with Redis item state tracking

I built this tool because we faced a real, recurring problem: managing hundreds of client projects in a weekly automated loop. There was a time when a single error in that process could create a complete data mess, forcing us to manually clean and re-run everything. The Item Tracker was our solution. It proved that something simple, when used correctly, can be a game-changer for maintaining order and reliability in your workflows (at least it was for us).

---

How the System Works: A Story of Order from Chaos
Our main automation, which fetches and summarizes data, is where the heavy lifting happens. But its newfound stability comes from a simple, critical collaboration with the Item Tracker. It's like a two-step handshake that happens for every single project.
Our main workflow starts by getting a long list of active projects. For each project, it first asks the Item Tracker: "Is this one already being worked on?" If the answer is no, the Item Tracker immediately puts a temporary "in-progress" note on the project. Once our main workflow successfully completes its task for that project, it tells the Item Tracker to remove the "in-progress" note and set a "completed" note.
This simple process is our safety net. If a task fails, that "in-progress" note will eventually disappear, allowing the system to confidently pick up and re-run only that specific item later. This saves us from having to start the entire job over from scratch.

Key Components & Their Purpose
Main Workflow: The primary automation that does the heavy lifting, like getting a list of projects and connecting to HubSpot.
Item Tracker Utility: The smart part of the system. This separate tool keeps a simple record of each project's status at any given moment.
Redis Database: The fast, central hub where all of the Item Tracker's notes are stored. It's the engine that makes the entire system reliable.

---

The Item Tracker in Action: Your Digital To-Do List
For beginners, the names of the tracking notes (called "keys") might seem confusing, but the idea is actually simple. Imagine a digital to-do list for every project. A key is just the project's name on that list. Every key has three parts that tell you everything you need to know:
The Group: The first part groups all similar items together, like all your HubSpot projects.
The ID: The middle part is the project's unique ID, so you know exactly which project you're talking about.
The Status: The last part is a simple word that shows its status, like in_progress or completed.
This simple naming system is the secret to keeping hundreds of projects organized, so you can easily see what's happening and what needs attention. (A small sketch of this key pattern follows below.)

---

Overall Business Value
This solution directly addresses the pain of large-scale automation failures. It gave us a new level of confidence in our automated processes. Instead of facing the chaos of a messy run, this system provides immediate visibility into which project failed and why. It eliminates the need for manual cleanup and allows us to confidently re-run a specific item without risking data corruption across the entire set. The result is a highly reliable and scalable process that saves time, reduces frustration, and maintains data integrity.
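As a minimal sketch of the key pattern and the two-step handshake described above: "hubspot" as the group name, a SET NX claim for the in-progress note, and a TTL that lets failed runs expire are all illustrative assumptions, not the template's exact implementation.

```javascript
// Hedged sketch of the <group>:<id>:<status> key layout and the claim/complete handshake.
const { createClient } = require("redis");

async function processProject(projectId, doWork) {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();

  // Key layout: <group>:<id>:<status>, e.g. "hubspot:12345:in_progress"
  const inProgressKey = `hubspot:${projectId}:in_progress`;
  const completedKey = `hubspot:${projectId}:completed`;

  // Step 1: only claim the project if nobody else is working on it.
  // The expiry means a crashed run's note eventually disappears, so the item can be retried.
  const claimed = await redis.set(inProgressKey, "1", { NX: true, EX: 3600 });
  if (claimed === null) { await redis.quit(); return "skipped"; }

  try {
    await doWork(projectId);
    // Step 2: swap the in-progress note for a completed note.
    await redis.del(inProgressKey);
    await redis.set(completedKey, new Date().toISOString());
    return "completed";
  } finally {
    await redis.quit();
  }
}
```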

By Stephan Koning
164 views