Automated resume screening & ranking with Llama 4 AI and Google Workspace
Target Audience
This workflow is a good fit for internal talent acquisition teams, recruitment agencies, HR professionals, and hiring managers who want to bulk-automate the initial screening of CVs and resumes. For example, each candidate is automatically shortlisted or rejected, together with a score and a rationale for the decision. By eliminating manual evaluation and screening, the AI agent gives you a standardized, efficient, and scalable way to handle large volumes of applications. With bulk automation, you can focus on strategic decision-making rather than tedious screening tasks, ensuring a faster, more accurate, and fairer hiring process.

Key Focus
This workflow focuses on organized file and folder management, a trackable candidate CV pipeline, a maintainable job description, and an autonomous AI agent.
- Organized Folder & File Structure – CVs are automatically categorized based on their status, ensuring a structured workflow and easy retrieval.
- Candidate Tracker – A real-time tracking sheet records the state of each CV, allowing recruiters to monitor shortlisted, rejected, or KIV (Keep in View) candidates.
- AI Agent for Decision Automation – The AI agent autonomously orchestrates screening decisions, replacing manual LLM configuration with dynamic AI-driven evaluations for scalability and accuracy.
- Maintainable Job Description Management – A structured job description file can be updated continuously, keeping hiring criteria flexible and aligned with recruitment needs.
- Email Notifications – The workflow automatically sends receipt confirmations when processing completes, giving recruiters timely updates.

Features – Automated Resume Screening Workflow
This workflow uses Groq-hosted Llama 4 for intelligent resume analysis, speeding up screening by generating a matching score, a result (shortlisted/rejected/KIV), and key insights explaining each candidate's suitability for the provided job description.

Step-by-Step Process:
1. Monitor Google Drive: Listen for new CV/resume files in Google Drive.
2. Retrieve Resume: Download the CV from Google Drive.
3. Extract Resume Data: Extract text content from the CV PDF file.
4. Extract Job Description Data: Extract text content from the job description document.
5. Analyze with Groq: Generate a matching score based on the job requirements [SCORE: 1-10], a decision on job suitability [SHORTLISTED/REJECTED/KIV], and actionable insights explaining the decision [REASON]. A minimal sketch of this prompt-and-parse step is shown at the end of this description.
This ensures a fast, efficient, and accurate screening process, eliminating manual evaluation.

Setup Guide
Step-by-Step Instructions
1. Make sure all credentials are set up (Groq, Google Drive, Gmail, Google Sheets, Google Docs). See the official n8n documentation for each node, as well as the setup notes.
2. Folder & File Setup:
   - Create a Google Drive folder like this (view directory example).
   - Create a job description like this (view file example).
   - Configure a tracker like this (Candidate Name, AI Score, AI Verdict, AI Reason) (view file example).
   - Customize the email confirmation report as you like.
3. You are ready to go!
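The sketch below illustrates how the "Analyze with Groq" step can work conceptually: build one prompt from the resume and job description, call Groq's OpenAI-compatible chat endpoint, and parse the SCORE/VERDICT/REASON lines back into a record for the tracker sheet. It is not the exact n8n node configuration, and the model ID is an assumption; check your Groq account for the Llama 4 model name available to you.

```typescript
// Minimal sketch of the "Analyze with Groq" step, assuming GROQ_API_KEY is set and
// that the Llama 4 model ID on Groq is "meta-llama/llama-4-scout-17b-16e-instruct"
// (an assumption; substitute the model ID shown in your Groq console).

interface ScreeningResult {
  score: number;                                // 1-10 match score
  verdict: "SHORTLISTED" | "REJECTED" | "KIV";  // screening decision
  reason: string;                               // rationale for the decision
}

async function screenResume(resumeText: string, jobDescription: string): Promise<ScreeningResult> {
  const prompt = [
    "You are a recruitment assistant. Compare the resume to the job description.",
    "Reply in exactly this format:",
    "SCORE: <1-10>",
    "VERDICT: <SHORTLISTED|REJECTED|KIV>",
    "REASON: <one short paragraph>",
    "", "JOB DESCRIPTION:", jobDescription, "", "RESUME:", resumeText,
  ].join("\n");

  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta-llama/llama-4-scout-17b-16e-instruct", // assumed model ID
      messages: [{ role: "user", content: prompt }],
      temperature: 0,
    }),
  });
  const data = await res.json();
  const text: string = data.choices[0].message.content;

  // Parse the labelled lines back into a structured record for the tracker sheet.
  const score = Number(/SCORE:\s*(\d+)/i.exec(text)?.[1] ?? 0);
  const verdict = (/VERDICT:\s*(SHORTLISTED|REJECTED|KIV)/i.exec(text)?.[1] ?? "KIV")
    .toUpperCase() as ScreeningResult["verdict"];
  const reason = /REASON:\s*([\s\S]*)/i.exec(text)?.[1]?.trim() ?? "";
  return { score, verdict, reason };
}
```

In the actual template this logic lives in the Groq/AI agent nodes rather than custom code; the sketch only makes the prompt format and parsing explicit.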
Update all Zammad roles to default values
This n8n workflow resets all user roles in Zammad to specified default roles, ensuring consistent role management across your Zammad instance.

Features
- Retrieve all active users from Zammad.
- Update each user's roles to predefined default role IDs.
- Exclude specific users by their IDs from the update process.
- Simple configuration for default roles and excluded users.

Usage
1. Import the Workflow: Upload the provided .json file into your n8n instance.
2. Configure Variables:
   - zammadbaseurl: Your Zammad instance URL.
   - zammadapikey: Your Zammad API key.
   - default_roles: List of default role IDs to apply to all users.
   - excludezammadusersbyid: List of user IDs to exclude from the update.
3. Run the Workflow: Execute the workflow to update roles automatically. A minimal sketch of the underlying Zammad API calls follows this section.

Issues and Suggestions
For issues or suggestions, visit the GitHub Repository.
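For orientation, here is a sketch of what the workflow does against the Zammad REST API: page through users, skip inactive and excluded IDs, and overwrite each remaining user's role_ids with the defaults. The actual template uses n8n nodes rather than custom code, and the variable names below are illustrative stand-ins for the configuration values listed above.

```typescript
// Sketch only; assumes token auth ("Authorization: Token token=...") and the
// standard /api/v1/users endpoint with per_page/page pagination.

const zammadBaseUrl = "https://your-instance.zammad.com"; // placeholder URL
const zammadApiKey = process.env.ZAMMAD_API_KEY ?? "";
const defaultRoles = [2];   // default role IDs to apply (illustrative value)
const excludeUserIds = [1]; // user IDs to skip (illustrative value)

const headers = {
  Authorization: `Token token=${zammadApiKey}`,
  "Content-Type": "application/json",
};

async function resetRoles(): Promise<void> {
  // Page through all users; an empty page means we have seen everyone.
  for (let page = 1; ; page++) {
    const res = await fetch(`${zammadBaseUrl}/api/v1/users?per_page=100&page=${page}`, { headers });
    const users: Array<{ id: number; active: boolean; role_ids: number[] }> = await res.json();
    if (users.length === 0) break;

    for (const user of users) {
      if (!user.active || excludeUserIds.includes(user.id)) continue; // skip inactive/excluded
      // Overwrite the user's roles with the configured defaults.
      await fetch(`${zammadBaseUrl}/api/v1/users/${user.id}`, {
        method: "PUT",
        headers,
        body: JSON.stringify({ role_ids: defaultRoles }),
      });
    }
  }
}

resetRoles().catch(console.error);
```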
Calculate Embodied Carbon (CO2) for Revit/IFC Models using AI Classification
Estimate embodied carbon (CO2e) for grouped BIM/CAD elements. The workflow accepts an existing XLSX (grouped element data) or, if missing, can trigger a local RvtExporter.exe to generate one. It detects category fields, filters out non-building elements, infers aggregation rules with AI, computes CO2 using densities and emission factors, and exports a multi-sheet Excel plus a clean HTML report.

What it does
- Reads or builds the XLSX (from your model via RvtExporter.exe when needed).
- Finds category/volumetric fields; separates building vs. annotation elements.
- Uses AI to infer aggregation rules (sum/mean/first) per header.
- Groups rows by your group_by field and aggregates totals.
- Prepares enhanced prompts and calls your LLM to classify materials and estimate CO2 (A1-A3 minimum). A minimal sketch of the per-group CO2 arithmetic is shown at the end of this description.
- Computes project totals and generates a multi-sheet XLSX + HTML report with charts and hotspots.

Prerequisites
- LLM credentials for one provider (e.g., OpenAI, Anthropic, Gemini, Grok/OpenRouter). Enable one chat node and connect credentials.
- Windows host only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is not required.
- Optional: mapping/classifier files (XLSX/CSV/PDF) to improve material classification.

How to use
1. Import this JSON into n8n.
2. Open the Setup/Parameters node(s) and set:
   - projectfile — path to your .rvt/.ifc or to an existing grouped *rvt.xlsx
   - pathtoconverter — C:\\DDCConverterRevit\\datadrivenlibs\\RvtExporter.exe (optional)
   - group_by — e.g., Type Name / Category / IfcType
   - sheet_name — default Summary (if reading from XLSX)
3. Enable one LLM node and attach credentials; keep the others disabled.
4. Execute (Manual Trigger). The workflow detects/builds the XLSX, analyzes, classifies, estimates CO2, then writes the Excel file and opens the HTML report.

Outputs
- Excel (CO2AnalysisReport_YYYY-MM-DD.xlsx, ~8 sheets): Executive Summary, All Elements, Material Summary, Category Analysis, Impact Analysis, Top 20 Hotspots, Data Quality, Recommendations.
- HTML: executive report with key KPIs and charts.
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Density, Mass, CO2 Factor, Total CO2 (kg/tonnes), CO2 %, Confidence, Assumptions.

Notes & tips
- Input quantities (volumes/areas) are already aggregated per group — do not multiply by element count.
- Use -no-collada upstream if you only need the XLSX in extraction.
- Prefer ASCII-safe paths and ensure write permissions for the output folder.

Categories
Data Extraction · Files & Storage · ETL · CAD/BIM · Carbon/ESG

Tags
cad-bim, co2, carbon, embodied-carbon, lca, revit, ifc, xlsx, html-report, llm

Author
DataDrivenConstruction.io
info@datadrivenconstruction.io

Consulting and Training
We work with leading construction, engineering, consulting agencies and technology firms around the world to help them implement open data principles, automate CAD/BIM processing, and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.

Docs & Issues: Full Readme on GitHub
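The sketch below shows the per-group CO2 arithmetic described under "What it does": mass is volume times density, and embodied carbon is mass times the A1-A3 emission factor. In the real workflow the density and factor come from the LLM classification or mapping files; the numbers here are illustrative placeholders, not authoritative factors, and the quantity is used as-is because it is already aggregated per group.

```typescript
// Minimal sketch of the per-group CO2 calculation (placeholder values, not real factors).

interface GroupRow {
  material: string;         // classified material (e.g., "Concrete C30/37")
  volumeM3: number;         // aggregated volume per group (already summed; do NOT multiply by element count)
  densityKgM3: number;      // kg/m3, from classification or a mapping file
  co2FactorKgPerKg: number; // A1-A3 emission factor, kg CO2e per kg of material
}

function computeCo2(row: GroupRow) {
  const massKg = row.volumeM3 * row.densityKgM3; // mass = volume x density
  const co2Kg = massKg * row.co2FactorKgPerKg;   // embodied carbon (A1-A3)
  return { ...row, massKg, co2Kg, co2Tonnes: co2Kg / 1000 };
}

// Example with placeholder numbers: 12.5 m3 of concrete at 2400 kg/m3 and
// 0.11 kg CO2e/kg gives 30,000 kg of material and about 3.3 tonnes CO2e.
const example = computeCo2({
  material: "Concrete C30/37",
  volumeM3: 12.5,
  densityKgM3: 2400,
  co2FactorKgPerKg: 0.11,
});
console.log(example.co2Tonnes.toFixed(2), "t CO2e");
```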