Tushar Mishra

Problem Solver | ServiceNow Architect | AI/ML Specialist | Product Builder

1. 10+ years in tech consulting and product development across AI, enterprise platforms, and cloud ecosystems.
2. ISB AMP in Business Analytics; strong foundation in strategy + data.
3. Founder – ReAcademy.ai: a flashcard-based learning SaaS using AI & LLMs to transform PDFs into gamified micro-learning.

Total Views: 3,382
Templates: 4

Templates by Tushar Mishra

Automate CVE monitoring with OpenAI processing for ServiceNow security incidents

This n8n workflow automatically fetches the latest CVE data at scheduled intervals, extracts the relevant security details, and creates a corresponding Security Incident in ServiceNow for each new vulnerability.

- Schedule Trigger – runs at predefined intervals.
- Jina Fetch – retrieves the latest CVE feed.
- Information Extractor (OpenAI Chat Model) – processes the feed and extracts key details from the CVE data.
- Split Out – separates each CVE entry for individual processing.
- Create Incident – generates a ServiceNow Security Incident with the extracted CVE details.

Ideal for security teams to ensure timely tracking and remediation of new vulnerabilities without manual monitoring.
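For orientation, here is a minimal standalone Python sketch of a comparable pipeline outside n8n: it pulls recently published CVEs from the public NVD 2.0 API (the template itself fetches the feed through Jina and extracts fields with an OpenAI model) and creates one record per CVE via ServiceNow's Table API. The instance name, credentials, and the sn_si_incident table choice are assumptions to adapt to your environment.

```python
"""Illustrative sketch only: approximates the CVE -> ServiceNow flow outside n8n.
Instance name, credentials, and the target table are assumptions."""
import os
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
SN_INSTANCE = os.environ["SN_INSTANCE"]                # e.g. "dev12345" (placeholder)
SN_AUTH = (os.environ["SN_USER"], os.environ["SN_PASSWORD"])
SN_TABLE = "sn_si_incident"   # assumes Security Incident Response; else use "incident"


def fetch_recent_cves(hours: int = 24) -> list[dict]:
    """Pull CVEs published in the last `hours` hours from the NVD 2.0 API."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),  # ISO-8601, UTC
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


def create_security_incident(cve: dict) -> str:
    """Create one ServiceNow record per CVE via the Table API."""
    details = cve["cve"]
    description = next(
        (d["value"] for d in details.get("descriptions", []) if d["lang"] == "en"),
        "No description provided.",
    )
    body = {
        "short_description": f"New CVE published: {details['id']}",
        "description": description,
    }
    resp = requests.post(
        f"https://{SN_INSTANCE}.service-now.com/api/now/table/{SN_TABLE}",
        auth=SN_AUTH,
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]


if __name__ == "__main__":
    for item in fetch_recent_cves():   # mirrors Split Out: one record per CVE
        print("created", create_security_incident(item))
```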

By Tushar Mishra
Views: 969

Build a ServiceNow knowledge chatbot with OpenAI and Qdrant RAG

Data Ingestion Workflow (Left Panel – Pink Section)

This part collects data from the ServiceNow Knowledge Article table, processes it into embeddings, and stores them in Qdrant.

Steps:
- When clicking 'Execute workflow' – the workflow starts manually when you click Execute workflow in n8n.
- Get Many Table Records – fetches multiple records from the ServiceNow Knowledge Article table; each record typically contains knowledge article content that needs to be indexed.
- Default Data Loader – structures the fetched data into a format suitable for text splitting and embedding generation.
- Recursive Character Text Splitter – splits large text (e.g., long knowledge articles) into smaller, manageable chunks so each chunk can be properly processed by the embedding model.
- Embeddings OpenAI – uses OpenAI's Embeddings API to convert each text chunk into a high-dimensional vector representation; these embeddings are essential for semantic search in the vector database.
- Qdrant Vector Store – stores the generated embeddings along with metadata (e.g., article ID, title) in the Qdrant vector database, which is later used for similarity searches during chatbot interactions.

RAG Chatbot Workflow (Right Panel – Green Section)

This section powers the Retrieval-Augmented Generation (RAG) chatbot that retrieves relevant information from Qdrant and responds intelligently.

Steps:
- When chat message received – starts when a user sends a chat message to the system.
- AI Agent – acts as the orchestrator, combining memory, tools, and LLM reasoning; connects to the OpenAI Chat Model and the Qdrant Vector Store.
- OpenAI Chat Model – processes user messages and generates responses enriched with context retrieved from Qdrant.
- Simple Memory – stores conversational history or context to ensure continuity in multi-turn conversations.
- Qdrant Vector Store1 – performs a similarity search on the stored embeddings using the user's query and retrieves the most relevant knowledge article chunks for the chatbot.
- Embeddings OpenAI – converts the user query into embeddings for the vector search in Qdrant.
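To make the two panels concrete, here is a condensed Python sketch of the same idea outside n8n: chunk knowledge articles, embed them with OpenAI, store them in Qdrant, then answer a question from the retrieved chunks. It uses the openai and qdrant-client packages; the collection name, model choices, and the shape of the article records are assumptions, and the per-session chat history handled by Simple Memory is omitted for brevity.

```python
"""Condensed sketch of both panels: index KB chunks in Qdrant, then answer a
question from the retrieved context. Collection name and models are assumptions."""
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

oai = OpenAI()                                      # reads OPENAI_API_KEY
qdrant = QdrantClient(url="http://localhost:6333")  # local Qdrant for the demo
COLLECTION = "servicenow_kb"                        # assumed collection name


def chunk(text: str, size: int = 800) -> list[str]:
    """Crude stand-in for the Recursive Character Text Splitter node."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(texts: list[str]) -> list[list[float]]:
    resp = oai.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]


def ingest(articles: list[dict]) -> None:
    """Left panel: chunk each article, embed the chunks, upsert into Qdrant."""
    qdrant.recreate_collection(                     # wipes and recreates for the demo
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
    )
    points, next_id = [], 0
    for art in articles:                            # e.g. rows from the KB table
        for piece in chunk(art["text"]):
            points.append(PointStruct(
                id=next_id,
                vector=embed([piece])[0],
                payload={"article_id": art["id"], "title": art["title"], "text": piece},
            ))
            next_id += 1
    qdrant.upsert(collection_name=COLLECTION, points=points)


def answer(question: str) -> str:
    """Right panel: similarity search in Qdrant, then answer from the context."""
    hits = qdrant.search(
        collection_name=COLLECTION,
        query_vector=embed([question])[0],
        limit=3,
    )
    context = "\n\n".join(h.payload["text"] for h in hits)
    reply = oai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer only from the provided KB context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```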

By Tushar Mishra
Views: 928

Automate AI vulnerability monitoring with GPT-4 and ServiceNow incident creation

This n8n workflow automatically monitors RSS feeds for the latest AI vulnerability news, extracts key threat details, and creates a corresponding Security Incident in ServiceNow for each item.

- Schedule Trigger – runs at scheduled intervals to check for updates.
- RSS Read – fetches the latest AI vulnerability entries from the RSS feed.
- Read URL Content – retrieves the full article for detailed analysis.
- Information Extractor (OpenAI Chat Model) – parses and summarizes critical security information.
- Split Out – processes each vulnerability alert separately.
- Create Incident – generates a ServiceNow Security Incident with the extracted details.

Ideal for security teams to track and respond quickly to emerging AI-related threats without manual feed monitoring.
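As a rough illustration of the RSS branch, the sketch below reads a feed with feedparser, fetches each linked article, and asks an OpenAI model to condense it into incident-ready fields. The feed URL and model are placeholders, and posting the resulting fields to ServiceNow would mirror the Table API call shown in the CVE sketch above.

```python
"""Rough sketch of the RSS -> extract path. The feed URL and model are
placeholders; creating the ServiceNow incident would mirror the CVE sketch."""
import json

import feedparser
import requests
from openai import OpenAI

FEED_URL = "https://example.com/ai-security-news.rss"  # placeholder feed URL
oai = OpenAI()


def read_feed(limit: int = 5) -> list[dict]:
    """RSS Read: take the newest entries from the feed."""
    feed = feedparser.parse(FEED_URL)
    return [{"title": e.title, "link": e.link} for e in feed.entries[:limit]]


def read_url_content(url: str) -> str:
    """Read URL Content: fetch the full article (raw HTML here)."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text[:20000]                 # keep the prompt a manageable size


def extract_threat_details(title: str, article: str) -> dict:
    """Information Extractor: summarize into incident-ready fields."""
    resp = oai.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Return JSON with keys short_description, description, severity."},
            {"role": "user", "content": f"Title: {title}\n\nArticle:\n{article}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    for entry in read_feed():                # Split Out: one alert at a time
        fields = extract_threat_details(entry["title"], read_url_content(entry["link"]))
        print(fields)                        # would become the ServiceNow incident body
```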

By Tushar Mishra
Views: 910

AI-powered ServiceNow chat triage with GPT-4 — Incident & request router

Short description

Automatically triage incoming chat messages into Incidents, Service Requests, or Other using an LLM-powered classifier; create Incidents in ServiceNow, submit Service Catalog requests via HTTP, and route everything else to an AI Agent with web search and memory. Includes an optional summarization step for ticket context.

Full description

This n8n template wires a chat trigger to an LLM-based Text Classifier and then routes messages to the appropriate downstream action:

- Trigger: When chat message received – incoming messages from your chat channel.
- Text Classifier – a small LLM prompt/classifier that returns one of three labels: Incident, Request, or Everything Else.
- Create Incident (ServiceNow connector) – when labeled Incident, the workflow creates a ServiceNow Incident record (short fields: short_description, description, priority, caller).
- Submit General Request (HTTP Request) – when labeled Request, the workflow calls your Service Catalog API (POST) to place a catalog item / submit a request.
- AI Agent – when labeled Everything Else, routes to an AI Agent node that uses an OpenAI chat model for contextual replies, can consult SerpAPI (web search) as a tool, and saves relevant context to Simple Memory for future conversations.
- Summarization Chain – an optional chain that summarizes long chat threads into concise ticket descriptions before creating incidents or requests.

This template is ideal for support desks that want automated triage with human-quality context and searchable memory.

Key highlights

- Three-way LLM triage: ensures messages are routed automatically to the correct backend action (Incident vs. Service Request vs. AI handling).
- ServiceNow native connector: uses the ServiceNow node to create Incidents (safer than raw HTTP for incidents).
- Service Catalog via HTTP: flexible; supports organizations using RESTful catalog endpoints.
- Summarization before ticket creation: produces concise, high-quality short_description and description fields.
- AI Agent + Memory + Web Search: handles non-ticket queries with web-augmented answers and stores context for follow-ups.
- Failover & logging: include an optional catch node that logs failures and notifies admins.

Required credentials & inputs (must configure)

- ServiceNow: instance URL + API user (must have rights to create incidents).
- Service Catalog HTTP endpoint: URL + API key / auth header (for POST).
- OpenAI API key (or another LLM provider): for the Text Classifier, Summarization Chain, and AI Agent.
- SerpAPI key (optional): for the web search tool inside the AI Agent.
- Memory store: Simple Memory node (or an external DB) for conversation history.

Nodes included (quick map)

- Trigger: When chat message received
- Processor: Text Classifier (OpenAI/LLM)
- Branch A: ServiceNow (Create Incident)
- Branch B: HTTP Request (Service Catalog POST)
- Branch C: AI Agent (OpenAI + SerpAPI + Simple Memory)
- Shared: Summarization Chain (used before A or B where enabled)
- Optional: error / audit logging node, Slack/email notifications

Recommended n8n settings & tips

- Use structured outputs from the classifier ({ label: "Incident", confidence: 0.92 }) so you can implement confidence thresholds.
- If confidence < 0.7, route to a human review queue instead of auto-creating a ticket (a minimal sketch of this gate follows the sample JSON below).
- Sanitize user PII before storing it in memory or sending it to external APIs.
- Rate-limit OpenAI/SerpAPI calls to avoid unexpected bills.
- Test the Service Catalog POST body in Postman first; include the sample variables JSON shown below.
Short sample variables JSON (Service Catalog POST):

```json
{
  "sysparm_quantity": 1,
  "variables": {
    "description": "User reports VPN timeout on Windows machine; error code 1234"
  }
}
```
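To make the structured-output and confidence-threshold tips concrete, here is a minimal sketch (not the template itself) of a three-way classifier that returns a label plus confidence and gates low-confidence messages to human review. The 0.7 threshold matches the recommendation above; the returned route names are placeholders for the ServiceNow, Service Catalog, and AI Agent branches.

```python
"""Minimal sketch of the triage step: structured classifier output plus a 0.7
confidence gate. The returned route names are placeholders for the three branches."""
import json

from openai import OpenAI

oai = OpenAI()
CONFIDENCE_THRESHOLD = 0.7   # below this, escalate to a human instead of auto-ticketing


def classify(message: str) -> dict:
    """Ask the model for {"label": ..., "confidence": ...} as strict JSON."""
    resp = oai.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ('Classify the user message. Respond as JSON: '
                         '{"label": "Incident" | "Request" | "Everything Else", '
                         '"confidence": <0.0-1.0>}')},
            {"role": "user", "content": message},
        ],
    )
    return json.loads(resp.choices[0].message.content)


def triage(message: str) -> str:
    """Route the message the way the template's branches would."""
    result = classify(message)
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        return "human_review_queue"                # per the tip above
    if result["label"] == "Incident":
        return "create_servicenow_incident"        # Branch A: ServiceNow node
    if result["label"] == "Request":
        return "submit_service_catalog_request"    # Branch B: HTTP POST to the catalog
    return "ai_agent"                              # Branch C: AI Agent + SerpAPI + memory


if __name__ == "__main__":
    print(triage("My VPN keeps timing out with error code 1234"))
```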

By Tushar Mishra
Views: 575