
Bulk delete Slack messages with smart filtering and confirmations

Elie Kattar
2/3/2026

🧹 Jedi Cleaner for Slack

❌ Slack's Limitation

Slack lacks a native bulk delete feature. Users must delete messages manually, which is time-consuming and inefficient for large volumes.

✅ Our Solution

Jedi Cleaner automates Slack message deletion using smart filters, safety confirmations, and enterprise-grade reliability.


🚀 Key Features & Capabilities

⚑ Lightning-Fast Bulk Operations

  • Delete hundreds of messages in minutes
  • Intelligent rate limiting prevents API throttling
  • Auto-retry on failure ensures reliable operation
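The pacing-plus-retry behavior described above can be sketched in a Code node roughly like this. This is a minimal sketch with an injected `deleteFn` so the logic is testable without Slack; the delay and retry counts are assumptions, not the template's exact values:

```javascript
// Sketch: rate-limited bulk deletion with per-message retry.
// deleteFn(ts) is any async function that deletes one message (e.g. a
// wrapper around Slack's chat.delete); it is injected for testability.
async function bulkDelete(timestamps, deleteFn, { delayMs = 1200, maxRetries = 3 } = {}) {
  const results = { deleted: [], failed: [] };
  for (const ts of timestamps) {
    let attempt = 0;
    while (true) {
      try {
        await deleteFn(ts);
        results.deleted.push(ts);
        break;
      } catch (err) {
        attempt += 1;
        if (attempt >= maxRetries) {
          results.failed.push(ts); // give up on this message, keep going
          break;
        }
      }
    }
    // chat.delete is heavily rate-limited, so pause between requests
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return results;
}
```

Pacing each call (rather than firing them in parallel) is what keeps the workflow under Slack's per-minute limits on long runs.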

🎯 Smart Message Filtering

  • Keyword match – Find messages with specific terms
  • User mentions – Target messages that mention specific users
  • Exact phrases – Use quotes for precision
  • Bot/User content – Handle mixed sources seamlessly

πŸ›‘οΈ Enterprise-Grade Safety

  • Confirmation workflow – No accidental deletions
  • Timeout – Requests expire after 5 minutes
  • Preview-first – Review messages before deletion
  • Granular control – Choose exactly what to delete
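The 5-minute timeout above amounts to a timestamp check against the stored pending request. A minimal sketch (the `createdAt` field name and storage shape are assumptions):

```javascript
// Sketch: validity check for a pending confirmation request.
const CONFIRM_TTL_MS = 5 * 60 * 1000; // requests expire after 5 minutes

function isPendingValid(request, now = Date.now()) {
  if (!request) return false; // no pending cleanup at all
  return now - request.createdAt < CONFIRM_TTL_MS;
}
```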

πŸ” Intelligent Search & Preview

  • Flexible filters (words, phrases, patterns)
  • Preview + count before deletion
  • Multi-channel support with context isolation

πŸ“Š Complete Workflow Management

  • Auto-cleanup of bot messages after execution
  • Real-time progress tracking
  • Debug logs and audit trail
  • Static data persistence and cleanup

🧪 How It Works

Phase 1: Search Request

User: /cleanup "error messages"
Bot Response:
🔍 Found 15 messages containing "error messages"
📊 Breakdown:
• Will be permanently deleted
• Cannot be undone
• Respond within 5 minutes

✅ Type: @cleaner_jedi yes
❌ Type: @cleaner_jedi cancel

Phase 2: User Confirmation

User: @cleaner_jedi yes
Bot: 🗑️ Deleting 15 messages containing "error messages"

Phase 3: Completion

✅ Cleanup Complete
• 15 messages deleted
• Channel cleaned
• Auto-deletes this message in 5 seconds

πŸ› οΈ Technical Architecture

Core Components

  • Unified Webhook Handler – Handles slash commands & mentions, deduplicates events
  • Search Engine – Integrates the Slack API, parses & filters search terms
  • Safety & Confirmation System – Temporary storage, expiration logic, user auth
  • Bulk Operations Engine – Message deletion, progress tracking, error handling
  • Cleanup & Memory Management – Deletes bot messages, static data cleanup

📱 User Commands

| Command | Description | Example |
| ---------------------- | ----------------------------------- | -------------------------- |
| /cleanup [term] | Search messages containing [term] | /cleanup "webhook error" |
| @cleaner_jedi yes | Confirm deletion | After preview |
| @cleaner_jedi cancel | Cancel pending deletion | Cancels active request |

🧩 Edge Cases

| Scenario | Bot Response |
| -------------------- | ---------------------------------------------- |
| No messages found | "No messages found containing '[term]'" |
| Expired confirmation | "Request expired. Please run /cleanup again" |
| No pending request | "No pending cleanup found. Run /cleanup first" |
| Invalid search term | "Please provide a valid search term" |


βš™οΈ Setup Requirements

Slack App Configuration

OAuth Scopes:

✅ chat:write
✅ chat:write.public
✅ channels:history
✅ groups:history
✅ app_mentions:read
✅ commands

Event Subscriptions:

✅ app_mention
✅ message.channels

Slash Command:

Command: /cleanup
URL: https://your-n8n.app.cloud/webhook/cleanerjedi
Hint: [search term]

n8n Workflow Setup

Required Nodes:

  • Webhook Trigger – Captures Slack events
  • Respond to Webhook – Handles routing
  • Switch Node – Event type routing
  • Slack API Nodes – Search, delete, notify
  • JavaScript Nodes – Logic & validation
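The Switch node's event-type routing can be mirrored in plain JavaScript to show which payloads go where. The payload shapes follow Slack's slash-command and Events API formats; the branch names are illustrative:

```javascript
// Sketch: route an incoming Slack webhook payload to a workflow branch.
function routeEvent(body) {
  if (body.command === '/cleanup') return 'slash_command'; // slash-command POST
  if (body.event && body.event.type === 'app_mention') return 'mention'; // Events API
  return 'ignore'; // anything else (retries, unrelated events)
}
```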

⚑ Advanced Features

🧠 Intelligent Deduplication

// Build a stable ID for this command invocation and skip it if already seen
const eventId = `cmd_${body.command}_${body.user_id}_${body.trigger_id}`;
if (staticData.recentEvents.includes(eventId)) {
  return []; // Skip duplicate delivery of the same event
}

πŸ” Flexible Search Terms

  • Single words: test
  • Phrases: "error message"
  • Special characters: webhook-failed
  • Case-insensitive by default
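A minimal matcher illustrating these rules (an assumed sketch of the filter logic, not the template's exact code):

```javascript
// Sketch: match one message against a search term.
// Quoted terms ("error message") match as an exact contiguous phrase;
// anything else matches as a plain substring. Case-insensitive throughout.
function matchesTerm(text, term) {
  const quoted = term.match(/^"(.*)"$/);
  const needle = (quoted ? quoted[1] : term).toLowerCase();
  return text.toLowerCase().includes(needle);
}
```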

♻️ Auto-Cleanup

// Remove the bot's own status messages 5 seconds after completion
setTimeout(() => {
  deleteMessage(completionMessage.ts);
  deleteMessage(progressMessage.ts);
}, 5000);

🧠 Storage Management

  • Temp storage for requests
  • Auto-expiry cleanup
  • Memory-safe event trimming
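Memory-safe trimming boils down to capping the stored event list. A sketch assuming the `staticData.recentEvents` shape used in the deduplication snippet earlier (the cap of 100 is an illustrative value):

```javascript
// Sketch: record an event ID and trim the list so it never grows unbounded.
function recordEvent(staticData, eventId, maxEvents = 100) {
  staticData.recentEvents = staticData.recentEvents || [];
  staticData.recentEvents.push(eventId);
  if (staticData.recentEvents.length > maxEvents) {
    // Keep only the newest entries; old ones are no longer needed for dedup
    staticData.recentEvents = staticData.recentEvents.slice(-maxEvents);
  }
  return staticData;
}
```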

📊 Error Handling & Logging

Error Scenarios

❌ Invalid term
⏰ Expired request
🚫 Access denied
⚠️ Rate limit hit

Debugging & Monitoring

  • Event IDs, timestamps
  • Key-value storage info
  • API response codes

✅ Usage Examples

Example 1: Delete Error Messages

/cleanup "error"
→ Bot: Found 23 messages
→ @cleaner_jedi yes
→ ✅ Deleted 23 messages

Example 2: Cancel Midway

/cleanup "from:@john"
→ Bot: Found 8 messages
→ @cleaner_jedi cancel
→ ❌ Operation cancelled

Example 3: Search by Date

/cleanup "2024-01-15"
→ Bot: Found 12 messages
→ @cleaner_jedi yes
→ ✅ 12 messages deleted

🔒 Security & Safety

  • ✅ User confirmation required
  • ✅ 5-minute time limits
  • ✅ Permission-aware deletions
  • ✅ Audit trail logging

🚀 Performance Optimizations

  • Minimal webhook/API usage
  • Batch deletion
  • Cached recent events

Memory Handling:

  • Temp data cleanup
  • Key expiration
  • Cache trimming

📈 Monitoring & Analytics

Metrics Tracked

  • Messages per operation
  • User response times
  • Failure & error rates
  • Storage performance

Logging

console.log('✅ SUCCESS:', operationDetails);
console.warn('⚠️ WARNING:', warningDetails);
console.error('❌ ERROR:', errorDetails);
console.info('ℹ️ INFO:', informationDetails);

🎉 Summary

For Users

  • ✅ Simple commands
  • ✅ Preview + safety
  • ✅ Fast processing
  • ✅ Clear status feedback

For Admins

  • ✅ Lower API load
  • ✅ Full logging
  • ✅ Resilient to errors
  • ✅ Lightweight memory footprint

For Developers

  • ✅ Modular, clean code
  • ✅ Well-documented
  • ✅ Scalable & robust
  • ✅ Easy to extend

A PDF how-to document is included.

Bulk Delete Slack Messages with Smart Filtering and Confirmations

This n8n workflow provides a robust solution for bulk deleting Slack messages. It goes beyond simple deletion by incorporating smart filtering capabilities and a confirmation step, allowing you to precisely target messages for removal and prevent accidental deletions.

What it does

This workflow is designed to give you fine-grained control over deleting messages in Slack:

  1. Initiates on demand: The workflow is triggered manually via a Webhook, allowing you to start the deletion process when needed.
  2. Sets up parameters: It defines initial parameters, likely including the channel ID, user ID, and a confirmation flag, which will be used throughout the workflow.
  3. Conditional Execution: It uses an "If" node to check a condition, most likely whether the confirm flag is set to true.
  4. Confirmation Step (if enabled): If confirmation is required, it sends a message to Slack, asking for explicit approval before proceeding with deletions.
  5. Processes Messages in Batches: It iterates through a list of messages (which would be provided as input to the workflow) in batches.
  6. Deletes Slack Messages: For each message in a batch, it performs the deletion action in Slack.
  7. Conditional Deletion: Another "If" node likely checks if the deletion was successful or if there are any errors.
  8. Handles Deletion Status: It uses a "Switch" node to route the workflow based on the outcome of the deletion, potentially logging successes or failures.
  9. Waits between deletions: A "Wait" node is included, likely to introduce a delay between deletion requests to avoid hitting Slack API rate limits.
  10. Custom Code Execution: A "Code" node is present, suggesting custom logic might be applied, possibly for advanced filtering, data manipulation, or error handling.

Prerequisites/Requirements

  • n8n Instance: A running n8n instance.
  • Slack Account: A Slack workspace and an app with the necessary permissions to read and delete messages (e.g., channels:history, chat:write, chat:delete).
  • Slack API Token: A Slack Bot User OAuth Token or User OAuth Token configured as a credential in n8n.

Setup/Usage

  1. Import the workflow: Download the provided JSON and import it into your n8n instance.
  2. Configure Credentials:
    • Set up your Slack API credentials in n8n.
  3. Activate the Workflow: Ensure the workflow is active.
  4. Trigger the Webhook:
    • To initiate the process, send a POST request to the Webhook URL provided by the "Webhook" node.
    • The request body should include the necessary parameters such as channelId, userId (of the user whose messages you want to delete), and a confirm flag (e.g., {"channelId": "C12345", "userId": "U67890", "confirm": true}).
    • If confirm is true, the workflow will send a confirmation message to Slack before proceeding with deletions.
    • The list of messages to be deleted needs to be passed as input to the workflow, likely as part of the Webhook data or fetched in a preceding node (not explicitly shown in this JSON, but implied by the "Loop Over Items" and "Slack" delete operations).
  5. Monitor and Confirm: If the confirmation step is enabled, monitor your designated Slack channel for the confirmation message and respond as required to proceed with the deletion.
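For step 4, the POST request can be assembled like this. The URL and IDs are the example values from above, and `buildCleanupRequest` is a hypothetical helper, not part of the workflow:

```javascript
// Sketch: build the webhook request that triggers the workflow.
function buildCleanupRequest(channelId, userId, confirm) {
  return {
    url: 'https://your-n8n.app.cloud/webhook/cleanerjedi', // your Webhook node URL
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ channelId, userId, confirm }),
    },
  };
}

// Usage: const req = buildCleanupRequest('C12345', 'U67890', true);
//        await fetch(req.url, req.options);
```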
