Humble Turtle
Elegant AI agents that quietly do the work for you
Templates by Humble Turtle
Deploy code to GitHub with natural language via Slack & Claude 3.5
GitHub Deployer Agent

Overview
The GitHub Deployer Agent is an intelligent automation tool that integrates with Slack to streamline code deployment workflows. Powered by Anthropic's Claude 3.5 and Tavily for web search, it enables seamless, context-aware file pushes to a GitHub repository with minimal user input.

Capabilities
- Accepts natural language instructions via Slack
- Automatically pushes code to a default GitHub repository
- Uses Claude 3.5 for code generation and decision-making
- Leverages Tavily for real-time web search to enhance context
- Supports folder structure hints to keep repositories clean and organized

Required Connections
To operate correctly, the following integrations must be in place:
- Slack API token with permission to read messages and post responses
- GitHub personal access token with repo write permissions
- Tavily API key for external search functionality
- Claude 3.5 API access via Anthropic
Detailed configuration instructions are provided in the workflow.

Example Input
From Slack, you can send messages like:
"Generate a basic README.md for my Python project and store it in the root directory."

Customising This Workflow
You can tailor the workflow by:
- Modifying the default folder paths or repository settings
- Integrating a Jira node to use issue keys as default folder names
- Adding a Slack file upload option
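Inside n8n the push itself is handled by the workflow's GitHub node, but conceptually the final step reduces to one call to GitHub's contents API. The sketch below illustrates that step only; the repository name, file path, and environment variable are placeholders.

```python
import base64
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # personal access token with repo write scope
REPO = "your-org/your-repo"                # placeholder default repository

def push_file(path: str, content: str, message: str) -> None:
    """Create a single file in the repository via GitHub's contents API."""
    url = f"https://api.github.com/repos/{REPO}/contents/{path}"
    body = {
        "message": message,
        "content": base64.b64encode(content.encode()).decode(),
        # Updating an existing file additionally requires its current "sha".
    }
    resp = requests.put(
        url,
        json=body,
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()

# Example: the agent storing a generated README in the repository root.
push_file("README.md", "# My Python Project\n", "Add basic README generated via Slack")
```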
Manage Jira issues with natural language via Telegram and GPT-4o
Manage Jira Issues with Natural Language via Telegram and GPT-4o

Overview
The Jira Agent is an AI-powered assistant that lets users interact with Jira directly through the Telegram messaging platform. It leverages OpenAI's GPT-4o model to interpret natural language commands and perform various Jira-related actions. On Telegram, it lets users create Jira stories by triggering a guided form when prompted with "create story." It also provides broader functionality, including creating, updating, searching, and transitioning Jira issues through natural language commands.

How It Works
Normal interaction:
- Send messages such as "Please give me all my issues."
Standardized process for creating stories:
1. Message: "create story"
2. Open the form the bot sends back in Telegram
3. Fill in the essential story information in the form
4. The story is automatically created in your backlog

Required Connections
To use the Jira Agent effectively, users need access to:
- A Telegram account; Telegram setup involves deploying the bot and starting a chat, and story creation is triggered with a simple text command
- A connected Jira workspace
- Permissions to create and modify Jira issues
- An OpenAI API key with access to GPT-4o
Detailed configuration instructions are provided in the workflow.

Setup Time
<15 minutes

Customising This Workflow
- Try adding more fields to the form for more complete Jira ticket creation.
- Try connecting a Google Calendar node to plan your work.
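In the workflow the story is created through n8n's Jira node, but the underlying step is equivalent to a single request to the Jira Cloud REST API. A minimal sketch follows; the site URL, project key, and environment variables are placeholders.

```python
import os
import requests

JIRA_SITE = "https://your-company.atlassian.net"   # placeholder Jira Cloud site
JIRA_USER = os.environ["JIRA_EMAIL"]               # Atlassian account email
JIRA_TOKEN = os.environ["JIRA_API_TOKEN"]          # Jira API token

def create_story(project_key: str, summary: str, description: str) -> str:
    """Create a Story issue and return its key (e.g. 'PROJ-123')."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Story"},
            "summary": summary,
            # Jira Cloud API v3 expects descriptions in Atlassian Document Format.
            "description": {
                "type": "doc",
                "version": 1,
                "content": [
                    {"type": "paragraph", "content": [{"type": "text", "text": description}]}
                ],
            },
        }
    }
    resp = requests.post(
        f"{JIRA_SITE}/rest/api/3/issue",
        json=payload,
        auth=(JIRA_USER, JIRA_TOKEN),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]

# Example: values extracted by GPT-4o from the filled-in Telegram form.
print(create_story("PROJ", "Add login page", "As a user I want to log in so I can see my dashboard."))
```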
Generate data pipeline blueprints with Claude 3.5, Slack, and Tavily Search
Architect Agent

Overview
The Architect Agent listens to Slack messages and generates full data architecture blueprints in response. Powered by Claude 3.5 (Anthropic) for reasoning and design, and Tavily for real-time web search, the agent creates production-ready data pipeline scaffolds on demand, transforming natural language prompts into structured data engineering solutions.

Capabilities
- Understands and interprets user requests from Slack
- Designs end-to-end data pipeline architectures using industry best practices
- Outputs include high-level architecture diagrams

Required Connections
To operate correctly, the following integrations must be in place:
- Slack API token with permission to read messages and post responses
- Tavily API key for external search functionality
- Claude 3.5 API access via Anthropic
Detailed configuration instructions are provided in the workflow.

Setup Time
<15 minutes

Example Input
"Create a data pipeline orchestrated by Airflow, running on a Docker image. It should connect to a MySQL database, load the data into a PostgreSQL DB (incremental load), and then transform the data into business-oriented tables, also in the PostgreSQL database. Create an example setup with raw sales data."

Customising This Workflow
- Try saving outputs to Google Drive to store all your architecture blueprints.
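Within n8n the reasoning step runs through the Anthropic node, but the core of it is a single messages request to Claude. Below is a minimal sketch using the Anthropic Python SDK; the system prompt is illustrative and the model identifier is an assumption, so adjust it to whatever Claude 3.5 model your account exposes.

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

SYSTEM_PROMPT = (
    "You are a data architecture assistant. Given a pipeline request, return a "
    "high-level architecture blueprint: components, data flow, and an example setup."
)

def design_blueprint(slack_message: str) -> str:
    """Turn a Slack request into an architecture blueprint using Claude 3.5."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model identifier
        max_tokens=2000,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": slack_message}],
    )
    return response.content[0].text

print(design_blueprint(
    "Create a data pipeline orchestrated by Airflow that loads MySQL data "
    "into PostgreSQL incrementally and builds business-oriented tables."
))
```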
HR policy retrieval using Slack, S3, and GPT-4.1-mini with RAG
HR Chatbot with RAG: Retrieve Company Policies via Slack, Amazon S3, and OpenAI

Overview
Answer HR and company policy questions via Slack, powered by a Knowledge Base of internal documents stored in S3. The assistant uses vector search and an OpenAI chat model to retrieve accurate answers. The HR Assistant is an AI-powered Slack bot that lets employees ask questions in natural language and get accurate answers from company documentation. Documents are managed through an ingestion workflow that retrieves files from S3, transforms them into embeddings, and stores them in a vector database. On Slack, the assistant interprets questions, searches the Knowledge Base, and responds with concise and reliable answers, or clearly states when information isn't available.

How It Works
Normal interaction:
1. An employee asks a question in Slack (e.g., "How many vacation days do I have?").
2. The assistant checks the Knowledge Base (vector store).
3. If relevant information is found, the assistant provides a clear answer.
4. If not, it responds with: "That answer doesn't appear to be covered in the materials I have access to."

Standardized process of Knowledge Base ingestion:
1. Trigger – The ingestion workflow is manually executed.
2. S3 Download – Files are pulled from the company's S3 bucket.
3. Data Loader – Documents are pre-processed and split into chunks.
4. Embeddings – Each chunk is converted into an embedding via OpenAI.
5. Vector Store – Embeddings are stored in the Knowledge Base (vector DB).
6. Chatbot Workflow – When questions arrive via Slack, the assistant queries the vector store to find the most relevant context before generating a response.

Required Connections
To use the HR Assistant effectively, you need:
- A Slack workspace, where the bot is installed and invited to relevant channels
- An S3 bucket containing company documents (e.g., HR policies, procedures)
- Access to OpenAI API keys for both embeddings and chat models
- Proper permissions to fetch documents from S3 and write to the vector store
- A configured n8n instance with both ingestion and chatbot workflows

Setup Time
≈ 20–30 minutes (depending on the number of documents and the Slack integration)

Customising This Workflow
- Add more document sources (e.g., Google Drive, Confluence) in the ingestion pipeline.
- Expand the Slack integration to allow commands like /askHR or to restrict the bot to respond only when @mentioned.
- Add scheduled ingestion (instead of the manual trigger) to automatically refresh the Knowledge Base from S3.
- Connect analytics nodes to monitor which HR topics employees ask about most often.
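In the workflow these steps run through n8n's S3, embeddings, and vector store nodes, but the end-to-end RAG logic can be sketched in a few lines of Python. The bucket name, chunk size, and in-memory "store" below are illustrative placeholders; a real deployment keeps the embeddings in a vector database.

```python
import boto3                     # pip install boto3 openai
from openai import OpenAI

BUCKET = "company-hr-policies"   # placeholder S3 bucket name
client = OpenAI()                # reads OPENAI_API_KEY from the environment

def ingest() -> list[dict]:
    """Pull documents from S3, split them into chunks, and embed each chunk."""
    s3 = boto3.client("s3")
    chunks = []
    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        text = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read().decode("utf-8")
        # Naive fixed-size chunking; the workflow's Data Loader node does this more carefully.
        chunks += [{"source": obj["Key"], "text": text[i:i + 1000]} for i in range(0, len(text), 1000)]
    embeddings = client.embeddings.create(model="text-embedding-3-small", input=[c["text"] for c in chunks])
    for chunk, item in zip(chunks, embeddings.data):
        chunk["embedding"] = item.embedding   # stored in a real vector DB in the workflow
    return chunks

def answer(question: str, store: list[dict]) -> str:
    """Embed the question, find the closest chunks, and answer only from that context."""
    q = client.embeddings.create(model="text-embedding-3-small", input=[question]).data[0].embedding
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))  # OpenAI embeddings are unit-length, so dot ~= cosine
    top = sorted(store, key=lambda c: dot(q, c["embedding"]), reverse=True)[:3]
    context = "\n\n".join(c["text"] for c in top)
    reply = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "Answer only from the provided context. If the answer is not "
                                          "there, say the materials do not cover it."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

store = ingest()
print(answer("How many vacation days do I have?", store))
```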