Templates by Joe Swink
PDF proposal knowledge base with S3, OpenAI GPT-4o & Qdrant RAG agent
This template has a two-part setup:

1. Ingest PDF files from S3, extract text, chunk, embed with OpenAI embeddings, and index into a Qdrant collection with metadata.
2. Provide a chat entry point that uses an Agent with OpenAI to retrieve from the same Qdrant collection as a tool and answer proposal knowledge questions.

What it does
- Lists objects in an S3 bucket, loops through keys, downloads each file, and extracts text from PDFs.
- Chunks text and loads it into Qdrant with metadata for retrieval.
- Exposes a chat trigger wired to an Agent using an OpenAI chat model.
- Adds a retrieve-as-tool Qdrant node so the Agent can ground answers in the indexed corpus.

Why it is useful
- Simple pattern for building a proposal or knowledge base from PDFs stored in S3.
- End-to-end path from ingestion to retrieval-augmented answers.
- Easy to swap models or collections, and to extend with more tools.

Setup notes
- Attach your own AWS credentials to the two S3 nodes and set your bucket name.
- Attach your Qdrant credentials to both Qdrant nodes and set your collection.
- Attach your OpenAI credentials to the embedding and chat nodes.
- The sanitized template uses placeholders for bucket and collection names.
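The chunking step in the ingest path can be sketched as a simple overlapping character splitter. This is a minimal illustration only; the template's actual splitter node, chunk size, and overlap may differ (the `chunk_size` and `overlap` values below are assumptions):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split extracted PDF text into overlapping character chunks.

    Overlap keeps context that spans a chunk boundary retrievable
    from either neighboring chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and upserted into Qdrant together with metadata such as the source S3 key, so retrieved answers can cite the originating PDF.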
Manage Appian tasks with Ollama Qwen LLM and Postgres memory
This workflow is a simple example of using n8n as an AI chat interface into Appian. It connects a local LLM, persistent memory, and API tools to demonstrate how an agent can interact with Appian tasks.

What this workflow does
- Chat interface: accepts user input through a webhook or chat trigger
- Local LLM (Ollama): runs qwen2.5:7b with an 8k context window
- Conversation memory: stores chat history in Postgres, keyed by sessionId
- AI Agent node: handles reasoning, follows system rules (helpful assistant persona, date formatting, iteration limits), and decides when to call tools
- Appian integration tools:
  - List Tasks: fetches a user's tasks from Appian
  - Create Task: submits data for a new task in Appian (title, description, hours, cost)

How it works
1. A user sends a chat message
2. The workflow normalizes fields such as text, username, and sessionId
3. The AI Agent processes the message using Ollama and Postgres memory
4. If the user asks about tasks, the agent calls the Appian APIs
5. The result, either a task list or confirmation of the new task, is returned through the webhook

Why this is useful
- Demonstrates how to build a basic Appian connector in n8n with an AI chat front end
- Shows how an LLM can decide when to call Appian APIs to list or create tasks
- Provides a pattern that can be extended with more Appian endpoints, different models, or custom system prompts
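The normalization step above can be sketched as a small function that maps either payload shape (chat trigger or plain webhook) onto the fields the agent expects. The exact field names a chat trigger emits are an assumption here, as is the fallback behavior when a sessionId is missing:

```python
import uuid

def normalize_chat_input(payload: dict) -> dict:
    """Map a webhook or chat-trigger payload onto {text, username, sessionId}.

    Assumed input shapes (illustrative, not the template's exact schema):
      chat trigger: {"chatInput": "...", "sessionId": "..."}
      webhook:      {"text": "...", "username": "...", "sessionId": "..."}
    """
    text = payload.get("chatInput") or payload.get("text") or ""
    username = payload.get("username") or "anonymous"
    # Postgres memory is keyed by sessionId; mint one if the caller omitted it
    session_id = payload.get("sessionId") or str(uuid.uuid4())
    return {"text": text.strip(), "username": username, "sessionId": session_id}
```

With a stable sessionId, every turn of the conversation reads and writes the same Postgres-backed history row, which is what lets the agent remember earlier messages.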