Chris Carr


Freelance workflow and AI developer based in Cardiff, Wales. I work with SMEs to automate processes and eliminate human error.

Total Views: 25,023
Templates: 3

Templates by Chris Carr

Allow users to send a sequence of messages to an AI agent in Telegram

Use Case
When creating chatbots that interface through applications such as Telegram and WhatsApp, users often send multiple shorter messages in quick succession instead of a single, longer message. This workflow accounts for that behaviour.

What it Does
This workflow allows users to send several messages in quick succession, treating them as one coherent conversation instead of separate messages requiring individual responses.

How it Works
- When messages arrive, they are stored in a Supabase PostgreSQL table
- The system waits briefly to see if additional messages arrive
- If no new messages arrive within the waiting period, all queued messages are: combined and processed as a single conversation, responded to with one unified reply, and deleted from the queue

Setup
- Create a table in Supabase called messagequeue. It needs to have the following columns: userid (uint8), message (text), and message_id (uint8)
- Add your Telegram, Supabase, OpenAI, and PostgreSQL credentials
- Activate the workflow and test by sending multiple messages to the Telegram bot in one go
- Wait ten seconds, after which you will receive a single reply to all of your messages

How to Modify it to Your Needs
- Change the value of Wait Amount in the Wait 10 Seconds node to modify the buffering window
- Add a System Message to the AI Agent to tailor it to your specific use case
- Replace the OpenAI sub-node to use a different language model
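The queue-and-wait logic above can be sketched as a simple debounce in Python. This is a minimal in-memory stand-in for the Supabase messagequeue table, not the workflow itself; the column names (userid, message, message_id) follow the setup, while the function names are hypothetical:

```python
WAIT_SECONDS = 10  # mirrors the Wait 10 Seconds node's Wait Amount

# In-memory stand-in for the Supabase "messagequeue" table
message_queue = []  # rows: {"userid": ..., "message": ..., "message_id": ...}

def enqueue(userid, message, message_id):
    """Store an incoming Telegram message in the queue."""
    message_queue.append(
        {"userid": userid, "message": message, "message_id": message_id}
    )

def flush_if_idle(userid, latest_id):
    """Called after the wait: reply only if no newer message arrived.

    Returns the combined conversation text, or None if a newer message
    restarted the buffering window (that run will reply instead).
    """
    rows = [r for r in message_queue if r["userid"] == userid]
    if max(r["message_id"] for r in rows) != latest_id:
        return None  # newer message exists; abandon this run
    combined = "\n".join(r["message"] for r in rows)  # one coherent message
    for r in rows:
        message_queue.remove(r)  # delete the queued rows
    return combined
```

In the real workflow, the combined text would then be passed to the AI Agent to produce the single unified reply.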

By Chris Carr
13,921 views

Split test different agent prompts with Supabase and OpenAI

Use Case
Oftentimes, it's useful to test different settings for a large language model in production against various metrics. Split testing is a good method for doing this.

What it Does
This workflow randomly assigns chat sessions to one of two prompts: the baseline and the alternative. The agent uses the same prompt for all interactions in that chat session.

How it Works
- When a message arrives, a table recording each session ID and which prompt to use is checked to see if the chat session already exists
- If it does not, the session ID is added to the table and a prompt is randomly assigned
- These values are then used to generate a response

Setup
- Create a table in Supabase called splittestsessions. It needs to have the following columns: sessionid (text) and showalternative (bool)
- Add your Supabase, OpenAI, and PostgreSQL credentials
- Modify the Define Path Values node to set the baseline and alternative prompt values
- Activate the workflow and test by sending messages through n8n's inbuilt chat
- Experiment with different chat sessions to see both prompts in action

Next Steps
- Modify the workflow to test different LLM settings, such as temperature
- Add a method to measure the efficacy of the two prompts
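The per-session assignment above can be sketched as follows. A dict stands in for the Supabase splittestsessions table (sessionid → showalternative), and the two prompt strings are hypothetical placeholders for the values you would set in the Define Path Values node:

```python
import random

# In-memory stand-in for the Supabase "splittestsessions" table
split_test_sessions = {}  # sessionid (text) -> showalternative (bool)

# Hypothetical prompts; set the real ones in the Define Path Values node
BASELINE_PROMPT = "You are a helpful assistant."
ALTERNATIVE_PROMPT = "You are a concise, friendly assistant."

def prompt_for_session(sessionid):
    """Look up the session; if it is new, randomly assign a prompt variant.

    Once assigned, the same prompt is reused for every message in the
    session, so each chat stays in one arm of the split test.
    """
    if sessionid not in split_test_sessions:
        split_test_sessions[sessionid] = random.random() < 0.5  # 50/50 split
    return ALTERNATIVE_PROMPT if split_test_sessions[sessionid] else BASELINE_PROMPT
```

The key design point is that randomisation happens once per session, not per message; otherwise a single conversation could mix both prompts and contaminate the comparison.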

By Chris Carr
8,051 views

Avoid Asking Redundant Questions with Dynamically Generated Forms using OpenAI

Target Audience
This workflow has been built for those who need a form that captures as much data as possible, along with the answers to predefined questions, whilst optimising the user experience by avoiding redundant questions.

Use Case
When creating a form to capture information, it can be useful to give the user an opportunity to write a long answer to a large, open-ended question, and then drill down into the specific questions we still need answered. When doing this, we don't want to ask duplicate questions. This particular scenario imagines an AI consultancy capturing leads.

What it Does
This workflow requires users to input basic information and then answer an open-ended question. The specific questions on the next page will only be those that weren't answered in the open-ended question.

How it Works
- The open-ended question (and relevant basic information) is analysed by an LLM to determine which specific questions have not been answered. Chain-of-thought reasoning is utilised, and the output structure is specified with the Structured Output Parser
- Questions that have already been answered are filtered out
- The remaining items are then used to generate the last page of the form
- Once the user has filled in the final page of the form, they are shown a form completion page

Setup
- Add your OpenAI credentials
- Go to the Get Basic Information node and click Test Step
- Complete the form to test the generic use case
- Modify the prompt in Analyse Response to fit your use case

Next Steps
- Add additional nodes to send an email to the form owner
- Add a subsequent LLM call to analyse the form response; qualified leads should be given the opportunity to book an appointment
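The filtering step above can be sketched in a few lines. The question list is a hypothetical example for the AI-consultancy scenario, and the `answered` argument stands in for the Structured Output Parser's result (the subset of predefined questions the LLM judged already covered by the open-ended response):

```python
# Hypothetical predefined questions for the final form page
SPECIFIC_QUESTIONS = [
    "What is your budget?",
    "What is your timeline?",
    "Which processes would you like to automate?",
]

def unanswered_questions(answered):
    """Keep only the questions not yet answered in the open-ended response.

    `answered` mimics the Structured Output Parser output: the predefined
    questions the LLM determined were already covered. The remainder is
    used to generate the last page of the form.
    """
    covered = set(answered)
    return [q for q in SPECIFIC_QUESTIONS if q not in covered]
```

Iterating over the predefined list (rather than the LLM output) preserves the original question order and ignores any hallucinated questions the model might return.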

By Chris Carr
3,051 views