Build a Customer Service Bot with ChatGPT and Extract Information

Updated by Abby

Exciting news for AI enthusiasts and developers! As of March 1st, 2023, the ChatGPT API is available to the public, and it includes the latest available version of the impressive language model. While we're already looking forward to the upcoming release of GPT-4, for now gpt-3.5-turbo is generating a lot of interest. This version not only delivers results comparable to the most powerful GPT-3 model (text-davinci-003), but it does so at a tenth of the cost, making it an incredibly cost-effective way to incorporate AI language processing into your Landbots. Moreover, you can provide ChatGPT with the context of previous messages to enhance the chatting experience and have a better conversation.

In this post we will see how you can create a Landbot connected to the OpenAI API that can hold a free and natural conversation with the user and retrieve the information you need. We will imagine that you are a food delivery company that needs a customer support bot to attend to customers with delivery incidents. The bot's only objective is to find out what the incident is about, the order number, and the user's email address, and to finish the conversation once it accomplishes its goal. To do so, we will need to adapt the input we send to the OpenAI API to the format it expects to receive. Let's see how to do that.

If you'd like, you can skip ahead and download this template. Or, if you have a WhatsApp bot, or want to use this in an existing flow, you can import our brick:

⚠️ Disclaimer: Large language models (LLMs) such as GPT-3 have well-known limitations. Due to their nature and how they are built, LLMs can generate inaccurate or inappropriate information. It is recommended to thoroughly test your bot and verify its outputs before implementing it in a business context. Be sure you are aware of all the risks before using this template, and use it at your own risk.

Let's take a look at the template. Here's a basic overview of what the builder looks like:

Fortunately, the downloaded template requires minimal modifications on our end. Still, we will go through each block individually to understand its function. To begin, we'll focus on the first half of the flow:

After the Starting point block, we'll add a text question asking how we can help, and we'll save the answer of the user in a variable called @user_text.

We'll connect this text block to a Set Variable block with the following:

[{"role": "system", "content": "your role is to collect complaint information for a food delivery company, the user is writing to us because something went wrong you need to act as an information collector and find out what went wrong, make sure to get their order number and email address, your role is not to help find a solution, only to document what went wrong to then report it, once the information is collected reply with I have all the information I need and say goodbye, rules: only ask one question at a time, and don't provide support you are only an information collector"},{"role": "user", "content": "@user_text"}]

We'll save this in an array type variable called @message_object. This is the format in which the API expects to receive the input. As you can see, this array has two elements: the first one ({"role": "system", "content": "..."}) sets the context we provide to ChatGPT, and the second one ({"role": "user", "content": "@user_text"}) is the first message from our user, which we previously stored.

Notice that defining the role as "system" is how we tell the model that the content of that message must be used as the context we want to provide. This is where we set the behaviour of the AI and the instructions we want it to follow. Keep in mind that the better and clearer the instructions we give, the better the results we will obtain. For example, in our case, stating "your role is not to help find a solution" is how we try to limit hallucinations in its responses and force the model to focus on its goal.

Here's where we can change our use case. In this example, we're building a bot for a food delivery company and saving the important information to later pass to our database. If you want a use case that better fits your needs, you just have to set a different system content, as in the sketch below.
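For instance, a hypothetical variant for an IT helpdesk could replace the Set Variable content with something along these lines (the prompt wording here is only an illustration, not part of the template; adapt it to your own use case):

[{"role": "system", "content": "your role is to collect support ticket information for an IT helpdesk, find out what problem the user is having, make sure to get their employee ID and email address, do not try to solve the problem, once the information is collected reply with I have all the information I need and say goodbye, rules: only ask one question at a time"},{"role": "user", "content": "@user_text"}]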

Next, we'll add our OpenAI API key; you can find it here if you already have an account.

In the next Webhook block, we're sending the information to ChatGPT and saving the response:

The variable @gpt_content is where we will store the text response we want to show to our user. We’re also saving a variable @gpt_role_message_object which has the following format: `{"role":"assistant", "content": "..."}`. This will make our lives easier in a moment.
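For reference, this Webhook block makes a POST request to OpenAI's chat completions endpoint, https://api.openai.com/v1/chat/completions, passing your API key in an Authorization: Bearer header. Assuming the standard setup, the request body looks roughly like this (@message_object is the variable we built above):

{
  "model": "gpt-3.5-turbo",
  "messages": @message_object
}

The relevant part of the API response looks like the sketch below, so @gpt_content is mapped from choices[0].message.content, and @gpt_role_message_object from the whole choices[0].message object:

{
  "choices": [
    {
      "message": {"role": "assistant", "content": "..."}
    }
  ]
}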

In the Question Text block, we want to display the response, so we'll add the @gpt_content variable and save the user's input in the variable @user_text.

We'll connect the Question Text to a Conditions block to check if @gpt_content contains any of the keywords that GPT will use to signal that it has all of the information.

In this case, we've instructed GPT to say 'I have all the information I need' and 'Goodbye' when it has all the required information, so we're checking if that's the case here.
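For example, a final assistant message that would satisfy this condition might look like this (hypothetical wording; the exact phrasing can vary from run to run, which is why checking for a keyword rather than the full sentence is safer):

{"role": "assistant", "content": "I have all the information I need. Goodbye!"}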

Case 1: Information is not complete

In the case that not all of the information has been collected, the flow will continue through the red output, where we'll create a loop.

The first block after the red output is a Set Variable block, where we'll format the new user input as we need it and save it in an array type variable called @user_role_message_object, along the lines of the sketch below.
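The template already fills this in, but assuming it follows the same pattern as the other message variables, the content of this Set Variable block would be something like:

{"role": "user", "content": "@user_text"}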

The next step is to push both the AI response (@gpt_role_message_object, already formatted) and the user's last message we just wrapped (@user_role_message_object) to the message object that we'll send to GPT:

Push(Push(@message_object, @gpt_role_message_object), @user_role_message_object)

By doing that, the @message_object variable will look like this:

[
{"role": "system", "content": "<context and rules>"},
{"role": "user", "content": "<first user message>"},
{"role": "assistant ", "content": "<first AI message>"},
{"role": "user", "content": "<second user message>"}, ...
]

That way we can keep the full context of the conversation.

Keep in mind that the model has a maximum context length (4,096 tokens for gpt-3.5-turbo); if the conversation exceeds it, the request may fail or the model may underperform, so the loop can't go on forever.

This then loops back to the webhook block.

Case 2: All information collected

If all the information has been collected correctly, we'll ask GPT to format it into a JSON object. This step is completely optional.

To do so, we'll add a Set Variable block with the following:

{"role": "user", "content": "extract all information collected from the previous conversation in json format to be sent to a database with no extra text. The json must follow this structure: {"incident": "", "order_number": "", "email": ""}.}

We'll save that as an array type variable called @user_role_message_object.

Next, using a formula, we'll push this to the array that contains all the conversation the user has had.

Push(Push(@message_object, @gpt_role_message_object), @user_role_message_object)

This will be saved as @message_object as before.

We're then sending that to GPT with another Webhook block. It will be identical to the previous Webhook block, so we can just duplicate the previous one and add it here.

If everything worked as expected the variable @gpt_content will now store the information retrieved from the conversation with the expected format.

Here's an example of the JSON object with the information collected during the conversation:
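(The values below are made up for illustration; your bot will fill them in from the actual conversation.)

{
  "incident": "The order arrived cold and one item was missing",
  "order_number": "48215",
  "email": "jane.doe@example.com"
}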
