
Build a Dynamic Interface with AI

Updated by Abby

When users face a chat interface, they want speed and flexibility—pre-filled AI buttons for quick replies, or the freedom to type their own response.

Here's how to implement a system that combines AI-powered quick-reply buttons with optional text input to maximize engagement.

We recommend first downloading the template, testing it, and then using this article as a reference to customize it to your needs. You can find the template here

The Flow

If you've previously used any ChatGPT workaround in Landbot, this loop-driven flow should look somewhat familiar. If this is your first time, don't panic! We'll break it down.

How It Works: hybrid input. Users respond via AI-generated buttons (fast) or free text (flexible).

Both inputs feed into the same AI loop, pushing responses to a shared 'container' (object).

Starting point

As always, we'll start with a user input. Here we've used a button, but you could use a Text Input block instead.

We're also going to include the fixed text in the AI prompt, so it knows that the interaction has already begun.

Let's set some fields

  1. Your OpenAI API key - this will be saved in a string called api_key
  2. The message object - this will be saved in an array called message_object

    This is a set of formatted instructions that we send to OpenAI.
    Here we include the prompt, the user input and the chat history. While we recommend keeping the same numbered format (1. 2. etc.), the instructions are completely up to you! To populate the buttons correctly, the options should remain in the format ['first', 'second', 'etc']. You can have as many buttons as you want per question, or no buttons at all (free text).
    [ { "role": "system", "content": "Your role is to qualify leads for a project implementation service. Collect the following information in THIS ORDER, one question at a time. ONLY proceed to the next question after receiving an answer to the current one. NEVER repeat a question that's already been answered.\n\nQuestions to ask (with options where applicable):\n1. Budget Range: What is your budget range for this project? ['1000€ - 5000€', '5001€ - 10000€', 'Over 10000€']\n2. Project Details: Could you share some details about the project? (free text)\n3. Project Timeline: What's your expected timeline to start the project? ['Within 1 month', '1-3 months', 'More than 3 months']\n4. Company Size: How many employees does your company have? ['1-10', '11-50', '51+']\n5. Interest Level: How interested are you in our solutions? ['Just exploring', 'Interested and seeking more info', 'Ready to purchase']\n6. Decision-Making Role: What is your role in the decision-making process? ['Researcher', 'Influencer', 'Final decision-maker']\n7. Call Availability: Are you available for a call? ['yes','no']\n\nRULES:\n- Check chat history to see which questions have already been answered\n- Only ask the NEXT unanswered question in the sequence\n- After all questions are answered reply with: <span>exit</span> I have all the information I need, goodbye!\n- Never repeat questions\n- Never ask multiple questions at once\n- Don't provide support but answer simple questions, only collect information, you have asked if the user is ready and they have responded with - lets go!" }, { "role": "user", "content": "@user_text" } ]

Keep in mind that only the highlighted text (the prompt itself) should be changed, and no double quotes are allowed inside it!

  3. The functions object - this will be saved in an array called functions_object

    The functions object is a secondary set of instructions that tells OpenAI to return the information from the prompt in a very specific format (one that we can use to create buttons). We're also telling it that the first item it returns should be the response and/or question, and that the rest of the items should be buttons.
    [ { "name": "buttons", "description": "If the possible answers are already provided, return them in an array format also return the text of the question or response in content.", "parameters": { "type": "object", "properties": { "Arr": { "type": "array", "description": "Return possible answers in an array, with the question and response to last user input being the first item of the array. be friendly and always respond to previous user input, the rest of the items of the array should be the possible answers if its not an open question", "items": { "type": "string" } } }, "required": ["Arr"] } }]

This block should not be edited

Connecting to ChatGPT

Now we can connect to ChatGPT! Here's the endpoint (it's a POST request): https://api.openai.com/v1/chat/completions

Here are the headers:
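
In case the screenshot doesn't come through, these are the two standard headers for the OpenAI Chat Completions endpoint (the Authorization value uses the api_key field we saved earlier):

Authorization: Bearer @api_key
Content-Type: application/json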

We'll need to customize the body adding the following:

{
"model": "gpt-4.1",
"messages": "@message_object",
"functions": "@functions_object",
"function_call": {"name": "buttons"}
}

This is what it will look like:

If you're creating the bot from scratch you will need to 'test the request' including the fixed values for both the message and functions objects (you can copy and paste the ones from earlier)

Otherwise you just need to click 'create' in the response section:

It's very important to keep the original formats (gpt_content = string, gpt_role_message_object & call = array)
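
Just for reference (you don't need to add this anywhere in Landbot), here's a rough JavaScript sketch of what the Webhook block ends up doing once the variables are substituted, and where the data we use later comes back in the response:

// Rough sketch of the request the Webhook block makes, for reference only.
// api_key, message_object and functions_object are the fields we set earlier.
async function askOpenAI(api_key, message_object, functions_object) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": "Bearer " + api_key,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-4.1",
      messages: message_object,
      functions: functions_object,
      function_call: { name: "buttons" }
    })
  });
  const data = await response.json();
  // The function call arguments (what we save as 'call') come back in
  // data.choices[0].message.function_call.arguments
  return data;
}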

Logic and technical blocks

Now that we have our data, we need to run it through some blocks before we can display it.

  1. Block 1: Extract question/response with Formulas
    Remember when we told ChatGPT in the prompt that the first item of the array should always be the response or question? Here's how we can access that response: GetValue(GetValue(ToJSON(@call),'Arr'),0)
    We're converting the response into a valid JSON object and then extracting the first item. The output should be saved as a string called gpt_content
  2. Block 2: Check if the flow is complete, and if not, continue to Block 3
    We instructed ChatGPT to respond with exit OR goodbye when all of the information is collected, so with a Conditions block we're checking if gpt_content contains either keyword
    If all of the information is collected it will go through the green output (see flow complete), otherwise it will go through the red output (see flow not complete)
    Case: Flow complete:
    1. Remove text input element with a Code block (not Code Set!)
      We'll use the following Javascript to remove the text input:
      let landbotScope = this;
      landbotScope.window.document.getElementsByClassName("input-container")[0].style="display:none"
    2. Say goodbye
  3. Block 3: Extract buttons
    If there is still information to collect, we'll use a Formulas block to extract the buttons
    Slice(GetValue(ToJSON(@call),'Arr'),1)
    This formula gets all of the elements of the array after the first item (the question). The output should be saved as an array called json_array (see the JavaScript sketch after this list for what both formulas are doing)
  4. Block 4: Display text input element
    The text input element is currently hidden (we don't want it to be visible before they even interact with the bot), so let's display it now by changing display: none to display: flex
    This will also check whether the user is on desktop or mobile: if the viewport is narrower than 767px, it will show a send button inside the text input
    You'll need a Code block (not Code Set!) with the following JS:
    let landbotScope = this;

    // Function to check if the device is mobile
    function isMobile() {
      return window.matchMedia("(max-width: 767px)").matches;
    }

    // Show the send button only on mobile
    landbotScope.window.document.getElementsByClassName("send-button")[0].style.display =
      isMobile() ? "flex" : "none";

    // Show the input container in all cases
    landbotScope.window.document.getElementsByClassName("input-container")[0].style.display = "flex";
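
If you prefer to read those two Formulas as plain JavaScript, this is roughly what they do with the function call arguments returned by OpenAI (a sketch for understanding only, nothing you need to add to the bot):

// 'call' holds the function call arguments returned by OpenAI, for example:
const call = '{"Arr": ["What is your budget range for this project?", "1000€ - 5000€", "5001€ - 10000€", "Over 10000€"]}';

const parsed = JSON.parse(call);        // ToJSON(@call)
const gpt_content = parsed.Arr[0];      // GetValue(GetValue(ToJSON(@call),'Arr'),0)
const json_array = parsed.Arr.slice(1); // Slice(GetValue(ToJSON(@call),'Arr'),1)

// gpt_content -> the question or response to display
// json_array  -> the button labels (an empty array for free-text questions)
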
Displaying the buttons

Now we can (finally) display our buttons! (or just the question if there are no buttons)

We'll need a Dynamic Data block, the text displayed will be the question/response we extracted (gpt_content):

In 'Select Array to iterate' we'll select json_array, which is an array of strings. We'll show the data as buttons and save the user selection as user_text

Text input flow

Now, what happens if the user types a message rather than selecting a button? Without getting into too much detail, a global keyword called _input_ will be triggered

When this triggers, the Dynamic Data block will disappear and we'll need to save the user's input before looping back into the flow

So, in order to save the user's input, we'll need a Code Set (not Code!) block with the following JS:

// art_input is set by the custom input element in the Custom Code section below
return window.art_input;

Important! We need to save the output of this block as the variable user_text

Looping back

Now we can loop back to the AI! In order to have a continuous conversation, we'll need to push our responses into a 'container' (object) using a Set a field block and a Formulas block

We'll connect both the Dynamic Data block and the Code Set block to the Set a field block

In the Set a field block we'll add the following: {"role": "user", "content": "@user_text"}

The output should be saved as an array called user_role_message_object

Now we can push that into our container with a Formulas block

Here's the formula: Push(Push(@message_object, @gpt_role_message_object), @user_role_message_object)

The output is the message_object array
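
To make the loop concrete, here's an illustrative sketch of what message_object looks like after one round trip. It assumes gpt_role_message_object holds the assistant's last reply (created from the webhook response) and user_role_message_object holds the latest answer; the exact contents will vary:

// Illustrative only, the real contents depend on the conversation
const message_object = [
  { role: "system", content: "Your role is to qualify leads... (the full prompt from earlier)" },
  { role: "user", content: "lets go!" },
  { role: "assistant", content: "Great! What is your budget range for this project?" }, // gpt_role_message_object
  { role: "user", content: "1000€ - 5000€" }                                            // user_role_message_object
];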

We'll link this back to the webhook block!

Custom Code

The custom code should not be customized; you just need to copy and paste it!

You can find this section in the Design section of your bot in 'Custom code'

The CSS

In the Add CSS section paste the following:

/* Hide Timestamp */
.MessageDate {
  display: none;
}

/* Input Container (includes button in mobile) */
.input-container {
  display: none;
  position: fixed;
  bottom: 1px;
  left: 0;
  right: 0;
  z-index: 9999999;
  width: 100%;
  max-width: 100%;
  background: #ffffff;
  border-top: 1px solid #e0e0e0;
  box-shadow: 0 -2px 10px rgba(0, 0, 0, 0.05);
  box-sizing: border-box;
  align-items: center;
}

/* Input Element */
.input_dynamic {
  flex: 1;
  height: 80px;
  padding: 12px 16px;
  padding-right: 8px; /* Make room for button */
  font-size: 16px;
  border: none;
  background: transparent;
  outline: none;
  box-sizing: border-box;
  transition: all 0.3s ease;
  text-align: center;
}

/* Top border and color change on input focus */
.input-container:focus-within {
  border-top-color: #FF3E7B;
  background: #f8f9fa;
}

/* Input Styling for Embed */
.LandbotLivechat .frame-content .input-container {
  transform: scale(0.94);
}
.LandbotContainer .frame-content .input-container {
  transform: scale(.998);
}
.LandbotPopup .frame-content .input-container {
  transform: scale(0.98);
}

/* Extra margin between buttons and input for mobile users */
@media (max-width: 767px) {
  .input-buttons {
    margin-bottom: 140px !important;
  }
}

/* Button for mobile users */
.send-button {
  background: none;
  border: none;
  cursor: pointer;
  padding: 0 16px 0 8px;
  height: 100%;
  display: none;
  align-items: center;
  justify-content: center;
}

.send-button svg {
  width: 24px;
  height: 24px;
  transition: transform 0.2s ease;
}

.send-button:active svg {
  transform: scale(0.9);
}

/* Hide button when input is hidden */
.input_dynamic:not([style*="display: none"]) + .send-button {
  display: flex;
}

The JS

In the Add JS section paste the following:

<script>
  let landbotScope = this;
  window.art_input;

  landbotScope.onLoad(function () {
    // Expose a helper that stores the typed text and triggers the _input_ keyword
    landbotScope.window.setInput = function (u_i) {
      window.art_input = u_i;
      landbotScope.sendMessage({
        type: 'button',
        message: u_i,
        payload: '_input_',
      });
    }
  });
</script>
<div class="input-container">
  <input class="input_dynamic" type="text" autocomplete="off" id="input_dynamic" placeholder="Type a message.."
    onkeypress="if(event.keyCode === 13){ setInput(this.value); this.value = ''; this.blur(); }"/>
  <button class="send-button"
    onclick="setInput(document.getElementById('input_dynamic').value); document.getElementById('input_dynamic').value = '';">
    <svg viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
      <path d="M22 2L11 13" stroke="#FF3E7B" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
      <path d="M22 2L15 22L11 13L2 9L22 2Z" stroke="#FF3E7B" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
    </svg>
  </button>
</div>

Extra: Saving response as JSON object

After we say goodbye, we can ask ChatGPT to store our data in a JSON object in order to send it to our database

Here's how:

We'll connect the goodbye message to a Set a field block with the following:

 {"role": "user", "content": "provide all information collected in json format to be sent to a database with no extra text"}

This will be saved in the array user_role_message_object

Next, in a Formulas block, we'll push this to our container using this formula: Push(Push(@message_object, @gpt_role_message_object), @user_role_message_object)

This should be stored as message_object

Next we'll connect to another Set a field to provide further instructions on how to format the data:

 [ { "name": "buttons", "description": "provide all of the information collected in json format", "parameters": { "type": "object", "properties": { "Responses": { "type": "object", "description": "return an object with keys for each question and the user's response as the value", "properties": { "Budget Range": { "type": "string" }, "Project Details": { "type": "string" }, "Project Timeline": { "type": "string" }, "Company Size": { "type": "string" }, "Interest Level": { "type": "string" }, "Decision-Making Role": { "type": "string" }, "Call Availability": { "type": "string" } }, "required": [ "Budget Range", "Project Details", "Project Timeline", "Company Size", "Interest Level", "Decision-Making Role", "Call Availability" ] } }, "required": ["Responses"] } } ]

This should be stored as the array functions_object

Now we can connect to our webhook with the same setup as before (just copy and paste the previous webhook)

Lastly, we'll extract the object and make sure it's valid JSON. We'll do that with a Formula:

GetValue(ToJSON(@call), 'Responses')

You can choose the variable name here, but it should be saved as an array
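
With illustrative values, the extracted object would look something like this:

{
  "Budget Range": "1000€ - 5000€",
  "Project Details": "We need help implementing a chatbot for lead qualification",
  "Project Timeline": "Within 1 month",
  "Company Size": "11-50",
  "Interest Level": "Ready to purchase",
  "Decision-Making Role": "Final decision-maker",
  "Call Availability": "yes"
}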

That's it! Good luck

Changing variable names could result in unwanted consequences!!
