
Open AI block Overview

Updated by Desirée M

Available starting from the Professional Plan.

The Open AI block makes it easier than ever to connect your bot with Large Language Models (LLMs). It’s a major improvement for users who previously relied on the Webhook block to connect to OpenAI, offering more customization options and a much simpler way to build, test, and iterate.

Please remember that this block is primarily an operational tool. If your goal is to provide users with a conversational AI experience, the best approach is to use our native AI Agent feature.

To use the Open AI block, you first collect the user’s input in fields. Then you tell the AI how to use that input, and finally you take its output to share with the user or reuse elsewhere in your bot.

⚙️ Add AI Model

If you don't have an API Key, you can use the Default option, and your chats will be counted against your AI chats allowance:

If you have, for example, an Open AI block and an AI Agent within the same agent/flow, only one AI chat will be charged.

If you prefer to use your own API Key, you can add it here, select one of the available models, and reuse it anytime without setting it up again.

API Key name: If you’re using more than one API Key, give each one a clear name so you can easily identify and select it later.

API Key: Enter your API key here and press Verify to confirm it’s valid.

Model: Choose one of the available models from the list.

If your API Key is valid and a model is selected, you'll see a confirmation like this:

🧩 Templates

To help you get started faster, you’ll find a set of ready-made templates for common use cases.

Each one includes a pre-filled configuration that you can adapt to your needs, saving you time while giving you a solid starting point.

💬 Prompt Setup

The Prompt section is where you define what the AI should do and what input it will receive. It’s divided into two parts: System and User.

System

This is where you write the instructions that guide the LLM’s behaviour: what it should do, how it should respond, and any specific rules it must follow.

You can expand the text area to make it easier to work with longer prompts, and even use saved fields from your bot to personalize the instructions dynamically.

User

This is the end user’s input: the message, text, or data that the model should process. For example, if you’re building a text summarizer, this is where you’ll add the field containing the user’s text to be summarized. You will use saved fields from your bot to pass the right data.
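Under the hood, this two-part prompt maps onto the standard system/user message pattern used by chat LLM APIs. As a rough illustration only (the block does all of this wiring for you, and the function name and model name below are placeholders, not the block's actual internals), the request could be assembled like this:

```python
def build_chat_payload(system_prompt: str, user_input: str,
                       model: str = "gpt-4o-mini") -> dict:
    """Assemble a chat-completion style request body.

    The System text becomes the "system" message (rules and behaviour),
    and the saved field passed as User becomes the "user" message
    (the data to process).
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }

# Example: a text summarizer, where the user message would come from a
# saved field such as @user_text.
payload = build_chat_payload(
    "Summarize the user's text in one short sentence.",
    "Hi, I ordered last week and I still have no tracking number...",
)
```

Keeping the instructions in System and the data in User, as the block encourages, makes prompts easier to reuse: the rules stay fixed while the field value changes with every chat.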

🧪 Test

Before saving, it’s important to test your setup.

Here you can enter test values and preview how the model responds.

You can iterate and fine-tune your prompt until you’re happy with the results, or even add multiple test values to compare outputs in one go.

Once you've run a test, you'll see its output right below:

💾 Store Data

Finally, you can choose to store the model’s output inside your bot.

Just create or select a text-type field to save the result.

Please note that only text fields are supported at the moment.

Use cases

Handle flexible user input

Some flows require users to type freely instead of clicking buttons. Free-text input can be unpredictable, since users may include typos, extra words, or unexpected phrasing. Keyword blocks alone often aren’t enough in these situations, because they rely on exact matches. Combining them with the Open AI block, though, can be truly powerful.

The OpenAI Block can act as a smart interpretation layer. It reads the user’s message, understands the intent, and outputs a structured value that your agent can use, even if the input contains mistakes or variations.

Example: Normalizing typos

Imagine you are building a travel assistant agent that asks users for their destination city. Users might type things like:

• “Looking for hotels in barcalona next weekend”

• “I need flights to londan tomorrow”

• “Going to nyc on Friday”

Using a Keyword Jump block here would be difficult, because there are countless possible city names and spelling mistakes. If you're planning on using conditions, you'd also have to deal with case-sensitivity issues.

How to solve it with the OpenAI Block? In this example, the flow could look like this:

1️⃣ User Input block: Collect the city from the user and store it in a field (e.g., @user_city_raw).

2️⃣ OpenAI Block: Use a prompt like:

“Read the user input (@user_city_raw) and return only the intended city name in English, properly capitalized. Correct any typos, ignore extra words, and if multiple cities match, pick the most internationally recognized or largest one. Store the result as normalized_city.”

3️⃣ Next step: Use the normalized_city field in a Condition or Keyword Jump block to confirm the city, and continue the flow.

This ensures that even messy input like “barcalona” or “londan” is correctly recognized as Barcelona or London.

Other possible uses

The same approach works whenever users type open-ended or unpredictable input, such as:

Product names: Correct misspelled product models (“ipone 14” → “iPhone 14”).

Company names: Recognize typos in brands (“microsfot” → “Microsoft”).

Landmarks or addresses: Normalize location input for booking or directions.

Email domains or usernames: Correct common typing mistakes (“gmial.com” → “gmail.com”).

In general, any time you want the agent to understand user intent from messy or flexible text, the OpenAI Block can convert it into structured, reliable variables your flow can use.

This article is still being updated, and we’d love to hear your thoughts, both about the content and the OpenAI block itself, which is also evolving with every iteration.
If you’d like to share feedback directly with our product team, click here: Open AI feedback
