Fine-Tune GPT-3 with Postman

Updated by Abby

In this tutorial we'll explain how you can fine-tune your GPT-3 model using only Postman

Keep in mind that OpenAI charges for fine-tuning, so you'll need to be aware of how many tokens you're willing to spend; you can check out their pricing page for details

In this example we'll train the Davinci model. You can train a different (cheaper) model if you'd like, but for our chatbot use case, Davinci is the best option

Creating the training data

In order to train the model to our needs, we'll need to create data to feed it; the data should be a collection of prompt-completion pairs (essentially key-value pairs)

The prompt is the question and the completion is the answer we want the bot to respond with

Ideally it should have at least a couple hundred prompt-completion pairs; with fewer than 100 it won't work (hint: use GPT to create prompts and completions based on existing documentation)

The data needs to follow this format: prompt: "what are your hours?" completion: "9am-5pm". The prompts and completions should also have stopping points (stop sequences), fixed markers that tell the model where a completion ends; see OpenAI's documentation on preparing training data for more information

Here's a simplified example of what that would look like with Google Sheets:
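
prompt                      completion
what are your hours?        9am-5pm
where are you located?      123 Example Street

(the second row is just a made-up illustration to show the two-column layout; your sheet would hold your own prompt-completion pairs)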

It can be written in any common file format (a spreadsheet or CSV works well), as long as you can convert it to JSONL afterwards

Converting the data

OpenAI requires us to upload the data in JSONL format, so we'll use an online conversion tool to generate a JSONL file that we'll save and upload via Postman

You'll need to remove any stray line breaks (each record must sit on a single line), or you'll get an error response when sending the file via Postman

Here's an idea of what the simplified example will look like once converted into JSONL format:
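
{"prompt": "what are your hours?", "completion": "9am-5pm\n"}
{"prompt": "where are you located?", "completion": "123 Example Street\n"}

Each record sits on a single line, and the "\n" at the end of each completion is the stopping point mentioned earlier (it matches the "stop": "\n" we'll pass when testing the model later); the second pair is the same made-up example from the sheet above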

Setting up Postman

Now that we have our JSONL file, let's go to Postman and create an account

We'll then create a new workspace

Now we'll click to create a new HTTP request

Uploading the data

Now that we have an HTTP request ready, we should select 'POST' from the drop-down

We'll make a call to the following endpoint: https://api.openai.com/v1/files

In the header we'll add our OpenAI API key

Key: Authorization
Value: Bearer XXAPIXXKEYXX

Now, let's go to 'Body'

In the body, we'll select 'form-data'

There will be two keys: the first key is 'purpose' with the value 'fine-tune'

The second key is 'file', and for its value we'll upload our generated JSONL file

Let's press 'Send'

It should return a training file ID; we'll copy this, as we'll need it for our next request
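
For reference, a trimmed-down version of that response looks roughly like this (the ID and filename below are placeholders, and the real response contains more fields):

{
  "id": "file-XXXXXXXXXXX",
  "object": "file",
  "purpose": "fine-tune",
  "filename": "training_data.jsonl"
}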

Fine-Tuning

To fine-tune our model, we'll send the file ID that we received from the previous request and specify the model we want to tune

While there are economic advantages to training Curie, it's not the best model for a chatbot use case
In this case I've selected Davinci; keep in mind that base davinci is less capable than text-davinci out of the box, but once it's trained with a few hundred examples it becomes much more adept at answering questions for our use case
You can check OpenAI's documentation on its models for further clarity

Now we need a new POST request; we'll call the following endpoint: https://api.openai.com/v1/fine-tunes

The headers will be the same as in the previous request

In the body we'll add the file ID that was returned to us in the previous response, and we'll also specify which model we want to train:

{ "training_file": "file-XXXXXXXXXXX", "model": "davinci"}

Here's what our response should look like:
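
Something along these lines, trimmed to the relevant fields (the real response contains more data and your IDs will differ):

{
  "id": "ft-XXXXXXXXXXX",
  "object": "fine-tune",
  "model": "davinci",
  "status": "pending"
}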

You'll notice the status is 'pending'; it can take several hours before the job is resolved

Finished product

Once the job has been resolved, we'll see the name of our fine-tuned model
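
One way to check is to retrieve the job with a GET request to https://api.openai.com/v1/fine-tunes/ft-XXXXXXXXXXX (using the ID from the previous response and the same Authorization header); this sketch shows roughly what the relevant part of the finished job looks like, with the real response containing more fields:

{
  "id": "ft-XXXXXXXXXXX",
  "object": "fine-tune",
  "model": "davinci",
  "fine_tuned_model": "davinci:ft-your-personal-model-here",
  "status": "succeeded"
}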

Now we can test it with the following endpoint: https://api.openai.com/v1/completions

In the body we want to specify the model, prompt, max_tokens and temperature, as well as the stop sequence:

{ "model": "davinci:ft-your-personal-model-here", "prompt": "When are you open?", "temperature": 0, "max_tokens": 20, "stop":"\n"}

The response:
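
An illustrative response, trimmed to the interesting part (the exact wording depends on your training data; here it returns the answer we trained for the hours question):

{
  "object": "text_completion",
  "model": "davinci:ft-your-personal-model-here",
  "choices": [
    {
      "text": "9am-5pm",
      "index": 0,
      "finish_reason": "stop"
    }
  ]
}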

Now that it's working we can add this model directly to our bot
