Gustavo’s The Business Automator


How to Fine-Tune ChatGPT with Python

Gustavo De Felice's avatar
Gustavo De Felice
Aug 17, 2023
∙ Paid


Fine-tuning an AI, especially in the context of machine learning and deep learning, refers to the process of taking a pre-trained model and further training it on a smaller, often more specific dataset.

This is done to adapt the model to a particular task without training it from scratch.




Photo by h heyerlein on Unsplash


What is Fine-Tuning and how can we use it?

There are countless applications: virtually every domain has a valuable dataset that can be fine-tuned. It means letting the machine learn and then operate in a semi- or fully automated regime.

E-commerce Product Recommendations:

  • Original Task: A recommendation system trained using browsing patterns on a large online platform.

  • Fine-tuned Task: Adapt the model for a niche online store, fine-tuning it using the smaller shop's user browsing and purchasing data to provide better product recommendations.

Sentiment Analysis:

  • Original Task: A model trained on general text data, like Wikipedia articles or news stories, to understand sentence structures and grammar.

  • Fine-tuned Task: Use this model to classify customer reviews as positive, negative, or neutral. Fine-tuning can adapt the model to the nuances and specific vocabulary of reviews.

Autonomous Vehicles:

  • Original Task: A model trained on general driving data from various environments.

  • Fine-tuned Task: Adapt the model for specific conditions, like winter driving or navigating a particular city's streets.

Agriculture (Drone Imagery):

  • Original Task: A model trained to identify objects in standard photos.

  • Fine-tuned Task: Adapt the model to process drone-captured images of farmland to detect signs of drought, pest damage, or other agricultural issues.

Face Recognition:

  • Original Task: A neural network trained to recognize and categorize objects in photos.

  • Fine-tuned Task: Adapt the model to recognize individual human faces. This could be applied in security systems, photo tagging apps, or other identity verification tools.

Speech Recognition in Noisy Environments:

  • Original Task: A model trained to transcribe clean and clear spoken sentences.

  • Fine-tuned Task: Fine-tune the model using data collected in noisy environments, like crowded places or cars. This helps in improving voice assistants' accuracy in real-world noisy scenarios.

Something simpler, closer to our digital life

The truth is, we can also use fine-tuning for more everyday applications, close to our day-to-day work. For example:

Website Personalization:

  • Original Task: A model trained to predict user behaviours on a generic e-commerce website.

  • Fine-tuned Task: Adapt the model for a specialized online store, like one selling vintage clothes, to personalize the user experience based on browsing habits, previous purchases, and clicked items.

Chatbots for Sales and Tech Support:

  • Original Task: A general-purpose chatbot trained to understand and respond to common user queries.

  • Fine-tuned Task: Fine-tune the bot to handle queries specific to a certain software application, hardware product, or service, offering detailed tech support.

Digital Content Creation:

  • Original Task: A neural network trained to generate general art or music.

  • Fine-tuned Task: Fine-tune the model to generate content in a specific style, like 8-bit video game music or a particular artist's painting style.

Content Moderation for Specific Platforms:

  • Original Task: A model trained for general content moderation to detect and filter inappropriate text or images.

  • Fine-tuned Task: Adapt the model for a specific social media platform or online community where the definition of "inappropriate" might vary, or where there are platform-specific nuances.


Build a Fine-Tuning model

In this case, I used Python and Google Colaboratory.

Colaboratory, often referred to as "Colab," is a free, cloud-based platform provided by Google that allows you to write and execute Python code in a web browser.

It offers an environment similar to Jupyter Notebooks, which is a popular tool among data scientists and researchers for creating and sharing documents that contain live code, equations, visualizations, and narrative text.

Colab is often used to test and stage new applications; in our case, it will be used to connect to the OpenAI platform, set up our environment, and start a dialogue with the machine.

No Setup Required: You can run Python code directly in your browser without any setup.

Free Access to GPUs: Colab provides free GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) access, making it a popular choice for machine learning and deep learning experiments.

Integration with Google Drive: Colab is integrated with Google Drive. It allows you to save your work directly to your Google Drive, share your work with others, and access your notebook from any device.

Support for Various Libraries and Frameworks: Colab supports many popular machine learning libraries and frameworks such as TensorFlow, PyTorch, Keras, and OpenCV, making it perfect for machine learning tasks.

Interactive Tutorials: Google Colab is used by many organizations and individuals for creating interactive tutorials and guides. It's particularly popular in the machine learning community for this purpose.

External Data Integration: You can easily load data from various sources into Colab, including Google Drive, GitHub, and other external sources.

For the next steps, I assume you are already familiar with Colab.


First Step: Install OpenAI

As a first command, we need to install the OpenAI library and connect to the environment:

!pip install --upgrade openai

import os
import openai
openai.organization = "your organization name"
openai.api_key = "your api key"

Second Step: Import Libraries

import re
import json
import pandas as pd
import openai
import string
import requests
import random

  1. re:

    • Description: re stands for "regular expression". This module provides functions to work with regular expressions, which are a powerful way of searching, matching, and manipulating strings.

    • Common Uses: Text searching, data extraction, data validation, and string manipulation.

  2. json:

    • Description: json module allows you to encode and decode JSON format. JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write and easy for machines to parse and generate.

    • Common Uses: Reading and writing JSON files, parsing JSON responses from APIs, converting Python objects to JSON format and vice versa.

  3. pandas (as pd):

    • Description: pandas is a powerful data manipulation and analysis library. The library's primary data structures, DataFrame and Series, facilitate operations on structured data.

    • Common Uses: Data cleaning, transformation, analysis, and visualization. Reading/writing data from/to various file formats like CSV, Excel, SQL databases, and more.

  4. openai:

    • Description: openai is the official Python client for the OpenAI API. It allows developers to access models provided by OpenAI (like GPT-3) directly from Python.

    • Common Uses: Generating text, answering questions, creating summaries, and various other tasks that leverage OpenAI's models.

  5. string:

    • Description: The string module provides common string operations and properties. While Python already supports string operations natively, this module provides some additional utilities.

    • Common Uses: Defining string constants, string preprocessing, and using helper functions like string.capwords().

  6. requests:

    • Description: requests is a popular Python library for making HTTP requests. It abstracts the complexities of making requests behind simple API methods.

    • Common Uses: Interacting with RESTful APIs, downloading files and web content, sending data to online servers, and more.

  7. random:

    • Description: random provides functions to generate random numbers and select random elements.

    • Common Uses: Generating random numbers, shuffling lists, selecting random elements from a list, and simulating randomness in various applications.
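As a quick illustration of two of these libraries working together, here is a small sketch (the input string is made up) that uses re and string to normalize a raw question before it becomes a training prompt:

```python
import re
import string

# Hypothetical raw input: messy whitespace and doubled punctuation.
raw = "  Who  are the founding members of METALLICA??  "

text = raw.strip()                      # trim surrounding whitespace
text = re.sub(r"\s+", " ", text)        # collapse repeated spaces
text = text.rstrip(string.punctuation)  # drop trailing punctuation
text = text + "?"                       # re-add a single question mark

print(text)  # Who are the founding members of METALLICA?
```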


Third Step: Data Preparation Tool

!openai tools fine_tunes.prepare_data -f data_file_name.json

This command runs OpenAI's data preparation tool on our "training data": in essence, the information the model learns from.

The quality and quantity of training data play a crucial role in determining the performance and accuracy of a machine-learning model.

The tool accepts different formats, the only requirement being that they contain a prompt and a completion column/key.
You can pass a CSV, TSV, XLSX, JSON or JSONL file, and it will save the output into a JSONL file ready for fine-tuning, after guiding you through the process of suggested changes.

I recommend the JSON format.

The JSON needs to follow this example format:

{"prompt":"Who are the founding members of Metallica?->","completion":" James Hetfield and Lars Ulrich.\n"}
{"prompt":"What was Metallica's debut album?->","completion":" Kill 'Em All released in 1983.\n"}
{"prompt":"Which Metallica album features the song 'Enter Sandman'?->","completion":" Metallica (often referred to as 'The Black Album').\n"}

Each entry is formed by a prompt and a completion.
The prompt has a " ->" separator after the question.
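If your Q&A pairs already live in Python, you can generate this file programmatically. A minimal sketch, where the file name and the pairs are illustrative; the " ->" separator and the leading-space/".\n"-suffixed completion follow the format shown above:

```python
import json

# Illustrative Q&A pairs (use your own data here).
qa_pairs = [
    ("Who are the founding members of Metallica?",
     "James Hetfield and Lars Ulrich."),
    ("What was Metallica's debut album?",
     "Kill 'Em All released in 1983."),
]

# Write one JSON object per line, in the prompt/completion shape above.
with open("training_data.json", "w") as f:
    for question, answer in qa_pairs:
        record = {
            "prompt": f"{question} ->",      # separator after the question
            "completion": f" {answer}\n",    # leading space, ".\n" suffix
        }
        f.write(json.dumps(record) + "\n")
```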

Once the learning data is ready (OpenAI recommends starting with at least 100 examples), you need to upload the file to your Colab environment using the left sidebar file manager.

Once it is uploaded, replace data_file_name.json in the command with your file's name.



Fourth Step: Send the data

Once the data is ready, we execute the command and OpenAI's tool takes charge of the file.

It analyses the data, converts the format if needed, and produces a JSONL file.

This is what you should expect:

Analyzing...

- Your JSON file appears to be in a JSONL format. Your file will be converted to JSONL format
- Your file contains 100 prompt-completion pairs
- Your data does not contain a common separator at the end of your prompts. Having a separator string appended to the end of the prompt makes it clearer to the fine-tuned model where the completion should begin. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more detail and examples. If you intend to do open-ended generation, then you should leave the prompts empty
- All completions end with suffix `.\n`
- The completion should start with a whitespace character (` `). This tends to produce better results due to the tokenization we use. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details

Based on the analysis we will perform the following actions:
- [Necessary] Your format `JSON` will be converted to `JSONL`
- [Recommended] Add a suffix separator ` ->` to all prompts [Y/n]: n
- [Recommended] Add a whitespace character to the beginning of the completion [Y/n]: y


Your data will be written to a new JSONL file. Proceed [Y/n]: y

Wrote modified file to `metallicaBot_prepared.jsonl`
Feel free to take a look!

Now use that file when fine-tuning:
> openai api fine_tunes.create -t "metallicaBot_prepared.jsonl"

 Make sure to include `stop=[".\n"]` so that the generated texts ends at the expected place.
Once your model starts training, it'll approximately take 3.82 minutes to train a `curie` model, and less for `ada` and `babbage`. Queue will approximately take half an hour per job ahead of you.



Fifth Step: Create the fine-tuned model

Here is the code:

%env OPENAI_API_KEY=INSERT-YOUR-API-KEY

!openai api fine_tunes.create -t "metallicaBot_prepared.jsonl" -m "davinci"

Running this code, we create our model, ready to be queried and interrogated.
It is important to note that the file must be the JSONL version prepared by OpenAI, and that we must choose the base model.

Today, fine-tuning is not available for all OpenAI models (not yet for GPT-3.5 and GPT-4); I suggest using Curie or Davinci.

In this case, I used the “davinci” model.

Executing the code (remember to add your API key), OpenAI responds:

env: OPENAI_API_KEY=sk-********************
Upload progress: 100% 11.8k/11.8k [00:00<00:00, 7.11Mit/s]
Uploaded file from metallicaBot_prepared.jsonl: file-JQwVEvmSl899orUMaLqTLJAM
Created fine-tune: ft-ElURaD1tQrA2Wn3sanRyKtxa
Streaming events until fine-tuning is complete...

(Ctrl-C will interrupt the stream, but not cancel the fine-tune)
[2023-08-16 14:58:47] Created fine-tune: ft-ElURaD1tQrA2Wn3sanRyKtxa

Stream interrupted (client disconnected).
To resume the stream, run:

  openai api fine_tunes.follow -i ft-ElURaD1tQrA2Wn3sanRyKtxa

When the job is done, it should display the name of the fine-tuned model.
In my case:

[2023-08-16 14:58:47] Created fine-tune: ft-ElURaD1tQrA2Wn3sanRyKtxa
[2023-08-16 15:00:10] Fine-tune costs $0.26
[2023-08-16 15:00:10] Fine-tune enqueued. Queue number: 1
[2023-08-16 15:01:15] Fine-tune is in the queue. Queue number: 0
[2023-08-16 15:01:39] Fine-tune started
[2023-08-16 15:03:44] Completed epoch 1/4
[2023-08-16 15:04:15] Completed epoch 2/4
[2023-08-16 15:04:45] Completed epoch 3/4
[2023-08-16 15:05:16] Completed epoch 4/4
[2023-08-16 15:05:56] Uploaded model: davinci:ft-websfarm-ltd-2023-08-16-15-05-55
[2023-08-16 15:05:57] Uploaded result file: file-aKxamLNsBpCwYzoMhZeH5cfl
[2023-08-16 15:05:57] Fine-tune succeeded

Job complete! Status: succeeded 🎉
Try out your fine-tuned model:

openai api completions.create -m davinci:ft-websfarm-ltd-2023-08-16-15-05-55 -p <YOUR_PROMPT>

OpenAI reports the cost of the fine-tuning and the completed epochs, and at the end, the name of the model to prompt.

In my case, it is davinci:ft-websfarm-ltd-2023-08-16-15-05-55.


In the context of training machine learning models, especially deep learning models, an epoch refers to one complete forward and backward pass of all the training examples. For instance, if you have 1,000 training examples and you make one pass (both forwards and backwards) over all these examples while training a neural network, you've completed one epoch.
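With hypothetical numbers (the batch size here is chosen purely for illustration), the relationship between dataset size, epochs, and training steps works out like this:

```python
# Hypothetical training-run arithmetic.
examples = 100      # prompt-completion pairs in the training file
batch_size = 4      # examples processed per gradient update (illustrative)
epochs = 4          # full passes over the dataset

steps_per_epoch = examples // batch_size   # updates in one full pass
total_steps = steps_per_epoch * epochs     # updates over the whole run

print(steps_per_epoch, total_steps)  # 25 100
```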

Sixth Step: PlayGrounds and Use the fine-tuned model

At this stage, you should test your new model, and there are several ways to do it.
The first is to continue using Python and Colab, but you will also find your model in your OpenAI backend, in the Playground.

Playground example:

In my case:

(Screenshot: the fine-tuned model in the OpenAI Playground)


Open the Playground, make sure Complete mode is selected, and choose your newly created model from the dropdown.

Continuing with Python

Time to test it, without extra parameters:

%env OPENAI_API_KEY=YOUR API KEY

!openai api completions.create -m "davinci:ft-websfarm-ltd-2023-08-16-15-05-55" -p "who is robert trujillo?" 

who is robert trujillo? A: The bassist for Metallica as of 2014.
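You can make the same request from Python with the requests library imported in Step Two. This is a sketch against the legacy /v1/completions endpoint: the helper name build_payload is mine, the model name is the one from my run, and the actual HTTP call is commented out so you can substitute your own API key first:

```python
import requests

API_KEY = "YOUR-API-KEY"  # placeholder: substitute your real key
MODEL = "davinci:ft-websfarm-ltd-2023-08-16-15-05-55"

def build_payload(question):
    # Format the question with the same " ->" separator used in training,
    # and stop generation at the ".\n" suffix of the completions.
    return {
        "model": MODEL,
        "prompt": f"{question} ->",
        "max_tokens": 50,
        "stop": [".\n"],
    }

# Uncomment to send the request (requires a valid API key):
# resp = requests.post(
#     "https://api.openai.com/v1/completions",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=build_payload("Who is Robert Trujillo?"),
# )
# print(resp.json()["choices"][0]["text"])
```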

Continue reading here and download the source code.
