This document describes how to define a supervised fine-tuning dataset for a Gemini model. You can tune text, image, audio, and document data types.
About supervised fine-tuning datasets
A supervised fine-tuning dataset is used to fine-tune a pre-trained model to a specific task or domain. The input data should be similar to what you expect the model to encounter in real-world use. The output labels should represent the correct answers or outcomes for each input.
Training dataset
To tune a model, you provide a training dataset. For best results, we recommend that you start with 100 examples. You can scale up to thousands of examples if needed. The quality of the dataset is far more important than the quantity.
Limitations:
- Maximum input and output tokens per example: 32,000
- Maximum training dataset file size: up to 1 GB for JSONL
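Token counts are best checked with the Get token count API mentioned later in this document, but basic structural checks are easy to script. The following is a minimal sketch in Python, assuming a hypothetical file named train.jsonl: it confirms the file is under 1 GB and that every line parses as a standalone JSON object with the expected top-level field.

```python
import json
import os

MAX_FILE_BYTES = 1 * 1024**3  # 1 GB limit for a JSONL tuning file

def check_dataset(path: str) -> None:
    # File-size limit check.
    size = os.path.getsize(path)
    if size > MAX_FILE_BYTES:
        raise ValueError(f"{path} is {size} bytes, over the 1 GB limit")

    # Every JSONL line must be a self-contained JSON object.
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            example = json.loads(line)
            # gemini-1.5 examples use "contents"; Gemini 1.0 Pro uses "messages".
            if "contents" not in example and "messages" not in example:
                raise ValueError(f"line {i} has neither 'contents' nor 'messages'")

check_dataset("train.jsonl")  # hypothetical file name
```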
Validation dataset
We strongly recommend that you provide a validation dataset. A validation dataset helps you measure the effectiveness of a tuning job.
Limitations:
- Maximum input and output tokens per example: 32,000
- Maximum number of examples in a validation dataset: 256
- Maximum validation dataset file size: up to 1 GB for JSONL
Dataset format
Your model tuning dataset must be in the JSON Lines (JSONL) format, where each line contains a single tuning example. Before tuning your model, you must upload your dataset to a Cloud Storage bucket.
Dataset example for gemini-1.5-pro and gemini-1.5-flash
```json
{
  "systemInstruction": {
    "role": string,
    "parts": [
      {
        "text": string
      }
    ]
  },
  "contents": [
    {
      "role": string,
      "parts": [
        {
          // Union field data can be only one of the following:
          "text": string,
          "fileData": {
            "mimeType": string,
            "fileUri": string
          }
        }
      ]
    }
  ]
}
```
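For illustration, a single training example (one JSONL line, pretty-printed here for readability) might look like the following. The instruction text and the `gs://` URI are hypothetical placeholders:

```json
{
  "systemInstruction": {
    "role": "system",
    "parts": [
      { "text": "You are a product-support assistant. Answer as concisely as possible." }
    ]
  },
  "contents": [
    {
      "role": "user",
      "parts": [
        { "text": "Summarize the attached manual in two sentences." },
        {
          "fileData": {
            "mimeType": "application/pdf",
            "fileUri": "gs://example-bucket/manuals/widget.pdf"
          }
        }
      ]
    },
    {
      "role": "model",
      "parts": [
        { "text": "The manual explains how to install and calibrate the widget, and covers routine maintenance and troubleshooting." }
      ]
    }
  ]
}
```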
Parameters
The example contains data with the following parameters:
| Parameter | Description |
|---|---|
| `contents` | Required: The content of the current conversation with the model. For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains the conversation history and the latest request. |
| `systemInstruction` | Optional: Available for `gemini-1.5-pro` and `gemini-1.5-flash`. Instructions for the model to steer it toward better performance. For example, "Answer as concisely as possible" or "Don't use technical terms in your response". The `text` strings count toward the token limit. The `role` field of `systemInstruction` is ignored. |
Contents
The base structured data type containing multi-part content of a message.

This class consists of two main properties: `role` and `parts`. The `role` property denotes the individual producing the content, while the `parts` property contains multiple elements, each representing a segment of data within a message.
| Parameter | Description |
|---|---|
| `role` | Optional: The identity of the entity that creates the message. The supported values are `user` and `model`. For non-multi-turn conversations, this field can be left blank or unset. |
| `parts` | Required: A list of ordered parts that make up a single message. Different parts may have different IANA MIME types. For limits on the inputs, such as the maximum number of tokens or the number of images, see the model specifications on the Google models page. To compute the number of tokens in your request, see Get token count. |
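To make the alternation of roles concrete, here is a hypothetical multi-turn `contents` fragment. The `user` and `model` entries alternate, and the final `model` turn is the output the model learns to produce:

```json
"contents": [
  { "role": "user", "parts": [{ "text": "What is JSONL?" }] },
  { "role": "model", "parts": [{ "text": "JSON Lines: one JSON object per line." }] },
  { "role": "user", "parts": [{ "text": "Why use it for tuning datasets?" }] },
  { "role": "model", "parts": [{ "text": "Each line is an independent example, so large files are easy to stream and split." }] }
]
```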
Parts
A data type containing media that is part of a multi-part `Content` message.

| Parameter | Description |
|---|---|
| `text` | Optional: A text prompt or code snippet. |
| `fileData` | Optional: Data stored in a file. |
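As a sketch, a `parts` array can mix both union members across its elements; the MIME type and Cloud Storage URI below are placeholders:

```json
"parts": [
  { "text": "Transcribe the following recording." },
  {
    "fileData": {
      "mimeType": "audio/mpeg",
      "fileUri": "gs://example-bucket/recordings/call.mp3"
    }
  }
]
```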
Dataset example for Gemini 1.0 Pro
Each conversation example in a tuning dataset is composed of a required `messages` field.

The `messages` field consists of an array of role-content pairs. The `role` field refers to the author of the message and is set to `system`, `user`, or `model`. The `system` role is optional and can occur only as the first element of the messages list. The `user` and `model` roles are required and can repeat in an alternating manner. The `content` field is the content of the message.

For each example, the maximum token length for `context` and `messages` combined is 32,768 tokens. Additionally, the `content` field of each `model` message shouldn't exceed 8,192 tokens.
```json
{
  "messages": [
    {
      "role": string,
      "content": string
    }
  ]
}
```
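A filled-in example in this format (one JSONL line, pretty-printed here) might look like the following; the wording is illustrative only:

```json
{
  "messages": [
    { "role": "system", "content": "You are a concise technical support assistant." },
    { "role": "user", "content": "How do I reset my router?" },
    { "role": "model", "content": "Hold the reset button for 10 seconds, then wait for the lights to stop blinking." }
  ]
}
```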
Maintain consistency with production data
The examples in your datasets should match your expected production traffic. If your dataset contains specific formatting, keywords, instructions, or information, the production data should be formatted in the same way and contain the same instructions.
For example, if the examples in your dataset include a `"question:"` and a `"context:"`, production traffic should also be formatted to include a `"question:"` and a `"context:"` in the same order as they appear in the dataset examples. If you exclude the context, the model will not recognize the pattern, even if the exact question was in an example in the dataset.
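For instance, if your tuning examples follow the hypothetical pattern below, production prompts should carry the same `question:` and `context:` scaffolding in the same order:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "question: What is the capital of France? context: France is a country in Western Europe."
    },
    { "role": "model", "content": "Paris" }
  ]
}
```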
Upload tuning datasets to Cloud Storage
To run a tuning job, you need to upload one or more datasets to a Cloud Storage bucket. You can either create a new Cloud Storage bucket or use an existing one to store dataset files. The region of the bucket doesn't matter, but we recommend that you use a bucket that's in the same Google Cloud project where you plan to tune your model.
After your bucket is ready, upload your dataset file to the bucket.
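One way to script the upload is with the google-cloud-storage Python client library. This is a minimal sketch; the bucket and object names are hypothetical placeholders:

```python
from google.cloud import storage

def upload_dataset(bucket_name: str, source_path: str, dest_blob: str) -> str:
    # Uses Application Default Credentials; run
    # `gcloud auth application-default login` first if needed.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(dest_blob)
    blob.upload_from_filename(source_path)
    return f"gs://{bucket_name}/{dest_blob}"

# Hypothetical names: replace with your own bucket and file paths.
print(upload_dataset("my-tuning-bucket", "train.jsonl", "datasets/train.jsonl"))
```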
Follow best practices for prompt design
Once you have your training dataset and you've tuned the model, it's time to design prompts. It's important to follow prompt design best practices in your training dataset: give a detailed description of the task to be performed and describe what the output should look like.
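As an illustration of that practice, a system instruction in a training example might spell out both the task and the expected output format; the wording here is hypothetical:

```json
"systemInstruction": {
  "role": "system",
  "parts": [
    { "text": "Classify the customer email into exactly one of: BILLING, TECHNICAL, OTHER. Respond with the label only, in uppercase, with no extra text." }
  ]
}
```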
What's next
- Choose a region to tune a model.
- To learn how supervised fine-tuning can be used in a solution that builds a generative AI knowledge base, see Jump Start Solution: Generative AI knowledge base.