This document describes how to tune a Gemini model by using supervised
fine-tuning. Before you begin, you must prepare a supervised fine-tuning dataset.
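For illustration, a single chat-style training example in JSON Lines format might look like the following sketch. The field names here mirror the `contents` shape used by the request bodies later on this page; they are an assumption for illustration, so verify them against the dataset format requirements for your model:

```python
import json

# One hypothetical training record: a user turn and the model's target response.
# NOTE: the schema shown is an assumption based on the chat "contents" shape;
# check the dataset-format documentation for the model you are tuning.
record = {
    "contents": [
        {"role": "user", "parts": [{"text": "Why is the sky blue?"}]},
        {"role": "model", "parts": [{"text": "Sunlight scatters off air molecules, and blue light scatters the most."}]},
    ]
}

# Write a one-record JSONL training file; a real dataset has one record per line.
with open("sft_train_data.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```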
Depending on your use case, there are different requirements.

The following Gemini models support supervised tuning:

You can create a supervised fine-tuning job by using the Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, the REST API, or Colab Enterprise.

To tune a text model with supervised fine-tuning by using the Google Cloud console, perform the following steps:

1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
2. Click Create tuned model.
3. Under Model details, configure the following:
4. Under Tuning setting, configure the following:
5. Optional: To disable intermediate checkpoints and use only the latest checkpoint, click the Export last checkpoint only toggle.
6. Click Continue. The Tuning dataset page opens.
7. To upload a dataset file, select one of the following:
8. (Optional) To get validation metrics during training, click the Enable model validation toggle.
9. Click Start Tuning.

Your new model appears under the Gemini Pro tuned models section on the Tune and Distill page. When the model is finished tuning, the Status says Succeeded.

To create a model tuning job, send a POST request by using the tuningJobs.create method.
Before using any of the request data,
make the following replacements:
HTTP method and URL:
Request JSON body:
To send your request, choose one of these options:
Save the request body in a file named request.json.

You should receive a JSON response similar to the following.
You can create a model tuning job in Vertex AI by using the
side panel in Colab Enterprise. The side panel
adds the relevant code snippets to your notebook. Then, you modify
the code snippets and run them to create your tuning job. To learn more
about using the side panel with your Vertex AI tuning jobs,
see Interact with Vertex AI
to tune a model.
In the Google Cloud console, go to
the Colab Enterprise My notebooks page.
In the Region menu, select the region that contains your notebook.

Click the notebook that you want to open. If you haven't created a notebook yet, create a notebook.
To the right of your notebook, in the side panel, click the
The side panel expands the Tuning tab.

Click the Tune a Gemini model button. Colab Enterprise adds code cells to your notebook for tuning a Gemini model.
In your notebook, find the code cell that stores parameter values.
You'll use these parameters to interact with Vertex AI.
Update the values for the following parameters:

In the next code cell, update the model tuning parameters:

Run the code cells that the side panel added to your notebook.
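Taken together, the generated parameter cells look something like the following sketch. The values shown are placeholders you replace with your own; the cell that Colab Enterprise actually generates may differ:

```python
# Parameters used to interact with Vertex AI (replace with your own values).
PROJECT_ID = "my-project"                     # the ID of the project that your notebook is in
REGION = "us-central1"                        # the region that your notebook is in
TUNED_MODEL_DISPLAY_NAME = "my-tuned-model"   # the name of your tuned model

# Model tuning parameters (next code cell).
source_model = "gemini-2.0-flash-001"         # the Gemini model that you want to use
train_dataset = "gs://my-bucket/sft_train_data.jsonl"            # URL of your training dataset
validation_dataset = "gs://my-bucket/sft_validation_data.jsonl"  # URL of your validation dataset
```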
After the last code cell runs, click the
The side panel shows information about your model tuning job.
After the tuning job has completed, you can go directly from
the Tuning details tab to a page where you can test your model.
Click Test.
The Google Cloud console opens to the Vertex AI
Text chat page, where you can test your model.
We recommend submitting your first tuning job without changing the hyperparameters. The default values are based on our benchmarking results and yield the best model output quality.

For a discussion of best practices for supervised fine-tuning, see the blog post Supervised Fine Tuning for Gemini: A best practices guide.

You can view a list of tuning jobs in your current project by using the
Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, or by
sending a GET request by using the tuningJobs.list method.

To view your tuning jobs in the Google Cloud console, go to the
Vertex AI Studio page.

Your Gemini tuning jobs are listed in the table under the Gemini Pro tuned models section.

To view a list of model tuning jobs, send a GET request by using the tuningJobs.list method.
Before using any of the request data,
make the following replacements:
HTTP method and URL:
To send your request, choose one of these options:
Execute the following command:
Execute the following command:
You should receive a JSON response similar to the following.

You can get the details of a tuning job in your current project by using the
Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, or by
sending a GET request by using the tuningJobs.get method.

To view details of a tuned model in the Google Cloud console, go to the
Vertex AI Studio page.

In the Gemini Pro tuned models table, find your model and click Details. The details of your model are shown.

To get the details of a model tuning job, send a GET request by using the tuningJobs.get method.
Before using any of the request data,
make the following replacements:
HTTP method and URL:
To send your request, choose one of these options:
Execute the following command:
Execute the following command:
You should receive a JSON response similar to the following.

You can cancel a tuning job in your current project by using the Google Cloud console or the Vertex AI SDK for Python, or by sending a POST request using the tuningJobs.cancel method.

To cancel a model tuning job, send a POST request by using the tuningJobs.cancel method.
Before using any of the request data,
make the following replacements:
HTTP method and URL:
To send your request, choose one of these options:
Execute the following command:
Execute the following command:
You should receive a JSON response similar to the following.

To cancel a tuning job in the Google Cloud console, go to the Vertex AI Studio page.

In the Gemini Pro tuned models table, click Cancel.

You can interact with the tuned model endpoint the same way as base Gemini by using the Vertex AI SDK for Python or the Google Gen AI SDK, or by sending a POST request using the generateContent method.

For thinking models like Gemini 2.5 Flash, we recommend setting the thinking budget to 0 to turn off thinking on tuned tasks for optimal performance and cost efficiency. During supervised fine-tuning, the model learns to mimic the ground truth in the tuning dataset, omitting the thinking process. Therefore, the tuned model can handle the task effectively without a thinking budget.

The following example prompts a model with the question "Why is sky blue?".

To view details of a tuned model in the Google Cloud console, go to the
Vertex AI Studio page.

In the Gemini Pro tuned models table, select Test. A page opens where you can create a conversation with your tuned model.

To test a tuned model with a prompt, send a POST request and specify the TUNED_ENDPOINT_ID.
Before using any of the request data,
make the following replacements:
If the model returns a response that's too generic, too short, or a fallback response, try increasing the temperature. Specify a lower value for less random responses and a higher value for more random responses.

For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.

Specify a lower value for shorter responses and a higher value for potentially longer responses.
HTTP method and URL:
Request JSON body:
To send your request, choose one of these options:
Save the request body in a file named request.json.

You should receive a JSON response similar to the following.

To delete a tuned model, call the models.delete method.
Before using any of the request data,
make the following replacements:
HTTP method and URL:
To send your request, choose one of these options:
Execute the following command:
Execute the following command:
You should receive a successful status code (2xx) and an empty response.

You can configure a model tuning job to collect and report model tuning and model evaluation metrics, which can then be visualized in Vertex AI Studio.

To view metrics of a tuned model in the Google Cloud console, go to the Vertex AI Studio page.

In the Tune and Distill table, click the name of the tuned model that you want to view metrics for. The tuning metrics appear under the Monitor tab.

The model tuning job automatically collects the following tuning metrics for Gemini 2.0 Flash:

You can configure a model tuning job to collect the following validation metrics for Gemini 2.0 Flash:

The metrics visualizations are available after the tuning job starts running. They are updated in real time as tuning progresses.
If you don't specify a validation dataset when you create the tuning job, only
the visualizations for the tuning metrics are available.

Learn about deploying a tuned Gemini model.

To learn how supervised fine-tuning can be used in a solution that builds a generative AI knowledge base, see Jump Start Solution: Generative AI knowledge base.

Before you begin
Supported models

gemini-2.5-flash

Create a tuning job

Console
Google Gen AI SDK
Vertex AI SDK for Python
REST
tuningJobs.create method. Some of the parameters are not supported by all of the models. Ensure that you include only the applicable parameters for the model that you're tuning.

EXPORT_LAST_CHECKPOINT_ONLY: Set to true to use only the latest checkpoint.
KMS_KEY_NAME: The key identifier, for example, projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. For more information, see Customer-managed encryption keys (CMEK).
SERVICE_ACCOUNT: Grant the roles/aiplatform.tuningServiceAgent role to the service account. Also grant the Tuning Service Agent the roles/iam.serviceAccountTokenCreator role to the customer-managed service account.

POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs
{
"baseModel": "BASE_MODEL",
"supervisedTuningSpec" : {
"trainingDatasetUri": "TRAINING_DATASET_URI",
"validationDatasetUri": "VALIDATION_DATASET_URI",
"hyperParameters": {
"epochCount": "EPOCH_COUNT",
"adapterSize": "ADAPTER_SIZE",
"learningRateMultiplier": "LEARNING_RATE_MULTIPLIER"
},
"export_last_checkpoint_only": EXPORT_LAST_CHECKPOINT_ONLY,
},
"tunedModelDisplayName": "TUNED_MODEL_DISPLAYNAME",
"encryptionSpec": {
"kmsKeyName": "KMS_KEY_NAME"
},
"serviceAccount": "SERVICE_ACCOUNT"
}
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs"PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs" | Select-Object -Expand ContentExample curl command
PROJECT_ID=myproject
LOCATION=global
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
"https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/tuningJobs" \
-d \
$'{
"baseModel": "gemini-2.5-flash",
"supervisedTuningSpec" : {
"training_dataset_uri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
"validation_dataset_uri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl"
},
"tunedModelDisplayName": "tuned_gemini"
}'
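For inspection or scripting, the same request can be assembled in Python. This is an illustrative sketch, not an official SDK snippet: it only builds the endpoint URL and the JSON body from the curl example above; actually sending it with an authenticated POST (for example, a bearer token from gcloud auth print-access-token) is left as a comment:

```python
import json

project_id = "myproject"   # placeholder project ID
location = "global"        # placeholder location, matching the curl example

# JSON body matching the tuningJobs.create example above.
payload = {
    "baseModel": "gemini-2.5-flash",
    "supervisedTuningSpec": {
        "trainingDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
        "validationDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl",
    },
    "tunedModelDisplayName": "tuned_gemini",
}

url = (f"https://{location}-aiplatform.googleapis.com/v1/"
       f"projects/{project_id}/locations/{location}/tuningJobs")

print(url)
print(json.dumps(payload, indent=2))
# To submit the job, POST `payload` to `url` with an
# "Authorization: Bearer <token>" header, as in the curl example.
```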
Colab Enterprise

PROJECT_ID: The ID of the project that your notebook is in.
REGION: The region that your notebook is in.
TUNED_MODEL_DISPLAY_NAME: The name of your tuned model.
source_model: The Gemini model that you want to use, for example, gemini-2.0-flash-001.
train_dataset: The URL of your training dataset.
validation_dataset: The URL of your validation dataset.
Tuning hyperparameters
View a list of tuning jobs
tuningJobs method.

Console
Google Gen AI SDK
Vertex AI SDK for Python
REST
tuningJobs.list method.
GET https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs
curl
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs"PowerShell
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs" | Select-Object -Expand ContentGet details of a tuning job
tuningJobs method.

Console
Google Gen AI SDK
Vertex AI SDK for Python
REST
tuningJobs.get method and specify the TUNING_JOB_ID.
GET https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID
curl
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID"PowerShell
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID" | Select-Object -Expand ContentCancel a tuning job
tuningJobs method.

REST
tuningJobs.cancel method and specify the TUNING_JOB_ID.
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel
curl
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d "" \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel"PowerShell
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel" | Select-Object -Expand Content Vertex AI SDK for Python
Console
Evaluate the tuned model
generateContent method.

Console
Google Gen AI SDK
Vertex AI SDK for Python
from vertexai.generative_models import GenerativeModel
from vertexai.tuning import sft

sft_tuning_job = sft.SupervisedTuningJob("projects/<PROJECT_ID>/locations/<TUNING_JOB_REGION>/tuningJobs/<TUNING_JOB_ID>")
tuned_model = GenerativeModel(sft_tuning_job.tuned_model_endpoint_name)
content = "Why is sky blue?"  # your test prompt
print(tuned_model.generate_content(content))
REST
TUNED_ENDPOINT_ID.
topP and topK are applied. Temperature controls the degree of randomness in token selection.
Lower temperatures are good for prompts that require a less open-ended or creative response, while
higher temperatures can lead to more diverse or creative results. A temperature of 0
means that the highest probability tokens are always selected. In this case, responses for a given
prompt are mostly deterministic, but a small amount of variation is still possible.
0.5, then the model selects either A or B as the next token by using temperature and excludes C as a candidate.
A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.
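The top-K, then top-P, then temperature pipeline described above can be sketched in plain Python. This is an illustrative re-implementation of the described behavior, not the service's actual sampler:

```python
import math
import random

def sample_next_token(probs, top_k, top_p, temperature, rng=random):
    """Pick a token index from per-token probabilities, mimicking the
    top-K -> top-P -> temperature pipeline described above."""
    # 1) Keep the top-K most probable tokens.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # 2) Keep the smallest prefix whose cumulative probability reaches top-P.
    kept, cum = [], 0.0
    for i in ranked:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # 3) Temperature 0 behaves greedily: always take the most probable survivor.
    if temperature == 0:
        return kept[0]
    # Otherwise rescale the surviving probabilities with temperature and sample.
    weights = [math.exp(math.log(probs[i]) / temperature) for i in kept]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for i, w in zip(kept, weights):
        acc += w
        if r <= acc:
            return i
    return kept[-1]
```

With probabilities [0.3, 0.2, 0.1], top_k=3, and top_p=0.5, only the first two tokens survive the filters, matching the A/B example above: the third token is excluded as a candidate.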
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent
{
"contents": [
{
"role": "USER",
"parts": {
"text" : "Why is sky blue?"
}
}
],
"generation_config": {
"temperature":TEMPERATURE,
"topP": TOP_P,
"topK": TOP_K,
"maxOutputTokens": MAX_OUTPUT_TOKENS
}
}
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent"PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent" | Select-Object -Expand ContentDelete a tuned model
REST
models.delete method.
DELETE https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID
curl
curl -X DELETE \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID"PowerShell
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method DELETE `
-Headers $headers `
-Uri "https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID" | Select-Object -Expand Content Vertex AI SDK for Python
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=LOCATION)
# To find out which models are available in Model Registry
models = aiplatform.Model.list()
model = aiplatform.Model(MODEL_ID)
model.delete()
Tuning and validation metrics
Model tuning metrics
Gemini 2.0 Flash:

/train_total_loss: Loss for the tuning dataset at a training step.
/train_fraction_of_correct_next_step_preds: The token accuracy at a training step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the tuning dataset.
/train_num_predictions: Number of predicted tokens at a training step.

Model validation metrics
Gemini 2.0 Flash:

/eval_total_loss: Loss for the validation dataset at a validation step.
/eval_fraction_of_correct_next_step_preds: The token accuracy at a validation step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the validation dataset.
/eval_num_predictions: Number of predicted tokens at a validation step.

What's next
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-18 UTC.