Evaluating AutoML models

Vertex AI provides model evaluation metrics, such as precision and recall, to help you determine the performance of your models. Vertex AI calculates evaluation metrics by using the test set.

How you use model evaluation metrics

Model evaluation metrics provide quantitative measurements of how your model performed on the test set. How you interpret and use those metrics depends on your business need and the problem your model is trained to solve. For example, you might have a lower tolerance for false positives than for false negatives or the other way around. These kinds of questions affect which metrics you would focus on.

For more information about iterating on your model to improve its performance, see Iterating on your model.

Examples

The following scenarios describe how you can use evaluation metrics to determine the performance of your AutoML model.

Privacy in images

Let's say you want to create a system that automatically detects sensitive information in an image and blurs it out. False positives in this case would be things that don't need to be blurred but get blurred anyway, such as numbers on a calendar. This is inconvenient but not detrimental.

blurred date graphic

False negatives in this case would be things that failed to get blurred, like a credit card number, which can lead to identity theft.

credit card number graphic

In this case, you might focus on recall. Recall measures, out of all the items that should have been identified, how many the model actually identified. Optimizing for higher recall typically increases the number of false positives (which reduces precision) but decreases the number of false negatives. In other words, the model is likely to include more relevant objects, but more irrelevant objects as well.

Spam filtering

Let's say you want to create a system that automatically filters email messages that are spam from messages that are not. A false negative in this case would be a spam email that does not get caught and that you see in your inbox. This is a slight inconvenience for users.

spam email graphic

A false positive in this case would be an email that is falsely flagged as spam and gets removed from your inbox. If the email was important, the user might be adversely impacted.

legitimate email graphic

In this case, you would want to focus on precision, which indicates how many correct predictions were made out of the total number of predictions. A high-precision model is likely to label only the most relevant examples, which is useful for cases where your label is common in the training data.
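
To make the trade-off concrete, the following minimal sketch (illustrative only, with hypothetical counts that aren't tied to any Vertex AI response) computes precision and recall from true positive, false positive, and false negative counts:

def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Blurring example: false negatives (an unblurred credit card) are costly, so favor recall.
print(precision_recall(true_positives=80, false_positives=40, false_negatives=5))
# Spam example: false positives (a lost legitimate email) are costly, so favor precision.
print(precision_recall(true_positives=80, false_positives=5, false_negatives=40))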

Evaluation metrics returned by Vertex AI

Vertex AI returns several different evaluation metrics such as precision, recall, and confidence thresholds. The metrics that Vertex AI returns depend on your model's objective. For example, Vertex AI provides different evaluation metrics for an image classification model compared to an image object detection model.

A schema file, downloadable from a Cloud Storage location, determines which evaluation metrics Vertex AI provides for each objective. The following tabs provide links to the schema files and describe the evaluation metrics for each model objective.

Image

Classification

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Log loss: The cross-entropy between the model predictions and the target values. This ranges from zero to infinity, where a lower value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall. (A sketch after this list shows the effect of sweeping the threshold.)
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.
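
The following minimal sketch (with made-up scores and labels, not real Vertex AI output) shows why raising the confidence threshold tends to increase precision and lower recall:

# Hypothetical (confidence score, is_correct_label) pairs.
scored = [(0.95, 1), (0.90, 1), (0.80, 1), (0.70, 0), (0.40, 0), (0.30, 1)]
total_positives = sum(label for _, label in scored)

for threshold in (0.25, 0.5, 0.75):
    # The model only returns predictions at or above the threshold.
    returned = [(score, label) for score, label in scored if score >= threshold]
    true_positives = sum(label for _, label in returned)
    precision = true_positives / len(returned)
    recall = true_positives / total_positives
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  recall={recall:.2f}")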

Object detection

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • IoU threshold: An intersection over union threshold value that determines which predictions to return. A model returns predictions that are at this value or higher. The higher the threshold, the closer the predicted bounding box values must be to the actual bounding box values. (A sketch of the IoU computation follows this list.)
  • Mean average precision: also known as the average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • Bounding box mean average precision: The single metric for bounding box evaluations: the meanAveragePrecision averaged over all boundingBoxMetrics.
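
For reference, the following sketch computes intersection over union for two axis-aligned boxes given as (x_min, y_min, x_max, y_max). The boxes are hypothetical; Vertex AI performs this comparison internally when it applies the IoU threshold.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlapping region (zero if the boxes don't intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    intersection = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - intersection
    return intersection / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.14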

Tabular

Classification

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • AuROC: The area under receiver operating characteristic curve. This ranges from zero to one, where a higher value indicates a higher-quality model.
  • Log loss: The cross-entropy between the model predictions and the target values. This ranges from zero to infinity, where a lower value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Recall at 1: The recall (true positive rate) when considering, for each example, only the label that has the highest prediction score and is not below the confidence threshold.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • Precision at 1: The precision when considering, for each example, only the label that has the highest prediction score and is not below the confidence threshold.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • F1 score at 1: The harmonic mean of recall at 1 and precision at 1.
  • True negative count: The number of times a model correctly predicted a negative class.
  • True positive count: The number of times a model correctly predicted a positive class.
  • False negative count: The number of times a model mistakenly predicted a negative class.
  • False positive count: The number of times a model mistakenly predicted a positive class.
  • False positive rate: The fraction of actual negative examples that the model incorrectly predicted as positive.
  • False positive rate at 1: The false positive rate when considering, for each example, only the label that has the highest prediction score and is not below the confidence threshold.
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.
  • Model feature attributions: Vertex AI shows you how much each feature impacts a model. The values are provided as a percentage for each feature: the higher the percentage, the more strongly that feature impacted model training. Review this information to ensure that all of the most important features make sense for your data and business problem. For more information, see Vertex Explainable AI.

Forecasting

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model. (A sketch implementing several of these error metrics follows this list.)
  • RMSE: The root-mean-squared error is the square root of the average squared difference between the target and predicted values. RMSE is more sensitive to outliers than MAE, so if you're concerned about large errors, then RMSE can be a more useful metric to evaluate. Similar to MAE, a smaller value indicates a higher quality model (0 represents a perfect predictor).
  • RMSLE: The root-mean-squared logarithmic error metric is similar to RMSE, except that it uses the natural logarithm of the predicted and actual values plus 1. RMSLE penalizes under-prediction more heavily than over-prediction. It can also be a good metric when you don't want to penalize differences for large prediction values more heavily than for small prediction values. This metric ranges from zero to infinity; a lower value indicates a higher quality model. The RMSLE evaluation metric is returned only if all label and predicted values are non-negative.
  • R^2: R squared (R^2), also known as the coefficient of determination, is the square of the Pearson correlation coefficient between the labels and predicted values. This metric ranges between zero and one; a higher value indicates a higher quality model.
  • MAPE: Mean absolute percentage error (MAPE) is the average absolute percentage difference between the labels and the predicted values. This metric ranges between zero and infinity; a lower value indicates a higher quality model.
    MAPE is not shown if the target column contains any 0 values. In this case, MAPE is undefined.
  • WAPE: Weighted absolute percentage error (WAPE) is the sum of the absolute differences between the predicted and observed values, divided by the sum of the observed values. Compared to RMSE, WAPE is weighted towards the overall differences rather than individual differences, which can be highly influenced by low or intermittent values. A lower value indicates a higher quality model.
  • RMSPE: Root mean squared percentage error (RMSPE) shows RMSE as a percentage of the actual values instead of an absolute number. A lower value indicates a higher quality model.
  • Quantile: The percent quantile, which indicates the probability that an observed value will be below the predicted value. For example, at the 0.2 quantile, the observed values are expected to be lower than the predicted values 20% of the time. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
  • Observed quantile: Shows the percentage of true values that were less than the predicted value for a given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
  • Scaled pinball loss: The scaled pinball loss at a particular quantile. A lower value indicates a higher quality model at the given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
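
For reference, the following sketch computes MAE, RMSE, RMSLE, MAPE, WAPE, and RMSPE by using their standard definitions. The values are made up, and the exact formulas Vertex AI uses may differ in detail.

import numpy as np

actual = np.array([10.0, 20.0, 30.0, 40.0])     # observed target values (hypothetical)
predicted = np.array([12.0, 18.0, 33.0, 37.0])  # model predictions (hypothetical)

mae = np.mean(np.abs(actual - predicted))
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
rmsle = np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2))  # requires non-negative values
mape = 100 * np.mean(np.abs((actual - predicted) / actual))              # undefined if any actual value is 0
wape = 100 * np.sum(np.abs(actual - predicted)) / np.sum(np.abs(actual))
rmspe = 100 * np.sqrt(np.mean(((actual - predicted) / actual) ** 2))

print(mae, rmse, rmsle, mape, wape, rmspe)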

Regression

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model.
  • RMSE: The root-mean-squared error is the square root of the average squared difference between the target and predicted values. RMSE is more sensitive to outliers than MAE, so if you're concerned about large errors, then RMSE can be a more useful metric to evaluate. Similar to MAE, a smaller value indicates a higher quality model (0 represents a perfect predictor).
  • RMSLE: The root-mean-squared logarithmic error metric is similar to RMSE, except that it uses the natural logarithm of the predicted and actual values plus 1. RMSLE penalizes under-prediction more heavily than over-prediction. It can also be a good metric when you don't want to penalize differences for large prediction values more heavily than for small prediction values. This metric ranges from zero to infinity; a lower value indicates a higher quality model. The RMSLE evaluation metric is returned only if all label and predicted values are non-negative.
  • R^2: R squared (R^2), also known as the coefficient of determination, is the square of the Pearson correlation coefficient between the labels and predicted values. This metric ranges between zero and one; a higher value indicates a higher quality model.
  • MAPE: Mean absolute percentage error (MAPE) is the average absolute percentage difference between the labels and the predicted values. This metric ranges between zero and infinity; a lower value indicates a higher quality model.
    MAPE is not shown if the target column contains any 0 values. In this case, MAPE is undefined.
  • Model feature attributions: Vertex AI shows you how much each feature impacts a model. The values are provided as a percentage for each feature: the higher the percentage, the more strongly that feature impacted model training. Review this information to ensure that all of the most important features make sense for your data and business problem. For more information, see Vertex Explainable AI.

Text

Classification

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Log loss: The cross-entropy between the model predictions and the target values. This ranges from zero to infinity, where a lower value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Recall at 1: The recall (true positive rate) when considering, for each example, only the label that has the highest prediction score and is not below the confidence threshold.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • Precision at 1: The precision when considering, for each example, only the label that has the highest prediction score and is not below the confidence threshold.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • F1 score at 1: The harmonic mean of recall at 1 and precision at 1.
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.

Entity extraction

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.

Sentiment analysis

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model.
  • MSE: The mean squared error (MSE) is the average of the squared differences between the predicted values and the observed values. Lower values indicate more accurate models.
  • Linear-weighted kappa and Quadratic-weighted kappa: Measure how closely the sentiment values assigned by the model agree with values assigned by human raters. Higher values indicate more accurate models. (A sketch of the weighted kappa computation follows this list.)
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.
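
The following sketch implements the standard definition of linear- and quadratic-weighted Cohen's kappa from a confusion matrix of sentiment scores. The matrix is hypothetical, and Vertex AI's exact implementation may differ in detail.

import numpy as np

def weighted_kappa(confusion: np.ndarray, weighting: str = "quadratic") -> float:
    # Rows: sentiment assigned by human raters; columns: sentiment predicted by the model.
    n = confusion.shape[0]
    i, j = np.indices((n, n))
    weights = np.abs(i - j) / (n - 1) if weighting == "linear" else ((i - j) / (n - 1)) ** 2
    observed = confusion / confusion.sum()
    # Expected agreement if the model's predictions were independent of the raters' scores.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

confusion = np.array([[30, 5, 0], [4, 40, 6], [1, 5, 25]])  # hypothetical 3-level sentiment
print(weighted_kappa(confusion, "linear"), weighted_kappa(confusion, "quadratic"))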

Video

Action recognition

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Precision window length: Prediction timestamps (measured from the start of the video) must fall within a window of this length, centered on the ground-truth action's timestamp, to be counted as true positives. The window length is expressed as a number of seconds with "s" appended at the end; fractional values are allowed, up to microsecond precision.
  • Mean average precision: also known as the average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.

Classification

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • Confusion matrix: A confusion matrix shows how often a model correctly predicted a result. For incorrectly predicted results, the matrix shows what the model predicted instead. The confusion matrix helps you understand where your model is "confusing" two results.

Object tracking

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/

  • AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • IoU threshold: An intersection over union threshold value that determines which predictions to return. A model returns predictions that are at this value or higher. The higher the threshold, the closer the predicted bounding box values must be to the actual bounding box values.
  • Mean average precision: also known as the average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
  • Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
  • Recall: The fraction of predictions with this class that the model correctly predicted. Also called true positive rate.
  • Precision: The fraction of classification predictions produced by the model that were correct.
  • F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
  • Bounding box mean average precision: The single metric for bounding box evaluations: the meanAveragePrecision averaged over all boundingBoxMetrics.

Getting evaluation metrics

You can get an aggregate set of evaluation metrics for your model and, for some objectives, evaluation metrics for a particular class or label. Evaluation metrics for a particular class or label are also known as an evaluation slice. The following content describes how to get aggregate evaluation metrics and evaluation slices by using the Google Cloud Console or the API.

Console

  1. In the Google Cloud Console, in the Vertex AI section, go to the Models page.

    Go to the Models page

  2. In the Region drop-down, select the region where your model is located.

  3. From the list of models, click your model, which opens the model's Evaluate tab.

    In the Evaluate tab, you can view your model's aggregate evaluation metrics, such as the Average precision and Recall.

    If the model objective has evaluation slices, the console shows a list of labels. You can click a label to view evaluation metrics for that label, as shown in the following example:

    label selection in console

API

API requests for getting evaluation metrics are the same for each data type and objective, but the outputs are different. The following samples show the same request with different responses.

Getting aggregate model evaluation metrics

The aggregate model evaluation metrics provide information about the model as a whole. To see information about a specific slice, list the model evaluation slices.
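
For example, the following minimal sketch lists the evaluation slices for one evaluation by using the same google-cloud-aiplatform Python client as the samples later on this page. The resource IDs are placeholders.

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    client = aiplatform.gapic.ModelServiceClient(client_options={"api_endpoint": api_endpoint})
    # The parent is the evaluation resource; each returned slice holds metrics for one label.
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    for evaluation_slice in client.list_model_evaluation_slices(parent=parent):
        print("slice:", evaluation_slice.name)
        print("metrics:", evaluation_slice.metrics)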

To view aggregate model evaluation metrics, use the projects.locations.models.evaluations.get method.

Image

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.
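
For example, the following sketch picks out the entry closest to a target threshold, assuming you have converted the evaluation's metrics field into a plain Python dict and that the field names (confidenceMetrics, confidenceThreshold) follow the classification metrics schema:

def confidence_entry_at(metrics: dict, target_threshold: float) -> dict:
    # Entries without an explicit confidenceThreshold correspond to a threshold of 0.
    entries = metrics.get("confidenceMetrics", [])
    return min(entries, key=lambda e: abs(e.get("confidenceThreshold", 0.0) - target_threshold))

# Hypothetical metrics dict, trimmed to the fields used here.
metrics = {
    "auPrc": 0.95,
    "confidenceMetrics": [
        {"recall": 1.0, "precision": 0.5},
        {"confidenceThreshold": 0.5, "recall": 0.9, "precision": 0.8},
        {"confidenceThreshold": 0.9, "recall": 0.6, "precision": 0.97},
    ],
}
print(confidence_entry_at(metrics, 0.5))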

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationImageClassificationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationImageClassificationSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationImageClassificationSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Image Classification Response");
      System.out.format("Model Name: %s\n", modelEvaluation.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("Slice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationImageClassification() {
  // Configure the name resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Create get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation image classification response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  if (modelExplanation === null) {
    console.log(`\tModel explanation: ${JSON.stringify(modelExplanation)}`);
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    for (const meanAttribution of meanAttributions) {
      console.log('\t\tMean attribution');
      console.log(
        `\t\t\tBaseline output value : \
           ${meanAttribution.baselineOutputValue}`
      );
      console.log(
        `\t\t\tInstance output value : \
           ${meanAttribution.instanceOutputValue}`
      );
      console.log(
        `\t\t\tFeature attributions : \
           ${JSON.stringify(meanAttribution.featureAttributions)}`
      );
      console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
      console.log(
        `\t\t\tOutput display name : \
           ${meanAttribution.outputDisplayName}`
      );
      console.log(
        `\t\t\tApproximation error : \
           ${meanAttribution.approximationError}`
      );
    }
  }
}
getModelEvaluationImageClassification();

Python

from google.cloud import aiplatform


def get_model_evaluation_image_classification_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Object detection

For the bounding box metric, Vertex AI returns an array of metric values at different IoU threshold values (between 0 and 1) and confidence threshold values (between 0 and 1). For example, you can narrow in on evaluation metrics at an IoU threshold of 0.85 and a confidence threshold of 0.8228. By viewing these different threshold values, you can see how they affect other metrics such as precision and recall.
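
The following sketch shows one way to pull out the metrics at a particular pair of thresholds, assuming the metrics field has been converted to a plain Python dict and that the field names (boundingBoxMetrics, iouThreshold, confidenceMetrics, confidenceThreshold) follow the object detection metrics schema:

def bounding_box_entry(metrics: dict, iou_threshold: float, confidence_threshold: float) -> dict:
    # Pick the per-IoU entry closest to the requested IoU threshold...
    box_metrics = min(
        metrics.get("boundingBoxMetrics", []),
        key=lambda m: abs(m.get("iouThreshold", 0.0) - iou_threshold),
    )
    # ...then the confidence entry closest to the requested confidence threshold.
    return min(
        box_metrics.get("confidenceMetrics", []),
        key=lambda c: abs(c.get("confidenceThreshold", 0.0) - confidence_threshold),
    )

# Usage, once `metrics` holds the converted response metrics (hypothetical):
# entry = bounding_box_entry(metrics, iou_threshold=0.85, confidence_threshold=0.8228)
# print(entry.get("precision"), entry.get("recall"))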

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationImageObjectDetectionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationImageObjectDetectionSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationImageObjectDetectionSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Image Object Detection Response");
      System.out.format("\tName: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationImageObjectDetection() {
  // Configure the name resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Create get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation image object detection response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (modelExplanation === null) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (meanAttributions === null) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${meanAttribution.featureAttributions}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationImageObjectDetection();

Python

from google.cloud import aiplatform


def get_model_evaluation_image_object_detection_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Tabular

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTabularClassificationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationTabularClassification(project, modelId, evaluationId);
  }

  static void getModelEvaluationTabularClassification(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Tabular Classification Response");
      System.out.format("\tName: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTabularClassification() {
  // Configure the parent resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation tabular classification response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationTabularClassification();

Python

from google.cloud import aiplatform


def get_model_evaluation_tabular_classification_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Forecasting

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Regression

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTabularRegressionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationTabularRegression(project, modelId, evaluationId);
  }

  static void getModelEvaluationTabularRegression(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Tabular Regression Response");
      System.out.format("\tName: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTabularRegression() {
  // Configure the parent resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation tabular regression response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationTabularRegression();

Python

from google.cloud import aiplatform


def get_model_evaluation_tabular_regression_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Text

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTextClassificationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";

    getModelEvaluationTextClassificationSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationTextClassificationSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";

      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Text Classification Response");
      System.out.format("\tModel Name: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTextClassification() {
  // Configure the resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation text classification response :');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationTextClassification();

Python

from google.cloud import aiplatform


def get_model_evaluation_text_classification_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Entity extraction

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response that contains the model evaluation metrics.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTextEntityExtractionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";

    getModelEvaluationTextEntityExtractionSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationTextEntityExtractionSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";

      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Text Entity Extraction Response");
      System.out.format("\tModel Name: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTextEntityExtraction() {
  // Configure the resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation text entity extraction response :');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationTextEntityExtraction();

Python

from google.cloud import aiplatform


def get_model_evaluation_text_entity_extraction_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Sentiment analysis

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response that contains the model evaluation metrics.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationTextSentimentAnalysisSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";

    getModelEvaluationTextSentimentAnalysisSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationTextSentimentAnalysisSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";

      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Text Sentiment Analysis Response");
      System.out.format("\tModel Name: %s\n", modelEvaluation.getName());
      System.out.format("\tMetrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("\tMetrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("\tCreate Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("\tSlice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationTextSentimentAnalysis() {
  // Configure the resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation text sentiment analysis response :');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);

  const modelExplanation = response.modelExplanation;
  console.log('\tModel explanation');
  if (!modelExplanation) {
    console.log('\t\t{}');
  } else {
    const meanAttributions = modelExplanation.meanAttributions;
    if (!meanAttributions) {
      console.log('\t\t\t []');
    } else {
      for (const meanAttribution of meanAttributions) {
        console.log('\t\tMean attribution');
        console.log(
          `\t\t\tBaseline output value : \
            ${meanAttribution.baselineOutputValue}`
        );
        console.log(
          `\t\t\tInstance output value : \
            ${meanAttribution.instanceOutputValue}`
        );
        console.log(
          `\t\t\tFeature attributions : \
            ${JSON.stringify(meanAttribution.featureAttributions)}`
        );
        console.log(`\t\t\tOutput index : ${meanAttribution.outputIndex}`);
        console.log(
          `\t\t\tOutput display name : \
            ${meanAttribution.outputDisplayName}`
        );
        console.log(
          `\t\t\tApproximation error : \
            ${meanAttribution.approximationError}`
        );
      }
    }
  }
}
getModelEvaluationTextSentimentAnalysis();

Python

from google.cloud import aiplatform


def get_model_evaluation_text_sentiment_analysis_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Video

Select the tab below for your objective:

Action recognition

Vertex AI returns an array of video action recognition metrics. Each element shows evaluation metrics at different precisionWindowLength and confidenceThreshold values. By viewing evaluation metrics at different window length and confidence threshold values, you can see how those values affect other metrics such as precision and recall.
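
As an illustration (not part of the official samples), the following minimal Python sketch groups the returned metrics by precision window length and confidence threshold. It assumes you saved the JSON response from the REST call below to evaluation.json, and that the metrics expose a videoActionMetrics array; verify the field names against your model's metrics schema file.

import json

# Illustrative sketch only: list precision and recall for each combination of
# precision window length and confidence threshold. The "modelEvaluations",
# "videoActionMetrics", and "confidenceMetrics" field names are assumptions;
# check them against your response and the metrics schema file.
with open("evaluation.json") as f:
    response = json.load(f)

metrics = response["modelEvaluations"][0]["metrics"]

for window in metrics.get("videoActionMetrics", []):
    window_length = window.get("precisionWindowLength")
    for entry in window.get("confidenceMetrics", []):
        print(
            "window={}  threshold={}  precision={}  recall={}".format(
                window_length,
                entry.get("confidenceThreshold", 0.0),
                entry.get("precision"),
                entry.get("recall"),
            )
        )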

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response that contains the model evaluation metrics.

Java

import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationVideoActionRecognitionSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "PROJECT";
    String modelId = "MODEL_ID";
    String evaluationId = "EVALUATION_ID";
    getModelEvaluationVideoActionRecognitionSample(project, modelId, evaluationId);
  }

  static void getModelEvaluationVideoActionRecognitionSample(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings settings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();
    String location = "us-central1";

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient client = ModelServiceClient.create(settings)) {
      ModelEvaluationName name = ModelEvaluationName.of(project, location, modelId, evaluationId);
      ModelEvaluation response = client.getModelEvaluation(name);
      System.out.format("response: %s\n", response);
    }
  }
}

Python

from google.cloud import aiplatform


def get_model_evaluation_video_action_recognition_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response that contains the model evaluation metrics.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationVideoClassificationSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationVideoClassification(project, modelId, evaluationId);
  }

  static void getModelEvaluationVideoClassification(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Video Classification Response");
      System.out.format("Name: %s\n", modelEvaluation.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("Slice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationVideoClassification() {
  // Configure the resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Create get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation video classification response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);
}
getModelEvaluationVideoClassification();

Python

from google.cloud import aiplatform


def get_model_evaluation_video_classification_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Object tracking

For the bounding box metric, Vertex AI returns an array of metric values at different IoU threshold values (between 0 and 1) and confidence threshold values (between 0 and 1). For example, you can focus on evaluation metrics at an IoU threshold of 0.85 and a confidence threshold of 0.8228. By viewing these different threshold values, you can see how they affect other metrics such as precision and recall.
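
As an illustration (not part of the official samples), the following minimal Python sketch pulls out the metrics at one IoU threshold and one confidence threshold, using the example values above. It assumes you saved the JSON response from the REST call below to evaluation.json; the boundingBoxMetrics, iouThreshold, and confidenceMetrics field names are assumptions, so verify them against your model's metrics schema file.

import json

# Illustrative sketch only: focus on metrics at IoU threshold 0.85 and
# confidence threshold 0.8228 (the example values above). Field names such as
# "modelEvaluations", "boundingBoxMetrics", "iouThreshold", and
# "confidenceMetrics" are assumptions; check them against your response.
with open("evaluation.json") as f:
    response = json.load(f)

metrics = response["modelEvaluations"][0]["metrics"]

for bbox_metrics in metrics.get("boundingBoxMetrics", []):
    if bbox_metrics.get("iouThreshold") != 0.85:
        continue
    for entry in bbox_metrics.get("confidenceMetrics", []):
        if abs(entry.get("confidenceThreshold", 1.0) - 0.8228) < 1e-6:
            print("precision:", entry.get("precision"))
            print("recall:", entry.get("recall"))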

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response that contains the model evaluation metrics.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluation;
import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationVideoObjectTrackingSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    getModelEvaluationVideoObjectTracking(project, modelId, evaluationId);
  }

  static void getModelEvaluationVideoObjectTracking(
      String project, String modelId, String evaluationId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      ModelEvaluation modelEvaluation = modelServiceClient.getModelEvaluation(modelEvaluationName);

      System.out.println("Get Model Evaluation Video Object Tracking Response");
      System.out.format("Name: %s\n", modelEvaluation.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluation.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluation.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluation.getCreateTime());
      System.out.format("Slice Dimensions: %s\n", modelEvaluation.getSliceDimensionsList());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationVideoObjectTracking() {
  // Configure the resources
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    name,
  };

  // Create get model evaluation request
  const [response] = await modelServiceClient.getModelEvaluation(request);

  console.log('Get model evaluation video object tracking response');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics schema uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);
  console.log(`\tSlice dimensions : ${response.sliceDimensions}`);
}
getModelEvaluationVideoObjectTracking();

Python

from google.cloud import aiplatform


def get_model_evaluation_video_object_tracking_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.get_model_evaluation(name=name)
    print("response:", response)

Listing all evaluation slices

The projects.locations.models.evaluations.slices.list method lists all evaluation slices for your model. You must have the model's evaluation ID, which you can get when you view the aggregated evaluation metrics.

You can use model evaluation slices to determine how the model performed on a specific label. The value field tells you which label the metrics are for.
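
As an illustration (not part of the official samples), the following minimal Python sketch maps each evaluation slice to the label it covers by reading the slice's value field, as described above. It assumes you saved the JSON response from the slices REST call (shown in the tabs that follow) to slices.json; the top-level modelEvaluationSlices field name is an assumption, so verify it against your actual response.

import json

# Illustrative sketch only: print the label covered by each evaluation slice.
# The "slice" and "value" fields match the samples below; the top-level
# "modelEvaluationSlices" field name is an assumption, so check it against
# your actual response.
with open("slices.json") as f:
    response = json.load(f)

for evaluation_slice in response.get("modelEvaluationSlices", []):
    label = evaluation_slice.get("slice", {}).get("value")
    metric_fields = sorted(evaluation_slice.get("metrics", {}))
    print("label={}  metric fields={}".format(label, metric_fields))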

Image

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response that lists the model evaluation slices.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Object detection

For the bounding box metric, Vertex AI returns an array of metric values at different IoU threshold values (between 0 and 1) and confidence threshold values (between 0 and 1). For example, you can focus on evaluation metrics at an IoU threshold of 0.85 and a confidence threshold of 0.8228. By viewing these different threshold values, you can see how they affect other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response that lists the model evaluation slices.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Tabular

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response that lists the model evaluation slices.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.\
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Forecasting

Tabular forecasting models do not have evaluation metric slices.

Regression

Tabular regression models do not have evaluation metric slices.

Text

Select the tab below for your objective:

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where your model is stored. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response that lists the model evaluation slices.

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Entity extraction

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Sentiment analysis

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Video

Select the tab below for your objective:

Action recognition

Vertex AI returns an array of video action recognition metrics. Each element shows evaluation metrics at a different combination of precisionWindowLength and confidenceThreshold values. By viewing evaluation metrics at different window lengths and confidence thresholds, you can see how these values affect other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Classification

Vertex AI returns an array of confidence metrics. Each element shows evaluation metrics at a different confidenceThreshold value (starting from 0 and going up to 1). By viewing different threshold values, you can see how the threshold affects other metrics such as precision and recall.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Object tracking

For the bounding box metric, Vertex AI returns an array of metric values at different IoU threshold values (between 0 and 1) and confidence threshold values (between 0 and 1). For example, you can examine the evaluation metrics at an IoU threshold of 0.85 and a confidence threshold of 0.8228. By viewing these different threshold values, you can see how they affect other metrics such as precision and recall.
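
The following sketch shows one way to work with those nested threshold values. It is a minimal, unofficial Python example that takes the metrics object from a tracking evaluation or slice (for example, the metrics field of a slice returned by the calls below) and looks up precision and recall at a chosen IoU threshold and confidence threshold. The field names boundingBoxMetrics, iouThreshold, confidenceMetrics, confidenceThreshold, precision, and recall are assumptions based on the bounding box metrics schema; confirm them against the schema file for your objective.

from typing import Any, Dict, Optional


def metrics_at_thresholds(
    metrics: Dict[str, Any],
    iou_threshold: float,
    confidence_threshold: float,
    tolerance: float = 1e-6,
) -> Optional[Dict[str, float]]:
    """Return precision and recall for the matching threshold pair, if present."""
    for box_entry in metrics.get("boundingBoxMetrics", []):
        # Skip entries computed at a different IoU threshold.
        if abs(box_entry.get("iouThreshold", -1.0) - iou_threshold) > tolerance:
            continue
        for conf_entry in box_entry.get("confidenceMetrics", []):
            if abs(conf_entry.get("confidenceThreshold", -1.0) - confidence_threshold) <= tolerance:
                return {
                    "precision": conf_entry.get("precision", 0.0),
                    "recall": conf_entry.get("recall", 0.0),
                }
    return None


# Hypothetical usage: inspect the metrics at an IoU threshold of 0.85 and a
# confidence threshold of 0.8228, assuming metrics_payload holds the parsed JSON.
# print(metrics_at_thresholds(metrics_payload, 0.85, 0.8228))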

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationName;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class ListModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    listModelEvaluationSliceSample(project, modelId, evaluationId);
  }

  static void listModelEvaluationSliceSample(String project, String modelId, String evaluationId)
      throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationName modelEvaluationName =
          ModelEvaluationName.of(project, location, modelId, evaluationId);

      for (ModelEvaluationSlice modelEvaluationSlice :
          modelServiceClient.listModelEvaluationSlices(modelEvaluationName).iterateAll()) {
        System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
        System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
        System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
        System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

        Slice slice = modelEvaluationSlice.getSlice();
        System.out.format("Slice Dimensions: %s\n", slice.getDimension());
        System.out.format("Slice Value: %s\n\n", slice.getValue());
      }
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service Client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};

// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function listModelEvaluationSlices() {
  // Configure the parent resources
  const parent = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}`;
  const request = {
    parent,
  };

  // Get and print out a list of all the evaluation slices for this resource
  const [response] = await modelServiceClient.listModelEvaluationSlices(
    request
  );
  console.log('List model evaluation slices response:', response);
}
listModelEvaluationSlices();

Python

from google.cloud import aiplatform


def list_model_evaluation_slices_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    response = client.list_model_evaluation_slices(parent=parent)
    for model_evaluation_slice in response:
        print("model_evaluation_slice:", model_evaluation_slice)

Getting metrics for a single slice

To view evaluation metrics for a single slice, use the projects.locations.models.evaluations.slices.get method. You must have the slice ID, which is returned when you list all slices. The following samples apply to all data types and objectives.
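
The following sketch ties the two operations together. It is a minimal, unofficial Python example that lists the slices for an evaluation, takes the slice ID from the final segment of a slice's resource name, and then fetches that slice directly. It reuses only methods shown in the samples in this document (list_model_evaluation_slices, get_model_evaluation_slice, and the path helpers); the choice of which slice to fetch is purely illustrative.

from google.cloud import aiplatform


def get_first_evaluation_slice(
    project: str,
    model_id: str,
    evaluation_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    client = aiplatform.gapic.ModelServiceClient(
        client_options={"api_endpoint": api_endpoint}
    )
    parent = client.model_evaluation_path(
        project=project, location=location, model=model_id, evaluation=evaluation_id
    )
    slices = list(client.list_model_evaluation_slices(parent=parent))
    if not slices:
        print("No evaluation slices found.")
        return None
    # The slice ID is the final segment of the slice resource name:
    # projects/.../models/.../evaluations/.../slices/SLICE_ID
    slice_id = slices[0].name.split("/")[-1]
    name = client.model_evaluation_slice_path(
        project=project,
        location=location,
        model=model_id,
        evaluation=evaluation_id,
        slice=slice_id,
    )
    return client.get_model_evaluation_slice(name=name)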

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • LOCATION: Region where Model is located. For example, us-central1.
  • PROJECT: Your project ID or project number.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slice to retrieve.
  • SLICE_ID: ID of an evaluation slice to get.
  • PROJECT_NUMBER: Project number for your project (appears in the response).
  • EVALUATION_METRIC_SCHEMA_FILE_NAME: The name of a schema file that defines the evaluation metrics to return such as classification_metrics_1.0.0.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

Java


import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSliceName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    String sliceId = "YOUR_SLICE_ID";
    getModelEvaluationSliceSample(project, modelId, evaluationId, sliceId);
  }

  static void getModelEvaluationSliceSample(
      String project, String modelId, String evaluationId, String sliceId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationSliceName modelEvaluationSliceName =
          ModelEvaluationSliceName.of(project, location, modelId, evaluationId, sliceId);

      ModelEvaluationSlice modelEvaluationSlice =
          modelServiceClient.getModelEvaluationSlice(modelEvaluationSliceName);

      System.out.println("Get Model Evaluation Slice Response");
      System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

      Slice slice = modelEvaluationSlice.getSlice();
      System.out.format("Slice Dimensions: %s\n", slice.getDimension());
      System.out.format("Slice Value: %s\n", slice.getValue());
    }
  }
}

Node.js

/**
 * TODO(developer): Uncomment these variables before running the sample.
 * (Not necessary if passing values as arguments)
 */

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const sliceId = 'YOUR_SLICE_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');
// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};
// Instantiates a client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationSlice() {
  // Configure the parent resource
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}/slices/${sliceId}`;
  const request = {
    name,
  };

  // Get and print out the model evaluation slice for this resource
  const [response] = await modelServiceClient.getModelEvaluationSlice(
    request
  );

  console.log('Get model evaluation slice');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics_Schema_Uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);

  console.log('Slice');
  const slice = response.slice;
  console.log(`\tDimension :${slice.dimension}`);
  console.log(`\tValue :${slice.value}`);
}
getModelEvaluationSlice();

Python

from google.cloud import aiplatform


def get_model_evaluation_slice_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    slice_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_slice_path(
        project=project,
        location=location,
        model=model_id,
        evaluation=evaluation_id,
        slice=slice_id,
    )
    response = client.get_model_evaluation_slice(name=name)
    print("response:", response)

Iterate on your model

Model evaluation metrics provide a starting point for debugging your model when the model isn't meeting your expectations. For example, low precision and recall scores can indicate that your model needs additional training data or has inconsistent labels. Perfect precision and recall can indicate that the test data is too easy to predict, so the model might not generalize well to new data.

You can iterate on your training data and create a new model. After you create a new model, you can compare the evaluation metrics between the existing model and the new model.
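
As a starting point for that comparison, the following minimal, unofficial Python sketch prints every evaluation for two models side by side. It assumes the list_model_evaluations method and the model_path helper on the same ModelServiceClient used elsewhere in this document; which metric keys appear inside each evaluation depends on your model's objective.

from google.cloud import aiplatform


def compare_model_evaluations(
    project: str,
    existing_model_id: str,
    new_model_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    client = aiplatform.gapic.ModelServiceClient(
        client_options={"api_endpoint": api_endpoint}
    )
    for label, model_id in (("existing", existing_model_id), ("new", new_model_id)):
        parent = client.model_path(project=project, location=location, model=model_id)
        # Print each evaluation's metrics so the two models can be compared manually.
        for evaluation in client.list_model_evaluations(parent=parent):
            print(f"{label} model evaluation: {evaluation.name}")
            print(f"  metrics: {evaluation.metrics}")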

The following suggestions can help you improve models that label items, such as classification or detection models:

  • Consider adding more examples or a wider range of examples in your training data. For example, for an image classification model, you might include wider angle images, higher or lower resolution images, or different points of view. For more guidance, see Preparing data for your particular data type and objective.
  • Consider removing classes or labels that don't have a lot of examples. Insufficient examples prevent the model from consistently and confidently making predictions about those classes or labels.
  • Machines can't interpret the names of your classes or labels and don't understand the nuances between them, such as the difference between "door" and "door_with_knob." You must provide data that helps the model recognize such nuances.
  • Augment your data with more examples of true positives and true negatives, especially examples that are close to the decision boundary, to reduce model confusion.
  • Specify your own data split (training, validation, and test). Otherwise, Vertex AI randomly assigns items to each set, so near-duplicates can end up in both the training and validation sets, which can lead to overfitting and then poor performance on the test set. For more information about setting your own data split, see About data splits for AutoML models.
  • If your model's evaluation metrics include a confusion matrix, you can check whether the model is confusing two labels, that is, predicting a particular label significantly more often than the true label. Review your data and make sure the examples are correctly labeled; the sketch after this list shows one way to find the most confused label pairs.
  • If you had a short training time (a low maximum number of node hours), you might get a higher-quality model by allowing it to train for a longer period of time (a higher maximum number of node hours).
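
The following is a small, generic Python sketch that is not tied to any Vertex AI schema: given a confusion matrix as a list of rows (true labels) and the label names in the same order, it lists the most frequently confused label pairs so you know which examples to re-check. The matrix values below are made up for illustration.

from typing import List, Tuple


def most_confused_pairs(
    matrix: List[List[int]], labels: List[str], top_n: int = 3
) -> List[Tuple[str, str, int]]:
    """Return (true label, predicted label, count) for the largest off-diagonal cells."""
    pairs = []
    for i, row in enumerate(matrix):
        for j, count in enumerate(row):
            if i != j and count > 0:
                pairs.append((labels[i], labels[j], count))
    return sorted(pairs, key=lambda p: p[2], reverse=True)[:top_n]


# Hypothetical example: "door" is most often mispredicted as "door_with_knob".
print(most_confused_pairs(
    [[40, 9, 1], [3, 45, 2], [0, 4, 46]],
    ["door", "door_with_knob", "window"],
))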

Tabular data

In addition to the previous suggestions, the following tabular-specific suggestions can help you iterate on your model. If your model performs poorly, try the following:

  • Exclude any columns from training that aren't predictive, such as ID columns.
  • Review your transformations and check that all of your columns are set to the correct transformation. Learn more.
  • Review your data and check that it doesn't have too many errors. For example, missing values in columns that do not accept null values cause that row to be ignored or imputed. Learn more.
  • Export the test dataset and examine it. Analyze when the model is making incorrect predictions to determine if you need more training data for a particular outcome or if your training data introduced leakage.
  • For forecasting models, consider the following actions:
    • Increase the context window, which increases how far back the model looks for predictive patterns. For more information, see Considerations for setting the context window and forecast horizon.
    • In your training options, check that time-dependent columns are correctly marked as available or unavailable at prediction time.
    • In your training data, include several examples of the behaviors that you want to model. For example, you can train a model that correlates different time series, such as cannibalization effects, where a new product decreases demand for an existing product, or halo effects, where a new product increases demand for related products.

For models with perfect performance, try the following suggestions:

  • Check for target leakage. Target leakage happens when the training data includes a feature that can't be known at prediction time because it is based on the outcome. For example, if you included a Frequent Buyer number for a model trained to decide whether a first-time user would make a purchase, that model would have very high evaluation metrics but would perform poorly on real data, because the Frequent Buyer number isn't available at prediction time.

    To check for target leakage, review the Feature importance graph on the Evaluate tab for your model. Make sure the columns with high importance are truly predictive and are not leaking information about the target.

  • For classification and regression models, if your data is time-sensitive, make sure that you used a Time column or a manual split based on time. Otherwise, your evaluation metrics can be skewed. For more information, see Time column on the preparing tabular training data page.

What's next