Evaluate AutoML forecast models

This page shows you how to evaluate your AutoML forecast models.

Vertex AI provides model evaluation metrics to help you determine the performance of your models, such as mean absolute error (MAE) and root-mean-squared error (RMSE). Vertex AI calculates evaluation metrics by using the test set.

Before you begin

Before you can evaluate a forecast model, you must train one.

How you use model evaluation metrics

Model evaluation metrics provide quantitative measurements of how your model performed on the test set. How you interpret and use those metrics depends on your business need and the problem your model is trained to solve. For example, you might have a lower tolerance for under-prediction than for over-prediction, or a few large errors might be costlier than many small ones. These kinds of questions affect which metrics you focus on.

Evaluation metrics returned by Vertex AI

Vertex AI returns several different evaluation metrics, such as precision, recall, and confidence thresholds. The metrics that Vertex AI returns depend on your model's objective. For example, Vertex AI provides different evaluation metrics for an image classification model compared to an image object detection model.

A schema file determines which evaluation metrics Vertex AI provides for each objective.

You can view and download schema files from the following Cloud Storage location:
gs://google-cloud-aiplatform/schema/modelevaluation/
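
For example, to list the available schema files, you can use the gsutil command-line tool:

gsutil ls gs://google-cloud-aiplatform/schema/modelevaluation/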

The evaluation metrics for forecasting models are as follows; a short sketch showing how several of them are computed appears after the list:

  • MAE: The mean absolute error (MAE) is the average absolute difference between the target values and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model.
  • MAPE: Mean absolute percentage error (MAPE) is the average absolute percentage difference between the labels and the predicted values. This metric ranges from zero to infinity; a lower value indicates a higher quality model.
    MAPE is not shown if the target column contains any 0 values because MAPE is undefined at zero.
  • RMSE: The root-mean-squared error (RMSE) is the square root of the average squared difference between the target and predicted values. RMSE is more sensitive to outliers than MAE, so if you're concerned about large errors, then RMSE can be a more useful metric to evaluate. Similar to MAE, a smaller value indicates a higher quality model (0 represents a perfect predictor).
  • RMSLE: The root-mean-squared logarithmic error metric is similar to RMSE, except that it uses the natural logarithm of the predicted and actual values plus 1. RMSLE penalizes under-prediction more heavily than over-prediction. It can also be a good metric when you don't want to penalize differences for large prediction values more heavily than for small prediction values. This metric ranges from zero to infinity; a lower value indicates a higher quality model. The RMSLE evaluation metric is returned only if all label and predicted values are non-negative.
  • r^2: r squared (r^2) is the square of the Pearson correlation coefficient between the labels and predicted values. This metric ranges from zero to one; a higher value indicates a higher quality model.
  • Quantile: The percent quantile, which indicates the probability that an observed value will be below the predicted value. For example, at the 0.2 quantile, the observed values are expected to be lower than the predicted values 20% of the time. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
  • Observed quantile: Shows the percentage of true values that were less than the predicted value for a given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
  • Scaled pinball loss: The scaled pinball loss at a particular quantile. A lower value indicates a higher quality model at the given quantile. Vertex AI provides this metric if you specify minimize-quantile-loss for the optimization objective.
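
The following is a minimal sketch of how several of these metrics are defined, using NumPy and illustrative values. Vertex AI computes these metrics on your test set; this sketch is only meant to clarify the definitions.

import numpy as np

# Illustrative values; Vertex AI computes these metrics on the test set.
actual = np.array([10.0, 12.0, 8.0, 15.0])
predicted = np.array([11.0, 10.5, 9.0, 14.0])

mae = np.mean(np.abs(actual - predicted))
mape = np.mean(np.abs((actual - predicted) / actual)) * 100  # undefined if any actual value is 0
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
rmsle = np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2))  # requires non-negative values
r2 = np.corrcoef(actual, predicted)[0, 1] ** 2  # square of the Pearson correlation

print(f"MAE={mae:.2f} MAPE={mape:.2f}% RMSE={rmse:.2f} RMSLE={rmsle:.3f} r^2={r2:.3f}")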

Getting evaluation metrics

You can get an aggregate set of evaluation metrics for your model and, for some objectives, evaluation metrics for a particular class or label. The set of evaluation metrics for a particular class or label is also known as an evaluation slice. The following content describes how to get aggregate evaluation metrics and evaluation slices by using the Google Cloud console or the API.

Cloud console

  1. In the Google Cloud console, in the Vertex AI section, go to the Models page.

    Go to the Models page

  2. In the Region drop-down, select the region where your model is located.

  3. From the list of models, select your model.

  4. Select your model's version number.

  5. In the Evaluate tab, you can view your model's aggregate evaluation metrics, such as MAE and RMSE.

    If the model objective has evaluation slices, the console shows a list of labels. You can click a label to view evaluation metrics for that label.

API

API requests for getting evaluation metrics are the same for each data type and objective, but the outputs differ. The following samples show the same request with different responses.

Getting aggregate model evaluation metrics

The aggregate model evaluation metrics provide information about the model as a whole. To see information about a specific slice, list the model evaluation slices.

To view aggregate model evaluation metrics, use the projects.locations.models.evaluations.list method. The response includes the evaluation IDs that you need to retrieve evaluation slices.

Select a tab that corresponds to your language or environment:

REST & CMD LINE

Before using any of the request data, make the following replacements:

  • LOCATION: Region where your model is stored.
  • PROJECT: Your project ID.
  • MODEL_ID: The ID of your model.
  • PROJECT_NUMBER: Project number for your project.
  • EVALUATION_ID: ID for the model evaluation (appears in the response).

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations"

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content

You should receive a JSON response similar to the following:
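
The exact contents depend on your model's objective. The following is an illustrative sketch of the response shape only; field values are placeholders:

{
  "modelEvaluations": [
    {
      "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID",
      "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/EVALUATION_METRIC_SCHEMA_FILE_NAME",
      "metrics": { ... },
      "createTime": "..."
    }
  ]
}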

Getting metrics for a single slice

To view evaluation metrics for a single slice, use the projects.locations.models.evaluations.slices.get method. You must have the slice ID, which is provided when you list all slices. The following sample applies to all data types and objectives.
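
To find slice IDs, you can first list the evaluation slices. The following is a minimal sketch, assuming the Vertex AI Python client library; replace the capitalized placeholders with your own values:

from google.cloud import aiplatform

# List the evaluation slices of a model evaluation to find slice IDs.
client = aiplatform.gapic.ModelServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)
parent = (
    "projects/PROJECT/locations/us-central1/models/MODEL_ID"
    "/evaluations/EVALUATION_ID"
)
for evaluation_slice in client.list_model_evaluation_slices(parent=parent):
    print("slice:", evaluation_slice.name)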

REST & CMD LINE

Before using any of the request data, make the following replacements:

  • LOCATION: Region where your model is located. For example, us-central1.
  • PROJECT: Your project ID.
  • MODEL_ID: The ID of your model.
  • EVALUATION_ID: ID of the model evaluation that contains the evaluation slice to retrieve.
  • SLICE_ID: ID of an evaluation slice to get.
  • PROJECT_NUMBER: Project number for your project.
  • EVALUATION_METRIC_SCHEMA_FILE_NAME: The name of a schema file that defines the evaluation metrics to return, such as classification_metrics_1.0.0.

HTTP method and URL:

GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID

To send your request, choose one of these options:

curl

Execute the following command:

curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID"

PowerShell

Execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID" | Select-Object -Expand Content

You should receive a JSON response similar to the following:
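
The following is an illustrative sketch of the response shape only; field values are placeholders:

{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices/SLICE_ID",
  "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/EVALUATION_METRIC_SCHEMA_FILE_NAME",
  "metrics": { ... },
  "slice": {
    "dimension": "...",
    "value": "..."
  },
  "createTime": "..."
}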

Java

To learn how to install and use the client library for Vertex AI, see Vertex AI client libraries. For more information, see the Vertex AI Java API reference documentation.


import com.google.cloud.aiplatform.v1.ModelEvaluationSlice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSlice.Slice;
import com.google.cloud.aiplatform.v1.ModelEvaluationSliceName;
import com.google.cloud.aiplatform.v1.ModelServiceClient;
import com.google.cloud.aiplatform.v1.ModelServiceSettings;
import java.io.IOException;

public class GetModelEvaluationSliceSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    // To obtain evaluationId run the code block below after setting modelServiceSettings.
    //
    // try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings))
    // {
    //   String location = "us-central1";
    //   ModelName modelFullId = ModelName.of(project, location, modelId);
    //   ListModelEvaluationsRequest modelEvaluationsrequest =
    //   ListModelEvaluationsRequest.newBuilder().setParent(modelFullId.toString()).build();
    //   for (ModelEvaluation modelEvaluation :
    //     modelServiceClient.listModelEvaluations(modelEvaluationsrequest).iterateAll()) {
    //       System.out.format("Model Evaluation Name: %s%n", modelEvaluation.getName());
    //   }
    // }
    String project = "YOUR_PROJECT_ID";
    String modelId = "YOUR_MODEL_ID";
    String evaluationId = "YOUR_EVALUATION_ID";
    String sliceId = "YOUR_SLICE_ID";
    getModelEvaluationSliceSample(project, modelId, evaluationId, sliceId);
  }

  static void getModelEvaluationSliceSample(
      String project, String modelId, String evaluationId, String sliceId) throws IOException {
    ModelServiceSettings modelServiceSettings =
        ModelServiceSettings.newBuilder()
            .setEndpoint("us-central1-aiplatform.googleapis.com:443")
            .build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests. After completing all of your requests, call
    // the "close" method on the client to safely clean up any remaining background resources.
    try (ModelServiceClient modelServiceClient = ModelServiceClient.create(modelServiceSettings)) {
      String location = "us-central1";
      ModelEvaluationSliceName modelEvaluationSliceName =
          ModelEvaluationSliceName.of(project, location, modelId, evaluationId, sliceId);

      ModelEvaluationSlice modelEvaluationSlice =
          modelServiceClient.getModelEvaluationSlice(modelEvaluationSliceName);

      System.out.println("Get Model Evaluation Slice Response");
      System.out.format("Model Evaluation Slice Name: %s\n", modelEvaluationSlice.getName());
      System.out.format("Metrics Schema Uri: %s\n", modelEvaluationSlice.getMetricsSchemaUri());
      System.out.format("Metrics: %s\n", modelEvaluationSlice.getMetrics());
      System.out.format("Create Time: %s\n", modelEvaluationSlice.getCreateTime());

      Slice slice = modelEvaluationSlice.getSlice();
      System.out.format("Slice Dimensions: %s\n", slice.getDimension());
      System.out.format("Slice Value: %s\n", slice.getValue());
    }
  }
}

Node.js

To learn how to install and use the client library for Vertex AI, see Vertex AI client libraries. For more information, see the Vertex AI Node.js API reference documentation.

/**
 * TODO(developer): Uncomment these variables before running the sample
 * (not necessary if passing values as arguments). To obtain evaluationId,
 * instantiate the client and run the following commands.
 */
// const parentName = `projects/${project}/locations/${location}/models/${modelId}`;
// const evalRequest = {
//   parent: parentName
// };
// const [evalResponse] = await modelServiceClient.listModelEvaluations(evalRequest);
// console.log(evalResponse);

// const modelId = 'YOUR_MODEL_ID';
// const evaluationId = 'YOUR_EVALUATION_ID';
// const sliceId = 'YOUR_SLICE_ID';
// const project = 'YOUR_PROJECT_ID';
// const location = 'YOUR_PROJECT_LOCATION';

// Imports the Google Cloud Model Service client library
const {ModelServiceClient} = require('@google-cloud/aiplatform');
// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: 'us-central1-aiplatform.googleapis.com',
};
// Instantiates the model service client
const modelServiceClient = new ModelServiceClient(clientOptions);

async function getModelEvaluationSlice() {
  // Configure the parent resource
  const name = `projects/${project}/locations/${location}/models/${modelId}/evaluations/${evaluationId}/slices/${sliceId}`;
  const request = {
    name,
  };

  // Get the model evaluation slice
  const [response] = await modelServiceClient.getModelEvaluationSlice(
    request
  );

  console.log('Get model evaluation slice');
  console.log(`\tName : ${response.name}`);
  console.log(`\tMetrics_Schema_Uri : ${response.metricsSchemaUri}`);
  console.log(`\tMetrics : ${JSON.stringify(response.metrics)}`);
  console.log(`\tCreate time : ${JSON.stringify(response.createTime)}`);

  console.log('Slice');
  const slice = response.slice;
  console.log(`\tDimension :${slice.dimension}`);
  console.log(`\tValue :${slice.value}`);
}
getModelEvaluationSlice();

Python

To learn how to install and use the client library for Vertex AI, see Vertex AI client libraries. For more information, see the Vertex AI Python API reference documentation.

from google.cloud import aiplatform


def get_model_evaluation_slice_sample(
    project: str,
    model_id: str,
    evaluation_id: str,
    slice_id: str,
    location: str = "us-central1",
    api_endpoint: str = "us-central1-aiplatform.googleapis.com",
):
    """
    To obtain evaluation_id run the following commands where LOCATION
    is the region where the model is stored, PROJECT is the project ID,
    and MODEL_ID is the ID of your model.

    model_client = aiplatform.gapic.ModelServiceClient(
        client_options={
            'api_endpoint':'LOCATION-aiplatform.googleapis.com'
            }
        )
    evaluations = model_client.list_model_evaluations(parent='projects/PROJECT/locations/LOCATION/models/MODEL_ID')
    print("evaluations:", evaluations)
    """
    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
    name = client.model_evaluation_slice_path(
        project=project,
        location=location,
        model=model_id,
        evaluation=evaluation_id,
        slice=slice_id,
    )
    response = client.get_model_evaluation_slice(name=name)
    print("response:", response)

What's next