This page shows you how to evaluate your AutoML video action recognition models so that you can iterate on your models.
Vertex AI provides model evaluation metrics to help you determine the performance of your models, such as precision and recall metrics. Vertex AI calculates evaluation metrics by using the test set.
How you use model evaluation metrics
Model evaluation metrics provide quantitative measurements of how your model performed on the test set. How you interpret and use those metrics depends on your business need and the problem your model is trained to solve. For example, you might have a lower tolerance for false positives than for false negatives or the other way around. These kinds of questions affect which metrics you would focus on.
For more information about iterating on your model to improve its performance, see Iterating on your model.
Evaluation metrics returned by Vertex AI
Vertex AI returns several different evaluation metrics such as precision, recall, and confidence thresholds. The metrics that Vertex AI returns depend on your model's objective. For example, Vertex AI provides different evaluation metrics for an image classification model compared to an image object detection model.
A schema file, downloadable from a Cloud Storage location, determines which evaluation metrics Vertex AI provides for each objective. The following schema file describes the evaluation metrics for video action recognition.
You can view and download schema files from the following Cloud Storage
location:
gs://google-cloud-aiplatform/schema/modelevaluation/
- AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
- Precision window length: The time window that a prediction's timestamp must fall within to be counted as a true positive. The window is centered on the ground-truth action's timestamp and has this length. The value is expressed as a number of seconds with "s" appended at the end. Fractional values are allowed, up to microsecond precision.
- Mean average precision: Also known as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model.
- Confidence threshold: A confidence score that determines which predictions to return. A model returns predictions that are at this value or higher. A higher confidence threshold increases precision but lowers recall. Vertex AI returns confidence metrics at different threshold values to show how the threshold affects precision and recall.
- Recall: The fraction of instances of this class that the model correctly predicted. Also called the true positive rate.
- Precision: The fraction of classification predictions produced by the model that were correct.
- F1 score: The harmonic mean of precision and recall. F1 is a useful metric if you're looking for a balance between precision and recall and there's an uneven class distribution.
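The interplay between confidence threshold, precision, recall, and F1 described above can be sketched in plain Python. The prediction scores and match flags below are illustrative values, not output from a real model.

```python
# Sketch: how a confidence threshold affects precision, recall, and F1.
# Each pair is (confidence score, whether the prediction matched a
# ground-truth action). These values are illustrative only.
predictions = [
    (0.95, True), (0.90, True), (0.80, False),
    (0.70, True), (0.60, False), (0.40, True),
]
total_ground_truth_actions = 5  # actions actually present in the test set

def metrics_at(threshold):
    """Precision, recall, and F1 for predictions at or above the threshold."""
    kept = [match for score, match in predictions if score >= threshold]
    true_positives = sum(kept)
    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / total_ground_truth_actions
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A higher threshold raises precision but lowers recall.
print(metrics_at(0.5))   # keeps 5 predictions: (0.6, 0.6, 0.6)
print(metrics_at(0.85))  # keeps 2 predictions: precision 1.0, recall 0.4
```

Raising the threshold from 0.5 to 0.85 drops the recall from 0.6 to 0.4 while the precision climbs to 1.0, which is the trade-off the confidence-threshold metrics let you inspect.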
Getting evaluation metrics
You can get an aggregate set of evaluation metrics for your model and, for some objectives, evaluation metrics for a particular class or label. Evaluation metrics for a particular class or label are also known as an evaluation slice. The following content describes how to get aggregate evaluation metrics and evaluation slices by using the Google Cloud console or API.
Google Cloud console
In the Google Cloud console, in the Vertex AI section, go to the Models page.
In the Region drop-down, select the region where your model is located.
From the list of models, click your model, which opens the model's Evaluate tab.
In the Evaluate tab, you can view your model's aggregate evaluation metrics, such as the Average precision and Recall.
If the model objective has evaluation slices, the console shows a list of labels. You can click a label to view evaluation metrics for that label, as shown in the following example:
API
API requests for getting evaluation metrics are the same for each data type and objective, but the responses differ. The following samples show the same request with different responses.
Getting aggregate model evaluation metrics
The aggregate model evaluation metrics provide information about the model as a whole. To see information about a specific slice, list the model evaluation slices.
To view aggregate model evaluation metrics, use the
projects.locations.models.evaluations.get
method.
Vertex AI returns an array of video action recognition metrics.
Each element shows evaluation metrics at different
precisionWindowLength
and confidenceThreshold
values.
By viewing evaluation metrics at different window-length and confidence-threshold
values, you can see how those values affect other metrics such as precision
and recall.
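As a sketch, a small script can walk that array and pick the confidence threshold with the best precision/recall balance for one window length. The entries below use a simplified, hypothetical shape built from the field names above (`precisionWindowLength`, `confidenceThreshold`); they are not a verbatim API response.

```python
# Sketch: scanning a simplified, hypothetical evaluation-metrics array
# for the best F1 trade-off at one precisionWindowLength.
metrics = [
    {"precisionWindowLength": "1s", "confidenceThreshold": 0.1,
     "precision": 0.55, "recall": 0.90},
    {"precisionWindowLength": "1s", "confidenceThreshold": 0.5,
     "precision": 0.78, "recall": 0.70},
    {"precisionWindowLength": "1s", "confidenceThreshold": 0.9,
     "precision": 0.95, "recall": 0.35},
]

def best_f1_entry(entries, window="1s"):
    """Return the entry with the highest F1 score for the given window."""
    def f1(entry):
        p, r = entry["precision"], entry["recall"]
        return 2 * p * r / (p + r) if p + r else 0.0
    candidates = [e for e in entries if e["precisionWindowLength"] == window]
    return max(candidates, key=f1)

print(best_f1_entry(metrics)["confidenceThreshold"])  # → 0.5
```

With these illustrative numbers, the middle threshold wins: the extremes trade away too much recall or too much precision.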
Select a tab that corresponds to your language or environment:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is stored.
- PROJECT: Your project ID.
- MODEL_ID: The ID of the model resource.
- PROJECT_NUMBER: Project number for your project.
- EVALUATION_ID: ID for the model evaluation (appears in the response).
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations"
PowerShell
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Vertex AI SDK for Python
To learn how to install the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
Listing all evaluation slices
The
projects.locations.models.evaluations.slices.list
method lists all evaluation slices for your model. You must have the model's
evaluation ID, which you can get when you view the aggregated evaluation
metrics.
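A small helper can assemble the slices endpoint from those IDs; the four ID values here are placeholders you would substitute with your own.

```python
# Sketch: building the evaluation-slices request URL from its parts.
# All four arguments are placeholders; substitute your own values.
def slices_url(location, project, model_id, evaluation_id):
    return (
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{location}/"
        f"models/{model_id}/evaluations/{evaluation_id}/slices"
    )

print(slices_url("us-central1", "my-project", "12345", "67890"))
```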
You can use model evaluation slices to determine how the model performed on a
specific label. The value
field tells you which label the metrics are for.
Vertex AI returns an array of video action recognition metrics.
Each element shows evaluation metrics at different
precisionWindowLength
and confidenceThreshold
values.
By viewing evaluation metrics at different window-length and confidence-threshold
values, you can see how those values affect other metrics such as precision
and recall.
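Once the slices are listed, selecting the metrics for one label is a matter of matching the value field, as in the following sketch. The slice dictionaries use a simplified, hypothetical shape, not a verbatim API response.

```python
# Sketch: picking the evaluation slice for one label.
# The slice dictionaries below are simplified and hypothetical.
slices = [
    {"value": "jump", "metrics": {"auPrc": 0.81}},
    {"value": "kick", "metrics": {"auPrc": 0.74}},
]

def slice_for_label(slices, label):
    """Return the first slice whose value field matches the label."""
    return next((s for s in slices if s["value"] == label), None)

print(slice_for_label(slices, "kick")["metrics"]["auPrc"])  # → 0.74
```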
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where your model is located. For example, us-central1.
- PROJECT: Your project ID.
- MODEL_ID: The ID of your model.
- EVALUATION_ID: ID of the model evaluation that contains the evaluation slices to list.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices"
PowerShell
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID/evaluations/EVALUATION_ID/slices" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.