Model evaluation metrics for image object detection problems.
Evaluates prediction quality of labeled bounding boxes.
Attributes

evaluated_bounding_box_count (int)
    Output only. The total number of bounding boxes (i.e., summed over all images) that the ground truth used to create this evaluation had.

bounding_box_metrics_entries (Sequence[google.cloud.automl_v1beta1.types.BoundingBoxMetricsEntry])
    Output only. The bounding box match metrics for each pair of intersection-over-union threshold (0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99) and label confidence threshold (0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99).

bounding_box_mean_average_precision (float)
    Output only. The single metric for bounding box evaluation: the mean_average_precision averaged over all bounding_box_metrics_entries.
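
The sketch below shows one way these attributes might be read after a model has been evaluated, assuming the google.cloud.automl_v1beta1 client library; the project, location, and model ID ("my-project", "us-central1", "my-model-id") are placeholders, not values from this page.

```python
from google.cloud import automl_v1beta1 as automl

# Placeholder identifiers -- replace with a real project, location, and model ID.
client = automl.AutoMlClient()
model_name = client.model_path("my-project", "us-central1", "my-model-id")

for evaluation in client.list_model_evaluations(parent=model_name):
    metrics = evaluation.image_object_detection_evaluation_metrics
    # Skip evaluations that carry no object detection metrics.
    if not metrics.bounding_box_metrics_entries:
        continue
    print("Evaluated bounding boxes:", metrics.evaluated_bounding_box_count)
    print("Mean average precision:", metrics.bounding_box_mean_average_precision)
    # One entry per IoU threshold / confidence threshold combination.
    for entry in metrics.bounding_box_metrics_entries:
        print(f"IoU {entry.iou_threshold:.2f}: mAP {entry.mean_average_precision:.4f}")
```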
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-12-04 UTC."],[],[]]