Evaluation(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Describes an evaluation between a machine learning model's predictions and ground truth labels. Created when an EvaluationJob runs successfully.
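A minimal sketch of the constructor semantics shown above, which follow the usual proto-plus pattern: fields can be set via keyword arguments or via a single mapping. The field values used here are placeholders for illustration.

```python
from google.cloud import datalabeling_v1beta1

# Fields may be passed as keyword arguments...
evaluation = datalabeling_v1beta1.types.Evaluation(
    name="projects/my-project/datasets/my-dataset/evaluations/my-evaluation",
    evaluated_item_count=100,
)

# ...or as a single mapping, per the signature above.
same = datalabeling_v1beta1.types.Evaluation(
    mapping={"evaluated_item_count": 100}
)
```

Note that in practice all fields of this message are output only, so instances are normally returned by the API rather than constructed by hand (see the retrieval sketch after the attributes table).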
Attributes
| Name | Type | Description |
| --- | --- | --- |
| `name` | `str` | Output only. Resource name of an evaluation. The name has the following format: `projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}` |
| `config` | `google.cloud.datalabeling_v1beta1.types.EvaluationConfig` | Output only. Options used in the evaluation job that created this evaluation. |
| `evaluation_job_run_time` | `google.protobuf.timestamp_pb2.Timestamp` | Output only. Timestamp for when the evaluation job that created this evaluation ran. |
| `create_time` | `google.protobuf.timestamp_pb2.Timestamp` | Output only. Timestamp for when this evaluation was created. |
| `evaluation_metrics` | `google.cloud.datalabeling_v1beta1.types.EvaluationMetrics` | Output only. Metrics comparing predictions to ground truth labels. |
| `annotation_type` | `google.cloud.datalabeling_v1beta1.types.AnnotationType` | Output only. Type of task that the model version being evaluated performs, as defined in the `evaluationJobConfig.inputConfig.annotationType` field of the evaluation job that created this evaluation. |
| `evaluated_item_count` | `int` | Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes. |