Class ClassificationEvaluationMetrics (2.8.2)

ClassificationEvaluationMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Model evaluation metrics for classification problems. Note: For Video Classification, these metrics describe only the quality of the Video Classification predictions of the "segment_classification" type.


au_prc float
Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.
base_au_prc float
Output only. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation. Deprecated.
au_roc float
Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.
log_loss float
Output only. The Log Loss metric.
confidence_metrics_entry Sequence[ConfidenceMetricsEntry]
Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics, are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from these.
confusion_matrix ConfusionMatrix
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.
annotation_spec_id Sequence[str]
Output only. The annotation spec ids used for this evaluation.
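As a sketch of how the confidence metrics entries above might be used, the following hypothetical helper picks the confidence threshold that maximizes F1 from a sequence of (confidence_threshold, precision, recall) values shaped like the confidence_metrics_entry field. The entry values and the function name are assumptions for illustration, not part of the client library.

```python
# Hypothetical sketch: choose the confidence threshold maximizing F1
# from entries shaped like confidence_metrics_entry (threshold, precision, recall).
def best_threshold(entries):
    """entries: iterable of (confidence_threshold, precision, recall) tuples."""
    def f1(p, r):
        # Harmonic mean of precision and recall; 0 when both are 0.
        return 2 * p * r / (p + r) if (p + r) else 0.0
    return max(entries, key=lambda e: f1(e[1], e[2]))[0]

# Illustrative values only; real entries come from the evaluation service.
entries = [
    (0.05, 0.60, 0.95),
    (0.50, 0.80, 0.80),
    (0.95, 0.98, 0.40),
]
print(best_threshold(entries))  # 0.5
```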


builtins.object > proto.message.Message > ClassificationEvaluationMetrics



ConfidenceMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Metrics for a single confidence threshold.


ConfusionMatrix(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Confusion matrix of the model running the classification.
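To illustrate what a confusion matrix conveys, here is a minimal sketch that derives per-class precision and recall from a plain list-of-lists matrix, assuming rows are actual labels and columns are predicted labels. The layout assumption and the helper name are illustrative, not taken from the ConfusionMatrix message itself.

```python
# Hedged sketch: per-class precision/recall from a confusion matrix,
# assuming rows = actual labels and columns = predicted labels.
def per_class_metrics(matrix):
    n = len(matrix)
    metrics = []
    for i in range(n):
        tp = matrix[i][i]                                 # correctly predicted class i
        actual = sum(matrix[i])                           # row total: true class i count
        predicted = sum(matrix[r][i] for r in range(n))   # column total: predicted i count
        precision = tp / predicted if predicted else 0.0
        recall = tp / actual if actual else 0.0
        metrics.append((precision, recall))
    return metrics

# Illustrative 2-class matrix, not real evaluation output.
m = [
    [8, 2],   # actual class 0: 8 correct, 2 confused with class 1
    [1, 9],   # actual class 1: 1 confused with class 0, 9 correct
]
print(per_class_metrics(m))
```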