Class ClassificationEvaluationMetrics

Model evaluation metrics for classification problems. Note: For Video Classification, these metrics describe only the quality of Video Classification predictions of the "segment_classification" type.

Attributes

au_prc (float)
    Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

base_au_prc (float)
    Output only. Deprecated. The Area Under Precision-Recall Curve metric based on priors. Micro-averaged for the overall evaluation.

au_roc (float)
    Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

log_loss (float)
    Output only. The Log Loss metric.

confidence_metrics_entry (Sequence[google.cloud.automl_v1beta1.types.ClassificationEvaluationMetrics.ConfidenceMetricsEntry])
    Output only. Metrics for each confidence_threshold in 0.00, 0.05, 0.10, ..., 0.95, 0.96, 0.97, 0.98, 0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics, are derived from them. Confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from these.

confusion_matrix (google.cloud.automl_v1beta1.types.ClassificationEvaluationMetrics.ConfusionMatrix)
    Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for per-label evaluation.

annotation_spec_id (Sequence[str])
    Output only. The annotation spec IDs used for this evaluation.
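Among the attributes above, log_loss follows the standard multiclass log-loss definition: the mean of the negative log of the probability the model assigned to the true label. A minimal stdlib sketch of that computation (the example data is hypothetical, not from the API):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    # Mean of -log(probability assigned to the true label),
    # with probabilities clipped away from 0 and 1 for stability.
    total = 0.0
    for label, probs in zip(y_true, y_prob):
        p = min(max(probs[label], eps), 1.0 - eps)
        total += -math.log(p)
    return total / len(y_true)

# Two examples over three classes; each inner list is a probability vector.
y_true = [0, 2]
y_prob = [[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
print(round(log_loss(y_true, y_prob), 4))  # → 0.2899
```

A lower value indicates the model assigned higher probability to the correct labels; a perfect model approaches 0.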

Inheritance

builtins.object > proto.message.Message > ClassificationEvaluationMetrics

Classes

ConfidenceMetricsEntry

ConfidenceMetricsEntry(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Metrics for a single confidence threshold.
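Each ConfidenceMetricsEntry reports counts and derived metrics (precision, recall, etc.) at one confidence threshold: predictions scoring at or above the threshold count as positive. A stdlib sketch of that per-threshold computation for a single binary label (the data and helper name are hypothetical, not the library API):

```python
def confidence_metrics(y_true, scores, threshold):
    # Counts at a single confidence threshold: a prediction is
    # treated as positive when its score meets the threshold.
    tp = sum(1 for t, s in zip(y_true, scores) if s >= threshold and t == 1)
    fp = sum(1 for t, s in zip(y_true, scores) if s >= threshold and t == 0)
    fn = sum(1 for t, s in zip(y_true, scores) if s < threshold and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

y_true = [1, 0, 1, 1, 0]          # ground-truth binary labels
scores = [0.9, 0.8, 0.6, 0.4, 0.2]  # model confidence scores
m = confidence_metrics(y_true, scores, threshold=0.5)
# precision = 2/3, recall = 2/3 at this threshold
```

Sweeping the threshold over the grid listed under confidence_metrics_entry yields the points from which the precision-recall and ROC curves are drawn.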

ConfusionMatrix

ConfusionMatrix(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Confusion matrix of the model running the classification.
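A confusion matrix tallies, for each actual label (row), how often each label was predicted (column), so the diagonal holds correct predictions. A minimal stdlib sketch of that structure (the labels and helper name are hypothetical illustrations, not the library API):

```python
def confusion_matrix(y_true, y_pred, n_labels):
    # m[actual][predicted] counts how often each (actual, predicted)
    # label pair occurred; diagonal entries are correct predictions.
    m = [[0] * n_labels for _ in range(n_labels)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Four examples over three labels; one label-1 example misclassified as 2.
print(confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], 3))
# → [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
```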