AggregateClassificationMetrics( mapping=None, *, ignore_unknown_fields=False, **kwargs )
Aggregate metrics for classification models. For multi-class models, the metrics are either macro-averaged or micro-averaged. When macro-averaged, the metrics are calculated for each label and then an unweighted average is taken of those values. When micro-averaged, the metric is calculated globally by counting the total number of correctly predicted rows.
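A minimal sketch (not the BigQuery implementation) contrasting the two averaging modes described above, using macro- and micro-averaged precision on an illustrative three-class example:

```python
from collections import Counter

def macro_micro_precision(y_true, y_pred):
    """Return (macro, micro) precision for single-label multiclass data."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[p] += 1
        else:
            fp[p] += 1
    # Macro: compute precision per label, then take an unweighted average.
    per_class = [tp[l] / (tp[l] + fp[l]) if (tp[l] + fp[l]) else 0.0
                 for l in labels]
    macro = sum(per_class) / len(labels)
    # Micro: count globally; for single-label tasks this equals the
    # fraction of correctly predicted rows.
    micro = sum(tp.values()) / len(y_true)
    return macro, micro

y_true = ["a", "a", "b", "b", "c", "c"]
y_pred = ["a", "b", "b", "b", "c", "a"]
macro, micro = macro_micro_precision(y_true, y_pred)
# macro = (1/2 + 2/3 + 1) / 3 = 13/18; micro = 4/6
```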
Precision is the fraction of positive predictions that had positive actual labels. For multiclass this is a macro-averaged metric treating each class as a binary classifier.
Recall is the fraction of actual positive labels that were given a positive prediction. For multiclass this is a macro-averaged metric.
Accuracy is the fraction of predictions that were given the correct label. For multiclass this is a micro-averaged metric.
Threshold at which the metrics are computed. For binary classification models this is the positive class threshold. For multi-class classification models this is the confidence threshold.
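A hypothetical illustration of a positive-class threshold for a binary model: scores at or above the threshold are predicted positive, so raising the threshold trades recall for precision (the function and data are illustrative, not part of the API):

```python
def apply_threshold(scores, threshold=0.5):
    """Convert positive-class scores to 0/1 predictions at a threshold."""
    return [1 if s >= threshold else 0 for s in scores]

scores = [0.1, 0.4, 0.55, 0.9]
preds_default = apply_threshold(scores)       # threshold 0.5 -> [0, 0, 1, 1]
preds_strict = apply_threshold(scores, 0.8)   # threshold 0.8 -> [0, 0, 0, 1]
```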
The F1 score is the harmonic mean of precision and recall. For multiclass this is a macro-averaged metric.
Logarithmic Loss. For multiclass this is a macro-averaged metric.
Area Under a ROC Curve. For multiclass this is a macro-averaged metric.
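The binary definitions above can be computed directly from a confusion matrix. A self-contained sketch with illustrative data (not tied to the BigQuery API):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, accuracy, and F1 for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)              # fraction of positive predictions
                                            # with positive actual labels
    recall = tp / (tp + fn)                 # fraction of actual positives
                                            # given a positive prediction
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, accuracy, f1

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]
p, r, a, f = binary_metrics(y_true, y_pred)
# p = 2/3, r = 1/2, a = 1/2, f = 4/7
```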
Inheritance: builtins.object > proto.message.Message > AggregateClassificationMetrics
__delattr__
Delete the value on the given field. This is generally equivalent to setting a falsy value.

__eq__
Return True if the messages are equal, False otherwise.

__ne__
Return True if the messages are unequal, False otherwise.

__setattr__
Set the value on the given field. For well-known protocol buffer types which are marshalled, either the protocol buffer object or the Python equivalent is accepted.
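A toy stand-in (plain Python, not proto-plus) illustrating the deletion semantics described above, where removing a field behaves like resetting it to a falsy default; the class and field names are invented for illustration:

```python
class FieldHolder:
    """Toy message-like holder: deleting a field restores its falsy default."""
    _defaults = {"precision": 0.0, "recall": 0.0}

    def __init__(self, **kwargs):
        self._values = dict(self._defaults, **kwargs)

    def __getattr__(self, name):
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

    def delete(self, name):
        # Proto-plus supports `del msg.field`; here deletion is modeled as
        # setting the field back to its falsy default value.
        self._values[name] = self._defaults[name]

m = FieldHolder(precision=0.9)
before = m.precision   # 0.9
m.delete("precision")
after = m.precision    # 0.0, the falsy default
```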