```python
AggregateClassificationMetrics(
    mapping=None, *, ignore_unknown_fields=False, **kwargs
)
```
Aggregate metrics for classification/classifier models. For multi-class models, the metrics are either macro-averaged or micro-averaged. When macro-averaged, the metrics are calculated for each label and then an unweighted average is taken of those values. When micro-averaged, the metric is calculated globally by counting the total number of correctly predicted rows.
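The macro/micro distinction described above can be sketched in plain Python. This is a conceptual illustration only (stdlib, hypothetical helper names), not the implementation BigQuery ML uses:

```python
def macro_precision(y_true, y_pred):
    """Macro-averaged precision: compute precision per label treating each
    class as a binary classifier, then take the unweighted mean."""
    labels = sorted(set(y_true) | set(y_pred))
    per_label = []
    for label in labels:
        predicted = sum(1 for p in y_pred if p == label)
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        per_label.append(correct / predicted if predicted else 0.0)
    return sum(per_label) / len(per_label)

def micro_accuracy(y_true, y_pred):
    """Micro-averaged accuracy: count correctly predicted rows globally."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["a", "a", "b", "c"]
y_pred = ["a", "b", "b", "b"]
print(macro_precision(y_true, y_pred))  # per-label precisions 1, 1/3, 0 averaged
print(micro_accuracy(y_true, y_pred))   # 2 of 4 rows correct
```

Note that the macro average weights every label equally, so a rare class with poor precision pulls the score down as much as a common one would.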
Attributes

| Name | Type | Description |
| --- | --- | --- |
| `precision` | `google.protobuf.wrappers_pb2.DoubleValue` | Precision is the fraction of actual positive predictions that had positive actual labels. For multiclass this is a macro-averaged metric treating each class as a binary classifier. |
| `recall` | `google.protobuf.wrappers_pb2.DoubleValue` | Recall is the fraction of actual positive labels that were given a positive prediction. For multiclass this is a macro-averaged metric. |
| `accuracy` | `google.protobuf.wrappers_pb2.DoubleValue` | Accuracy is the fraction of predictions given the correct label. For multiclass this is a micro-averaged metric. |
| `threshold` | `google.protobuf.wrappers_pb2.DoubleValue` | Threshold at which the metrics are computed. For binary classification models this is the positive class threshold. For multi-class classification models this is the confidence threshold. |
| `f1_score` | `google.protobuf.wrappers_pb2.DoubleValue` | The F1 score is an average of recall and precision. For multiclass this is a macro-averaged metric. |
| `log_loss` | `google.protobuf.wrappers_pb2.DoubleValue` | Logarithmic Loss. For multiclass this is a macro-averaged metric. |
| `roc_auc` | `google.protobuf.wrappers_pb2.DoubleValue` | Area Under a ROC Curve. For multiclass this is a macro-averaged metric. |
Inheritance

builtins.object > proto.message.Message > AggregateClassificationMetrics

Methods
__delattr__
__delattr__(key)
Delete the value on the given field.
This is generally equivalent to setting a falsy value.
__eq__
__eq__(other)
Return True if the messages are equal, False otherwise.
__ne__
__ne__(other)
Return True if the messages are unequal, False otherwise.
__setattr__
__setattr__(key, value)
Set the value on the given field.
For well-known protocol buffer types which are marshalled, either the protocol buffer object or the Python equivalent is accepted.
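The method semantics above (deletion behaving like setting a falsy value, field-wise equality) can be illustrated with a minimal stand-in class. `MetricsStub` below is a hypothetical illustration of those semantics, not the real proto-plus implementation:

```python
class MetricsStub:
    """Hypothetical stand-in mimicking proto-plus message behavior:
    deleting a field resets it to a falsy default, and equality
    compares the messages field by field."""

    _fields = ("precision", "recall", "accuracy")

    def __init__(self, **kwargs):
        for name in self._fields:
            setattr(self, name, kwargs.get(name, 0.0))

    def __setattr__(self, key, value):
        # Plain assignment; the real message additionally accepts
        # marshalled protocol buffer wrapper objects here.
        object.__setattr__(self, key, value)

    def __delattr__(self, key):
        # Deletion is equivalent to setting the falsy default.
        object.__setattr__(self, key, 0.0)

    def __eq__(self, other):
        return all(getattr(self, f) == getattr(other, f) for f in self._fields)

a = MetricsStub(precision=0.9, recall=0.8)
b = MetricsStub(precision=0.9, recall=0.8)
print(a == b)        # equal field by field
del a.precision      # resets precision to the falsy default 0.0
print(a == b)        # no longer equal
```

Because `__eq__` is defined, Python derives `__ne__` as its negation, matching the documented behavior of the two methods.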