`MultiConfidenceMetrics(mapping=None, *, ignore_unknown_fields=False, **kwargs)`

Metrics across multiple confidence levels.
Attributes

| Name | Type | Description |
|---|---|---|
| `confidence_level_metrics` | `MutableSequence[google.cloud.documentai_v1.types.Evaluation.ConfidenceLevelMetrics]` | Metrics across confidence levels with fuzzy matching enabled. |
| `confidence_level_metrics_exact` | `MutableSequence[google.cloud.documentai_v1.types.Evaluation.ConfidenceLevelMetrics]` | Metrics across confidence levels with only exact matching. |
| `auprc` | `float` | The calculated area under the precision-recall curve (AUPRC), computed by integrating over all confidence thresholds. |
| `estimated_calibration_error` | `float` | The Estimated Calibration Error (ECE) of the confidence of the predicted entities. |
| `auprc_exact` | `float` | The AUPRC with fuzzy matching disabled, i.e., exact matching only. |
| `estimated_calibration_error_exact` | `float` | The ECE for the predicted entities with fuzzy matching disabled, i.e., exact matching only. |
| `metrics_type` | `google.cloud.documentai_v1.types.Evaluation.MultiConfidenceMetrics.MetricsType` | The metrics type for the label. |
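The `auprc` field is described as an integral of precision over recall across all confidence thresholds. A minimal, self-contained sketch of that kind of computation (trapezoidal integration over a precision-recall curve) is shown below; the function name `trapezoid_auprc` and the sample points are illustrative only, not part of the Document AI library:

```python
def trapezoid_auprc(points):
    """Approximate area under a precision-recall curve.

    points: list of (recall, precision) pairs, sorted by recall ascending,
    e.g. one pair per confidence threshold that was evaluated.
    """
    area = 0.0
    # Trapezoidal rule: average the precision at adjacent recall points
    # and weight by the recall gap between them.
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area


# Illustrative curve: precision drops as recall rises.
curve = [(0.0, 1.0), (0.5, 0.8), (1.0, 0.6)]
print(trapezoid_auprc(curve))  # approximately 0.8
```

The actual service computes this server-side from the per-threshold entries in `confidence_level_metrics`; clients normally just read the precomputed `auprc` value.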
Classes

`MetricsType(value)`

A type that determines how metrics should be interpreted.
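The `estimated_calibration_error` attribute reports how far predicted confidences deviate from observed accuracy. A common way to compute ECE is binned averaging of |confidence − accuracy|; the sketch below illustrates that standard formulation with hypothetical inputs, and is not the library's actual implementation:

```python
def expected_calibration_error(preds, num_bins=10):
    """Binned ECE over (confidence, is_correct) pairs.

    Buckets predictions by confidence, then takes the per-bin gap between
    mean confidence and empirical accuracy, weighted by bin size.
    """
    bins = [[] for _ in range(num_bins)]
    for conf, correct in preds:
        # Clamp confidence 1.0 into the top bin.
        idx = min(int(conf * num_bins), num_bins - 1)
        bins[idx].append((conf, correct))

    total = len(preds)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece


# Hypothetical predicted entities: (confidence, whether the match was correct).
sample = [(0.95, True), (0.95, True), (0.55, False), (0.55, True)]
print(expected_calibration_error(sample))  # approximately 0.05
```

Lower values indicate better-calibrated confidences; the `_exact` variant of the field applies the same idea with fuzzy matching disabled.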