Interface ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder (2.39.0)

public static interface ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods

getConfidenceThreshold()

public abstract float getConfidenceThreshold()

Output only. Metrics are computed with the assumption that the model never returns predictions with a score lower than this value.

float confidence_threshold = 1;

Returns
Type: float
Description: The confidenceThreshold.
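
A ConfidenceMetricsEntry is normally read from its parent ClassificationEvaluationMetrics message, which carries one entry per evaluated threshold. As a minimal, hedged sketch (the entries list is assumed to come from that parent message, and the import for ClassificationEvaluationMetrics depends on the client library package), this helper picks the entry whose threshold is closest to a target value:

import java.util.List;

class ConfidenceMetricsEntryExample {
  // Hypothetical helper: pick the entry whose confidence threshold is closest to
  // targetThreshold. "entries" is assumed to be read from the parent
  // ClassificationEvaluationMetrics message.
  static ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder pickClosest(
      List<? extends ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder> entries,
      float targetThreshold) {
    ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder best = null;
    float bestDelta = Float.MAX_VALUE;
    for (ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder entry : entries) {
      float delta = Math.abs(entry.getConfidenceThreshold() - targetThreshold);
      if (delta < bestDelta) {
        bestDelta = delta;
        best = entry;
      }
    }
    return best;
  }
}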

getF1Score()

public abstract float getF1Score()

Output only. The harmonic mean of recall and precision.

float f1_score = 4;

Returns
Type: float
Description: The f1Score.
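
Since f1_score is the harmonic mean of precision and recall, it can be recomputed from the same entry as 2 * precision * recall / (precision + recall). A small sketch for illustration; the value returned by getF1Score() should agree up to float rounding:

class F1Example {
  // Harmonic mean of precision and recall; returns 0 when both are 0 to avoid
  // division by zero.
  static float harmonicMean(float precision, float recall) {
    float sum = precision + recall;
    return sum == 0f ? 0f : 2f * precision * recall / sum;
  }

  // Usage: float expectedF1 = harmonicMean(entry.getPrecision(), entry.getRecall());
}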

getF1ScoreAt1()

public abstract float getF1ScoreAt1()

Output only. The harmonic mean of recall_at1 and precision_at1.

float f1_score_at1 = 7;

Returns
Type: float
Description: The f1ScoreAt1.

getFalseNegativeCount()

public abstract long getFalseNegativeCount()

Output only. The number of ground truth labels that are not matched by a model-created label.

int64 false_negative_count = 12;

Returns
Type: long
Description: The falseNegativeCount.

getFalsePositiveCount()

public abstract long getFalsePositiveCount()

Output only. The number of model-created labels that do not match a ground truth label.

int64 false_positive_count = 11;

Returns
Type: long
Description: The falsePositiveCount.

getFalsePositiveRate()

public abstract float getFalsePositiveRate()

Output only. False Positive Rate for the given confidence threshold.

float false_positive_rate = 8;

Returns
Type: float
Description: The falsePositiveRate.
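
In terms of the confusion-matrix counts exposed further down this page, the false positive rate corresponds to FP / (FP + TN). A hedged sketch recomputing it from getFalsePositiveCount() and getTrueNegativeCount(), assuming both counts are populated for the same threshold:

class FalsePositiveRateExample {
  // False positive rate = FP / (FP + TN); returns 0 when there are no negatives.
  static float falsePositiveRate(long falsePositiveCount, long trueNegativeCount) {
    long negatives = falsePositiveCount + trueNegativeCount;
    return negatives == 0L ? 0f : (float) falsePositiveCount / negatives;
  }

  // Usage: falsePositiveRate(entry.getFalsePositiveCount(), entry.getTrueNegativeCount());
}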

getFalsePositiveRateAt1()

public abstract float getFalsePositiveRateAt1()

Output only. The False Positive Rate when, for each example, only the label with the highest prediction score that is not below the confidence threshold is considered.

float false_positive_rate_at1 = 9;

Returns
Type: float
Description: The falsePositiveRateAt1.

getPositionThreshold()

public abstract int getPositionThreshold()

Output only. Metrics are computed with the assumption that the model always returns at most this many predictions (ordered by score, descending), all of which still need to meet the confidence_threshold.

int32 position_threshold = 14;

Returns
Type: int
Description: The positionThreshold.

getPrecision()

public abstract float getPrecision()

Output only. Precision for the given confidence threshold.

float precision = 3;

Returns
Type: float
Description: The precision.
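
Precision relates to the counts on this page as TP / (TP + FP). A short, hedged sketch deriving it from getTruePositiveCount() and getFalsePositiveCount():

class PrecisionExample {
  // Precision = TP / (TP + FP); returns 0 when the model created no labels.
  static float precision(long truePositiveCount, long falsePositiveCount) {
    long predictedPositives = truePositiveCount + falsePositiveCount;
    return predictedPositives == 0L ? 0f : (float) truePositiveCount / predictedPositives;
  }
}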

getPrecisionAt1()

public abstract float getPrecisionAt1()

Output only. The precision when, for each example, only the label with the highest prediction score that is not below the confidence threshold is considered.

float precision_at1 = 6;

Returns
Type: float
Description: The precisionAt1.

getRecall()

public abstract float getRecall()

Output only. Recall (True Positive Rate) for the given confidence threshold.

float recall = 2;

Returns
Type: float
Description: The recall.
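
Recall (the True Positive Rate) relates to the counts as TP / (TP + FN). A short, hedged sketch deriving it from getTruePositiveCount() and getFalseNegativeCount():

class RecallExample {
  // Recall = TP / (TP + FN); returns 0 when there are no ground truth positives.
  static float recall(long truePositiveCount, long falseNegativeCount) {
    long actualPositives = truePositiveCount + falseNegativeCount;
    return actualPositives == 0L ? 0f : (float) truePositiveCount / actualPositives;
  }
}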

getRecallAt1()

public abstract float getRecallAt1()

Output only. The Recall (True Positive Rate) when, for each example, only the label with the highest prediction score that is not below the confidence threshold is considered.

float recall_at1 = 5;

Returns
Type: float
Description: The recallAt1.

getTrueNegativeCount()

public abstract long getTrueNegativeCount()

Output only. The number of labels that the model did not create, but which, had they been created, would not have matched a ground truth label.

int64 true_negative_count = 13;

Returns
Type: long
Description: The trueNegativeCount.

getTruePositiveCount()

public abstract long getTruePositiveCount()

Output only. The number of model-created labels that match a ground truth label.

int64 true_positive_count = 10;

Returns
Type: long
Description: The truePositiveCount.
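
Taken together, the four counts form a confusion matrix for the entry's threshold. A minimal sketch printing the counts next to the reported metrics for a single entry, where entry is any ConfidenceMetricsEntryOrBuilder:

class ConfusionSummaryExample {
  static void printSummary(ClassificationEvaluationMetrics.ConfidenceMetricsEntryOrBuilder entry) {
    // Print the raw confusion-matrix counts alongside the precomputed metrics.
    System.out.printf(
        "threshold=%.2f TP=%d FP=%d TN=%d FN=%d precision=%.4f recall=%.4f f1=%.4f%n",
        entry.getConfidenceThreshold(),
        entry.getTruePositiveCount(),
        entry.getFalsePositiveCount(),
        entry.getTrueNegativeCount(),
        entry.getFalseNegativeCount(),
        entry.getPrecision(),
        entry.getRecall(),
        entry.getF1Score());
  }
}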