Cloud AutoML V1beta1 API - Class Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry (v0.5.5)

Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry.

Metrics for a single confidence threshold.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts
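
Example

A minimal sketch of constructing and reading one of these messages directly. It assumes the google-cloud-automl-v1beta1 gem is available; the field values are illustrative only, since in practice entries arrive populated on a model evaluation.

require "google/cloud/automl/v1beta1"

# Illustrative values only; real entries are produced by the service.
entry = Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry.new(
  confidence_threshold: 0.5,
  precision:            0.9,
  recall:               0.8
)

entry.precision # => 0.9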

Methods

#confidence_threshold

def confidence_threshold() -> ::Float
Returns
  • (::Float) — Output only. Metrics are computed with an assumption that the model never returns predictions with a score lower than this value.

#confidence_threshold=

def confidence_threshold=(value) -> ::Float
Parameter
  • value (::Float) — Output only. Metrics are computed with an assumption that the model never returns predictions with a score lower than this value.
Returns
  • (::Float) — Output only. Metrics are computed with an assumption that the model never returns predictions with a score lower than this value.
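
In practice these entries arrive as a sweep over thresholds on the parent message. A hedged sketch of picking the entry nearest a desired operating threshold; it assumes metrics is a ClassificationEvaluationMetrics you already hold and that the parent exposes its per-threshold entries as the repeated confidence_metrics_entry field:

# Pick the entry whose threshold is closest to the desired operating point.
target = 0.5
entry = metrics.confidence_metrics_entry.min_by do |e|
  (e.confidence_threshold - target).abs
end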

#f1_score

def f1_score() -> ::Float
Returns
  • (::Float) — Output only. The harmonic mean of recall and precision.

#f1_score=

def f1_score=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The harmonic mean of recall and precision.
Returns
  • (::Float) — Output only. The harmonic mean of recall and precision.
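
The harmonic-mean relationship can be checked directly. A small sketch, assuming entry is a populated ConfidenceMetricsEntry:

prec = entry.precision
rec  = entry.recall
# Harmonic mean of precision and recall, guarding against division by zero.
f1 = (prec + rec).zero? ? 0.0 : 2 * prec * rec / (prec + rec)
# f1 should agree with entry.f1_score up to floating-point rounding.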

#f1_score_at1

def f1_score_at1() -> ::Float
Returns
  • (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.

#f1_score_at1=

def f1_score_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.
Returns
  • (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.

#false_negative_count

def false_negative_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.

#false_negative_count=

def false_negative_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.
Returns
  • (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.

#false_positive_count

def false_positive_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.

#false_positive_count=

def false_positive_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.
Returns
  • (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.

#false_positive_rate

def false_positive_rate() -> ::Float
Returns
  • (::Float) — Output only. False Positive Rate for the given confidence threshold.

#false_positive_rate=

def false_positive_rate=(value) -> ::Float
Parameter
  • value (::Float) — Output only. False Positive Rate for the given confidence threshold.
Returns
  • (::Float) — Output only. False Positive Rate for the given confidence threshold.
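
The rate follows from the count fields on this message. A sketch of the standard definition, assuming the counts on entry are populated:

fp = entry.false_positive_count
tn = entry.true_negative_count
# FP / (FP + TN), guarding against division by zero.
fpr = (fp + tn).zero? ? 0.0 : fp.to_f / (fp + tn)
# fpr should agree with entry.false_positive_rate.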

#false_positive_rate_at1

def false_positive_rate_at1() -> ::Float
Returns
  • (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.

#false_positive_rate_at1=

def false_positive_rate_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.
Returns
  • (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.

#position_threshold

def position_threshold() -> ::Integer
Returns
  • (::Integer) — Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by score, descending), but they all still need to meet the confidence_threshold.

#position_threshold=

def position_threshold=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by score, descending), but they all still need to meet the confidence_threshold.
Returns
  • (::Integer) — Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by score, descending), but they all still need to meet the confidence_threshold.
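
The two thresholds combine as a filter over a model's ranked predictions. A hedged sketch of that semantics; predictions here is assumed to be a plain array of score-bearing objects, not an API type:

# Keep at most position_threshold predictions, best first, all of which
# must also meet the confidence threshold.
kept = predictions
  .sort_by { |pred| -pred.score }
  .first(entry.position_threshold)
  .select { |pred| pred.score >= entry.confidence_threshold }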

#precision

def precision() -> ::Float
Returns
  • (::Float) — Output only. Precision for the given confidence threshold.

#precision=

def precision=(value) -> ::Float
Parameter
  • value (::Float) — Output only. Precision for the given confidence threshold.
Returns
  • (::Float) — Output only. Precision for the given confidence threshold.

#precision_at1

def precision_at1() -> ::Float
Returns
  • (::Float) — Output only. The precision when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.

#precision_at1=

def precision_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The precision when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.
Returns
  • (::Float) — Output only. The precision when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.

#recall

def recall() -> ::Float
Returns
  • (::Float) — Output only. Recall (True Positive Rate) for the given confidence threshold.

#recall=

def recall=(value) -> ::Float
Parameter
  • value (::Float) — Output only. Recall (True Positive Rate) for the given confidence threshold.
Returns
  • (::Float) — Output only. Recall (True Positive Rate) for the given confidence threshold.

#recall_at1

def recall_at1() -> ::Float
Returns
  • (::Float) — Output only. The Recall (True Positive Rate) when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.

#recall_at1=

def recall_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The Recall (True Positive Rate) when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.
Returns
  • (::Float) — Output only. The Recall (True Positive Rate) when, for each example, only the label with the highest prediction score (and not below the confidence threshold) is considered.
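
The _at1 variants restrict scoring to each example's single top prediction. A hedged sketch of that restriction; examples here is assumed to be a plain array of { label => score } hashes, not an API type:

# For each example keep only its highest-scoring label, dropping the
# example entirely if that score falls below the confidence threshold.
top1 = examples.filter_map do |scores|
  label, score = scores.max_by { |_label, s| s }
  [label, score] if score >= entry.confidence_threshold
end
# Precision and recall computed over top1 correspond to the _at1 metrics.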

#true_negative_count

def true_negative_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of labels that were not created by the model, but would not have matched a ground truth label even if they had been.

#true_negative_count=

def true_negative_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of labels that were not created by the model, but would not have matched a ground truth label even if they had been.
Returns
  • (::Integer) — Output only. The number of labels that were not created by the model, but would not have matched a ground truth label even if they had been.

#true_positive_count

def true_positive_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of model-created labels that match a ground truth label.

#true_positive_count=

def true_positive_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of model-created labels that match a ground truth label.
Returns
  • (::Integer) — Output only. The number of model-created labels that match a ground truth label.
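
Taken together, the count fields determine precision and recall at this threshold. A closing consistency sketch, assuming entry is populated:

tp = entry.true_positive_count
fp = entry.false_positive_count
fn = entry.false_negative_count

# TP / (TP + FP) and TP / (TP + FN), guarding against division by zero.
precision = (tp + fp).zero? ? 0.0 : tp.to_f / (tp + fp)
recall    = (tp + fn).zero? ? 0.0 : tp.to_f / (tp + fn)
# These should agree with entry.precision and entry.recall.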