Cloud AutoML V1beta1 API - Class Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry (v0.10.2)

Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry.

Metrics for a single confidence threshold.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts
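Because this class includes Google::Protobuf::MessageExts, an entry behaves like any other protobuf message. As a minimal sketch (assuming the google-cloud-automl-v1beta1 gem is installed; the field values below are illustrative, not real model output):

    require "google/cloud/automl/v1beta1"

    # Build an entry by hand with illustrative values. In practice these
    # messages arrive as part of a model evaluation, not via #new.
    entry = Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics::ConfidenceMetricsEntry.new(
      confidence_threshold: 0.5,
      f1_score: 0.82,
      false_positive_count: 7,
      false_negative_count: 11
    )

    puts entry.confidence_threshold # => 0.5
    puts entry.f1_score             # => 0.82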

Methods

#confidence_threshold

def confidence_threshold() -> ::Float
Returns
  • (::Float) — Output only. Metrics are computed under the assumption that the model never returns predictions with a score lower than this value.

#confidence_threshold=

def confidence_threshold=(value) -> ::Float
Parameter
  • value (::Float) — Output only. Metrics are computed under the assumption that the model never returns predictions with a score lower than this value.
Returns
  • (::Float) — Output only. Metrics are computed under the assumption that the model never returns predictions with a score lower than this value.
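A typical use of #confidence_threshold is selecting an operating point from an evaluation. A sketch, assuming `metrics` is the parent ClassificationEvaluationMetrics and that its per-threshold entries are exposed via the repeated confidence_metrics_entry field:

    # Pick the entry whose threshold is closest to a desired operating point.
    target = 0.5
    best = metrics.confidence_metrics_entry.min_by do |e|
      (e.confidence_threshold - target).abs
    end
    puts best.f1_score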

#f1_score

def f1_score() -> ::Float
Returns
  • (::Float) — Output only. The harmonic mean of recall and precision.

#f1_score=

def f1_score=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The harmonic mean of recall and precision.
Returns
  • (::Float) — Output only. The harmonic mean of recall and precision.
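F1 is recoverable from precision and recall directly; a short worked sketch with made-up values:

    # F1 is the harmonic mean of precision and recall:
    #   f1 = 2 * p * r / (p + r)
    precision = 0.90
    recall    = 0.75
    f1 = 2 * precision * recall / (precision + recall)
    puts f1.round(4) # => 0.8182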

#f1_score_at1

def f1_score_at1() -> ::Float
Returns
  • (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.

#f1_score_at1=

def f1_score_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.
Returns
  • (::Float) — Output only. The harmonic mean of recall_at1 and precision_at1.

#false_negative_count

def false_negative_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.

#false_negative_count=

def false_negative_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.
Returns
  • (::Integer) — Output only. The number of ground truth labels that are not matched by a model-created label.

#false_positive_count

def false_positive_count() -> ::Integer
Returns
  • (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.

#false_positive_count=

def false_positive_count=(value) -> ::Integer
Parameter
  • value (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.
Returns
  • (::Integer) — Output only. The number of model-created labels that do not match a ground truth label.
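These two counts, together with the true-positive count (also an Integer field on this message in v1beta1, though not shown above), determine precision and recall. A sketch with illustrative numbers:

    # Illustrative counts; in practice read them from an entry, e.g.
    # entry.false_positive_count and entry.false_negative_count.
    tp = 42.0 # assumed: entry.true_positive_count
    fp = 7.0
    fn = 11.0

    precision = tp / (tp + fp) # share of model-created labels that are correct
    recall    = tp / (tp + fn) # share of ground truth labels that are matched
    puts precision.round(3) # => 0.857
    puts recall.round(3)    # => 0.792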

#false_positive_rate

def false_positive_rate() -> ::Float
Returns
  • (::Float) — Output only. False Positive Rate for the given confidence threshold.

#false_positive_rate=

def false_positive_rate=(value) -> ::Float
Parameter
  • value (::Float) — Output only. False Positive Rate for the given confidence threshold.
Returns
  • (::Float) — Output only. False Positive Rate for the given confidence threshold.

#false_positive_rate_at1

def false_positive_rate_at1() -> ::Float
Returns
  • (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.

#false_positive_rate_at1=

def false_positive_rate_at1=(value) -> ::Float
Parameter
  • value (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.
Returns
  • (::Float) — Output only. The False Positive Rate when, for each example, only the label with the highest prediction score (and a score not below the confidence threshold) is considered.
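The "@1" variants score each example on its single top prediction only, and only when that prediction clears the confidence threshold. A sketch of that selection rule over a hypothetical label-to-score hash:

    # Hypothetical per-example predictions: label => score.
    predictions = { "cat" => 0.81, "dog" => 0.12, "bird" => 0.07 }
    threshold   = 0.5

    # Keep only the top-scoring label, and only if its score is
    # not below the confidence threshold.
    label, score = predictions.max_by { |_, s| s }
    top1 = score >= threshold ? label : nil
    puts top1.inspect # => "cat"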

#position_threshold

def position_threshold() -> ::Integer
Returns
  • (::Integer) — Output only. Metrics are computed under the assumption that the model always returns at most this many predictions (ordered by score, descending), but each of them still needs to meet the confidence_threshold.
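The two thresholds compose: predictions are first capped at position_threshold by descending score, and each survivor must still meet confidence_threshold. A sketch over hypothetical predictions:

    # Hypothetical predictions for one example.
    predictions = [
      { label: "cat",  score: 0.81 },
      { label: "dog",  score: 0.62 },
      { label: "bird", score: 0.40 }
    ]
    position_threshold   = 2
    confidence_threshold = 0.5

    # Keep at most position_threshold predictions (ordered by score,
    # descending), then require each to meet the confidence threshold.
    kept = predictions
      .sort_by { |p| -p[:score] }
      .first(position_threshold)
      .select { |p| p[:score] >= confidence_threshold }
    puts kept.map { |p| p[:label] }.inspect # => ["cat", "dog"]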