Cloud AutoML V1beta1 API - Class Google::Cloud::AutoML::V1beta1::TablesModelColumnInfo (v0.6.0)


Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::TablesModelColumnInfo.

Information specific to a given column and Tables Model, in the context of the Model and the predictions created by it.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#column_display_name

def column_display_name() -> ::String
Returns
  • (::String) — Output only. The display name of the column (same as the display_name of its ColumnSpec).

#column_display_name=

def column_display_name=(value) -> ::String
Parameter
  • value (::String) — Output only. The display name of the column (same as the display_name of its ColumnSpec).
Returns
  • (::String) — Output only. The display name of the column (same as the display_name of its ColumnSpec).

#column_spec_name

def column_spec_name() -> ::String
Returns
  • (::String) — Output only. The name of the ColumnSpec describing the column. Not populated when this proto is output to BigQuery.

#column_spec_name=

def column_spec_name=(value) -> ::String
Parameter
  • value (::String) — Output only. The name of the ColumnSpec describing the column. Not populated when this proto is output to BigQuery.
Returns
  • (::String) — Output only. The name of the ColumnSpec describing the column. Not populated when this proto is output to BigQuery.
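In the AutoML v1beta1 resource naming scheme, a ColumnSpec name follows the pattern `projects/{project}/locations/{location}/datasets/{dataset}/tableSpecs/{table_spec}/columnSpecs/{column_spec}`. As a minimal sketch, the snippet below extracts individual IDs from such a name; the sample IDs themselves are invented for illustration:

```ruby
# A hypothetical ColumnSpec resource name following the documented
# AutoML v1beta1 pattern. All IDs here are invented.
column_spec_name =
  "projects/my-project/locations/us-central1/datasets/TBL123/" \
  "tableSpecs/456/columnSpecs/789"

# Split the resource path into its segments and pull out the IDs.
parts = column_spec_name.split("/")
column_spec_id = parts.last
dataset_id     = parts[parts.index("datasets") + 1]

puts column_spec_id # => "789"
puts dataset_id     # => "TBL123"
```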

#feature_importance

def feature_importance() -> ::Float
Returns
  • (::Float) — Output only. When given as part of a Model (always populated): a measurement of how much the correctness of the model's predictions on the TEST data depends on the values in this column. A value between 0 and 1; higher means greater influence. These values are normalized: across all input feature columns of a given model they sum to 1.

    When given back by Predict (populated iff the [feature_importance param][google.cloud.automl.v1beta1.PredictRequest.params] is set) or Batch Predict (populated iff the feature_importance param is set): a measurement of how much the value in this column influenced the prediction returned for the given row. Specifically, the feature importance is the marginal contribution that the feature made to the prediction score relative to the baseline score. These values are computed using the Sampled Shapley method.

#feature_importance=

def feature_importance=(value) -> ::Float
Parameter
  • value (::Float) — Output only. When given as part of a Model (always populated): a measurement of how much the correctness of the model's predictions on the TEST data depends on the values in this column. A value between 0 and 1; higher means greater influence. These values are normalized: across all input feature columns of a given model they sum to 1.

    When given back by Predict (populated iff the [feature_importance param][google.cloud.automl.v1beta1.PredictRequest.params] is set) or Batch Predict (populated iff the feature_importance param is set): a measurement of how much the value in this column influenced the prediction returned for the given row. Specifically, the feature importance is the marginal contribution that the feature made to the prediction score relative to the baseline score. These values are computed using the Sampled Shapley method.

Returns
  • (::Float) — Output only. When given as part of a Model (always populated): a measurement of how much the correctness of the model's predictions on the TEST data depends on the values in this column. A value between 0 and 1; higher means greater influence. These values are normalized: across all input feature columns of a given model they sum to 1.

    When given back by Predict (populated iff the [feature_importance param][google.cloud.automl.v1beta1.PredictRequest.params] is set) or Batch Predict (populated iff the feature_importance param is set): a measurement of how much the value in this column influenced the prediction returned for the given row. Specifically, the feature importance is the marginal contribution that the feature made to the prediction score relative to the baseline score. These values are computed using the Sampled Shapley method.