Reference documentation and code samples for the Cloud AutoML V1beta1 API class Google::Cloud::AutoML::V1beta1::ModelEvaluation.
Evaluation results of a model.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
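A minimal usage sketch (not part of the generated reference), assuming the google-cloud-automl-v1beta1 gem; the client class, the #get_model_evaluation call, and the resource name below are stand-ins for your own setup:

require "google/cloud/automl/v1beta1"

# Hypothetical resource name; substitute your own project, location, model, and evaluation IDs.
evaluation_name = "projects/my-project/locations/us-central1/models/my-model/modelEvaluations/my-evaluation"

client = Google::Cloud::AutoML::V1beta1::AutoML::Client.new
evaluation = client.get_model_evaluation name: evaluation_name

puts evaluation.display_name
puts evaluation.evaluated_example_count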
Methods
#annotation_spec_id
def annotation_spec_id() -> ::String
- (::String) — Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
For Tables, annotation specs do not exist in the dataset and this ID is never set, but for CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the display_name field is used instead.
#annotation_spec_id=
def annotation_spec_id=(value) -> ::String
- value (::String) — Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
For Tables, annotation specs do not exist in the dataset and this ID is never set, but for CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the display_name field is used instead.
- (::String) — Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
For Tables, annotation specs do not exist in the dataset and this ID is never set, but for CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the display_name field is used instead.
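An illustrative sketch of how this field separates the overall evaluation from per-annotation-spec evaluations; the #list_model_evaluations call and the model path are assumptions, not part of this class:

model_path = "projects/my-project/locations/us-central1/models/my-model" # hypothetical
client = Google::Cloud::AutoML::V1beta1::AutoML::Client.new

client.list_model_evaluations(parent: model_path).each do |evaluation|
  if evaluation.annotation_spec_id.empty?
    puts "Overall evaluation: #{evaluation.name}"
  else
    puts "Evaluation for annotation spec #{evaluation.annotation_spec_id}"
  end
end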
#classification_evaluation_metrics
def classification_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics) — Model evaluation metrics for image, text, video, and Tables classification. A Tables problem is considered a classification problem when the target column has the CATEGORY DataType.
#classification_evaluation_metrics=
def classification_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics) — Model evaluation metrics for image, text, video, and Tables classification. A Tables problem is considered a classification problem when the target column has the CATEGORY DataType.
- (::Google::Cloud::AutoML::V1beta1::ClassificationEvaluationMetrics) — Model evaluation metrics for image, text, video, and Tables classification. A Tables problem is considered a classification problem when the target column has the CATEGORY DataType.
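A hedged sketch of reading two headline numbers; the au_prc and log_loss accessors are assumed from the v1beta1 ClassificationEvaluationMetrics message, and evaluation is a ModelEvaluation obtained as in the sketch at the top of this page:

metrics = evaluation.classification_evaluation_metrics
unless metrics.nil?
  puts "AU-PRC:   #{metrics.au_prc}"
  puts "Log loss: #{metrics.log_loss}"
end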
#create_time
def create_time() -> ::Google::Protobuf::Timestamp
- (::Google::Protobuf::Timestamp) — Output only. Timestamp when this model evaluation was created.
#create_time=
def create_time=(value) -> ::Google::Protobuf::Timestamp
- value (::Google::Protobuf::Timestamp) — Output only. Timestamp when this model evaluation was created.
- (::Google::Protobuf::Timestamp) — Output only. Timestamp when this model evaluation was created.
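For illustration, the timestamp can be converted to a Ruby Time; the Timestamp#to_time helper comes from the protobuf well-known-types support and is assumed to be available:

require "google/protobuf/well_known_types"

# evaluation is a ModelEvaluation obtained as in the sketch at the top of this page.
created_at = evaluation.create_time.to_time
puts "Evaluation created at #{created_at.utc}"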
#display_name
def display_name() -> ::String
- (::String) — Output only. The value of display_name at the moment when the model was trained. Because this field captures the value at model training time, models trained from the same dataset may report different values, since the display names could have been changed between the two models' trainings.
For Tables CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
#display_name=
def display_name=(value) -> ::String
- value (::String) — Output only. The value of display_name at the moment when the model was trained. Because this field captures the value at model training time, models trained from the same dataset may report different values, since the display names could have been changed between the two models' trainings.
For Tables CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
- (::String) — Output only. The value of display_name at the moment when the model was trained. Because this field captures the value at model training time, models trained from the same dataset may report different values, since the display names could have been changed between the two models' trainings.
For Tables CLASSIFICATION
[prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type] the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
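A small sketch that prints the per-label display names; evaluations is assumed to be an enumerable of ModelEvaluation messages listed for one model:

evaluations.each do |evaluation|
  label = evaluation.display_name.empty? ? "(overall)" : evaluation.display_name
  puts "#{label}: evaluated on #{evaluation.evaluated_example_count} examples"
end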
#evaluated_example_count
def evaluated_example_count() -> ::Integer
- (::Integer) — Output only. The number of examples used for model evaluation, i.e. those for
which the ground truth from the time of model creation is compared against the
predicted annotations created by the model.
For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is
the total number of all examples used for evaluation.
Otherwise, this is the count of examples that, according to the ground
truth, were annotated by the annotation_spec_id.
#evaluated_example_count=
def evaluated_example_count=(value) -> ::Integer
- value (::Integer) — Output only. The number of examples used for model evaluation, i.e. those for
which the ground truth from the time of model creation is compared against the
predicted annotations created by the model.
For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is
the total number of all examples used for evaluation.
Otherwise, this is the count of examples that, according to the ground
truth, were annotated by the annotation_spec_id.
- (::Integer) — Output only. The number of examples used for model evaluation, i.e. those for
which the ground truth from the time of model creation is compared against the
predicted annotations created by the model.
For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is
the total number of all examples used for evaluation.
Otherwise, this is the count of examples that, according to the ground
truth, were annotated by the annotation_spec_id.
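A sketch of the overall-versus-per-spec distinction described above, using only fields documented on this page (evaluations is assumed to be an enumerable of ModelEvaluation messages):

overall, per_spec = evaluations.partition { |e| e.annotation_spec_id.empty? }

puts "Overall examples:      #{overall.sum(&:evaluated_example_count)}"
puts "Per-label annotations: #{per_spec.sum(&:evaluated_example_count)}"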
#image_object_detection_evaluation_metrics
def image_object_detection_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::ImageObjectDetectionEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::ImageObjectDetectionEvaluationMetrics) — Model evaluation metrics for image object detection.
#image_object_detection_evaluation_metrics=
def image_object_detection_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::ImageObjectDetectionEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::ImageObjectDetectionEvaluationMetrics) — Model evaluation metrics for image object detection.
- (::Google::Cloud::AutoML::V1beta1::ImageObjectDetectionEvaluationMetrics) — Model evaluation metrics for image object detection.
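A brief sketch; the evaluated_bounding_box_count and bounding_box_mean_average_precision accessors are assumed from the v1beta1 ImageObjectDetectionEvaluationMetrics message:

if (metrics = evaluation.image_object_detection_evaluation_metrics)
  puts "Boxes evaluated: #{metrics.evaluated_bounding_box_count}"
  puts "Mean AP:         #{metrics.bounding_box_mean_average_precision}"
end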
#name
def name() -> ::String
- (::String) — Output only. Resource name of the model evaluation.
Format:
projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
#name=
def name=(value) -> ::String
- value (::String) — Output only. Resource name of the model evaluation.
Format:
projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
- (::String) — Output only. Resource name of the model evaluation.
Format:
projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
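The resource name can be taken apart with plain string handling; this is only a sketch, not an API of this class:

# Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
parts = evaluation.name.split "/"
model_id = parts[5]
model_evaluation_id = parts[7]
puts "model=#{model_id} evaluation=#{model_evaluation_id}"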
#regression_evaluation_metrics
def regression_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::RegressionEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::RegressionEvaluationMetrics) — Model evaluation metrics for Tables regression. A Tables problem is considered a regression problem when the target column has the FLOAT64 DataType.
#regression_evaluation_metrics=
def regression_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::RegressionEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::RegressionEvaluationMetrics) — Model evaluation metrics for Tables regression. A Tables problem is considered a regression problem when the target column has the FLOAT64 DataType.
- (::Google::Cloud::AutoML::V1beta1::RegressionEvaluationMetrics) — Model evaluation metrics for Tables regression. A Tables problem is considered a regression problem when the target column has the FLOAT64 DataType.
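A hedged sketch; the root_mean_squared_error and r_squared accessors are assumed from the v1beta1 RegressionEvaluationMetrics message:

if (metrics = evaluation.regression_evaluation_metrics)
  puts "RMSE:      #{metrics.root_mean_squared_error}"
  puts "R squared: #{metrics.r_squared}"
end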
#text_extraction_evaluation_metrics
def text_extraction_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::TextExtractionEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::TextExtractionEvaluationMetrics) — Evaluation metrics for text extraction models.
#text_extraction_evaluation_metrics=
def text_extraction_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::TextExtractionEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::TextExtractionEvaluationMetrics) — Evaluation metrics for text extraction models.
- (::Google::Cloud::AutoML::V1beta1::TextExtractionEvaluationMetrics) — Evaluation metrics for text extraction models.
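A sketch along the same lines; the au_prc accessor is assumed from the v1beta1 TextExtractionEvaluationMetrics message:

if (metrics = evaluation.text_extraction_evaluation_metrics)
  puts "Text extraction AU-PRC: #{metrics.au_prc}"
end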
#text_sentiment_evaluation_metrics
def text_sentiment_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::TextSentimentEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::TextSentimentEvaluationMetrics) — Evaluation metrics for text sentiment models.
#text_sentiment_evaluation_metrics=
def text_sentiment_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::TextSentimentEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::TextSentimentEvaluationMetrics) — Evaluation metrics for text sentiment models.
- (::Google::Cloud::AutoML::V1beta1::TextSentimentEvaluationMetrics) — Evaluation metrics for text sentiment models.
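A sketch; the precision, recall, and f1_score accessors are assumed from the v1beta1 TextSentimentEvaluationMetrics message:

if (metrics = evaluation.text_sentiment_evaluation_metrics)
  puts format("precision=%.3f recall=%.3f f1=%.3f",
              metrics.precision, metrics.recall, metrics.f1_score)
end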
#translation_evaluation_metrics
def translation_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::TranslationEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::TranslationEvaluationMetrics) — Model evaluation metrics for translation.
#translation_evaluation_metrics=
def translation_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::TranslationEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::TranslationEvaluationMetrics) — Model evaluation metrics for translation.
- (::Google::Cloud::AutoML::V1beta1::TranslationEvaluationMetrics) — Model evaluation metrics for translation.
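A sketch; the bleu_score and base_bleu_score accessors are assumed from the v1beta1 TranslationEvaluationMetrics message:

if (metrics = evaluation.translation_evaluation_metrics)
  puts "BLEU:      #{metrics.bleu_score}"
  puts "Base BLEU: #{metrics.base_bleu_score}"
end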
#video_object_tracking_evaluation_metrics
def video_object_tracking_evaluation_metrics() -> ::Google::Cloud::AutoML::V1beta1::VideoObjectTrackingEvaluationMetrics
- (::Google::Cloud::AutoML::V1beta1::VideoObjectTrackingEvaluationMetrics) — Model evaluation metrics for video object tracking.
#video_object_tracking_evaluation_metrics=
def video_object_tracking_evaluation_metrics=(value) -> ::Google::Cloud::AutoML::V1beta1::VideoObjectTrackingEvaluationMetrics
- value (::Google::Cloud::AutoML::V1beta1::VideoObjectTrackingEvaluationMetrics) — Model evaluation metrics for video object tracking.
- (::Google::Cloud::AutoML::V1beta1::VideoObjectTrackingEvaluationMetrics) — Model evaluation metrics for video object tracking.
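Only one of the *_evaluation_metrics fields is expected to be populated for a given evaluation (they form a single metrics group in the underlying proto); the others read as nil. A hedged sketch that dispatches on whichever is set; the describe_metrics helper and the evaluated_frame_count accessor are assumptions for illustration:

def describe_metrics evaluation
  if (m = evaluation.classification_evaluation_metrics)
    "classification (AU-PRC #{m.au_prc})"
  elsif (m = evaluation.regression_evaluation_metrics)
    "regression (RMSE #{m.root_mean_squared_error})"
  elsif (m = evaluation.video_object_tracking_evaluation_metrics)
    "video object tracking (#{m.evaluated_frame_count} frames evaluated)"
  else
    "another metrics type"
  end
end

puts describe_metrics(evaluation)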