Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::EvaluatedAnnotation.
An EvaluatedAnnotation is a true positive, false positive, or false negative. EvaluatedAnnotation is only available under a ModelEvaluationSlice with a slice of the annotationSpec dimension.
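A minimal sketch of working with this message in Ruby, assuming the google-cloud-ai_platform-v1 gem is installed. The field values are hypothetical; in practice the message is produced by an evaluation workflow rather than built by hand, since most fields are output only.
require "google/cloud/ai_platform/v1"

# Hypothetical values for illustration; output-only fields are normally
# populated by the service.
evaluated_annotation = Google::Cloud::AIPlatform::V1::EvaluatedAnnotation.new(
  type: :TRUE_POSITIVE,
  evaluated_data_item_view_id: "1234567890" # hypothetical ID
)

evaluated_annotation.type               # => :TRUE_POSITIVE
evaluated_annotation.predictions.empty? # => true (nothing populated in this sketch)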
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#data_item_payload
def data_item_payload() -> ::Google::Protobuf::Value
- (::Google::Protobuf::Value) — Output only. The payload of the data item on which the Model made the predictions captured in this EvaluatedAnnotation.
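Because the payload is a Google::Protobuf::Value, it can be converted to plain Ruby objects with the well-known-type helpers from the google-protobuf gem. A sketch with a hypothetical payload; in practice the value comes from evaluated_annotation.data_item_payload.
require "google/protobuf/well_known_types"

# Hypothetical payload contents for illustration.
payload = Google::Protobuf::Value.new
payload.from_ruby("content" => "gs://my-bucket/image.jpg", "mimeType" => "image/jpeg")

# Recursively convert the Value into plain Ruby objects.
payload.to_ruby(true) # => {"content"=>"gs://my-bucket/image.jpg", "mimeType"=>"image/jpeg"}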
#error_analysis_annotations
def error_analysis_annotations() -> ::Array<::Google::Cloud::AIPlatform::V1::ErrorAnalysisAnnotation>
- (::Array<::Google::Cloud::AIPlatform::V1::ErrorAnalysisAnnotation>) — Annotations of model error analysis results.
#error_analysis_annotations=
def error_analysis_annotations=(value) -> ::Array<::Google::Cloud::AIPlatform::V1::ErrorAnalysisAnnotation>
- value (::Array<::Google::Cloud::AIPlatform::V1::ErrorAnalysisAnnotation>) — Annotations of model error analysis results.
- (::Array<::Google::Cloud::AIPlatform::V1::ErrorAnalysisAnnotation>) — Annotations of model error analysis results.
#evaluated_data_item_view_id
def evaluated_data_item_view_id() -> ::String
- (::String) — Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on data_item_payload.
#explanations
def explanations() -> ::Array<::Google::Cloud::AIPlatform::V1::EvaluatedAnnotationExplanation>
- (::Array<::Google::Cloud::AIPlatform::V1::EvaluatedAnnotationExplanation>) — Explanations of predictions. Each element of explanations corresponds to one explanation method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list; for example, the second element in the attributions list explains the second element in the predictions list.
#explanations=
def explanations=(value) -> ::Array<::Google::Cloud::AIPlatform::V1::EvaluatedAnnotationExplanation>
- value (::Array<::Google::Cloud::AIPlatform::V1::EvaluatedAnnotationExplanation>) — Explanations of predictions. Each element of explanations corresponds to one explanation method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list; for example, the second element in the attributions list explains the second element in the predictions list.
- (::Array<::Google::Cloud::AIPlatform::V1::EvaluatedAnnotationExplanation>) — Explanations of predictions. Each element of explanations corresponds to one explanation method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list; for example, the second element in the attributions list explains the second element in the predictions list.
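A sketch of walking that index correspondence, assuming evaluated_annotation is a populated EvaluatedAnnotation (the variable name is illustrative).
evaluated_annotation.explanations.each do |eval_explanation|
  puts "Explanation method: #{eval_explanation.explanation_type}"

  # attributions[i] explains predictions[i].
  eval_explanation.explanation.attributions.each_with_index do |attribution, i|
    prediction = evaluated_annotation.predictions[i]
    puts "Prediction #{i}: #{prediction.inspect}"
    puts "  Approximation error: #{attribution.approximation_error}"
  end
end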
#ground_truths
def ground_truths() -> ::Array<::Google::Protobuf::Value>
- (::Array<::Google::Protobuf::Value>) — Output only. The ground truth Annotations, i.e. the Annotations that exist in the test data the Model is evaluated on.
For a true positive, there is exactly one ground truth annotation, which matches the only prediction in predictions.
For a false positive, there are zero or more ground truth annotations that are similar to the only prediction in predictions, but not similar enough for a match.
For a false negative, there is exactly one ground truth annotation, which doesn't match any prediction created by the Model.
The schema of the ground truth is stored in ModelEvaluation.annotation_schema_uri.
#predictions
def predictions() -> ::Array<::Google::Protobuf::Value>
- (::Array<::Google::Protobuf::Value>) — Output only. The Model's predicted annotations.
For a true positive, there is exactly one prediction, which matches the only ground truth annotation in ground_truths.
For a false positive, there is exactly one prediction, which doesn't match any ground truth annotation of the corresponding data_item_view_id.
For a false negative, there are zero or more predictions that are similar to the only ground truth annotation in ground_truths, but not similar enough for a match.
The schema of the prediction is stored in ModelEvaluation.annotation_schema_uri.
#type
def type() -> ::Google::Cloud::AIPlatform::V1::EvaluatedAnnotation::EvaluatedAnnotationType
- (::Google::Cloud::AIPlatform::V1::EvaluatedAnnotation::EvaluatedAnnotationType) — Output only. Type of the EvaluatedAnnotation.
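A sketch tying the type to the ground_truths and predictions semantics documented above, again assuming a populated evaluated_annotation variable.
case evaluated_annotation.type
when :TRUE_POSITIVE
  # Exactly one prediction matching the single ground truth annotation.
  puts "Matched prediction: #{evaluated_annotation.predictions.first.inspect}"
when :FALSE_POSITIVE
  # One prediction with no matching ground truth on this data item view.
  puts "Unmatched prediction: #{evaluated_annotation.predictions.first.inspect}"
when :FALSE_NEGATIVE
  # One ground truth annotation that no prediction matched.
  puts "Missed ground truth: #{evaluated_annotation.ground_truths.first.inspect}"
end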