# Method: slices.batchImport

**Full name**: projects.locations.models.evaluations.slices.batchImport

Imports a list of externally generated EvaluatedAnnotations.

### Endpoint

post `https://{service-endpoint}/v1/{parent}:batchImport`

Where `{service-endpoint}` is one of the [supported service endpoints](/vertex-ai/docs/reference/rest#rest_endpoints).

### Path parameters

`parent` `string`

Required. The name of the parent ModelEvaluationSlice resource. Format: `projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}`
### Request body

The request body contains data with the following structure:

Fields

`evaluatedAnnotations[]` `object (`[EvaluatedAnnotation](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation)`)`

Required. The evaluated annotation resources to import.
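For orientation, here is a minimal sketch of calling this endpoint over raw REST with the `requests` library; it is not an official client sample. The project, location, and resource IDs are hypothetical, and `ACCESS_TOKEN` is assumed to come from, for example, `gcloud auth print-access-token`.

```python
import requests

ENDPOINT = "https://us-central1-aiplatform.googleapis.com"  # a regional service endpoint
ACCESS_TOKEN = "ya29...."  # placeholder OAuth 2.0 bearer token

# Parent ModelEvaluationSlice, following the documented format (IDs made up).
parent = (
    "projects/my-project/locations/us-central1"
    "/models/123/evaluations/456/slices/789"
)

body = {
    "evaluatedAnnotations": [
        # One or more EvaluatedAnnotation objects; see the structure below.
    ]
}

resp = requests.post(
    f"{ENDPOINT}/v1/{parent}:batchImport",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()
print(resp.json().get("importedEvaluatedAnnotationsCount"))
```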
### Response body

Response message for [ModelService.BatchImportEvaluatedAnnotations](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#google.cloud.aiplatform.v1.ModelService.BatchImportEvaluatedAnnotations).

If successful, the response body contains data with the following structure:

Fields

`importedEvaluatedAnnotationsCount` `integer`

Output only. Number of EvaluatedAnnotations imported.

EvaluatedAnnotation
-------------------

True positive, false positive, or false negative.

EvaluatedAnnotation is only available under a ModelEvaluationSlice with a slice of the `annotationSpec` dimension.

Fields

`type` `enum (`[EvaluatedAnnotationType](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotationType)`)`

Output only. Type of the EvaluatedAnnotation.

`predictions[]` `value (`[Value](https://protobuf.dev/reference/protobuf/google.protobuf/#value)` format)`

Output only. The annotations predicted by the model.

For a true positive, there is exactly one prediction, which matches the single ground truth annotation in [groundTruths](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.ground_truths).

For a false positive, there is exactly one prediction, which doesn't match any ground truth annotation of the corresponding [data_item_view_id](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.evaluated_data_item_view_id).

For a false negative, there are zero or more predictions that are similar to the single ground truth annotation in [groundTruths](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.ground_truths), but not similar enough for a match.

The schema of the prediction is stored in [ModelEvaluation.annotation_schema_uri](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations#ModelEvaluation.FIELDS.annotation_schema_uri).
`groundTruths[]` `value (`[Value](https://protobuf.dev/reference/protobuf/google.protobuf/#value)` format)`

Output only. The ground truth Annotations, i.e., the Annotations that exist in the test data the Model is evaluated on.
For a true positive, there is exactly one ground truth annotation, which matches the single prediction in [predictions](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.predictions).

For a false positive, there are zero or more ground truth annotations that are similar to the single prediction in [predictions](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.predictions), but not similar enough for a match.

For a false negative, there is exactly one ground truth annotation, which doesn't match any prediction created by the model.

The schema of the ground truth is stored in [ModelEvaluation.annotation_schema_uri](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations#ModelEvaluation.FIELDS.annotation_schema_uri).
`dataItemPayload` `value (`[Value](https://protobuf.dev/reference/protobuf/google.protobuf/#value)` format)`

Output only. The payload of the data item on which the Model produced this EvaluatedAnnotation.
`evaluatedDataItemViewId` `string`

Output only. ID of the EvaluatedDataItemView under the same ancestor ModelEvaluation. The EvaluatedDataItemView consists of all ground truths and predictions on [dataItemPayload](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.data_item_payload).
`explanations[]` `object (`[EvaluatedAnnotationExplanation](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotationExplanation)`)`

Explanations of [predictions](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.predictions). Each element holds the explanation produced by one explanation method.
The attributions list in the [EvaluatedAnnotationExplanation.explanation](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotationExplanation.FIELDS.explanation) object corresponds element-by-element to the [predictions](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation.FIELDS.predictions) list. For example, the second element in the attributions list explains the second element in the predictions list.

`errorAnalysisAnnotations[]` `object (`[ErrorAnalysisAnnotation](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#ErrorAnalysisAnnotation)`)`

Annotations of model error analysis results.
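Tying the fields above together, a false-positive EvaluatedAnnotation might look like the following, written as the Python dict you would place in the request's `evaluatedAnnotations` list. The exact shape of each prediction is defined by `ModelEvaluation.annotation_schema_uri`; the values here (`"cat"`, the GCS URI, the view ID) are made up for illustration.

```python
false_positive = {
    "type": "FALSE_POSITIVE",
    # Exactly one prediction that matched no ground truth annotation...
    "predictions": [{"displayName": "cat", "confidence": 0.82}],
    # ...and zero or more near-miss ground truths (here: none).
    "groundTruths": [],
    "dataItemPayload": {"imageGcsUri": "gs://my-bucket/image-001.jpg"},  # hypothetical
    "evaluatedDataItemViewId": "view-001",  # hypothetical
}
```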
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-06-27 UTC."],[],[],null,["# Method: slices.batchImport\n\n**Full name**: projects.locations.models.evaluations.slices.batchImport\n\nImports a list of externally generated EvaluatedAnnotations. \n\n### Endpoint\n\npost `https:``/``/{service-endpoint}``/v1``/{parent}:batchImport` \nWhere `{service-endpoint}` is one of the [supported service endpoints](/vertex-ai/docs/reference/rest#rest_endpoints).\n\n### Path parameters\n\n`parent` `string` \nRequired. The name of the parent ModelEvaluationSlice resource. Format: `projects/{project}/locations/{location}/models/{model}/evaluations/{evaluation}/slices/{slice}`\n\n### Request body\n\nThe request body contains data with the following structure:\nFields `evaluatedAnnotations[]` `object (`[EvaluatedAnnotation](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotation)`)` \nRequired. Evaluated annotations resource to be imported. \n\n### Response body\n\nResponse message for [ModelService.BatchImportEvaluatedAnnotations](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#google.cloud.aiplatform.v1.ModelService.BatchImportEvaluatedAnnotations)\n\nIf successful, the response body contains data with the following structure:\nFields `importedEvaluatedAnnotationsCount` `integer` \nOutput only. Number of EvaluatedAnnotations imported. \n\nEvaluatedAnnotation\n-------------------\n\nTrue positive, false positive, or false negative.\n\nEvaluatedAnnotation is only available under ModelEvaluationSlice with slice of `annotationSpec` dimension.\nFields `type` `enum (`[EvaluatedAnnotationType](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#EvaluatedAnnotationType)`)` \nOutput only. type of the EvaluatedAnnotation.\n`predictions[]` `value (`[Value](https://protobuf.dev/reference/protobuf/google.protobuf/#value)` format)` \nOutput only. 
EvaluatedAnnotationType
-----------------------

Describes the type of the EvaluatedAnnotation. The type is determined by how the model's predictions match the ground truth annotations: `TRUE_POSITIVE`, `FALSE_POSITIVE`, or `FALSE_NEGATIVE` (with `EVALUATED_ANNOTATION_TYPE_UNSPECIFIED` as the unset value).

EvaluatedAnnotationExplanation
------------------------------

Explanation result of the prediction produced by the Model.

Fields

`explanationType` `string`

Explanation type.

For AutoML Image Classification models, possible values are:

- `image-integrated-gradients`
- `image-xrai`

`explanation` `object (`[Explanation](/vertex-ai/docs/reference/rest/v1/Explanation)`)`

Explanation attribution response details.
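Because the attributions and predictions lists run in parallel, pairing them is a simple `zip`. A sketch, relying only on the documented correspondence (the i-th attribution explains the i-th prediction) and assuming `evaluated_annotation` is a dict shaped like the resource above:

```python
for exp in evaluated_annotation.get("explanations", []):
    method = exp.get("explanationType")  # e.g. "image-integrated-gradients"
    attributions = exp.get("explanation", {}).get("attributions", [])
    # Pair each prediction with the attribution that explains it.
    for prediction, attribution in zip(
        evaluated_annotation.get("predictions", []), attributions
    ):
        print(method, prediction, attribution)
```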
ErrorAnalysisAnnotation
-----------------------

Model error analysis for each annotation.

Fields

`attributedItems[]` `object (`[AttributedItem](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#AttributedItem)`)`

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

`queryType` `enum (`[QueryType](/vertex-ai/docs/reference/rest/v1/projects.locations.models.evaluations.slices/batchImport#QueryType)`)`

The query type used for finding the attributed items.

`outlierScore` `number`

The outlier score of this annotated item. Usually defined as the min of all distances from attributed items.

`outlierThreshold` `number`

The threshold used to determine whether this annotation is an outlier or not.

AttributedItem
--------------

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

Fields

`annotationResourceName` `string`

The unique ID for each annotation. Used by the frontend to allocate the annotation in the database.

`distance` `number`

The distance of this item to the annotation.

QueryType
---------

The query type used for finding the attributed items.
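Putting the ErrorAnalysisAnnotation fields together, the documented outlier convention can be sketched as follows. Two assumptions are baked in: `outlierScore` defaults to the min of the attributed items' distances (the docs say "usually defined as"), and "outlier" means the score strictly exceeds `outlierThreshold`; the strictness of the comparison is a guess.

```python
def is_outlier(error_annotation: dict) -> bool:
    # Fall back to the min-distance convention if no score was provided.
    distances = [item["distance"] for item in error_annotation["attributedItems"]]
    score = error_annotation.get("outlierScore", min(distances))
    return score > error_annotation["outlierThreshold"]
```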