Method: projects.locations.endpoints.explain

Perform an online explanation.

If deployedModelId is specified, the corresponding DeployedModel must have explanationSpec populated. If deployedModelId is not specified, all DeployedModels must have explanationSpec populated. Only deployed AutoML tabular Models have explanationSpec.

HTTP request

POST https://{service-endpoint}/v1beta1/{endpoint}:explain

Where {service-endpoint} is one of the supported service endpoints.
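As a minimal sketch, the URL can be assembled from a regional service endpoint and the Endpoint resource name. The project, region, and endpoint ID below are illustrative placeholders, not real resources.

```python
# Build the POST URL for projects.locations.endpoints.explain.
# Both arguments are supplied by the caller; the values used in the
# example call are hypothetical.

def explain_url(service_endpoint: str, endpoint_name: str) -> str:
    """Return the v1beta1 :explain URL for the given Endpoint."""
    return f"https://{service_endpoint}/v1beta1/{endpoint_name}:explain"

url = explain_url(
    "us-central1-aiplatform.googleapis.com",                      # regional service endpoint
    "projects/my-project/locations/us-central1/endpoints/123",    # hypothetical resource name
)
```

The resulting URL is the target of the POST request described above; the request body is documented in the next section.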

Path parameters

Parameters
endpoint

string

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

Request body

The request body contains data with the following structure:

JSON representation
{
  "instances": [
    value
  ],
  "parameters": value,
  "explanationSpecOverride": {
    object (ExplanationSpecOverride)
  },
  "deployedModelId": string
}
Fields
instances[]

value (Value format)

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, AutoML Models return an error, while customer-created Models behave as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instanceSchemaUri.

parameters

value (Value format)

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parametersSchemaUri.

explanationSpecOverride

object (ExplanationSpecOverride)

If specified, overrides the explanationSpec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:

  • Explaining the top-5 prediction results as opposed to top-1
  • Increasing the path count or step count of the attribution methods to reduce approximation error
  • Using different baselines for explaining the prediction results

deployedModelId

string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
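Putting the fields together, a request body can be sketched as below. Optional fields are omitted when unset so the emitted JSON matches the documented structure; the instance values and DeployedModel ID are hypothetical placeholders.

```python
# Sketch: assemble an ExplainRequest body per the schema above.
# Only "instances" is required; the other fields are optional.

def build_explain_request(instances, parameters=None,
                          explanation_spec_override=None,
                          deployed_model_id=None):
    body = {"instances": instances}          # required
    if parameters is not None:
        body["parameters"] = parameters
    if explanation_spec_override is not None:
        body["explanationSpecOverride"] = explanation_spec_override
    if deployed_model_id is not None:
        body["deployedModelId"] = deployed_model_id
    return body

body = build_explain_request(
    instances=[{"age": 39, "income": 52000}],   # hypothetical tabular instance
    deployed_model_id="1234567890",             # pins a specific DeployedModel,
)                                               # overriding traffic_split
```

Because unset fields are dropped rather than sent as null, the body serializes exactly to the JSON representation shown above.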

Response body

If successful, the response body contains data with the following structure:

Response message for PredictionService.Explain.

JSON representation
{
  "explanations": [
    {
      object (Explanation)
    }
  ],
  "deployedModelId": string,
  "predictions": [
    value
  ]
}
Fields
explanations[]

object (Explanation)

The explanations of the Model's PredictResponse.predictions.

It has the same number of elements as instances to be explained.

deployedModelId

string

ID of the Endpoint's DeployedModel that served this explanation.

predictions[]

value (Value format)

The predictions that are the output of the predictions call. Same as PredictResponse.predictions.
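Since explanations[] has the same number of elements as the instances that were explained, prediction i pairs with explanation i. The response dict below is a mocked illustration of that pairing, not real service output.

```python
# Sketch: pair each prediction with its explanation by index.
# The response content here is fabricated for illustration only.

response = {
    "deployedModelId": "1234567890",
    "predictions": [{"score": 0.92}, {"score": 0.31}],
    "explanations": [
        {"attributions": [{"outputIndex": [0]}]},
        {"attributions": [{"outputIndex": [0]}]},
    ],
}

# One (prediction, explanation) pair per input instance.
pairs = list(zip(response["predictions"], response["explanations"]))
```

This index-based pairing is the intended way to relate attribution output back to individual instances.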

Authorization Scopes

Requires the following OAuth scope:

  • https://www.googleapis.com/auth/cloud-platform

For more information, see the Authentication Overview.

IAM Permissions

Requires the following IAM permission on the endpoint resource:

  • aiplatform.endpoints.explain

For more information, see the IAM documentation.

ExplanationSpecOverride

The ExplanationSpec entries that can be overridden at online explanation time.

JSON representation
{
  "parameters": {
    object (ExplanationParameters)
  },
  "metadata": {
    object (ExplanationMetadataOverride)
  }
}
Fields
parameters

object (ExplanationParameters)

The parameters to be overridden. Note that the attribution method (ExplanationParameters.method) cannot be changed. If not specified, no parameter is overridden.

metadata

object (ExplanationMetadataOverride)

The metadata to be overridden. If not specified, no metadata is overridden.
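As one concrete use, an ExplanationSpecOverride can raise topK so the call explains the top-5 predicted classes instead of the deployed default (topK is a field of ExplanationParameters). The value is illustrative; the attribution method itself cannot be changed by an override.

```python
# Sketch: an ExplanationSpecOverride that only adjusts parameters.
# The attribution method configured on the DeployedModel stays in effect.

override = {
    "parameters": {
        "topK": 5,   # explain the 5 highest-scoring outputs
    }
}
```

This dict would be supplied as the explanationSpecOverride field of an ExplainRequest.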

ExplanationMetadataOverride

The ExplanationMetadata entries that can be overridden at online explanation time.

JSON representation
{
  "inputs": {
    string: {
      object (InputMetadataOverride)
    },
    ...
  }
}
Fields
inputs

map (key: string, value: object (InputMetadataOverride))

Required. Overrides the input metadata of the features. The key is the name of the feature to be overridden. The keys specified here must exist in the input metadata to be overridden. If a feature is not specified here, the corresponding feature's input metadata is not overridden.
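A metadata override can be sketched as below. The feature name "income" is hypothetical; the key must match a feature that already exists in the deployed Model's explanation metadata, or the override is rejected.

```python
# Sketch: an ExplanationMetadataOverride that replaces the baseline
# for a single feature. Features not listed keep their original metadata.

metadata_override = {
    "inputs": {
        "income": {                   # key = name of the feature to override
            "inputBaselines": [0]     # replaces the original baseline(s)
        }
    }
}
```

This dict would be supplied as the metadata field of an ExplanationSpecOverride.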

InputMetadataOverride

The input metadata entries to be overridden.

JSON representation
{
  "inputBaselines": [
    value
  ]
}
Fields
inputBaselines[]

value (Value format)

Baseline inputs for this feature.

This overrides the input_baselines field of the ExplanationMetadata.InputMetadata object of the corresponding feature's input metadata. If not specified, the original baselines are not overridden.

Explanation

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.

JSON representation
{
  "attributions": [
    {
      object (Attribution)
    }
  ]
}
Fields
attributions[]

object (Attribution)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

If users set ExplanationParameters.top_k, the attributions are sorted by Attribution.instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in outputIndices.
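The top_k ordering described above can be reproduced client-side with a simple sort on instance_output_value. The attribution list below is mocked for illustration.

```python
# Sketch: rank attributions by instanceOutputValue, descending,
# mirroring the order the service uses when topK is set.

attributions = [
    {"outputIndex": [2], "instanceOutputValue": 0.10},
    {"outputIndex": [0], "instanceOutputValue": 0.75},
    {"outputIndex": [1], "instanceOutputValue": 0.15},
]

ranked = sorted(attributions,
                key=lambda a: a["instanceOutputValue"],
                reverse=True)
# ranked[0] is the attribution for the highest-scoring output.
```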