Get explanations

This guide describes how to get explanations from a Model resource on Vertex AI. You can get explanations in two ways:

  • Online explanations: Synchronous requests to the Vertex AI API, similar to online inferences, that return inferences with feature attributions.

  • Batch explanations: Asynchronous requests to the Vertex AI API that return inferences with feature attributions. Batch explanations are an optional part of batch inference requests.

Before you begin

Before getting explanations, you must do the following:

  1. Configure your Model resource to support explanations. This configuration step differs depending on the type of machine learning model that you use.

  2. If you want to get online explanations, deploy the Model that you created in the preceding step to an Endpoint resource.
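For the deployment step, here's a minimal sketch using the Vertex AI SDK for Python; the project, region, model ID, and machine type are placeholder values, and the deployment options you need depend on your model.

from google.cloud import aiplatform

# Placeholder project, region, and model ID -- replace with your own values.
aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model("MODEL_ID")

# Deploying returns the Endpoint resource that online explanation
# requests are sent to. The machine type here is illustrative.
endpoint = model.deploy(machine_type="n1-standard-4")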

Get online explanations

To get online explanations, follow most of the same steps that you would to get online inferences. However, instead of sending a projects.locations.endpoints.predict request to the Vertex AI API, send a projects.locations.endpoints.explain request.

The following guides provide detailed instructions for preparing and sending online explanation requests:
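As a quick illustration, the following sketch uses the Vertex AI SDK for Python, which calls the explain method for you. The endpoint ID and instance contents are placeholders that depend on your model.

from google.cloud import aiplatform

# Placeholder project, region, and endpoint ID -- replace with your own values.
aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("ENDPOINT_ID")

# endpoint.explain() sends an explain request instead of a predict request.
# The instance format depends on your model; this tabular example is illustrative.
response = endpoint.explain(instances=[{"feature_1": 0.5, "feature_2": "b"}])

for explanation in response.explanations:
    # Each explanation contains per-feature attributions for one inference.
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)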

Get batch explanations

Only feature-based batch explanations are supported; you cannot get example-based batch explanations.

To get batch explanations, set the generateExplanation field to true when you create a batch inference job.

For detailed instructions about preparing and creating batch inference jobs, read Getting batch inferences.
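For illustration, here's a minimal sketch using the Vertex AI SDK for Python, where the generate_explanation parameter corresponds to the generateExplanation field; the model ID and Cloud Storage paths are placeholders.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder model ID -- replace with your own value.
model = aiplatform.Model("MODEL_ID")

# generate_explanation=True corresponds to setting generateExplanation
# to true in the batch inference job, so the output includes feature
# attributions alongside each inference.
batch_job = model.batch_predict(
    job_display_name="batch-inference-with-explanations",
    gcs_source="gs://my-bucket/input.jsonl",
    gcs_destination_prefix="gs://my-bucket/output/",
    generate_explanation=True,
)
batch_job.wait()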

Get concurrent explanations

Vertex Explainable AI supports concurrent explanations, which let you request both feature-based and example-based explanations from the same deployed model endpoint without deploying your model separately for each explanation method.

To get concurrent explanations, upload your model and configure either example-based or feature-based explanations. Then, deploy your model as usual.

After the model is deployed, you can request the configured explanations as usual. Additionally, you can request concurrent explanations by specifying concurrent_explanation_spec_override.

Note the following when using concurrent explanations:

  • Concurrent explanations are available only in the v1beta1 API version. If you're using the Vertex AI SDK for Python, you need to use the preview model to request concurrent explanations.
  • Example-based explanations can't be requested after deploying with feature-based explanations. If you want both example-based and feature-based explanations, deploy your model with example-based explanations and request feature-based explanations through the concurrent explanation field.
  • Batch explanations aren't supported for concurrent explanations; online explanations are the only way to use this feature.
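To make the request shape concrete, here's a hedged sketch that calls the v1beta1 explain method over REST. The project, region, endpoint ID, and instance are placeholders; the camelCase field name assumes the standard JSON mapping of concurrent_explanation_spec_override, and the override payload depends on how your model's explanations are configured, so treat this as an outline rather than a complete request.

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials; project, region, and endpoint ID
# below are placeholders -- replace with your own values.
credentials, _ = google.auth.default()
session = AuthorizedSession(credentials)

url = (
    "https://us-central1-aiplatform.googleapis.com/v1beta1/"
    "projects/my-project/locations/us-central1/endpoints/ENDPOINT_ID:explain"
)

body = {
    "instances": [{"feature_1": 0.5, "feature_2": "b"}],
    # Requests the second explanation method alongside the one the model
    # was deployed with. The override contents are illustrative and depend
    # on your model's explanation configuration.
    "concurrentExplanationSpecOverride": {},
}

response = session.post(url, json=body)
print(response.json())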

Troubleshooting

This section describes troubleshooting steps that you might find helpful if you run into problems while getting explanations.

Error: list index out of range

If you get the following error message when requesting explanations:

"error": "Explainability failed with exception: listindex out of range"

Make sure that you are not passing an empty array into a field that expects an array of objects. For example, if field1 accepts an array of objects, the following request body might result in an error:

{
  "instances": [
    {
      "field1": [],
    }
  ]
}

Instead, make sure the array is not empty, for example:

{
  "instances": [
    {
      "field1": [
        {}
      ]
    }
  ]
}

What's next