This guide describes how to get explanations from a Model resource on Vertex AI. You can get explanations in two ways:
- Online explanations: synchronous requests to the Vertex AI API, similar to online inferences, that return inferences with feature attributions.
- Batch explanations: asynchronous requests to the Vertex AI API that return inferences with feature attributions. Batch explanations are an optional part of batch inference requests.
Before you begin
Before getting explanations, you must do the following:

1. Create a Model that supports Vertex Explainable AI. This step differs depending on what type of machine learning model you use:
   - If you want to get explanations from a custom-trained model, then follow either Configuring example-based explanations or Configuring feature-based explanations to create a Model that supports Vertex Explainable AI.
   - If you want to get explanations from an AutoML tabular classification or regression model, then train an AutoML model on a tabular dataset. No specific configuration is required to use Vertex Explainable AI. Explanations for forecasting models aren't supported.
   - If you want to get explanations from an AutoML image classification model, then train an AutoML model on an image dataset and enable explanations when you deploy the model. No specific configuration is required to use Vertex Explainable AI. Explanations for object detection models aren't supported.
2. If you want to get online explanations, deploy the Model that you created in the preceding step to an Endpoint resource.
Get online explanations
To get online explanations, follow most of the same steps that you would to get online inferences. However, instead of sending a projects.locations.endpoints.predict request to the Vertex AI API, send a projects.locations.endpoints.explain request.
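As a sketch, an online explanation request to the v1 REST API might look like the following, where PROJECT_ID, LOCATION, and ENDPOINT_ID are placeholders and the instance fields shown are purely illustrative (your model's input schema will differ):

```
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/ENDPOINT_ID:explain

{
  "instances": [
    { "feature_1": 0.5, "feature_2": "a" }
  ]
}
```

The response contains the model's inferences along with an explanations field holding the feature attributions.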
The following guides provide detailed instructions for preparing and sending online explanation requests:
- For AutoML image classification models, read Getting online inferences from AutoML models.
- For AutoML tabular classification and regression models, read Get inferences from AutoML models.
- For custom-trained models, read Getting online inferences from custom-trained models.
Get batch explanations
Only feature-based batch explanations are supported; you cannot get example-based batch explanations.
To get batch explanations, set the generateExplanation field to true when you create a batch inference job.

For detailed instructions about preparing and creating batch inference jobs, read Getting batch inferences.
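As a sketch, a batch inference job request with explanations enabled might look like the following, assuming the v1 BatchPredictionJob REST resource; the bucket paths, job name, and model ID are placeholders:

```
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/batchPredictionJobs

{
  "displayName": "BATCH_JOB_NAME",
  "model": "projects/PROJECT_ID/locations/LOCATION/models/MODEL_ID",
  "inputConfig": {
    "instancesFormat": "jsonl",
    "gcsSource": { "uris": ["gs://BUCKET/input.jsonl"] }
  },
  "outputConfig": {
    "predictionsFormat": "jsonl",
    "gcsDestination": { "outputUriPrefix": "gs://BUCKET/output/" }
  },
  "generateExplanation": true
}
```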
Get concurrent explanations
Vertex Explainable AI supports concurrent explanations. Concurrent explanations allow you to request both feature-based and example-based explanations from the same deployed model endpoint without having to deploy your model separately for each explanation method.
To get concurrent explanations, upload your model and configure either example-based or feature-based explanations. Then, deploy your model as usual.
After the model is deployed, you can request the configured explanations as usual.
Additionally, you can request concurrent explanations by specifying concurrent_explanation_spec_override.
Note the following when using concurrent explanations:
- Concurrent explanations are available using only the v1beta1 API version. If you're using the Vertex AI SDK for Python, you'll need to use the preview model to use concurrent explanations.
- Example-based explanations can't be requested after deploying with feature-based explanations. If you want both example-based and feature-based explanations, deploy your model using example-based explanations and request feature-based explanations using the concurrent explanation field.
- Batch explanations are not supported for concurrent explanations. Online explanations are the only way to use this feature.
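As a rough sketch, such a request might look like the following, assuming the v1beta1 JSON field is spelled concurrentExplanationSpecOverride and maps explanation names to spec overrides; the exact shape of the override object, the explanation name, and the instance fields are assumptions here, so check the v1beta1 API reference before relying on them:

```
POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/endpoints/ENDPOINT_ID:explain

{
  "instances": [
    { "feature_1": 0.5 }
  ],
  "concurrentExplanationSpecOverride": {
    "my_examples": {
      "examplesOverride": { "neighborCount": 10 }
    }
  }
}
```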
Troubleshooting
This section describes troubleshooting steps that you might find helpful if you run into problems while getting explanations.
Error: list index out of range
If you get the following error message when requesting explanations:
"error": "Explainability failed with exception: listindex out of range"
Make sure that you are not passing an empty array into a field that expects an array of objects. For example, if field1 accepts an array of objects, the following request body might result in an error:
{
  "instances": [
    {
      "field1": []
    }
  ]
}
Instead, make sure the array is not empty, for example:
{
  "instances": [
    {
      "field1": [
        {}
      ]
    }
  ]
}
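One way to catch this client-side is to scan your request instances for empty arrays before sending them. The following is a minimal sketch; the helper name and the sample instances are hypothetical, not part of any Vertex AI SDK:

```python
def find_empty_array_fields(instances):
    """Return (instance_index, field_name) pairs for fields set to an empty list.

    Sending such fields in an explanation request can trigger the
    "list index out of range" error described above.
    """
    problems = []
    for i, instance in enumerate(instances):
        for field, value in instance.items():
            if isinstance(value, list) and len(value) == 0:
                problems.append((i, field))
    return problems


# An instance with an empty array for "field1" is flagged; a populated one is not.
bad = [{"field1": []}]
good = [{"field1": [{}]}]
print(find_empty_array_fields(bad))   # → [(0, 'field1')]
print(find_empty_array_fields(good))  # → []
```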
What's next
- Based on the explanations you receive, learn how to adjust your Model to improve explanations.
- Try a sample notebook demonstrating Vertex Explainable AI on tabular data or image data.