Get explanations

This guide describes how to get explanations from a Model resource on Vertex AI. You can get explanations in two ways:

  • Online explanations: Synchronous requests to the Vertex AI API, analogous to online predictions, that return predictions with feature attributions.

  • Batch explanations: Asynchronous requests to the Vertex AI API that return predictions with feature attributions. Batch explanations are an optional part of batch prediction requests.

Before you begin

Before getting explanations, you must do the following:

  1. Configure your Model for Vertex Explainable AI. This step differs depending on the type of machine learning model that you use.

  2. If you want to get online explanations, deploy the Model that you created in the preceding step to an Endpoint resource.

Get online explanations

To get online explanations, follow most of the same steps that you would to get online predictions. However, instead of sending a projects.locations.endpoints.predict request to the Vertex AI API, send a projects.locations.endpoints.explain request.
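As a minimal sketch (the project, location, and endpoint ID below are placeholder values, not real resources), the only difference from a predict call is the `:explain` verb at the end of the endpoint URL; the `instances` payload has the same shape as a predict request:

```python
import json

# Placeholder values for illustration only.
project = "my-project"
location = "us-central1"
endpoint_id = "1234567890"

# The explain request carries the same "instances" payload as predict.
request_body = {
    "instances": [
        {"feature_a": 10, "feature_b": 100},
    ],
}

# The request URL ends in :explain instead of :predict.
url = (
    f"https://{location}-aiplatform.googleapis.com/v1/projects/{project}"
    f"/locations/{location}/endpoints/{endpoint_id}:explain"
)

print(url)
print(json.dumps(request_body))
```

The response then contains the predictions along with an `explanations` field holding the feature attributions for each instance.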

Separate guides provide detailed instructions for preparing and sending online explanation requests.

Get batch explanations

To get batch explanations, set the generateExplanation field to true when you create a batch prediction job.

For detailed instructions about preparing and creating batch prediction jobs, read Getting batch predictions.
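As a sketch of the request body (the display name, model resource name, and Cloud Storage URIs below are placeholder values), setting generateExplanation to true is the only change needed relative to a plain batch prediction job:

```python
import json

# Placeholder resource names and URIs for illustration only.
batch_job = {
    "displayName": "my-batch-explain-job",
    "model": "projects/my-project/locations/us-central1/models/1234567890",
    "inputConfig": {
        "instancesFormat": "jsonl",
        "gcsSource": {"uris": ["gs://my-bucket/input.jsonl"]},
    },
    "outputConfig": {
        "predictionsFormat": "jsonl",
        "gcsDestination": {"outputUriPrefix": "gs://my-bucket/output/"},
    },
    # Request feature attributions alongside the predictions.
    "generateExplanation": True,
}

print(json.dumps(batch_job, indent=2))
```

With this flag set, the job's output includes feature attributions next to each prediction.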

Get explanations locally in Vertex AI Workbench user-managed notebooks

In Vertex AI Workbench user-managed notebooks, you can generate explanations for your custom-trained model by running Vertex Explainable AI within your notebook's local kernel or runtime, without deploying the model to Vertex AI. Local explanations let you try out different Vertex Explainable AI settings without adjusting your Vertex AI model deployment for each change. This makes it easier and faster to evaluate the impact of different baselines, different visualization settings for your explanations, or a different number of steps or paths for your algorithm.

Local explanations are available only within user-managed notebooks, so this feature does not work in Jupyter notebooks that are run outside of a user-managed notebooks instance.

To generate explanations locally in a user-managed notebooks instance:

  • Create a user-managed notebooks instance.
  • Launch the JupyterLab environment from your user-managed notebooks instance, and then create or import a notebook.
  • Save the model artifact to your notebook's local environment, or a Cloud Storage bucket.
  • Generate and save metadata to describe your model and configure your explanation request.

Use the Explainable AI SDK in user-managed notebooks

The Explainable AI SDK is pre-installed in user-managed notebooks instances. Within your notebook, you can use the Explainable AI SDK to save your model artifact and automatically identify metadata about your model's inputs and outputs for the explanation request. You can also specify other parameters to configure your explanation request, and then visualize the explanation results.

You can save models and metadata either in your notebook's local environment, or in a Cloud Storage bucket. If you're using TensorFlow, you can use the save_model_with_metadata() method to infer your model's inputs and outputs, and save this explanation metadata with your model.

Next, load the model into the Explainable AI SDK using load_model_from_local_path(). If needed, you can adjust the configuration for the specific Vertex Explainable AI algorithm. For example, you can change the number of paths to use for Sampled Shapley, or the number of steps to use for integrated gradients or XRAI.

Finally, call explain() with instances of data, and visualize the feature attributions.

You can use the following example code to get local explanations for a TensorFlow 2 model within a user-managed notebooks instance:

# This sample code only works within a user-managed notebooks instance.
import explainable_ai_sdk
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder

# Infer the model's inputs and outputs, and save the explanation metadata
# alongside the model artifact.
metadata_and_model_builder = SavedModelMetadataBuilder('LOCAL_PATH_TO_MODEL')
metadata_and_model_builder.save_model_with_metadata('LOCAL_PATH_TO_SAVED_MODEL_ARTIFACT')

# Load the model and adjust the configuration for Explainable AI parameters
num_paths = 20
model_artifact_with_metadata = explainable_ai_sdk.load_model_from_local_path(
    'LOCAL_PATH_TO_SAVED_MODEL_ARTIFACT',
    explainable_ai_sdk.SampledShapleyConfig(num_paths))

# Explainable AI supports generating explanations for multiple predictions
instances = [{feature_a: 10, feature_2: 100,...}, ... ]
explanations = model_artifact_with_metadata.explain(instances)

# Visualize the feature attributions for the first instance
explanations[0].visualize_attributions()

For more information about the Explainable AI SDK, including its available configurations and parameters, see the SDK's documentation. To learn more about Vertex AI Workbench user-managed notebooks, see the user-managed notebooks documentation.

What's next