Get explanations

This guide describes how to get explanations from a Model resource on Vertex AI. You can get explanations in two ways:

  • Online explanations: Synchronous requests to the Vertex AI API, similar to online predictions, that return predictions with feature attributions.

  • Batch explanations: Asynchronous requests to the Vertex AI API that return predictions with feature attributions. Batch explanations are an optional part of batch prediction requests.

Before you begin

Before getting explanations, you must do the following:

  1. Configure your Model resource for explanations. This step differs depending on the type of machine learning model that you use.

  2. If you want to get online explanations, deploy the Model that you created in the preceding step to an Endpoint resource, as in the sketch that follows this list.
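
For example, the following is a minimal sketch of these two steps using the Vertex AI SDK for Python. It assumes a custom-trained TensorFlow model configured for Sampled Shapley attributions; the resource names, paths, tensor names, container image, and path count are all illustrative:

# A minimal sketch, not an official sample: upload a Model configured for
# feature-based explanations, then deploy it to an Endpoint.
from google.cloud import aiplatform
from google.cloud.aiplatform.explain import ExplanationMetadata, ExplanationParameters

aiplatform.init(project="PROJECT_ID", location="us-central1")

# Step 1: upload a Model with an explanation configuration. The tensor
# names below are placeholders; use the ones from your saved model.
model = aiplatform.Model.upload(
    display_name="my-explainable-model",
    artifact_uri="gs://BUCKET_NAME/model/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"
    ),
    explanation_metadata=ExplanationMetadata(
        inputs={"features": {"input_tensor_name": "dense_input"}},
        outputs={"score": {"output_tensor_name": "dense_2"}},
    ),
    explanation_parameters=ExplanationParameters(
        sampled_shapley_attribution={"path_count": 10}
    ),
)

# Step 2: deploy the Model to an Endpoint so you can request online
# explanations against it.
endpoint = model.deploy(machine_type="n1-standard-4")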

Get online explanations

To get online explanations, follow most of the same steps that you would to get online predictions. However, instead of sending a projects.locations.endpoints.predict request to the Vertex AI API, send a projects.locations.endpoints.explain request.
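For example, here is a minimal sketch using the Vertex AI SDK for Python, assuming a Model with configured explanations is already deployed to an Endpoint; the endpoint name and the instance fields are illustrative:

# A minimal sketch, not an official sample. explain() is the SDK
# counterpart of the explain REST method: it returns predictions along
# with feature attributions for each instance.
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/PROJECT_ID/locations/us-central1/endpoints/ENDPOINT_ID")

response = endpoint.explain(instances=[{"feature_a": 10, "feature_2": 100}])
print(response.predictions)
print(response.explanations)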

The following guides provide detailed instructions for preparing and sending online explanation requests:

Get batch explanations

Only feature-based batch explanations are supported; you cannot get example-based batch explanations.

To get batch explanations, set the generateExplanation field to true when you create a batch prediction job.

For detailed instructions about preparing and creating batch prediction jobs, read Getting batch predictions.
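For example, here is a minimal sketch using the Vertex AI SDK for Python, where generate_explanation corresponds to the generateExplanation API field; the model ID, bucket paths, and display name are illustrative:

# A minimal sketch, not an official sample. Setting
# generate_explanation=True makes the batch prediction job return feature
# attributions along with each prediction.
from google.cloud import aiplatform

model = aiplatform.Model("MODEL_ID")

batch_job = model.batch_predict(
    job_display_name="batch-prediction-with-explanations",
    gcs_source="gs://BUCKET_NAME/input.jsonl",
    gcs_destination_prefix="gs://BUCKET_NAME/output/",
    generate_explanation=True,
)
batch_job.wait()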

Get explanations locally in Vertex AI Workbench user-managed notebooks

In Vertex AI Workbench user-managed notebooks, you can generate explanations for a custom-trained model by running Vertex Explainable AI within your notebook's local kernel or runtime, without deploying the model to Vertex AI. Local explanations let you try out different Vertex Explainable AI settings without adjusting your Vertex AI model deployment for each change. This makes it easier and faster to evaluate the impact of using different baselines, to try different visualization settings for your explanations, or to adjust the number of steps or paths used by your algorithm.

Local explanations are available only within user-managed notebooks; this feature doesn't work in Jupyter notebooks that run outside of a user-managed notebooks instance.

To generate explanations locally in a user-managed notebooks instance:

  • Create a user-managed notebooks instance.
  • Launch the JupyterLab environment from your user-managed notebooks instance, and then create or import a notebook.
  • Save the model artifact to your notebook's local environment, or a Cloud Storage bucket.
  • Generate and save metadata to describe your model and configure your explanation request.

Get concurrent explanations

Explainable AI supports concurrent explanations. Concurrent explanations allow you to request both feature-based and example-based explanations from the same deployed model endpoint without having to deploy your model separately for each explanation method.

To get concurrent explanations, upload your model and configure either example-based or feature-based explanations. Then, deploy your model as usual.

After the model is deployed, you can request the configured explanations as usual. Additionally, you can request concurrent explanations by specifying concurrent_explanation_spec_override, as shown in the sketch after the following notes.

Note the following when using concurrent explanations:

  • Concurrent explanations are available only in the v1beta1 API version. If you're using the Vertex AI SDK for Python, you need to use the preview model to use concurrent explanations.
  • Example-based explanations can't be requested after deploying with feature-based explanations. If you want both example-based and feature-based explanations, deploy your model with example-based explanations and request feature-based explanations using the concurrent explanation field.
  • Batch explanations are not supported for concurrent explanations; online explanations are the only way to use this feature.
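
The following is a hedged sketch of a v1beta1 explain call that includes the concurrent_explanation_spec_override field named above, using the v1beta1 GAPIC client. The endpoint name and instance fields are illustrative, and the contents of the override message are left empty here; check the v1beta1 API reference for its exact schema:

# A minimal sketch, not an official sample: a v1beta1 explain request that
# carries concurrent_explanation_spec_override.
from google.cloud import aiplatform_v1beta1
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

client = aiplatform_v1beta1.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

instance = json_format.ParseDict({"feature_a": 10}, Value())

response = client.explain(
    request={
        "endpoint": "projects/PROJECT_ID/locations/us-central1/endpoints/ENDPOINT_ID",
        "instances": [instance],
        # Populate this override to request the second explanation method
        # alongside the deployed one; left empty in this sketch. See the
        # v1beta1 API reference for the field's schema.
        "concurrent_explanation_spec_override": {},
    }
)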

Use the Explainable AI SDK in user-managed notebooks

The Explainable AI SDK is pre-installed in user-managed notebooks instances. Within your notebook, you can use the Explainable AI SDK to save your model artifact and automatically identify metadata about your model's inputs and outputs for the explanation request. You can also specify other parameters to configure your explanation request, and then visualize the explanation results.

You can save models and metadata either in your notebook's local environment, or in a Cloud Storage bucket. If you're using TensorFlow, you can use the save_model_with_metadata() method to infer your model's inputs and outputs, and save this explanation metadata with your model.

Next, load the model into the Explainable AI SDK using load_model_from_local_path(). If needed, you can adjust the configuration for the specific Vertex Explainable AI algorithm. For example, you can change the number of paths to use for Sampled Shapley, or the number of steps to use for integrated gradients or XRAI.

Finally, call explain() with instances of data, and visualize the feature attributions.

You can use the following example code to get local explanations for a TensorFlow 2 model within a user-managed notebooks instance:

# This sample code only works within a user-managed notebooks instance.
import explainable_ai_sdk
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder

metadata_and_model_builder = SavedModelMetadataBuilder('LOCAL_PATH_TO_MODEL')
metadata_and_model_builder.save_model_with_metadata('LOCAL_PATH_TO_SAVED_MODEL_ARTIFACT')

# Load the model and adjust the configuration for Explainable AI parameters
num_paths = 20
model_artifact_with_metadata = explainable_ai_sdk.load_model_from_local_path(
    'LOCAL_PATH_TO_SAVED_MODEL_ARTIFACT',
    explainable_ai_sdk.SampledShapleyConfig(num_paths))

# Explainable AI supports generating explanations for multiple predictions
instances = [{'feature_a': 10, 'feature_2': 100}]  # pass one dict per instance
explanations = model_artifact_with_metadata.explain(instances)
explanations[0].visualize_attributions()
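
If you want to use a gradient-based method instead of Sampled Shapley, the SDK exposes similar config classes. The class and parameter names below follow the SDK's config.py and are assumptions to verify against your installed version:

# Assumed alternative configs for gradient-based attribution methods;
# verify the exact class and parameter names in the SDK's config.py.
ig_config = explainable_ai_sdk.IntegratedGradientsConfig(step_count=50)
xrai_config = explainable_ai_sdk.XraiConfig(step_count=50)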

For more information about the Explainable AI SDK, including different configurations and parameters, see the SDK's config.py file on GitHub. Learn more about Vertex AI Workbench user-managed notebooks.

Troubleshooting

This section describes troubleshooting steps that you might find helpful if you run into problems while getting explanations.

Error: list index out of range

If you get the following error message when requesting explanations:

"error": "Explainability failed with exception: listindex out of range"

Make sure that you are not passing an empty array into a field that expects an array of objects. For example, if field1 accepts an array of objects, the following request body might result in an error:

{
  "instances": [
    {
      "field1": [],
    }
  ]
}

Instead, make sure the array is not empty, for example:

{
  "instances": [
    {
      "field1": [
        {}
      ]
    }
  ]
}

What's next