Configuring explanations

To use Vertex Explainable AI with a custom-trained model, you must configure certain options when you create the Model resource that you plan to request explanations from. This page describes how to configure these options.

If you want to use Vertex Explainable AI with an AutoML tabular model, then you don't need to perform any configuration; Vertex AI automatically configures the model for Vertex Explainable AI. Skip this document and read Getting explanations.

When and where to configure explanations

You configure explanations in the explanationSpec field when you create a Model. You can create a custom-trained Model in Vertex AI in two ways:

  • Import a model artifact that you have already trained by creating a Model resource directly.
  • Create a Model as part of a custom TrainingPipeline, which uploads the trained model for you.

In either case, you can configure the Model to support Vertex Explainable AI. The examples in this document assume that you are importing a Model. To configure Vertex Explainable AI when you create a custom-trained Model using a TrainingPipeline, use the configuration settings described in this document in the TrainingPipeline's modelToUpload field.
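
For orientation, the following is a rough sketch of where the explanation configuration sits inside a TrainingPipeline request body, expressed as a Python dictionary. All values here are placeholders (the training task inputs, display names, and attribution settings are illustrative only); the real field details are covered in the rest of this page.

# Hypothetical sketch: explanationSpec nested under modelToUpload in a TrainingPipeline.
# Every value below is a placeholder, not a working configuration.
training_pipeline = {
    "displayName": "my-training-pipeline",              # placeholder name
    "trainingTaskDefinition": "TRAINING_TASK_SCHEMA_URI",
    "trainingTaskInputs": {},                            # your training configuration
    "modelToUpload": {
        "displayName": "my-model",                       # placeholder name
        "containerSpec": {"imageUri": "IMAGE_URI"},
        # The same explanationSpec described throughout this document:
        "explanationSpec": {
            "parameters": {"sampledShapleyAttribution": {"pathCount": 10}},
            "metadata": {
                "inputs": {"FEATURE_NAME": {"inputTensorName": "FEATURE_TENSOR_NAME"}},
                "outputs": {"OUTPUT_NAME": {"outputTensorName": "OUTPUT_TENSOR_NAME"}},
            },
        },
    },
}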

Overriding the configuration

To use Vertex Explainable AI, you must configure the explanationSpec field when you create a Model. However, you can also override these explanation settings after you create the Model, for example by specifying an explanationSpec when you create a batch prediction job or by passing an explanationSpecOverride when you request online explanations.

Importing a model with an explanationSpec field

Depending on whether you serve predictions using a TensorFlow pre-built container or a custom container, specify slightly different details for the ExplanationSpec. Select the tab that matches the container that you are using:

TensorFlow pre-built container

You can use any of the following attribution methods for Vertex Explainable AI. Read the comparison of feature attribution methods to select the appropriate one for your Model:

Sampled Shapley

Depending on which tool you want to use to create the Model, select one of the following tabs:

gcloud

  1. Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

    explanation-metadata.json

    {
      "inputs": {
        "FEATURE_NAME": {
          "inputTensorName": "FEATURE_TENSOR_NAME",
        }
      },
      "outputs": {
        "OUTPUT_NAME": {
          "outputTensorName": "OUTPUT_TENSOR_NAME"
        }
      }
    }
    

    Replace the following:

    • FEATURE_NAME: Any memorable name for your input feature.
    • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
    • OUTPUT_NAME: Any memorable name for the output of your model.
    • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

    You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

  2. Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

    gcloud beta ai models upload \
      --region=LOCATION \
      --display-name=MODEL_NAME \
      --container-image-uri=IMAGE_URI \
      --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
      --explanation-method=sampled-shapley \
      --explanation-path-count=PATH_COUNT \
      --explanation-metadata-file=explanation-metadata.json
    

    Replace the following:

    • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
    • PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50].

      A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

    To learn about appropriate values for the other placeholders, read Importing models.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
  • PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50].

    A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

  • FEATURE_NAME: Any memorable name for your input feature.
  • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
  • OUTPUT_NAME: Any memorable name for the output of your model.
  • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

To learn about appropriate values for the other placeholders, read Importing models.

You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

Request JSON body:

{
  "model": {
    "displayName": "MODEL_NAME",
    "containerSpec": {
      "imageUri": "IMAGE_URI"
    },
    "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY",
    "explanationSpec": {
      "parameters": {
        "sampledShapleyAttribution": {
          "pathCount": PATH_COUNT
        }
      },
      "metadata": {
        "inputs": {
          "FEATURE_NAME": {
            "inputTensorName": "FEATURE_TENSOR_NAME"
          }
        },
        "outputs": {
          "OUTPUT_NAME": {
            "outputTensorName": "OUTPUT_TENSOR_NAME"
          }
        }
      }
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content

Integrated Gradients

Depending on which tool you want to use to create the Model, select one of the following tabs:

gcloud

  1. Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

    explanation-metadata.json

    {
      "inputs": {
        "FEATURE_NAME": {
          "inputTensorName": "FEATURE_TENSOR_NAME",
          "modality": "MODALITY",
          "visualization": VISUALIZATION_SETTINGS
        }
      },
      "outputs": {
        "OUTPUT_NAME": {
          "outputTensorName": "OUTPUT_TENSOR_NAME"
        }
      }
    }
    

    Replace the following:

    • FEATURE_NAME: Any memorable name for your input feature.
    • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
    • MODALITY: image if the Model accepts images as input or numeric if the Model accepts tabular data as input. Defaults to numeric.
    • VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.

      If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.

    • OUTPUT_NAME: Any memorable name for the output of your model.
    • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

    You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

  2. Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

    gcloud beta ai models upload \
      --region=LOCATION \
      --display-name=MODEL_NAME \
      --container-image-uri=IMAGE_URI \
      --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
      --explanation-method=integrated-gradients \
      --explanation-step-count=STEP_COUNT \
      --explanation-metadata-file=explanation-metadata.json
    

    Replace the following:

    • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
    • STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100].

      A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

    To learn about appropriate values for the other placeholders, read Importing models.

    You can optionally add flags to configure the SmoothGrad approximation of gradients.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
  • STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100].

    A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

  • FEATURE_NAME: Any memorable name for your input feature.
  • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
  • MODALITY: image if the Model accepts images as input or numeric if the Model accepts tabular data as input. Defaults to numeric.
  • VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.

    If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.

  • OUTPUT_NAME: Any memorable name for the output of your model.
  • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

To learn about appropriate values for the other placeholders, read Importing models.

You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

Request JSON body:

{
  "model": {
    "displayName": "MODEL_NAME",
    "containerSpec": {
      "imageUri": "IMAGE_URI"
    },
    "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY",
    "explanationSpec": {
      "parameters": {
        "integratedGradientsAttribution": {
          "stepCount": STEP_COUNT
        }
      },
      "metadata": {
        "inputs": {
          "FEATURE_NAME": {
            "inputTensorName": "FEATURE_TENSOR_NAME",
            "modality": "MODALITY",
            "visualization": VISUALIZATION_SETTINGS
          }
        },
        "outputs": {
          "OUTPUT_NAME": {
            "outputTensorName": "OUTPUT_TENSOR_NAME"
          }
        }
      }
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content

XRAI

Depending on which tool you want to use to create the Model, select one of the following tabs:

gcloud

  1. Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

    explanation-metadata.json

    {
      "inputs": {
        "FEATURE_NAME": {
          "inputTensorName": "FEATURE_TENSOR_NAME",
          "modality": "image",
          "visualization": VISUALIZATION_SETTINGS
        }
      },
      "outputs": {
        "OUTPUT_NAME": {
          "outputTensorName": "OUTPUT_TENSOR_NAME"
        }
      }
    }
    

    Replace the following:

    • FEATURE_NAME: Any memorable name for your input feature.
    • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
    • VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
    • OUTPUT_NAME: Any memorable name for the output of your model.
    • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

    You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

  2. Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

    gcloud beta ai models upload \
      --region=LOCATION \
      --display-name=MODEL_NAME \
      --container-image-uri=IMAGE_URI \
      --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
      --explanation-method=xrai \
      --explanation-step-count=STEP_COUNT \
      --explanation-metadata-file=explanation-metadata.json
    

    Replace the following:

    • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
    • STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100].

      A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

    To learn about appropriate values for the other placeholders, read Importing models.

    You can optionally add flags to configure the SmoothGrad approximation of gradients.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
  • STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100].

    A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

  • FEATURE_NAME: Any memorable name for your input feature.
  • FEATURE_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
  • VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
  • OUTPUT_NAME: Any memorable name for the output of your model.
  • OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.

To learn about appropriate values for the other placeholders, read Importing models.

You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

Request JSON body:

{
  "model": {
    "displayName": "MODEL_NAME",
    "containerSpec": {
      "imageUri": "IMAGE_URI"
    },
    "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY",
    "explanationSpec": {
      "parameters": {
        "xraiAttribution": {
          "stepCount": STEP_COUNT
        }
      },
      "metadata": {
        "inputs": {
          "FEATURE_NAME": {
            "inputTensorName": "FEATURE_TENSOR_NAME",
            "modality": "image",
            "visualization": VISUALIZATION_SETTINGS
          }
        },
        "outputs": {
          "OUTPUT_NAME": {
            "outputTensorName": "OUTPUT_TENSOR_NAME"
          }
        }
      }
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content

Custom container

If your Model accepts tabular data as input and serves predictions using a custom container, then you can configure it to use the Sampled Shapley attribution method for explanations.

Determining feature and output names

In the following steps, you must provide Vertex AI with the names of the features that your Model expects as input. You must also specify the key used for outputs in the Model's predictions.

Determining feature names

If your Model expects each input instance to have certain top-level keys, then those keys are your feature names.

For example, consider a Model that expects each input instance to have the following format:

{
  "length": <value>,
  "width": <value>
}

In this case, the feature names are length and width. Even if the values of these fields contain nested lists or objects, length and width are the only keys you need for the following steps. When you request explanations, Vertex Explainable AI provides attributions for each nested element of your features.

If your Model expects unkeyed input, then Vertex Explainable AI considers the Model to have a single feature. You can use any memorable string for the feature name.

For example, consider a Model that expects each input instance to have the following format:

[
  <value>,
  <value>
]

In this case, provide Vertex Explainable AI with a single feature name of your choosing, like dimensions.
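
To connect this to the explanation configuration: the feature names you choose become the keys of the inputs map in the ExplanationMetadata. As a rough sketch based on the two examples above (the output name is a placeholder):

# Keyed instances: the top-level keys "length" and "width" are the feature names.
metadata_for_keyed_input = {
    "inputs": {"length": {}, "width": {}},
    "outputs": {"OUTPUT_NAME": {}},
}

# Unkeyed instances: pick any single memorable feature name, such as "dimensions".
metadata_for_unkeyed_input = {
    "inputs": {"dimensions": {}},
    "outputs": {"OUTPUT_NAME": {}},
}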

Determining the output name

If your Model returns keyed output for each online prediction, then that key is your output name.

For example, consider a Model that returns each prediction in the following format:

{
  "scores": <value>
}

In this case, the output name is scores. If the value of the scores field is an array, then when you get explanations, Vertex Explainable AI returns feature attributions for the element with the highest value in each prediction. To configure Vertex Explainable AI to provide feature attributions for additional elements of the output field, you can specify the topK or outputIndices fields of ExplanationParameters. The gcloud and REST examples in this document do not demonstrate these options; a rough sketch appears at the end of this section.

If your Model returns unkeyed predictions, then you can use any memorable string for the output name. For example, this applies if your Model returns an array or a scalar for each prediction.
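
As a rough sketch only, assuming the Vertex AI SDK for Python (google-cloud-aiplatform), the topK option sits alongside the attribution method in ExplanationParameters. The path count and top-k value here are placeholders:

from google.cloud import aiplatform

# Hypothetical sketch: request attributions for the top 3 output elements
# instead of only the highest-valued one. Values are placeholders.
parameters = aiplatform.explain.ExplanationParameters(
    {
        "sampled_shapley_attribution": {"path_count": 10},
        "top_k": 3,
    }
)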

Creating the Model

Depending on which tool you want to use to create the Model, select one of the following tabs:

gcloud

  1. Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

    explanation-metadata.json

    {
      "inputs": {
        "FEATURE_NAME": {}
      },
      "outputs": {
        "OUTPUT_NAME": {}
      }
    }
    

    Replace the following:

    • FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
    • OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.

    You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

  2. Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

    gcloud beta ai models upload \
      --region=LOCATION \
      --display-name=MODEL_NAME \
      --container-image-uri=IMAGE_URI \
      --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
      --explanation-method=sampled-shapley \
      --explanation-path-count=PATH_COUNT \
      --explanation-metadata-file=explanation-metadata.json
    

    Replace the following:

    • PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50].

      A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

    To learn about appropriate values for the other placeholders, read Importing models.

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50].

    A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

  • FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
  • OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.

To learn about appropriate values for the other placeholders, read Importing models.

You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

Request JSON body:

{
  "model": {
    "displayName": "MODEL_NAME",
    "containerSpec": {
      "imageUri": "IMAGE_URI"
    },
    "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY",
    "explanationSpec": {
      "parameters": {
        "sampledShapleyAttribution": {
          "pathCount": PATH_COUNT
        }
      },
      "metadata": {
        "inputs": {
          "FEATURE_NAME": {}
        },
        "outputs": {
          "OUTPUT_NAME": {}
        }
      }
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content

What's next