To use Vertex Explainable AI with a custom-trained model, you must configure certain options when you create the Model resource that you plan to request explanations from, when you deploy the model, or when you submit a batch explanation job. This page describes how to configure these options.
If you want to use Vertex Explainable AI with an AutoML tabular model, then you don't need to perform any configuration; Vertex AI automatically configures the model for Vertex Explainable AI. Skip this document and read Getting explanations.
When and where to configure explanations
You configure explanations when you create or import a model. You can also configure explanations on a model that you have already created, even if you didn't configure explanations previously.
Configure explanations when creating or importing models
When you create or import a Model, you can set a default configuration for all its explanations using the Model's explanationSpec field.

You can create a custom-trained Model in Vertex AI in the following ways:

- Import or register a Model to the Vertex AI Model Registry.
- Create a custom TrainingPipeline resource that imports a Model.
- Create a BigQuery ML model and specify the optional model_registry setting in the CREATE MODEL syntax. This setting automatically registers the model to Vertex AI Model Registry and configures its explanationSpec.
In any of these cases, you can configure the Model to support Vertex Explainable AI. The examples in this document assume that you are importing a Model. To configure Vertex Explainable AI when you create a custom-trained Model using a TrainingPipeline, use the configuration settings described in this document in the TrainingPipeline's modelToUpload field.
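For orientation, here is a hedged sketch of how those settings nest inside a TrainingPipeline request body. The pipeline-level values are hypothetical placeholders, and the explanationSpec follows the same shape as the model-upload examples later on this page:

{
  "displayName": "TRAINING_PIPELINE_NAME",
  "trainingTaskDefinition": "TRAINING_TASK_DEFINITION",
  "trainingTaskInputs": TRAINING_TASK_INPUTS,
  "modelToUpload": {
    "displayName": "MODEL_NAME",
    "containerSpec": { "imageUri": "IMAGE_URI" },
    "explanationSpec": {
      "parameters": {
        "sampledShapleyAttribution": { "pathCount": PATH_COUNT }
      },
      "metadata": EXPLANATION_METADATA
    }
  }
}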
Configure explanations when deploying models or getting batch predictions
When you deploy a Model to an Endpoint resource, you can either:

- Configure explanations, whether or not the model was previously configured for explanations. This is useful if you didn't originally plan to get explanations (and omitted the explanationSpec field when you created the model), but decide later that you want explanations for the Model, or if you want to override some of the explanation settings.
- Disable explanations. This is useful if your model is configured for explanations, but you do not plan to get explanations from the endpoint. To disable explanations when deploying the model to an endpoint, either uncheck the Explainability options in the Cloud Console or set DeployedModel.disableExplanations to true (see the sketch after this list).
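For orientation, a hedged sketch of an endpoints.deployModel request body with explanations disabled for this deployment; the machine settings, traffic split, and resource names are hypothetical placeholders:

{
  "deployedModel": {
    "model": "projects/PROJECT_ID/locations/LOCATION/models/MODEL_ID",
    "displayName": "DEPLOYED_MODEL_NAME",
    "dedicatedResources": {
      "machineSpec": { "machineType": "n1-standard-4" },
      "minReplicaCount": 1
    },
    "disableExplanations": true
  },
  "trafficSplit": { "0": 100 }
}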
Similarly, when you get batch predictions from a Model, you can either configure explanations by populating the BatchPredictionJob.explanationSpec field or disable explanations by setting BatchPredictionJob.generateExplanation to false.
Override the configuration when getting online explanations
Regardless of whether you created or imported the Model with explanation settings, and regardless of whether you configured explanation settings during deployment, you can override the Model's initial explanation settings when you get online explanations.

When you send an explain request to Vertex AI, you can override some of the explanation configuration that you previously set for the Model or the DeployedModel.

In the explain request, you can override the following fields:

- Input baselines for any custom-trained model
- Visualization configuration for image models
- ExplanationParameters except for the method

Override these settings in the explanation request's explanationSpecOverride field.
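For orientation, a hedged sketch of an explain request that overrides only the input baselines for a single call; the feature name and baseline values are hypothetical and must match the shape of your model's input:

{
  "instances": [
    INSTANCE
  ],
  "explanationSpecOverride": {
    "metadata": {
      "inputs": {
        "FEATURE_NAME": {
          "inputBaselines": [ [0.0, 0.0] ]
        }
      }
    }
  }
}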
Import a model with an explanationSpec field
Depending on whether you serve predictions using a prebuilt container or a custom container, specify slightly different details for the ExplanationSpec. Select the tab that matches the container that you are using:
TensorFlow prebuilt container
You can use any of the following attribution methods for Vertex Explainable AI. Read the
comparison of feature attribution
methods to select the appropriate one for
your Model
:
Sampled Shapley
Depending on which tool you want to use to create or import the Model
, select
one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).

Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses a default input baseline of all-zero values, which is a black image for image data.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
- If you're importing a TensorFlow model, specify output fields:
  - Set the Output name of your feature.
  - Set the Output tensor name of your feature.
  - If applicable, set the Index display name mapping.
  - If applicable, set the Display name mapping key.

Click the Import button when you have finished configuring the explainability settings.
gcloud
For TensorFlow 2, ExplanationMetadata is optional.

Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME"
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
REST
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
You can optionally add input
baselines to the
ExplanationMetadata
. Otherwise, Vertex AI chooses input baselines
for the Model
.
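As a hedged sketch, an input baseline goes inside the corresponding entry of the metadata's inputs map; the baseline values here are illustrative and must match the shape the input tensor expects:

"inputs": {
  "FEATURE_NAME": {
    "inputTensorName": "INPUT_TENSOR_NAME",
    "inputBaselines": [ [0.0, 0.0, 0.0] ]
  }
}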
For TensorFlow 2 models, the metadata
field is optional. If omitted, Vertex AI automatically infers the inputs
and outputs
from the model.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
Integrated Gradients
Depending on which tool you want to use to create or import the Model
, select
one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Integrated gradients (for tabular models) or Integrated gradients (for image classification models), depending on which is more appropriate for your model.
If you are importing an image classification model, do the following:
Set the Visualization type and Color map.
You can leave the Clip below, Clip above, Overlay type, and Number of integral steps at their default settings.
Learn more about visualization settings.
Set the number of steps to use for approximating the path integral during feature attribution. This must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses a default input baseline of all-zero values, which is a black image for image data.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
- If you're importing a TensorFlow model, specify output fields:
  - Set the Output name of your feature.
  - Set the Output tensor name of your feature.
  - If applicable, set the Index display name mapping.
  - If applicable, set the Display name mapping key.

Click the Import button when you have finished configuring the explainability settings.
gcloud
For TensorFlow 2, ExplanationMetadata is optional.

Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME",
      "modality": "MODALITY",
      "visualization": VISUALIZATION_SETTINGS
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- MODALITY: image if the Model accepts images as input, or numeric if the Model accepts tabular data as input. Defaults to numeric.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data. If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=integrated-gradients \
  --explanation-step-count=STEP_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
To learn about appropriate values for the other placeholders, see upload and Importing models.

You can optionally add flags to configure the SmoothGrad approximation of gradients.
REST
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- MODALITY: image if the Model accepts images as input, or numeric if the Model accepts tabular data as input. Defaults to numeric.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data. If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
You can optionally add input
baselines to the
ExplanationMetadata
. Otherwise, Vertex AI chooses input baselines
for the Model
.
You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters.
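As a hedged sketch, the SmoothGrad settings nest in a smoothGradConfig field inside the attribution method in the request body; the noise sigma and sample count below are illustrative placeholders, not recommendations:

"parameters": {
  "integratedGradientsAttribution": {
    "stepCount": STEP_COUNT,
    "smoothGradConfig": {
      "noiseSigma": 0.1,
      "noisySampleCount": 25
    }
  }
}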
For TensorFlow 2 models, the metadata
field is optional. If omitted, Vertex AI automatically infers the inputs
and outputs
from the model.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "integratedGradientsAttribution": { "stepCount": STEP_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", "modality": "MODALITY", "visualization": VISUALIZATION_SETTINGS } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
XRAI
Depending on which tool you want to use to create or import the Model
, select
one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select XRAI (for image classification models).
Set the following visualization options:
Set the Color map.
You can leave the Clip below, Clip above, Overlay type, and Number of integral steps at their default settings.
Learn more about visualization settings.
Set the number of steps to use for approximating the path integral during feature attribution. This must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.

Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses a default input baseline of all-zero values, which is a black image for image data.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
- If you're importing a TensorFlow model, specify output fields:
  - Set the Output name of your feature.
  - Set the Output tensor name of your feature.
  - If applicable, set the Index display name mapping.
  - If applicable, set the Display name mapping key.

Click the Import button when you have finished configuring the explainability settings.
gcloud
For TensorFlow 2, ExplanationMetadata is optional.

Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME",
      "modality": "image",
      "visualization": VISUALIZATION_SETTINGS
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=xrai \
  --explanation-step-count=STEP_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
To learn about appropriate values for the other placeholders, see upload and Importing models.

You can optionally add flags to configure the SmoothGrad approximation of gradients.
REST
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
You can optionally add input
baselines to the
ExplanationMetadata
. Otherwise, Vertex AI chooses input baselines
for the Model
.
You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters (the smoothGradConfig sketch in the Integrated Gradients section applies here as well, nested inside xraiAttribution).
For TensorFlow 2 models, the metadata
field is optional. If omitted, Vertex AI automatically infers the inputs
and outputs
from the model.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "xraiAttribution": { "stepCount": STEP_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", "modality": "image", "visualization": VISUALIZATION_SETTINGS } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
scikit-learn and XGBoost prebuilt containers
If your Model
accepts tabular data as input and serves predictions using a
pre-built scikit-learn or XGBoost container for
prediction, then you can configure it
to use the Sampled Shapley attribution
method for explanations.
Depending on which tool you want to use to create or import the Model
, select
one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).

Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

Configure each input feature in your model:
Fill in your input feature name.

If your model artifacts do not include feature names, then Vertex AI is unable to map the specified input feature names to the model. In that case, you should provide only one input feature with any arbitrary, user-friendly name, such as input_features. In the explanation response, you will get an N-dimensional list of attributions, where N is the number of features in the model and the elements in the list appear in the same order as the training dataset.

Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses a default input baseline of all-zero values, which is a black image for image data.
Set the Output name of your feature.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

{
  "inputs": {
    "FEATURE_NAME": {}
  },
  "outputs": {
    "OUTPUT_NAME": {}
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- OUTPUT_NAME: Any memorable name for the output of your model.
If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.
Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
REST
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: Any memorable name for your input feature.
- OUTPUT_NAME: Any memorable name for the output of your model.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { } }, "outputs": { "OUTPUT_NAME": { } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
Custom container
If your Model
accepts tabular data as input and serves predictions using a
custom container, then you can configure it
to use the Sampled Shapley attribution
method for explanations.
Determining feature and output names
In the following steps, you must provide Vertex AI with the names
of the features that your Model
expects as input. You must also specify the
key used for outputs in the Model
's predictions.
Determining feature names
If your Model
expects each input
instance to
have certain top-level keys, then those keys are your feature names.
For example, consider a Model
that expects each input instance to have the
following format:
{
"length": <value>,
"width": <value>
}
In this case, the feature names are length
and width
. Even if the values of
these fields contain nested lists or objects, length
and width
are the only
keys you need for the following steps. When you request
explanations, Vertex Explainable AI provides
attributions for each nested element of your features.
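As a hedged sketch, the ExplanationMetadata for this hypothetical model would use those keys as its input names; the empty objects are enough when you let Vertex AI fill in the rest, and OUTPUT_NAME is a placeholder:

{
  "inputs": {
    "length": {},
    "width": {}
  },
  "outputs": {
    "OUTPUT_NAME": {}
  }
}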
If your Model
expects unkeyed input, then Vertex Explainable AI considers the Model
to have a single feature. You can use any memorable string for the feature name.
For example, consider a Model
that expects each input instance to have the
following format:
[
<value>,
<value>
]
In this case, provide Vertex Explainable AI with a single feature name of your choosing,
like dimensions
.
Determining the output name
If your Model
returns each online prediction
instance with
keyed output, then that key is your output name.
For example, consider a Model
that returns each prediction in the following
format:
{
"scores": <value>
}
In this case, the output name is scores
. If the value of the scores
field is
an array, then when you get explanations, Vertex Explainable AI returns feature
attributions for the element with the highest value in each prediction. To
configure Vertex Explainable AI to provide feature attributions for additional elements
of the output field, you can specify the topK
or outputIndices
fields of
ExplanationParameters
.
However, the step-by-step examples in this document do not demonstrate these options.
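As a hedged sketch of where those two fields would sit (the values are illustrative placeholders, assuming a Sampled Shapley configuration), they go alongside the attribution method inside ExplanationParameters:

"parameters": {
  "sampledShapleyAttribution": { "pathCount": PATH_COUNT },
  "topK": 3
}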
If your Model
returns unkeyed predictions, then you can use any memorable
string for the output name. For example, this applies if your Model
returns
an array or a scalar for each prediction.
Creating the Model
Depending on which tool you want to use to create or import the Model
, select
one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).

Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

Configure each input feature in your model:
Fill in your input feature name.

If your model artifacts do not include feature names, then Vertex AI is unable to map the specified input feature names to the model. In that case, you should provide only one input feature with any arbitrary, user-friendly name, such as input_features. In the explanation response, you will get an N-dimensional list of attributions, where N is the number of features in the model and the elements in the list appear in the same order as the training dataset.

Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses a default input baseline of all-zero values, which is a black image for image data.
Set the Output name of your feature.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

{
  "inputs": {
    "FEATURE_NAME": {}
  },
  "outputs": {
    "OUTPUT_NAME": {}
  }
}
Replace the following:
- FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
- OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are the --explanation-* flags.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.

To learn about appropriate values for the other placeholders, see upload and Importing models.
REST
Before using any of the request data, make the following replacements:
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
- OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.
To learn about appropriate values for the other placeholders, see
upload
and Importing models.
You can optionally add input
baselines to the
ExplanationMetadata
. Otherwise, Vertex AI chooses input baselines
for the Model
.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": {} }, "outputs": { "OUTPUT_NAME": {} } } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content