To use Vertex Explainable AI with a custom-trained model, you must configure certain options when you create the Model resource that you plan to request explanations from, when you deploy the model, or when you submit a batch explanation job. This page describes how to configure these options.
If you want to use Vertex Explainable AI with an AutoML tabular model, then you don't need to perform any configuration; Vertex AI automatically configures the model for Vertex Explainable AI. Skip this document and read Getting explanations.
When and where to configure explanations
You configure explanations when you create or import a model. You can also configure explanations on a model that you have already created, even if you didn't configure explanations previously.
Configure explanations when creating or importing models
When you create or import a Model, you can set a default configuration for all its explanations using the Model's explanationSpec field.

You can create a custom-trained Model in Vertex AI in two ways:

- Import a Model that you have already trained.
- Create a Model by running a custom TrainingPipeline.

In either case, you can configure the Model to support Vertex Explainable AI. The examples in this document assume that you are importing a Model. To configure Vertex Explainable AI when you create a custom-trained Model using a TrainingPipeline, use the configuration settings described in this document in the TrainingPipeline's modelToUpload field.
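As a rough, hedged sketch of where that configuration sits, a TrainingPipeline request body might nest the explanation settings as follows; the field names (modelToUpload, explanationSpec) come from the Vertex AI API, and everything shown as a placeholder or ellipsis must be filled in for your own pipeline:

{
  "displayName": "PIPELINE_NAME",
  "trainingTaskDefinition": "TRAINING_TASK_DEFINITION",
  "trainingTaskInputs": { ... },
  "modelToUpload": {
    "displayName": "MODEL_NAME",
    "explanationSpec": {
      "parameters": { ... },
      "metadata": { ... }
    }
  }
}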
Configure explanations when deploying models or getting batch predictions
When you deploy a Model to an Endpoint resource, Vertex AI creates a DeployedModel. You can set a default explanation configuration for a DeployedModel by populating its explanationSpec field. You can use this step to override configuration set when you created the Model.
Similarly, when you get batch predictions from a Model and request explanations as part of your batch prediction request, you can override some or all of the Model's explanation configuration by populating the BatchPredictionJob resource's explanationSpec field.
These options are useful if you didn't originally plan to get explanations (and omitted the explanationSpec field when you created the model), but decide later that you want explanations for the Model.
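For example, a minimal sketch of a BatchPredictionJob request body that turns on explanations and overrides the attribution parameters might look like the following; generateExplanation and explanationSpec are the relevant API fields, the pathCount value is only illustrative, and the input and output configuration is elided:

{
  "displayName": "BATCH_JOB_NAME",
  "model": "projects/PROJECT_ID/locations/LOCATION/models/MODEL_ID",
  "inputConfig": { ... },
  "outputConfig": { ... },
  "generateExplanation": true,
  "explanationSpec": {
    "parameters": {
      "sampledShapleyAttribution": { "pathCount": 10 }
    }
  }
}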
Override the configuration when getting online explanations
Regardless of whether you created or imported the Model with explanation settings, and regardless of whether you configured explanation settings during deployment, you can override the Model's initial explanation settings when you get online explanations.

When you send an explain request to Vertex AI, you can override some of the explanation configuration that you previously set for the Model or the DeployedModel.
In the explain request, you can override the following fields:
- Input baselines for any custom-trained model
- Visualization configuration for image models
- ExplanationParameters, except for the method

Override these settings in the explanation request's explanationSpecOverride field.
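For example, a minimal sketch of an explain request body that overrides only the input baselines might look like the following; the explanationSpecOverride structure (parameters plus a metadata override containing inputBaselines) follows the Vertex AI API, FEATURE_NAME is a placeholder, and the baseline values are illustrative:

{
  "instances": [ ... ],
  "explanationSpecOverride": {
    "metadata": {
      "inputs": {
        "FEATURE_NAME": {
          "inputBaselines": [ [0.0, 0.0, 0.0] ]
        }
      }
    }
  }
}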
Import a model with an explanationSpec field

Depending on whether you serve predictions using a pre-built container or a custom container, specify slightly different details for the ExplanationSpec. Select the tab that matches the container that you are using:
TensorFlow pre-built container
You can use any of the following attribution methods for Vertex Explainable AI. Read the comparison of feature attribution methods to select the appropriate one for your Model:
Sampled Shapley
Depending on which tool you want to use to create or import the Model, select one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).
Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses input baselines for your model.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models, or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
If you're importing a TensorFlow model, specify output fields:
- Set the Output name of your feature.
- Set the Output tensor name of your feature.
- If applicable, set the Index display name mapping.
- If applicable, set the Display name mapping key.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

explanation-metadata.json

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME"
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are --explanation-method, --explanation-path-count, and --explanation-metadata-file.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
To learn about appropriate values for the other placeholders, read Importing models.
REST & CMD LINE
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.
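For example, a hedged sketch of an input entry with an explicit baseline, assuming an input tensor that takes three numeric values per instance (each entry in inputBaselines should have the shape of a single input instance):

"inputs": {
  "FEATURE_NAME": {
    "inputTensorName": "INPUT_TENSOR_NAME",
    "inputBaselines": [ [0.0, 0.0, 0.0] ]
  }
}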
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
Integrated Gradients
Depending on which tool you want to use to create or import the Model, select one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Integrated gradients (for tabular models) or Integrated gradients (for image classification models), depending on which is more appropriate for your model.
If you are importing an image classification model, do the following:
- Set the Visualization type and Color map.
- You can leave the Clip below, Clip above, Overlay type, and Number of integral steps at their default settings.
- Learn more about visualization settings.
Set the number of steps to use for approximating the path integral during feature attribution. This must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses input baselines for your model.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models, or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
If you're importing a TensorFlow model, specify output fields:
- Set the Output name of your feature.
- Set the Output tensor name of your feature.
- If applicable, set the Index display name mapping.
- If applicable, set the Display name mapping key.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

explanation-metadata.json

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME",
      "modality": "MODALITY",
      "visualization": VISUALIZATION_SETTINGS
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- MODALITY: image if the Model accepts images as input or numeric if the Model accepts tabular data as input. Defaults to numeric.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data; a minimal sketch also follows this list. If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
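As an illustration only, VISUALIZATION_SETTINGS might be replaced with an object like the following minimal sketch; the field names come from the API's Visualization message, the values here are arbitrary, and Configuring visualization settings for image data is the authoritative reference:

{
  "type": "OUTLINES",
  "polarity": "POSITIVE",
  "colorMap": "PINK_GREEN",
  "clipPercentUpperbound": 99.9,
  "clipPercentLowerbound": 70,
  "overlayType": "GRAYSCALE"
}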
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are --explanation-method, --explanation-step-count, and --explanation-metadata-file.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=integrated-gradients \
  --explanation-step-count=STEP_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add flags to configure the SmoothGrad approximation of gradients.
REST & CMD LINE
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- MODALITY: image if the Model accepts images as input or numeric if the Model accepts tabular data as input. Defaults to numeric.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data. If you omit the modality field or set the modality field to numeric, then omit the visualization field entirely.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.
You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters.
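For example, a minimal sketch of a parameters stanza with SmoothGrad enabled; smoothGradConfig, noiseSigma, and noisySampleCount are the API field names, and the numeric values shown are only illustrative:

"parameters": {
  "integratedGradientsAttribution": {
    "stepCount": STEP_COUNT,
    "smoothGradConfig": {
      "noiseSigma": 0.1,
      "noisySampleCount": 25
    }
  }
}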
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "integratedGradientsAttribution": { "stepCount": STEP_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", "modality": "MODALITY", "visualization": VISUALIZATION_SETTINGS } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
XRAI
Depending on which tool you want to use to create or import the Model, select one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select XRAI (for image classification models).
Set the following visualization options:
- Set the Color map.
- You can leave the Clip below, Clip above, Overlay type, and Number of integral steps at their default settings.
- Learn more about visualization settings.
Set the number of steps to use for approximating the path integral during feature attribution. This must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses input baselines for your model.
- If you're importing a TensorFlow model, there are additional input fields:
  - Fill out the Input tensor name.
  - If applicable, fill in the Indices tensor name and/or the Dense shape tensor name.
  - The Modality cannot be updated here. It is set automatically to NUMERIC for tabular models, or IMAGE for image models.
  - If applicable, set the Encoding field. This defaults to IDENTITY if not set.
  - If applicable, set the Group name field.
If you're importing a TensorFlow model, specify output fields:
- Set the Output name of your feature.
- Set the Output tensor name of your feature.
- If applicable, set the Index display name mapping.
- If applicable, set the Display name mapping key.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

explanation-metadata.json

{
  "inputs": {
    "FEATURE_NAME": {
      "inputTensorName": "INPUT_TENSOR_NAME",
      "modality": "image",
      "visualization": VISUALIZATION_SETTINGS
    }
  },
  "outputs": {
    "OUTPUT_NAME": {
      "outputTensorName": "OUTPUT_TENSOR_NAME"
    }
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.

Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are --explanation-method, --explanation-step-count, and --explanation-metadata-file.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=xrai \
  --explanation-step-count=STEP_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add flags to configure the SmoothGrad approximation of gradients.
REST & CMD LINE
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a TensorFlow pre-built container for serving predictions.
- STEP_COUNT: The number of steps to use for approximating the path integral during feature attribution. Must be an integer in the range [1, 100]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 50.
- FEATURE_NAME: Any memorable name for your input feature.
- INPUT_TENSOR_NAME: The name of the input tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
- VISUALIZATION_SETTINGS: Options for visualizing explanations. To learn how to populate this field, read Configuring visualization settings for image data.
- OUTPUT_NAME: Any memorable name for the output of your model.
- OUTPUT_TENSOR_NAME: The name of the output tensor in TensorFlow. To find the correct value for this field, read Using TensorFlow with Vertex Explainable AI.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.
You can optionally add fields to configure the SmoothGrad approximation of gradients to the ExplanationParameters.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "xraiAttribution": { "stepCount": STEP_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { "inputTensorName": "INPUT_TENSOR_NAME", "modality": "image", "visualization": VISUALIZATION_SETTINGS } }, "outputs": { "OUTPUT_NAME": { "outputTensorName": "OUTPUT_TENSOR_NAME" } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
scikit-learn and XGBoost pre-built containers
If your Model accepts tabular data as input and serves predictions using a pre-built scikit-learn or XGBoost container for prediction, then you can configure it to use the Sampled Shapley attribution method for explanations.
Depending on which tool you want to use to create or import the Model, select one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).
Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses input baselines for your model.
Set the Output name of your feature.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

explanation-metadata.json

{
  "inputs": {
    "FEATURE_NAME": {}
  },
  "outputs": {
    "OUTPUT_NAME": {}
  }
}
Replace the following:
- FEATURE_NAME: Any memorable name for your input feature.
- OUTPUT_NAME: Any memorable name for the output of your model.
If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.
Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are --explanation-method, --explanation-path-count, and --explanation-metadata-file.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- IMAGE_URI: The URI of a pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
To learn about appropriate values for the other placeholders, read Importing models.
REST & CMD LINE
Before using any of the request data, make the following replacements:
- IMAGE_URI: The URI of a pre-built container for serving predictions.
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: Any memorable name for your input feature.
- OUTPUT_NAME: Any memorable name for the output of your model.
To learn about appropriate values for the other placeholders, read Importing models.
If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": { } }, "outputs": { "OUTPUT_NAME": { } } } } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content
Custom container
If your Model accepts tabular data as input and serves predictions using a custom container, then you can configure it to use the Sampled Shapley attribution method for explanations.
Determining feature and output names
In the following steps, you must provide Vertex AI with the names of the features that your Model expects as input. You must also specify the key used for outputs in the Model's predictions.
Determining feature names
If your Model expects each input instance to have certain top-level keys, then those keys are your feature names.

For example, consider a Model that expects each input instance to have the following format:
{
"length": <value>,
"width": <value>
}
In this case, the feature names are length and width. Even if the values of these fields contain nested lists or objects, length and width are the only keys you need for the following steps. When you request explanations, Vertex Explainable AI provides attributions for each nested element of your features.
If your Model expects unkeyed input, then Vertex Explainable AI considers the Model to have a single feature. You can use any memorable string for the feature name.

For example, consider a Model that expects each input instance to have the following format:
[
<value>,
<value>
]
In this case, provide Vertex Explainable AI with a single feature name of your choosing, like dimensions.
Determining the output name
If your Model returns each online prediction instance with keyed output, then that key is your output name.

For example, consider a Model that returns each prediction in the following format:
{
"scores": <value>
}
In this case, the output name is scores. If the value of the scores field is an array, then when you get explanations, Vertex Explainable AI returns feature attributions for the element with the highest value in each prediction. To configure Vertex Explainable AI to provide feature attributions for additional elements of the output field, you can specify the topK or outputIndices fields of ExplanationParameters. The main examples in this document don't demonstrate these options, but the following sketch shows their general shape.
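For illustration only, and assuming the Sampled Shapley method, a parameters object that requests attributions for the top two output elements might look like this sketch (the values are arbitrary):

"parameters": {
  "sampledShapleyAttribution": { "pathCount": 10 },
  "topK": 2
}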
If your Model returns unkeyed predictions, then you can use any memorable string for the output name. For example, this applies if your Model returns an array or a scalar for each prediction.
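Putting the two examples together, an explanation-metadata.json for a Model whose input is the unkeyed list shown earlier and whose output is keyed by scores could be as simple as the following (using the same format as the gcloud steps later in this section):

{
  "inputs": {
    "dimensions": {}
  },
  "outputs": {
    "scores": {}
  }
}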
Creating the Model
Depending on which tool you want to use to create or import the Model, select one of the following tabs:
Console
Follow the guide to importing a model using the Google Cloud console. When you get to the Explainability step, do the following:
For your feature attribution method, select Sampled Shapley (for tabular models).
Set the path count to the number of feature permutations to use for the Sampled Shapley attribution method. This must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
Configure each input feature in your model:
- Fill in your input feature name.
- Optionally, you can add one or more input baselines. Otherwise, Vertex Explainable AI chooses input baselines for your model.
Set the Output name of your feature.
Click the Import button when you have finished configuring the explainability settings.
gcloud
Write the following ExplanationMetadata to a JSON file in your local environment. The filename does not matter, but for this example call the file explanation-metadata.json:

explanation-metadata.json

{
  "inputs": {
    "FEATURE_NAME": {}
  },
  "outputs": {
    "OUTPUT_NAME": {}
  }
}
Replace the following:
- FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
- OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model. If you specify input baselines, make sure they match your model's input, usually a list of 2-D matrices. Otherwise, the default value for the input baseline is a 0-value 2-D matrix of the input shape.
Run the following command to create a Model resource that supports Vertex Explainable AI. The flags most pertinent to Vertex Explainable AI are --explanation-method, --explanation-path-count, and --explanation-metadata-file.

gcloud ai models upload \
  --region=LOCATION \
  --display-name=MODEL_NAME \
  --container-image-uri=IMAGE_URI \
  --artifact-uri=PATH_TO_MODEL_ARTIFACT_DIRECTORY \
  --explanation-method=sampled-shapley \
  --explanation-path-count=PATH_COUNT \
  --explanation-metadata-file=explanation-metadata.json
Replace the following:
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
To learn about appropriate values for the other placeholders, read Importing models.
REST & CMD LINE
Before using any of the request data, make the following replacements:
- PATH_COUNT: The number of feature permutations to use for the Sampled Shapley attribution method. Must be an integer in the range [1, 50]. A higher value might reduce approximation error but is more computationally intensive. If you don't know what value to use, try 25.
- FEATURE_NAME: The name of the feature, as described in the "Determining feature names" section of this document.
- OUTPUT_NAME: The name of the output, as described in the "Determining the output name" section of this document.
To learn about appropriate values for the other placeholders, read Importing models.
You can optionally add input baselines to the ExplanationMetadata. Otherwise, Vertex AI chooses input baselines for the Model.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload
Request JSON body:
{ "model": { "displayName": "MODEL_NAME", "containerSpec": { "imageUri": "IMAGE_URI" }, "artifactUri": "PATH_TO_MODEL_ARTIFACT_DIRECTORY", "explanationSpec": { "parameters": { "sampledShapleyAttribution": { "pathCount": PATH_COUNT } }, "metadata": { "inputs": { "FEATURE_NAME": {} }, "outputs": { "OUTPUT_NAME": {} } } } }
To send your request, choose one of these options:
curl
Save the request body in a file called request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload"
PowerShell
Save the request body in a file called request.json, and execute the following command:
$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/models:upload" | Select-Object -Expand Content