Performs explanation on the data in the request.
AI Explanations implements a custom explain verb on top of an HTTP POST method. The explain method performs prediction on the data in the request and returns feature attributions along with the results.
The URL is described in Google API HTTP annotation syntax:
POST https://ml.googleapis.com/v1/{name=projects/**}:explain
The name parameter is required. It must contain the name of your model and, optionally, a version. If you specify a model without a version, the default version for that model is used.
Example specifying both model and version:
POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME/versions/YOUR_VERSION_NAME:explain
Example specifying only a model. The default version for that model is used:
POST https://ml.googleapis.com/v1/projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME:explain
This page describes the format of the explanation request body and of the response body. For a code sample showing how to send an explanation request, see the guide to using feature attributions.
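As a rough sketch (not the official sample), an explanation request can be sent from Python with the google-api-python-client library and application-default credentials; the project, model, version, and instance values below are placeholders:
# Minimal sketch: send an :explain request with the Google API Python client.
# Assumes `pip install google-api-python-client` and application-default
# credentials; project/model/version names are placeholders.
import googleapiclient.discovery

service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME/versions/YOUR_VERSION_NAME'

response = service.projects().explain(
    name=name,
    body={'instances': [[0.0, 1.1, 2.2]]}  # request body format described below
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
print(response['explanations'])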
Request body details
TensorFlow
The request body contains data with the following structure (JSON representation):
{
"instances": [
<value>|<simple/nested list>|<object>,
...
]
}
The instances[] object is required, and must contain the list of instances to get explanations for.
The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs (as objects) or can contain only unlabeled values.
Not all data includes named inputs. Some instances are simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists.
Below are some examples of request bodies.
CSV data with each row encoded as a string value:
{"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]}
Plain text:
{"instances": ["the quick brown fox", "the lazy dog"]}
Sentences encoded as lists of words (vectors of strings):
{ "instances": [ ["the","quick","brown"], ["the","lazy","dog"], ... ] }
Floating point scalar values:
{"instances": [0.0, 1.1, 2.2]}
Vectors of integers:
{ "instances": [ [0, 1, 2], [3, 4, 5], ... ] }
Tensors (in this case, two-dimensional tensors):
{ "instances": [ [ [0, 1, 2], [3, 4, 5] ], ... ] }
Images, which can be represented different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third dimension contains lists (vectors) of the R, G, and B values for each pixel:
{ "instances": [ [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ], ... ] }
Data encoding
JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named b64:
{"b64": "..."}
The following example shows two serialized tf.Examples instances, requiring base64 encoding (fake data, for illustrative purposes only):
{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}
The following example shows two JPEG image byte strings, requiring base64 encoding (fake data, for illustrative purposes only):
{"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]}
Multiple input tensors
Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, use the names of JSON name/value pairs to identify the input tensors.
For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string):
{ "instances": [ { "tag": "beach", "image": {"b64": "ASa8asdf"} }, { "tag": "car", "image": {"b64": "JLK7ljk3"} } ] }
For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints):
{ "instances": [ { "tag": "beach", "image": [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ] }, { "tag": "car", "image": [ [ [255, 0, 102], [255, 0, 97], ... ], [ [254, 1, 101], [254, 2, 93], ... ], ... ] }, ... ] }
Explanation metadata
When using AI Explanations, you need to indicate which of your tensors correspond to your actual features and output probabilities or predictions. You do this by adding a file named explanation_metadata.json to your SavedModel folder before deploying the model to AI Explanations.
To make this process easier, assign a name to the tensors in your TensorFlow graph. Before training your model, set the name property on either raw tensors or Keras layers:
from tensorflow.keras.layers import Dense
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')
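As an illustrative sketch (assuming TensorFlow 1.x-style graph behavior, where Keras tensors expose graph names), the tensor names to record in explanation_metadata.json can then be read back from the named model; the layer names and printed values here are assumptions for a small example model:
# Sketch: name layers, then read the tensor names to use in
# explanation_metadata.json. Layer names and shapes are illustrative.
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

inputs = Input(shape=(10,), name='data')
outputs = Dense(1, activation='sigmoid', name='aux_output')(inputs)
model = Model(inputs=inputs, outputs=outputs)

print(model.input.name)   # e.g. "data:0"               -> input_tensor_name
print(model.output.name)  # e.g. "aux_output/Sigmoid:0" -> output_tensor_name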
The file contents should match this schema:
{
"inputs": {
string <input feature key>: {
"input_tensor_name": string,
"input_baselines": [
number,
],
"modality": string
}
...
},
"outputs": {
string <output value key>: {
"output_tensor_name": string
},
...
},
"framework": string
}
Fields | |
---|---|
output value key and input feature key | Any unique name. The system outputs a dictionary with the attribution scores for a given feature listed under this key. |
input_tensor_name | Required. The name of the tensor containing the inputs that the model's prediction should be attributed to. Format the name as name:index. |
input_baselines | Optional. A number or simple list of numbers giving the value of the baseline, or "uninformative" example, for this particular feature. Consider using the average value or 0. |
modality | Optional. Can be set to "image" for image inputs. The tensor specified by input_tensor_name is then treated as image data, which enables the visualization settings described below. |
output_tensor_name | Required. The name of the tensor containing the outputs that the model's prediction should be attributed to. |
framework | Required. Must be set to "tensorflow". |
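Putting the pieces together, a hypothetical explanation_metadata.json for a model with a single tabular input might be written as follows; the tensor names, keys, and baseline values are illustrative assumptions, not values prescribed by this guide:
# Sketch: write a hypothetical explanation_metadata.json. All tensor names,
# keys, and baselines below are illustrative assumptions.
import json

metadata = {
    'inputs': {
        'data': {
            'input_tensor_name': 'data:0',
            'input_baselines': [0.0],
        }
    },
    'outputs': {
        'duration': {
            'output_tensor_name': 'aux_output/Sigmoid:0',
        }
    },
    'framework': 'tensorflow',
}

with open('explanation_metadata.json', 'w') as f:
    json.dump(metadata, f, indent=2)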
Configure visualization settings
When you get explanations on image data with the integrated gradients or XRAI methods, you can configure visualization settings for your results by including them in your explanation_metadata.json file. Configuring these settings is optional.
To configure your visualization settings, include the visualization config within the input object you want to visualize:
{
"inputs": {
string <input feature key>: {
"input_tensor_name": string,
"input_baselines": [
number,
],
"modality": string
"visualization": {
"type": string,
"polarity": string,
"clip_below_percentile": number,
"clip_above_percentile": number,
"color_map": string,
"overlay_type": string
}
}
...
},
"outputs": {
string <output value key>: {
"output_tensor_name": string
},
...
},
"framework": string
}
Details on visualization settings
All of the visualization settings are optional, so you can specify all, some, or none of these values.
"visualization": {
"type": string,
"polarity": string,
"clip_below_percentile": number,
"clip_above_percentile": number,
"color_map": string,
"overlay_type": string
}
See example configurations and output images.
Fields | |
---|---|
type | Optional. The type of visualization. Valid values are "pixels" and "outlines". |
polarity | Optional. The directionality of the attribution values displayed. Valid values are "positive", "negative", and "both". |
clip_below_percentile | Optional. Excludes attributions below the specified percentile. Valid value is a decimal in the range [0, 100]. |
clip_above_percentile | Optional. Excludes attributions above the specified percentile. Valid value is a decimal in the range [0, 100]. |
color_map | Optional. Valid values are "red_green" and "pink_green". |
overlay_type | Optional. The type of overlay modifying how the attributions are displayed over the original input images. |
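For example, an inputs entry with visualization settings could look like the following sketch; the tensor name and the specific values chosen are illustrative, not recommendations:
# Sketch: an "inputs" entry with visualization settings, as it might appear in
# explanation_metadata.json. Tensor name and setting values are illustrative only.
image_input = {
    'input_tensor_name': 'image:0',
    'modality': 'image',
    'visualization': {
        'type': 'pixels',
        'polarity': 'positive',
        'clip_below_percentile': 70.0,
        'clip_above_percentile': 99.9,
    },
}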
Response body details
Responses are very similar to requests.
If the call is successful, the response body contains one explanations entry per instance in the request body, given in the same order:
{
"explanations": [
{
object
}
]
}
If explanation fails for any instance, the response body contains no explanations. Instead, it contains a single error entry:
{
"error": string
}
The explanations[] object contains the list of explanations, one for each instance in the request.
On error, the error string contains a message describing the problem. The error is returned instead of an explanations list if an error occurred while processing any instance.
Even though there is one explanation per instance, the format of an explanation is not directly related to the format of an instance. Explanations take whatever format is specified in the outputs collection defined in the model. The collection of explanations is returned in a JSON list. Each member of the list can be a simple value, a list, or a JSON object of any complexity. If your model has more than one output tensor, each explanation will be a JSON object containing a name/value pair for each output. The names identify the output aliases in the graph.
Response body examples
The following explanations response is for an individual feature attribution on tabular data. It is part of the example notebook for tabular data. The notebook demonstrates how to parse explanations responses and plot the attributions data.
Feature attributions appear within the attributions_by_label object:
{
"explanations": [
{
"attributions_by_label": [
{
"approx_error": 0.001017811509478243,
"attributions": {
"data": [
-0.0,
1.501250445842743,
4.4058547498107075,
0.016078486742916454,
-0.03749384209513669,
-0.0,
-0.2621846305120581,
-0.0,
-0.0,
0.0
]
...
},
"baseline_score": 14.049912452697754,
"example_score": 19.667699813842773,
"label_index": 0,
"output_name": "duration"
}
...
]
}
]
}
- approx_error is an approximation error for the feature attributions. Feature attributions are based on an approximation of Shapley values. Learn more about the approximation error.
- The attributions object contains a key-value pair for each input feature you requested explanations for.
  - For each input feature, the key is the same as the input feature key you set in your explanation_metadata.json file. In this example, it's "data".
  - The values are the attributions for each feature. The shape of the attributions matches the input tensor. In this example, it is a list of scalar values. Learn more about these attribution values.
- The baseline_score is the model output for the baseline you set. Depending on your model, you can set the baseline to zero, a random value, or median values. Learn more about selecting baselines.
- The example_score is the prediction for the input instance you provided. In this example, example_score is the predicted duration of a rideshare bike trip in minutes. In general, the example_score for any given instance is the prediction for that instance.
- The label_index is the index in the output tensor that is being explained.
- The output_name is the same as the output feature key you set in your explanation_metadata.json file. In this example, it's "duration".
Individual attribution values
This table shows more details about the features that correspond to these example attribution values. Positive attribution values increase the predicted value by that amount, and negative attribution values decrease the predicted value. The euclidean distance between the start and end locations of the bike trip had the strongest effect on the predicted bike trip duration (19.667 minutes), increasing it by 4.498.
Feature name | Feature value | Attribution value |
---|---|---|
start_hr | 19 | 0 |
weekday | 1 | -0.0661425 |
euclidean | 3920.76 | 4.49809 |
temp | 52.5 | 0.0564195 |
dew_point | 38.8 | -0.072438 |
wdsp | 0 | 0 |
max_temp | 64.8 | -0.226125 |
fog | 0 | -0 |
prcp | 0.06 | -0 |
rain_drizzle | 0 | 0 |
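A sketch of how attribution values like these can be paired with feature names in Python, assuming response holds the parsed JSON from the explain call and that the features were supplied in the order shown above:
# Sketch: pair attribution values with feature names for one instance.
# Assumes `response` is the parsed :explain response shown above.
feature_names = ['start_hr', 'weekday', 'euclidean', 'temp', 'dew_point',
                 'wdsp', 'max_temp', 'fog', 'prcp', 'rain_drizzle']

attribution = response['explanations'][0]['attributions_by_label'][0]
print('example_score:', attribution['example_score'])
print('baseline_score:', attribution['baseline_score'])
for name, value in zip(feature_names, attribution['attributions']['data']):
    print(f'{name}: {value:+.4f}')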
Learn more by trying the example notebook for tabular data.
API specification
The following section describes the specification of the explain method as defined in the AI Platform Training and Prediction API discovery document. Refer to the previous sections of this document for detailed information about the method.
HTTP request
POST https://{endpoint}/v1/{name=projects/**}:explain
Where {endpoint} is one of the supported service endpoints.
The URLs use gRPC Transcoding syntax.
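As a rough sketch, the endpoint can also be called directly over REST with an access token; this assumes the google-auth and requests libraries, application-default credentials, and placeholder project and model names:
# Sketch: call the :explain REST endpoint directly. Assumes google-auth and
# requests are installed; the endpoint and resource names below are placeholders.
import google.auth
import google.auth.transport.requests
import requests

credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())

endpoint = 'ml.googleapis.com'
name = 'projects/YOUR_PROJECT_ID/models/YOUR_MODEL_NAME'
response = requests.post(
    f'https://{endpoint}/v1/{name}:explain',
    headers={'Authorization': f'Bearer {credentials.token}'},
    json={'instances': [[0.0, 1.1, 2.2]]},
)
print(response.json())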
Path parameters
Parameters | |
---|---|
name | Required. The resource name of a model or a version. Authorization requires the appropriate IAM permissions on the specified resource. |
Request body
The request body contains data with the following structure:
{
"httpBody": {
object (HttpBody)
}
}
Fields | |
---|---|
httpBody | Required. The explanation request body. |
Response body
If successful, the response is a generic HTTP response whose format is defined by the method.
Authorization Scopes
Requires the following OAuth scope:
https://www.googleapis.com/auth/cloud-platform
For more information, see the Authentication Overview.