Using AI Explanations with AI Platform Prediction

This is a general guide on how to deploy and use a model on AI Platform Prediction with AI Explanations.

Before you begin

You must do several things before you can train and deploy a model in AI Platform:

  • Set up your local development environment.
  • Set up a GCP project with billing and the necessary APIs enabled.
  • Create a Cloud Storage bucket to store your training package and your trained model.

To set up your GCP project, follow the instructions provided in the sample notebooks.

Saving a model

If you build and train your model in Keras, you must convert your model to a TensorFlow Estimator, and then export it to a SavedModel. This section focuses on saving a model. For a full working example, see the example notebooks.

After you build, compile, train, and evaluate your Keras model, you have to:

  • Convert the Keras model to a TensorFlow Estimator, using tf.keras.estimator.model_to_estimator
  • Provide a serving input function, using tf.estimator.export.build_raw_serving_input_receiver_fn
  • Export the model as a SavedModel, using the Estimator's export_saved_model method
import tensorflow as tf

# Build, compile, train, and evaluate your Keras model
model = tf.keras.Sequential(...)
model.compile(...)
model.fit(...)
model.predict(...)

## Convert your Keras model to an Estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')

## Define a serving input function appropriate for your model
def serving_input_receiver_fn():
  ...
  return tf.estimator.export.ServingInputReceiver(...)

## Export the SavedModel to Cloud Storage, using your serving input function
export_path = keras_estimator.export_saved_model(
  'gs://' + 'YOUR_BUCKET_NAME',
  serving_input_receiver_fn
).decode('utf-8')

print("Model exported to: ", export_path)

Do not use the following:

  • TensorFlow 2.x
  • tf.saved_model.save

Explanation metadata file

You must submit a metadata file with information about your model's inputs, outputs, and baseline, so that AI Explanations provides explanations for the correct parts of your model.

  • Identify the names of the input and output tensors you want explanations for. Learn more about how to identify your input and output tensors.
  • Pass your baseline to input_baselines.
  • Specify "tensorflow" for your framework.
  • Name the file explanation_metadata.json.
  • Upload your explanation_metadata.json file to the same Cloud Storage bucket where your SavedModel is stored.

The following example shows what an explanation_metadata.json file looks like:

{
    "inputs": {
      "data": {
        "input_tensor_name": "YOUR_INPUT_TENSOR_NAME"
        "input_baselines": [0]
      }
    },
    "outputs": {
      "duration": {
        "output_tensor_name": "YOUR_OUTPUT_TENSOR_NAME"
      }
    },
    "framework": "tensorflow"
}

In this example, "data" and "duration" are meaningful names for the input and output tensors that you can assign in the process of building and training the model. The actual input and output tensor names follow the format name:index. For example, x:0 or Placeholder:0.
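For example, with the Keras model above you can often read these names directly from the graph before converting to an Estimator. A brief sketch; the printed values depend on your layers, and the names in the exported graph may differ after conversion:

# The name property of tf.Tensor gives the full 'name:index' string.
print(model.input.name)   # for example, 'dense_input:0'
print(model.output.name)  # for example, 'dense_1/Sigmoid:0'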

Locating input and output tensor names

The best way to find your input and output tensor names depends on the types of your input and output data, as well as how you build your model. For a more in-depth explanation of each case, as well as examples, refer to the guide on preparing metadata for explanations.

  • Input: numeric or string data. Output: numeric data. Other criteria: inputs are not in serialized form, and outputs are not numeric data treated as categorical data (for example, numeric class IDs). Recommended approach: use the SavedModel CLI to find the names of your input and output tensors. Alternatively, build the explanation metadata file while training and saving the model, when your program or environment still has access to the training code.
  • Input: any serialized data. Output: any. Recommended approach: add a TensorFlow parsing operation to your serving input function when you export the model, and use the output of the parsing operation to help identify input tensors.
  • Input: any. Output: any. Other criteria: the model includes preprocessing operations. Recommended approach: to get the names of the input tensors after the preprocessing steps, use the name property of tf.Tensor.
  • Input: any. Output: anything that is not probabilities or logits. Other criteria: you want explanations for outputs that are not probabilities or logits. Recommended approach: inspect your graph with TensorBoard to find the correct output tensors.
  • Input: any non-differentiable data. Output: any. Other criteria: you want to use integrated gradients. Recommended approach: encode non-differentiable inputs as differentiable tensors, and add the names of both the original input tensor and the encoded input tensor to your explanation metadata file.
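If you would rather inspect the exported model from Python than use the SavedModel CLI, the following sketch (assuming TensorFlow 1.14 and the export_path value from the earlier export step) prints the tensor names recorded in the serving signature:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
  # Load the SavedModel's serving graph and read its default signature.
  meta_graph = tf.saved_model.loader.load(sess, ['serve'], export_path)
  signature = meta_graph.signature_def['serving_default']
  print({key: tensor.name for key, tensor in signature.inputs.items()})
  print({key: tensor.name for key, tensor in signature.outputs.items()})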

Deploy models and versions

AI Platform organizes your trained models using model and version resources. An AI Platform model is a container for the versions of your machine learning model.

To deploy a model, you create a model resource in AI Platform, create a version of that model, then link the model version to the model file stored in Cloud Storage.

Create a model resource

AI Platform uses model resources to organize different versions of your model.

Create a model resource for your model versions, filling in your desired name for your model without the enclosing brackets:

gcloud ai-platform models create "[YOUR-MODEL-NAME]"

See the AI Platform model API for more details.
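If you work from Python instead of the shell, a rough equivalent using the Google API client library is sketched below. The PROJECT_ID variable and the installed google-api-python-client package are assumptions:

from googleapiclient import discovery

# Create the model resource; replace the placeholder name with your own.
service = discovery.build('ml', 'v1')
service.projects().models().create(
    parent='projects/{}'.format(PROJECT_ID),
    body={'name': 'YOUR_MODEL_NAME'}
).execute()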

Create a model version

Now you are ready to create a model version with the trained model you previously uploaded to Cloud Storage. When you create a version, specify the following parameters:

  • name: must be unique within the AI Platform model.
  • deploymentUri: the path to your SavedModel directory in Cloud Storage.
  • framework: must be TENSORFLOW. AI Explanations supports only TensorFlow models.
  • runtimeVersion: use runtime version 1.14.
  • pythonVersion: must be set to "3.5" to be compatible with model files exported using Python 3.
  • machineType (required): the type of virtual machine that AI Platform Prediction uses for the nodes that serve predictions. Select a supported machine type for AI Explanations.

See more information about each of these parameters in the AI Platform Training and Prediction API for a version resource.

  1. Set environment variables to store the path to the Cloud Storage directory where your SavedModel is located, your model name, your version name, and your framework choice.

    Replace [VALUES_IN_BRACKETS] with the appropriate values:

    MODEL_DIR="gs://[YOUR_BUCKET_NAME]/"
    VERSION="[YOUR-VERSION-NAME]"
    MODEL="[YOUR-MODEL-NAME]"
    FRAMEWORK="[YOUR-FRAMEWORK-NAME]"
    
  2. Create the version:

    EXPLAIN_METHOD="integrated-gradients"
    gcloud beta ai-platform versions create $VERSION \
    --model $MODEL \
    --origin $MODEL_DIR \
    --runtime-version 1.14 \
    --framework $FRAMEWORK \
    --python-version 3.5 \
    --machine-type n1-standard-4 \
    --explanation-method $EXPLAIN_METHOD \
    --num-integral-steps 25
    

    Creating the version takes a few minutes. When it is ready, you should see the following output:

    Creating version (this might take a few minutes)......done.
  3. Ensure that your model deployed correctly:

    gcloud ai-platform versions describe $VERSION \
      --model $MODEL
    

    Check that the state is READY. You should see output similar to this:

    createTime: '2018-02-28T16:30:45Z'
    deploymentUri: gs://[YOUR_BUCKET_NAME]
    framework: [YOUR-FRAMEWORK-NAME]
    machineType: n1-standard-4
    name: projects/[YOUR-PROJECT-ID]/models/[YOUR-MODEL-NAME]/versions/[YOUR-VERSION-NAME]
    pythonVersion: '3.5'
    runtimeVersion: '1.14'
    state: READY
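You can perform the same readiness check from Python. This sketch assumes the google-api-python-client package and PROJECT_ID, MODEL, and VERSION variables matching the values used above:

from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL, VERSION)
version = service.projects().models().versions().get(name=name).execute()
print(version['state'])  # prints 'READY' once the deployment has finished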

Supported machine types for explanations

For AI Explanations requests, you must deploy your model version with one of the following machine types, which are available in beta. If you don't specify a machine type, the deployment fails.

The following table compares the available machine types:

Name             Availability   vCPUs   Memory (GB)   Supports GPUs?   ML framework support          Max model size
n1-standard-2    Beta           2       7.5           Yes              Only TensorFlow SavedModel    2 GB
n1-standard-4    Beta           4       15            Yes              Only TensorFlow SavedModel    2 GB
n1-standard-8    Beta           8       30            Yes              Only TensorFlow SavedModel    2 GB
n1-standard-16   Beta           16      60            Yes              Only TensorFlow SavedModel    2 GB
n1-standard-32   Beta           32      120           Yes              Only TensorFlow SavedModel    2 GB

Learn about pricing for each machine type. Read more about the detailed specifications of Compute Engine (N1) machine types in the Compute Engine documentation.

Format input data

The basic format for online prediction is a list of data instances. These can be either plain lists of values or members of a JSON object, depending on how you configured your inputs in your training application. Learn how to format complex inputs and binary data for prediction.

This example shows an input tensor and an instance key to a TensorFlow model:

{"values": [1, 2, 3, 4], "key": 1}

The makeup of the JSON string can be complex as long as it follows these rules:

  • The top level of instance data must be a JSON object: a dictionary of key/value pairs.

  • Individual values in an instance object can be strings, numbers, or lists. You cannot embed JSON objects.

  • Lists must contain only items of the same type (including other lists). You may not mix string and numerical values.

You pass input instances for online prediction as the message body for the projects.predict call. Learn more about the request body's formatting requirements.

  1. To submit your request with gcloud, ensure that your input file is a newline-delimited JSON file, with each instance as a JSON object, one instance per line.

    {"values": [1, 2, 3, 4], "key": 1}
    {"values": [5, 6, 7, 8], "key": 2}
    
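If you assemble your instances in Python, a small sketch like the following writes them in the required newline-delimited format (the file name matches the gcloud example below):

import json

# One JSON object per line, matching the instances shown above.
instances = [
    {'values': [1, 2, 3, 4], 'key': 1},
    {'values': [5, 6, 7, 8], 'key': 2},
]
with open('your-data.txt', 'w') as f:
  for instance in instances:
    f.write(json.dumps(instance) + '\n')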

Request predictions and explanations

Request your predictions and explanations:

gcloud beta ai-platform explain \
  --model $MODEL \
  --version $VERSION \
  --json-instances='your-data.txt'
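The example notebooks send the same request through the Python client library. A minimal sketch, assuming the google-api-python-client package, a PROJECT_ID variable, and the instances list from the previous section:

from googleapiclient import discovery

service = discovery.build('ml', 'v1')
# Omitting a version targets the model's default version.
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL)
response = service.projects().explain(
    name=name,
    body={'instances': instances}
).execute()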

Understand the explanations response

Learn how to parse the explanations response by referring to the example notebooks. Both notebooks demonstrate how to parse an individual feature attribution, as well as a batch of ten feature attributions.
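For reference, the fields used by the sanity checks below can be read directly off each explanation in the response. A sketch, assuming the response object from the request above and an input key of "data":

# Each explanation carries per-feature attributions plus two scores.
for explanation in response['explanations']:
  attribution = explanation['attributions_by_label'][0]
  print('baseline score:', attribution['baseline_score'])
  print('example score:', attribution['example_score'])
  # Attributions are keyed by the input name from explanation_metadata.json.
  print('feature attributions:', attribution['attributions']['data'])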

Check your explanations

The following code example helps you to check a batch of explanations and see if you need to adjust your baselines.

In the code, you only need to update your input key value according to what you specified in your explanation_metadata.json file.

{
    "inputs": {
      "YOUR_INPUT_KEY_VALUE": {
        "input_tensor_name": "YOUR_INPUT_TENSOR_NAME",
        "input_baselines": [0]
      }
    },
    ...
}

For example, if your input key value was "data", then you would use "data" in place of 'YOUR-INPUT-KEY-VALUE' in the following code snippet:

import numpy as np

# Set this to the explanation method the version was deployed with
# (the EXPLAIN_METHOD value from the version-creation step above).
explain_method = 'integrated-gradients'

def check_explanations(example, mean_tgt_value=None, variance_tgt_value=None):
  passed_test = 0
  total_test = 1
  attribution_vals = example['attributions_by_label'][0]['attributions']['YOUR-INPUT-KEY-VALUE']

  baseline_score = example['attributions_by_label'][0]['baseline_score']
  # The attributions plus the baseline score should approximate the example score.
  sum_with_baseline = np.sum(attribution_vals) + baseline_score
  predicted_val = example['attributions_by_label'][0]['example_score']

  # Check 1
  # If the prediction at the input is (nearly) equal to the prediction at the
  # baseline, the attributions are not meaningful. Use a different baseline;
  # some suggestions are a random input or the training-set mean.
  if abs(predicted_val - baseline_score) <= 0.05:
    print('Warning: example score and baseline score are too close.')
    print('You might not get attributions.')
  else:
    passed_test += 1

  # Check 2 (only for models using integrated gradients explanations)
  # Ideally, the sum of the integrated gradients equals the difference between
  # the prediction at the input and at the baseline. Any discrepancy between
  # these two values is due to errors in approximating the integral.
  if explain_method == 'integrated-gradients':
    total_test += 1
    want_integral = predicted_val - baseline_score
    got_integral = np.sum(attribution_vals)
    # Guard against division by zero when the two scores are identical.
    if want_integral == 0 or abs(want_integral - got_integral) / abs(want_integral) > 0.05:
      print('Warning: integral approximation error exceeds 5%.')
      print('Please try increasing the number of integrated gradient steps.')
    else:
      passed_test += 1

  print(passed_test, ' out of ', total_test, ' sanity checks passed.')

When parsing your explanations, you can run these checks on each attribution you have received:

# attributions_resp is the parsed JSON response from your explain request
for i in attributions_resp['explanations']:
  check_explanations(i)
