Preparing metadata for AI Explanations

When deploying your model, you must submit an explanation metadata file to configure your explanations request. The metadata file must specify your model's inputs and outputs so that you can select particular features to explain. Optionally, you can also include input baselines and configure visualization settings for image data.

This guide explains how to use the Explainable AI SDK to create your explanation metadata file while you are building a new model with TensorFlow 2.x or TensorFlow 1.15, or using an existing TensorFlow 2.x model. To request explanations for an existing TensorFlow 1.x model, you must create your explanation metadata file manually.

Overview of explanation metadata

The explanation metadata file is required for model deployment on AI Explanations. Specifying the inputs and outputs of your model lets you select particular features for your explanations request. You create a file named explanation_metadata.json and upload it to your Cloud Storage bucket in the same directory as your SavedModel. The Explainable AI SDK handles this process for you while you build and save your model.

You can also add optional settings to your explanation metadata file:

  • Input baselines
  • Visualization settings for image data

Learn more about the explanation metadata file in the API reference.

Installing the Explainable AI SDK

The Explainable AI SDK is a Python SDK that supports models built using:

  • TensorFlow 1.15 and TensorFlow 2.x
  • Python 3.7 or later

Install the Explainable AI SDK:

pip3 install explainable_ai_sdk

Finding inputs and outputs with the Explainable AI SDK

The Explainable AI SDK helps you find the inputs and outputs when you build or train a TensorFlow model. In each case, you build your explanation metadata file right after you build your model. For Keras and Estimator models, the Explainable AI SDK infers the inputs and outputs from your model graph.

Before saving your metadata file, you can update any input and output values that the Explainable AI SDK generates and add additional fields, such as input baselines and visualization preferences.

Explainable AI SDK examples for TensorFlow 2.x

The SavedModelMetadataBuilder works for any SavedModel built using TensorFlow 2.x. In this example, the Explainable AI SDK creates and uploads your explanation metadata file to the same Cloud Storage bucket directory as your model:

from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
builder = SavedModelMetadataBuilder(
    model_path)
builder.save_model_with_metadata('gs://my_bucket/model')  # Save the model and the metadata.

Optionally, you can use the get_metadata() function to check the metadata that is generated before you save and upload it.
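For example, a quick check of the generated metadata might look like this. This is a minimal sketch, assuming builder is the SavedModelMetadataBuilder from the previous snippet:

import json

# Inspect the inferred inputs and outputs before saving and uploading.
metadata = builder.get_metadata()
print(json.dumps(metadata, indent=2))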

If you use a TensorFlow function, such as tf.saved_model.save(), to save and upload your model, you can use save_metadata() to build and export only your explanation metadata:

import numpy as np

from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder

# We want to explain the 'xai_model' signature.
builder = SavedModelMetadataBuilder(export_path, signature_name='xai_model')
random_baseline = np.random.rand(192, 192, 3)
builder.set_image_metadata(
    'numpy_inputs',
    input_baselines=[random_baseline.tolist()])
builder.save_metadata(export_path)

This example also shows how to set an input baseline for image data, using random noise as the baseline. Use the input baseline that is most appropriate for your dataset; as a starting point, refer to the AI Explanations example baselines. To update your metadata and add more fields, use the setter function that matches your data type: set_numeric_metadata(), set_categorical_metadata(), or set_image_metadata().
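For example, a sketch of the numeric setter might look like the following. The input name 'dense_input' and the baseline values are hypothetical placeholders, and it is assumed that set_numeric_metadata() accepts the same input_baselines keyword shown above for set_image_metadata():

# Hypothetical sketch: set a single all-zeros baseline for a numeric input.
# 'dense_input' is a placeholder for one of your model's input names.
builder.set_numeric_metadata(
    'dense_input',
    input_baselines=[[0.0] * 10])  # One baseline with a zero value per feature.
builder.save_metadata(export_path)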

Explainable AI SDK examples for TensorFlow 1.15

The Explainable AI SDK supports TensorFlow 1.15 models built with tf.keras, tf.estimator, or tf.Graph.

Keras

The Explainable AI SDK supports Keras models built with the Sequential and Functional APIs, as well as custom models built by subclassing the Model class. Learn more about different types of Keras models.

import numpy as np
import tensorflow.compat.v1.keras as keras
from explainable_ai_sdk.metadata.tf.v1 import KerasGraphMetadataBuilder

# Build a model.
model = keras.models.Sequential()
model.add(keras.layers.Dense(32, activation='relu', input_dim=10))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(
    np.random.random((1, 10)),
    np.random.randint(2, size=(1, 1)),
    epochs=10,
    batch_size=32)

# Set the path to your model.
model_path = MODEL_PATH

# Build the metadata.
builder = KerasGraphMetadataBuilder(model)
builder.save_model_with_metadata(model_path)

After you build your model, use the metadata builder to get the inputs and outputs.
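For example, assuming the TensorFlow 1.15 builders expose the same get_metadata() helper as the TF 2.x builder, you can inspect what was inferred before saving:

import json

# Check the inputs and outputs that were inferred from the Keras model graph.
print(json.dumps(builder.get_metadata(), indent=2))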

Estimator

The Explainable AI SDK supports Estimator models. For the metadata builder, you specify input feature columns, the serving input function, and an output key.

import tensorflow.compat.v1 as tf
from explainable_ai_sdk.metadata.tf.v1 import EstimatorMetadataBuilder


# Build a model.
language = tf.feature_column.categorical_column_with_vocabulary_list(
    key='language',
    vocabulary_list=('english', 'korean'),
    num_oov_buckets=1)
language_indicator = tf.feature_column.indicator_column(language)
class_identity = tf.feature_column.categorical_column_with_identity(
    key='class_identity', num_buckets=4)
class_id_indicator = tf.feature_column.indicator_column(class_identity)
age = tf.feature_column.numeric_column(key='age', default_value=0.0)
classifier_dnn = tf.estimator.DNNClassifier(
    hidden_units=[4],
    feature_columns=[age, language_indicator, class_id_indicator])
# _train_input_fn is your own training input function.
classifier_dnn.train(input_fn=_train_input_fn, steps=5)

# Build the metadata.
# _get_json_serving_input_fn is your own serving input function.
md_builder = EstimatorMetadataBuilder(
    classifier_dnn, [age, language, class_identity], _get_json_serving_input_fn, 'logits')
model_path = MODEL_PATH
md_builder.save_model_with_metadata(model_path)

Graph

The Explainable AI SDK also supports TensorFlow models built without the Keras or Estimator APIs.

import os

import tensorflow.compat.v1 as tf
from explainable_ai_sdk.metadata.tf.v1 import GraphMetadataBuilder

# Build a model.
sess = tf.Session(graph=tf.Graph())
with sess.graph.as_default():
  x = tf.placeholder(shape=[None, 10], dtype=tf.float32, name='inp')
  weights = tf.constant(1., shape=(10, 2), name='weights')
  bias_weight = tf.constant(1., shape=(2,), name='bias')
  linear_layer = tf.add(tf.matmul(x, weights), bias_weight)
  prediction = tf.nn.relu(linear_layer)

# Build the metadata.
builder = GraphMetadataBuilder(
    session=sess, tags=['serve'])
builder.add_numeric_metadata(x)
builder.add_output_metadata(prediction)
model_path = os.path.join('gs://', 'BUCKET_NAME', 'PATH_TO_MODEL')
builder.save_model_with_metadata(model_path)

After building the model graph, you build your explanation metadata with one or more model inputs and one output. The Explainable AI SDK provides a function to help you add metadata for each type of input: numeric, categorical, image, and text.
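For example, a sketch for the other input types might look like the following. The tensors are placeholders, and the helper names for categorical, image, and text inputs are assumptions that follow the add_numeric_metadata() and add_output_metadata() pattern shown above:

# Hypothetical sketch; the tensors below are placeholders for your own graph.
builder.add_categorical_metadata(category_tensor)  # A categorical input tensor.
builder.add_image_metadata(image_tensor)           # An image input tensor.
builder.add_text_metadata(text_tensor)             # A text input tensor.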

For the graph example above, the resulting explanation_metadata.json file looks similar to this:

{
  "outputs": {
    "Relu": {
      "output_tensor_name": "Relu:0"
    }
  },
  "inputs": {
    "inp": {
      "input_tensor_name": "inp:0",
      "encoding": "identity",
      "modality": "numeric"
    }
  },
  "framework": "Tensorflow",
  "tags": [
    "explainable_ai_sdk"
  ]
}

Optional explanation metadata settings

By default, the Explainable AI SDK produces an explanation metadata file with only the required information about your model's inputs and outputs. Before saving the explanation metadata file, you can include optional configurations for input baselines and image visualization settings. Alternatively, you can edit the explanation metadata file after it is created.

Setting input baselines

Input baselines represent a feature value that provides no additional information. Baselines for tabular models can be the median, minimum, maximum, or random values in relation to your training data. Similarly, for image models, your baselines can be a black image, a white image, a gray image, or an image with random pixel values.

It's recommended to start with one baseline. If needed, you can change your baseline or use multiple baselines. See examples of how to adjust your baseline.
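For example, a sketch of using multiple baselines for a tabular model might look like the following. Here train_data and the input name 'dense_input' are hypothetical placeholders, and the TF 2.x set_numeric_metadata() setter described earlier is assumed to accept an input_baselines keyword:

import numpy as np

# Hypothetical sketch: two baselines, built from the training-data minimum and
# maximum. `train_data` and 'dense_input' are placeholders for your own model.
min_baseline = np.min(train_data, axis=0)
max_baseline = np.max(train_data, axis=0)
builder.set_numeric_metadata(
    'dense_input',
    input_baselines=[min_baseline.tolist(), max_baseline.tolist()])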

Configuring visualization settings for image data

If you're working with image data, you can configure how your explanation results are displayed by adding a visualization configuration to your explanation metadata file. If you do not add a visualization configuration, AI Explanations uses default settings.
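For example, the visualization settings sit under the relevant image input in explanation_metadata.json. The fragment below is a sketch with illustrative values, reusing the 'numpy_inputs' name from the earlier TF 2.x image example; check the API reference for the exact option names and defaults:

"inputs": {
  "numpy_inputs": {
    "input_tensor_name": "numpy_inputs:0",
    "modality": "image",
    "visualization": {
      "type": "Pixels",
      "polarity": "positive",
      "clip_below_percentile": 70,
      "clip_above_percentile": 99.9,
      "overlay_type": "grayscale"
    }
  }
}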

See examples of the visualization options, or learn more about the settings in the API reference.

What's next