This page explains how to save a TensorFlow model for use with AI Explanations, whether you're using TensorFlow 2.x or TensorFlow 1.15.
TensorFlow 2
If you're working with TensorFlow 2.x, use tf.saved_model.save to save your model.
A common way to optimize a saved TensorFlow model is to provide signatures. You can specify input signatures when you save your model. If you have only one input signature, AI Explanations automatically uses the default serving function for your explanation requests, following the default behavior of tf.saved_model.save. Learn more about specifying serving signatures in TensorFlow.
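For example, a minimal sketch of saving a TensorFlow 2 model with a single serving signature might look like the following; the model, input shape, and directory name are illustrative assumptions:
import tensorflow as tf

# A minimal sketch, assuming a small Keras model with four input
# features; the model, shapes, and directory are illustrative.
m = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)
])

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def serving_fn(inputs):
    return {'predictions': m(inputs)}

# With a single input signature, AI Explanations uses the default
# serving function automatically.
tf.saved_model.save(m, 'model_dir', signatures=serving_fn)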
Multiple input signatures
If your model has more than one input signature, AI Explanations can't automatically determine which signature definition to use when retrieving a prediction from your model. Therefore, you must specify which signature definition you want AI Explanations to use. When you save your model, specify the signature of your serving default function under a unique key, xai_model:
tf.saved_model.save(m, model_dir, signatures={
    'serving_default': serving_fn,
    'xai_model': my_signature_default_fn  # Required for AI Explanations
})
In this case, AI Explanations uses the model function signature you provided with the xai_model key to interact with your model and generate explanations. Use the exact string xai_model for the key. See this overview of SignatureDefs for more background.
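As a hedged sketch of one way to define the two signatures used above, you can wrap the model in tf.function with explicit input signatures; the model, input shape, and output keys here are assumptions for illustration:
import tensorflow as tf

# A hedged sketch; the model, input shape, and output keys are assumptions.
m = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def serving_fn(inputs):
    # Default signature used for ordinary prediction requests.
    return {'predictions': m(inputs)}

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def my_signature_default_fn(inputs):
    # Signature that AI Explanations calls to generate explanations.
    return {'outputs': m(inputs)}

tf.saved_model.save(m, 'model_dir', signatures={
    'serving_default': serving_fn,
    'xai_model': my_signature_default_fn  # Required for AI Explanations
})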
Preprocessing functions
If you use a preprocessing function, you must specify the signatures for your preprocessing function and your model function when you save your model. Use the xai_preprocess key to specify your preprocessing function:
tf.saved_model.save(m, model_dir, signatures={
    'serving_default': serving_fn,
    'xai_preprocess': preprocess_fn,  # Required for AI Explanations
    'xai_model': model_fn             # Required for AI Explanations
})
In this case, AI Explanations uses your preprocessing function and your model function for your explanation requests. Make sure that the output of your preprocessing function matches the input that your model function expects.
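As a hedged sketch of how these pieces can fit together, the following chains a hypothetical preprocess_fn into a model_fn so their shapes and dtypes line up; all names, shapes, and the scaling step are assumptions:
import tensorflow as tf

# A hedged sketch of chaining a preprocessing signature into a model
# signature; all names, shapes, and the scaling step are assumptions.
m = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def preprocess_fn(raw_features):
    # The output here must match the input that model_fn expects.
    return {'features': raw_features / 255.0}

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def model_fn(features):
    return {'outputs': m(features)}

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def serving_fn(raw_features):
    # The default serving path applies preprocessing, then the model.
    return model_fn(preprocess_fn(raw_features)['features'])

tf.saved_model.save(m, 'model_dir', signatures={
    'serving_default': serving_fn,
    'xai_preprocess': preprocess_fn,  # Required for AI Explanations
    'xai_model': model_fn             # Required for AI Explanations
})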
Try the full TensorFlow 2 example notebooks.
TensorFlow 1.15
If you're using TensorFlow 1.15, don't use tf.saved_model.save. This function isn't supported with AI Explanations when using TensorFlow 1. Instead, use tf.estimator.export_saved_model in conjunction with an appropriate tf.estimator.export.ServingInputReceiver.
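For example, a minimal sketch for TensorFlow 1.15 with a canned Estimator might look like the following; the feature name, model type, and training input are illustrative assumptions:
import tensorflow as tf  # TensorFlow 1.15

# A hedged sketch, assuming a canned DNNRegressor with a single numeric
# feature column; the feature name 'x' and shapes are illustrative.
feature_columns = [tf.feature_column.numeric_column('x', shape=(4,))]
estimator = tf.estimator.DNNRegressor(
    hidden_units=[16], feature_columns=feature_columns)

# Train briefly so a checkpoint exists to export (illustrative input_fn).
def input_fn():
    return {'x': tf.random.uniform([8, 4])}, tf.random.uniform([8, 1])

estimator.train(input_fn=input_fn, steps=1)

def serving_input_receiver_fn():
    # Placeholders for the raw tensors the exported SavedModel accepts.
    inputs = {'x': tf.placeholder(dtype=tf.float32, shape=[None, 4])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_path = estimator.export_saved_model(
    'exported_model', serving_input_receiver_fn)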
Models built with Keras
If you build and train your model in Keras, you must convert your model to a TensorFlow Estimator, and then export it to a SavedModel. This section focuses on saving a model. For a full working example, see the example notebooks.
After you build, compile, train, and evaluate your Keras model, you must do the following:
- Convert the Keras model to a TensorFlow Estimator, using tf.keras.estimator.model_to_estimator.
- Provide a serving input function, using tf.estimator.export.build_raw_serving_input_receiver_fn (a sketch follows the code below).
- Export the model as a SavedModel, using tf.estimator.export_saved_model.
# Build, compile, train, and evaluate your Keras model
model = tf.keras.Sequential(...)
model.compile(...)
model.fit(...)
model.predict(...)
## Convert your Keras model to an Estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')
## Define a serving input function appropriate for your model
def serving_input_receiver_fn():
    ...
    return tf.estimator.export.ServingInputReceiver(...)

## Export the SavedModel to Cloud Storage, using your serving input function
export_path = keras_estimator.export_saved_model(
    'gs://' + 'YOUR_BUCKET_NAME', serving_input_receiver_fn).decode('utf-8')
print("Model exported to: ", export_path)
What's next
- Learn how to use the Explainable AI SDK.
- To visualize your explanations, you can use the What-If Tool. Refer to the example notebooks to learn more.