Google Cloud Platform

Performing prediction with TensorFlow object detection models on Google Cloud Machine Learning Engine

In June, we posted a blog that taught you the basics of training a new object detection model using Google Cloud Machine Learning Engine. Today, we'll help you take your knowledge one step further. This follow-up post will first teach you how to export a trained model into the SavedModel format and deploy it on Cloud Machine Learning Engine. Then, you'll learn how to serve the model and perform prediction using both the online and batch prediction services of Cloud Machine Learning Engine. Let's get started.

Export the model

This blog post assumes you've already trained the object detection model using the command line below from the previous blog. The --job-dir should be ${YOUR_GCS_BUCKET}/train, where the checkpoint files are saved. Alternatively, you can use the checkpoint files from the pre-trained models in the model zoo.

  $ JOB_ID=`whoami`_object_detection_training_`date +%s`
$ gcloud ml-engine jobs submit training ${JOB_ID} \
    --job-dir=${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.train \
    --region us-central1 \
    --config object_detection/samples/cloud/cloud.yml \
    -- \
    --train_dir=${YOUR_GCS_BUCKET}/train \
    --pipeline_config_path=${YOUR_GCS_BUCKET}/data/ssd_mobilenet_v1_pets.config

Download checkpoint files

First, download the trained checkpoint files to ${YOUR_LOCAL_CHK_DIR} on your local machine to speed up exporting, as running the exporter against files on Google Cloud Storage is slow. Typically, the latest checkpoint files, i.e., those with the largest checkpoint number, are picked.

  $ gsutil cp ${YOUR_GCS_BUCKET}/train/model.ckpt-${CHECKPOINT_NUMBER}.* ${YOUR_LOCAL_CHK_DIR}/

This can take a minute or two depending on the size of the checkpoint files and your network connection.
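If you're not sure which checkpoint is the latest, a small Python helper (illustrative, not part of the object detection package) can pick the largest checkpoint number out of a directory listing:

```python
import re

def latest_checkpoint_number(filenames):
    """Return the largest N among checkpoint file names like 'model.ckpt-N.index'."""
    numbers = [int(m.group(1))
               for name in filenames
               for m in [re.search(r"model\.ckpt-(\d+)", name)]
               if m]
    return max(numbers)

# e.g., feed it the file names printed by `gsutil ls ${YOUR_GCS_BUCKET}/train/`
```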

Export to the SavedModel format

Next, export the inference graph with the weights into a SavedModel, which can be served by the Cloud Machine Learning Engine prediction service.

Three input types are supported in the exporter:

  • “image_tensor” — Accepts a batch of image arrays.
  • “encoded_image_string_tensor” — Accepts a batch of JPEG- or PNG-encoded images as byte strings.
  • “tf_example” — Accepts a batch of strings, each of which is a serialized tf.Example proto that wraps the image bytes.

Since most images are stored in compressed JPEG or PNG form, we'll use the encoded_image_string_tensor input type in this example for its efficiency over the wire.

  $ OBJECT_DETECTION_CONFIG=object_detection/samples/configs/ssd_mobilenet_v1_pets.config
$ python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path ${OBJECT_DETECTION_CONFIG} \
    --trained_checkpoint_prefix ${YOUR_LOCAL_CHK_DIR}/model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory ${YOUR_LOCAL_EXPORT_DIR}

${YOUR_LOCAL_EXPORT_DIR} is the path where you want to save the exported model; it must not already exist, as the exporter will create the folder for you. Note that the config file specified in --pipeline_config_path must match the one used in training.

Once it runs successfully, the exporter generates saved_model.pb under output_directory/saved_model, along with any variables files.

You may want to check to be sure the expected files are in the output folder before proceeding.

  $ ls ${YOUR_LOCAL_EXPORT_DIR}/saved_model
saved_model.pb  variables/

You can also inspect the exported model by running saved_model_cli, which shows the input and output tensor names together with the defined signatures. A command line like the one below shows all available information.

  $ saved_model_cli show --dir ${YOUR_LOCAL_EXPORT_DIR}/saved_model --all
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
    dtype: DT_STRING
    shape: (-1)
    name: encoded_image_string_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 100, 4)
    name: detection_boxes:0
outputs['detection_classes'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 100)
    name: detection_classes:0
outputs['detection_scores'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 100)
    name: detection_scores:0
outputs['num_detections'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1)
    name: num_detections:0
Method name is: tensorflow/serving/predict

Note that the first dimension of the input and output tensors is -1, as this is required by Cloud Machine Learning Engine to support batched inputs. Also, the tensor aliases for the input and output tensors are shown in brackets for each tensor. The aliases (e.g., “inputs”), rather than the actual tensor names (e.g., “encoded_image_string_tensor:0”), are what you use in input requests or files. This applies to outputs as well.

Prepare the inputs

Typically, we specify a list of JSON objects as the input to the prediction service, one object per line, in the following form:

  {"input_tensor_name1": value1, "input_tensor_name2": some_value1}
{"input_tensor_name1": value2, "input_tensor_name2": some_value2}
{"input_tensor_name1": value3, "input_tensor_name2": some_value3}
...

However, the object detection model has only one input tensor, which accepts one image or a batch of images. In this case, we can put the image bytes directly in each record without specifying the input tensor name. Since this model accepts image bytes, for local or online prediction the input instances should be base64-encoded and packed into JSON objects. The content of the input file would look like this:

  {"b64": "9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHJC..."}
{"b64": "yVFJDAAAEPAABDDDpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIys..."}

For batch prediction in this particular case (one string input in the inference graph), the image bytes should be packed into records in TFRecord format and saved to files. The following code shows how to do that in Python. If the inference graph had two or more input tensors, batch prediction would accept a list of JSON objects, just as local or online prediction does.

For efficiency, this model requires that every image in a request have the same height and width; otherwise, the prediction will fail. If you have many images of varying sizes, you can resize them with the script shown below. We'll revisit this requirement later.

  import base64
import io
import json
from PIL import Image
import tensorflow as tf

width = 1024
height = 768
predict_instance_json = "inputs.json"  # input file for local/online prediction
predict_instance_tfr = "inputs.tfr"    # input file for batch prediction

with tf.python_io.TFRecordWriter(predict_instance_tfr) as tfr_writer:
  with open(predict_instance_json, "w") as fp:
    for image in ["image1.jpg", "image2.jpg"]:
      # Resize every image to the same dimensions, as required by the model.
      img = Image.open(image)
      img = img.resize((width, height), Image.ANTIALIAS)
      output_str = io.BytesIO()
      img.save(output_str, "JPEG")
      image_bytes = output_str.getvalue()
      # One base64-encoded JSON record per line for local/online prediction.
      fp.write(
          json.dumps({"b64": base64.b64encode(image_bytes).decode("utf-8")})
          + "\n")
      # The raw JPEG bytes go into the TFRecord file for batch prediction.
      tfr_writer.write(image_bytes)
      output_str.close()

The above code generates two files for a given set of images after resizing them to be the same size: one JSON file for local and online prediction, and one TFRecord for batch prediction.

Run local prediction

When the inputs and model are ready in your local machine, you can run local prediction to quickly verify if the exported model works correctly. Debugging locally will help you ensure that the model and input data exist and are well formatted before deploying to Cloud Machine Learning Engine.

The inputs should be reasonably small for local prediction, typically containing 2-3 instances. The following shows how to run local prediction using gcloud.

  $ gcloud ml-engine local predict \
    --model-dir ${YOUR_LOCAL_EXPORT_DIR}/saved_model --json-instances=inputs.json
DETECTION_BOXES                         DETECTION_CLASSES  DETECTION_SCORES  NUM_DETECTIONS
[[0.273, 0.29, 0.849, 0.144..], [], []] [24, 11...]        [0.9, 0.1...]     100
[[0.221, 0.39, 0.742, 0.254..], [], []] [23, 33...]        [0.7, 0.3...]     100

The output shows the prediction results for the two input instances. For the first instance, the detection yields 100 objects with their bounding boxes and the corresponding classes and scores. The bounding box values are normalized to [0, 1) and ordered as ymin, xmin, ymax, xmax. The second row shows the results for the second image sent to prediction.
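Since the boxes are normalized, mapping one back to pixel coordinates of the original image is straightforward. A small illustrative helper:

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a normalized (ymin, xmin, ymax, xmax) box to pixel coordinates."""
    ymin, xmin, ymax, xmax = box
    return (int(ymin * image_height), int(xmin * image_width),
            int(ymax * image_height), int(xmax * image_width))

print(box_to_pixels((0.25, 0.5, 0.75, 1.0), 1024, 768))  # (192, 512, 576, 1024)
```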

Run predictions on Cloud Machine Learning Engine

Deploy model for serving

Once you've verified that the model and a small set of inputs work using gcloud local prediction, you can use Cloud Machine Learning Engine to host your model for serving. A model is deployed by creating a Cloud Machine Learning Engine model and a version for the exported model. This step is mandatory for online prediction but optional for batch prediction.

First copy the exported model to a Cloud Storage location:

  $ gsutil cp -r ${YOUR_LOCAL_EXPORT_DIR}/saved_model/ ${YOUR_GCS_BUCKET}/model_dir_tmp/

Then, create a model and version in Google Cloud from the exported directory using the following gcloud commands. Note that --runtime-version must be set to 1.2, as the default TensorFlow version in the service might be older and therefore won't work with this model.

  $ YOUR_MODEL=object_detection_model
$ YOUR_VERSION=v1
$ gcloud ml-engine models create ${YOUR_MODEL} --regions us-central1
Created ml engine model [projects/${YOUR_PROJECT}/models/${YOUR_MODEL}].
$ gcloud ml-engine versions create ${YOUR_VERSION} --model ${YOUR_MODEL} \  
    --origin=${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model --runtime-version=1.2
Creating version (this might take a few minutes)......done.

While model creation is instant, it can take 3-5 minutes to create a version. After a version is created, you can run the following to list the models in your project.

  $ gcloud ml-engine models list
object_detection_model         v1

Alternatively, you can go to the Cloud Console to see the models and versions. Click a model and version to see more detailed information:

[Screenshot: model and version details in the Cloud Console]

Cloud Machine Learning Engine provides a whole set of APIs for model and version management. Please refer to the documentation for more information.

[Screenshots: model and version management in the Cloud Console]

Run online prediction

Once the model is deployed, you can send prediction requests to it. In typical applications (web, mobile, etc.), you would do so using a client library in your language of choice, as described in the official docs.
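For instance, a minimal Python sketch using the Google API client library (google-api-python-client) might look like the following; the project, model, and version names are placeholders you'd fill in, and the request body matches the inputs.json records built earlier:

```python
import base64

def build_predict_request(project, model, version, image_bytes_list):
    """Build the resource name and request body for a projects.predict call."""
    name = "projects/{}/models/{}/versions/{}".format(project, model, version)
    body = {"instances": [{"b64": base64.b64encode(b).decode("utf-8")}
                          for b in image_bytes_list]}
    return name, body

# from googleapiclient import discovery  # pip install google-api-python-client
# name, body = build_predict_request("my-project", "object_detection_model", "v1",
#                                    [open("image1.jpg", "rb").read()])
# service = discovery.build("ml", "v1")
# response = service.projects().predict(name=name, body=body).execute()
```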

However, during development, you may find it convenient to use "gcloud" to issue prediction requests to your model, as in the following example command:

  $ gcloud ml-engine predict --model=${YOUR_MODEL} --version=${YOUR_VERSION} --json-instances=inputs.json
DETECTION_BOXES                         DETECTION_CLASSES  DETECTION_SCORES  NUM_DETECTIONS
[[0.273, 0.29, 0.849, 0.144..], [], []] [24, 11...]        [0.9, 0.1...]     100
[[0.221, 0.39, 0.742, 0.254..], [], []] [23, 33...]        [0.7, 0.3...]     100

The contents of inputs.json are read into the payload of the HTTP request, which also embeds the model and version information. Upon receiving the request, the service performs inference on all the instances and returns the prediction results in real time.

Note that the gcloud command line reformats the prediction results as a tab-delimited table. If you send an HTTP request directly, for example using curl, a JSON object is returned, the same as what you'll find in the batch prediction results.

  $ curl -m 180 -X POST -v -k -H "Content-Type: application/json" \
    -d @payload.json  \
    -H "Authorization: Bearer `gcloud auth print-access-token`" \
https://ml.googleapis.com/v1/projects/${YOUR_PROJECT}/models/${YOUR_MODEL}/versions/${YOUR_VERSION}:predict
  {"predictions": [{"detection_boxes": [[0.273, 0.29, 0.849, 0.144], [..]], "detection_classes": [24.0, 11], "detection_scores": [0.9, 0.1, …], "num_detections": 100},
{"detection_boxes": [[0.273, 0.29, 0.849, 0.144], [..]], "detection_classes": [24.0, 11], "detection_scores": [0.7, 0.3, …], "num_detections": 100}, …]}

* To craft the payload file for the curl command, you need to wrap the JSON instances into the value of a JSON object whose key is “instances”. See this Stack Overflow question for details.
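One way to do this wrapping from the newline-delimited inputs.json built earlier (a sketch; the file names match the earlier script):

```python
import json

def wrap_instances(ndjson_text):
    """Wrap newline-delimited JSON instances under an "instances" key."""
    instances = [json.loads(line)
                 for line in ndjson_text.splitlines() if line.strip()]
    return {"instances": instances}

# with open("inputs.json") as f, open("payload.json", "w") as out:
#     json.dump(wrap_instances(f.read()), out)
```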

Run batch prediction

Since the model only accepts one string input, we need to pack the images into TFRecord format for batch prediction. (See the Prepare the Inputs section for how to generate TFRecord files.) First, copy the input file(s) to a Cloud Storage location.

  $ gsutil cp inputs.tfr ${YOUR_GCS_BUCKET}/data_dir/

Run gcloud command to submit a batch prediction job:

  $ JOB_ID=`whoami`_object_detection_batch_prediction_`date +%s`
$ YOUR_OUTPUT_DIR=$YOUR_GCS_BUCKET/output/${JOB_ID}
$ gcloud ml-engine jobs submit prediction ${JOB_ID} --data-format=TF_RECORD \
    --input-paths=${YOUR_GCS_BUCKET}/data_dir/inputs.tfr \
    --output-path ${YOUR_OUTPUT_DIR} \
    --region us-central1 \
    --model=${YOUR_MODEL} \
    --version=${YOUR_VERSION}

If the job is successfully submitted, you'll see that it's queued, along with the commands to describe the job and stream its logs to your local console. Let's check the job status first:

  $ gcloud ml-engine jobs describe ${JOB_ID}

This shows the job status, including the job state, node hours consumed so far, the number of predicted instances, and other information. Example output:

  createTime: 'some_time_stamp'
jobId: ${JOB_ID}
predictionInput:
  dataFormat: TF_RECORD
  inputPaths:
  - ${YOUR_GCS_BUCKET}/data_dir/inputs.tfr
  outputPath: ${YOUR_OUTPUT_DIR}
  region: us-central1
  runtimeVersion: '1.2'
  versionName: projects/${PROJECT}/models/${YOUR_MODEL}/versions/${YOUR_VERSION}
predictionOutput:
  nodeHours: 0.11
  outputPath: ${YOUR_OUTPUT_DIR}
  predictionCount: '2'
startTime: 'some_other_time_stamp'
state: RUNNING

You can also see the job and its status in the Cloud Machine Learning Engine section of the Cloud Console:

[Screenshot: batch prediction job list in the Cloud Console]

To display the job details, click the Job ID.

[Screenshot: batch prediction job details in the Cloud Console]

Once the job succeeds (i.e., the job state is SUCCEEDED), check the output Cloud Storage location:

  $ gsutil ls ${YOUR_OUTPUT_DIR}
${YOUR_OUTPUT_DIR}/prediction.results-00000-of-00001
${YOUR_OUTPUT_DIR}/prediction.errors_stats-00000-of-00001

The results files contain the prediction results in JSON format, and the error stats files contain error statistics, if applicable.

  $ gsutil cat ${YOUR_OUTPUT_DIR}/prediction.results-*
{"num_detections": 2, "detection_boxes": [[0.006782746408134699, 0.8335567116737366, 0.047147128731012344, 0.8897725343704224], [0.005502260755747557, 0.8215602040290833, 0.05292670801281929, 0.8864550590515137]], "detection_classes": [7.0, 27.0], "detection_scores": [0.01492418721318245, 0.004494407679885626]}
{"num_detections": 1, "detection_boxes": [[0.8004983067512512, 0.2218044400215149, 0.9470760226249695, 0.5806139707565308]], "detection_classes": [6.0], "detection_scores": [0.01492418721318245]}
...
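Each line of the results file is one JSON record per input instance. A sketch for keeping only high-confidence detections (the score threshold is an arbitrary choice):

```python
import json

def filter_detections(result_line, min_score=0.5):
    """Return (box, class, score) triples at or above min_score from one record."""
    record = json.loads(result_line)
    return [(box, cls, score)
            for box, cls, score in zip(record["detection_boxes"],
                                       record["detection_classes"],
                                       record["detection_scores"])
            if score >= min_score]
```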

You can also run a batch prediction job without deploying the model: instead of a model and version, you specify in the request the Cloud Storage directory where you've saved the model. This allows faster iteration on batch predictions, but you'll have to manage the model and versions yourself.

One downside is that you may need to explicitly set the runtime version in the request, because the default runtime version might not work with the model. This step isn't needed with a deployed model, because the runtime version was specified when you created the model/version.

Below is the command line to submit a batch prediction job without a model and version:

  $ JOB_ID=`whoami`_object_detection_batch_prediction_`date +%s`
$ YOUR_OUTPUT_DIR_WO_DEPLOYMENT=$YOUR_GCS_BUCKET/output/${JOB_ID}
$ gsutil cp -r ${YOUR_LOCAL_EXPORT_DIR}/saved_model ${YOUR_GCS_BUCKET}/model_dir/ 
$ gcloud ml-engine jobs submit prediction ${JOB_ID} --data-format=TF_RECORD \
    --input-paths=${YOUR_GCS_BUCKET}/data_dir/inputs.tfr \
    --output-path ${YOUR_OUTPUT_DIR_WO_DEPLOYMENT} \
    --region us-central1 \
    --model-dir ${YOUR_GCS_BUCKET}/model_dir/saved_model \
    --runtime-version=1.2

You should obtain identical prediction results.

Visualize the result

To visualize the prediction results from online or batch prediction, use the object detection model package. It provides a variety of utilities under models/object_detection/utils, in particular visualize_boxes_and_labels_on_image_array(). See the example in this IPython notebook. Here's a sample output:

[Sample output: detection boxes and labels drawn on an image]
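If you'd rather not pull in the full package, here's a dependency-light sketch using PIL that draws the normalized ymin/xmin/ymax/xmax boxes described above (function and variable names are illustrative):

```python
from PIL import Image, ImageDraw

def draw_boxes(image, boxes, scores, min_score=0.5):
    """Draw each normalized (ymin, xmin, ymax, xmax) box whose score passes min_score."""
    draw = ImageDraw.Draw(image)
    width, height = image.size
    for box, score in zip(boxes, scores):
        if score < min_score:
            continue
        ymin, xmin, ymax, xmax = box
        # Scale normalized coordinates up to pixel coordinates.
        draw.rectangle([xmin * width, ymin * height, xmax * width, ymax * height],
                       outline="red")
    return image

# img = Image.open("image1.jpg")
# draw_boxes(img, prediction_boxes, prediction_scores).save("annotated.jpg")
```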

Conclusion

We hope this tutorial has provided you with the basics on how to deploy and serve a trained model on Cloud Machine Learning Engine, and perform prediction by using both online and batch prediction services. For more information, check out our documentation here.

Acknowledgments

This tutorial was created in collaboration with Jonathan Huang from Google Research and Machine Intelligence team and received valuable feedback from Robbie Haertel and Josh Cogan from Google Cloud AI team.