Training ShapeMask on Cloud TPU

This document demonstrates how to run the ShapeMask model using Cloud TPU with the COCO dataset.

The instructions below assume you are already familiar with running a model on Cloud TPU. If you are new to Cloud TPU, you can refer to the Quickstart for a basic introduction.

If you plan to train on a TPU Pod slice, review Training on TPU Pods to understand parameter changes required for Pod slices.

Objectives

  • Create a Cloud Storage bucket to hold your dataset and model output
  • Prepare the COCO dataset
  • Set up a Compute Engine VM and Cloud TPU node for training and evaluation
  • Run training and evaluation on a single Cloud TPU or a Cloud TPU Pod

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine
  • Cloud TPU
  • Cloud Storage

Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

This section provides information on setting up a Cloud Storage bucket and a Compute Engine VM.

  1. Open a Cloud Shell window.

    Open Cloud Shell

  2. Create a variable for your project's name.

    export PROJECT_NAME=project-name
  3. Configure the gcloud command-line tool to use the project where you want to create your Cloud TPU.

    gcloud config set project ${PROJECT_NAME}
  4. Create a Cloud Storage bucket using the following command:

    gsutil mb -p ${PROJECT_NAME} -c standard -l europe-west4 -b on gs://bucket-name

    This Cloud Storage bucket stores the data you use to train your model and the training results.

  5. Launch a Compute Engine VM instance.

    $ ctpu up --zone=europe-west4-a \
     --vm-only \
     --disk-size-gb=300 \
     --machine-type=n1-standard-16 \
     --tf-version=1.15 \
     --name=shapemask-tutorial
  6. The configuration you specified appears. Enter y to approve or n to cancel.

  7. When the ctpu up command has finished executing, verify that your shell prompt has changed from username@projectname to username@vm-name. This change shows that you are now logged into your Compute Engine VM. If you are not connected to the instance, connect by running:

    gcloud compute ssh shapemask-tutorial --zone=europe-west4-a

    As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

Prepare the COCO dataset

  1. Create an environment variable to store your Cloud Storage bucket location.

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
  2. Create an environment variable for the data directory.

    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/coco
  3. Clone the tpu repository.

    (vm)$ git clone -b shapemask https://github.com/tensorflow/tpu/
  4. Install the packages needed to pre-process the data.

    (vm)$ sudo apt-get install -y python3-tk && \
      pip3 install --user Cython matplotlib opencv-python-headless pyyaml Pillow && \
      pip3 install --user "git+https://github.com/cocodataset/cocoapi#subdirectory=PythonAPI"
  5. Run the script to convert the COCO dataset into a set of TFRecords (*.tfrecord) that the training application expects.

    (vm)$ sudo bash /usr/share/tpu/tools/datasets/download_and_preprocess_coco.sh ./data/dir/coco

    This installs the required libraries and then runs the preprocessing script. It outputs a number of *.tfrecord files in your local data directory.

  6. After you convert the data into TFRecords, copy them from local storage to your Cloud Storage bucket using the gsutil command. You must also copy the annotation files. These files help validate the model's performance:

    (vm)$ gsutil -m cp ./data/dir/coco/*.tfrecord ${DATA_DIR}
    (vm)$ gsutil cp ./data/dir/coco/raw-data/annotations/*.json ${DATA_DIR}
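Before moving on, it can be worth confirming that the shards actually landed in the bucket. This is an optional sketch using the same gsutil tool; the exact shard names depend on the conversion script's output:

```shell
# List a few of the copied TFRecord shards and the annotation files.
(vm)$ gsutil ls ${DATA_DIR}/*.tfrecord | head -n 5
(vm)$ gsutil ls ${DATA_DIR}/*.json
```

If either listing comes back empty, re-run the copy step before starting training.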

Training the models

  1. Launch a Cloud TPU resource.

    Run the following command to create your Cloud TPU.

    (vm)$ ctpu up --tpu-only --zone=europe-west4-a --tf-version=1.15 --name=shapemask-tutorial
  2. Set up the following environment variables:

    (vm)$ export TPU_NAME=shapemask-tutorial
    (vm)$ export MODEL_DIR=${STORAGE_BUCKET}/shapemask_exp
    (vm)$ export RESNET_CHECKPOINT=gs://cloud-tpu-checkpoints/shapemask/retinanet/resnet101-checkpoint-2018-02-24
    (vm)$ export TRAIN_FILE_PATTERN=${DATA_DIR}/train-*
    (vm)$ export EVAL_FILE_PATTERN=${DATA_DIR}/val-*
    (vm)$ export VAL_JSON_FILE=${DATA_DIR}/instances_val2017.json
    (vm)$ export SHAPE_PRIOR_PATH=gs://cloud-tpu-checkpoints/shapemask/kmeans_class_priors_91x20x32x32.npy
    (vm)$ export PYTHONPATH=${PYTHONPATH}:$HOME/tpu/models
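Because the training command below fails in opaque ways if any of these variables is empty, a small bash helper can check them first. This is an optional sketch, not part of the official tutorial; it only uses the variable names exported above:

```shell
# check_vars VAR1 VAR2 ...  -> prints any exported variable that is unset
# or empty, and returns non-zero if at least one is missing.
check_vars() {
  missing=0
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "unset: $v"
      missing=1
    fi
  done
  return $missing
}

# Example:
#   check_vars TPU_NAME MODEL_DIR RESNET_CHECKPOINT TRAIN_FILE_PATTERN \
#              EVAL_FILE_PATTERN VAL_JSON_FILE SHAPE_PRIOR_PATH
```

Using printenv (rather than indirect expansion) keeps the helper portable and only checks variables that were actually exported.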
  3. Run the training script:

    (vm)$ python3 ~/tpu/models/official/detection/main.py \
    --model shapemask \
    --use_tpu=True \
    --tpu=${TPU_NAME} \
    --num_cores=8 \
    --model_dir="${MODEL_DIR}" \
    --mode="train" \
    --eval_after_training=True \
    --params_override="{train: {iterations_per_loop: 1000, train_batch_size: 64, total_steps: 1000, learning_rate: {total_steps: 1000, warmup_learning_rate: 0.0067, warmup_steps: 500, init_learning_rate: 0.08, learning_rate_levels: [0.008, 0.0008], learning_rate_steps: [30000, 40000]}, checkpoint: { path: ${RESNET_CHECKPOINT}, prefix: resnet101/ }, train_file_pattern: ${TRAIN_FILE_PATTERN} }, resnet: {resnet_depth: 101}, eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}, eval_samples: 5000 }, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}}, shapemask_parser: {output_size: [1024, 1024]}, }"
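After training completes, you can also run evaluation on its own against the checkpoints written to ${MODEL_DIR}. The sketch below assumes the detection entry point is main.py and that it accepts mode="eval" with the same evaluation parameters used above; treat the exact flags as illustrative rather than authoritative:

```shell
# Evaluate the latest checkpoint in MODEL_DIR on the COCO validation split.
(vm)$ python3 ~/tpu/models/official/detection/main.py \
  --model shapemask \
  --use_tpu=True \
  --tpu=${TPU_NAME} \
  --num_cores=8 \
  --model_dir="${MODEL_DIR}" \
  --mode="eval" \
  --params_override="{eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}, eval_samples: 5000 } }"
```

Running evaluation separately lets you re-score a finished run without repeating training.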

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  2. In your Cloud Shell, run ctpu delete with the --zone flag you used when you set up the Cloud TPU to delete your Compute Engine VM and your Cloud TPU:

    $ ctpu delete --zone=europe-west4-a --name=shapemask-tutorial
  3. Run the following command to verify the Compute Engine VM and Cloud TPU have been shut down:

    $ ctpu status --zone=europe-west4-a --name=shapemask-tutorial

    The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:

    2018/04/28 16:16:23 WARNING: Setting zone to "europe-west4-a"
    No instances currently exist.
       Compute Engine VM:     --
       Cloud TPU:             --
  4. Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://bucket-name
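As a final cross-check, you can confirm that nothing from this tutorial is still allocated. These are optional verification commands run from Cloud Shell:

```shell
# The instance list should be empty, and gsutil should report that the
# bucket no longer exists.
$ gcloud compute instances list --zones=europe-west4-a
$ gsutil ls gs://bucket-name
```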

What's next

Train with different image sizes

You can explore training with a larger input image together with a larger backbone network (for example, ResNet-101 instead of ResNet-50). A larger input image and a more powerful backbone yield a slower but more precise model.
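As a hypothetical variation of the training command above (not an officially tested configuration), you could instead shrink the input size and backbone for a faster, less precise run by changing only the relevant fragment of params_override:

```shell
# ResNet-50 at 512x512 instead of ResNet-101 at 1024x1024; all other
# flags stay the same as in the training command.
--params_override="{ resnet: {resnet_depth: 50}, shapemask_parser: {output_size: [512, 512]} }"
```

Note that RESNET_CHECKPOINT and its resnet101/ prefix in the checkpoint override would also need to point at a matching ResNet-50 checkpoint.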

Use a different basis

Alternatively, you can explore pre-training a ResNet model on your own dataset and using it as a basis for your ShapeMask model. With some more work, you can also swap in an alternative backbone network in place of ResNet. Finally, if you are interested in implementing your own object detection models, this network may be a good basis for further experimentation.