Training ShapeMask on Cloud TPU

This document demonstrates how to run the ShapeMask model using Cloud TPU with the COCO dataset.

The instructions below assume you are already familiar with running a model on Cloud TPU. If you are new to Cloud TPU, you can refer to the Quickstart for a basic introduction.

If you plan to train on a TPU Pod slice, review Training on TPU Pods to understand parameter changes required for Pod slices.

Objectives

  • Create a Cloud Storage bucket to hold your dataset and model output
  • Prepare the COCO dataset
  • Set up a Compute Engine VM and Cloud TPU node for training and evaluation
  • Run training and evaluation on a single Cloud TPU or a Cloud TPU Pod

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine
  • Cloud TPU
  • Cloud Storage

Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

This section provides information on setting up a Cloud Storage bucket and a Compute Engine VM.

  1. Open a Cloud Shell window.


  2. Create a variable for your project's name.

    export PROJECT_NAME=your-project-name
    
  3. Configure gcloud command-line tool to use the project where you want to create Cloud TPU.

    gcloud config set project ${PROJECT_NAME}
    
  4. Create a Cloud Storage bucket using the following command:

    gsutil mb -p ${PROJECT_NAME} -c standard -l europe-west4 -b on gs://your-bucket-name
    

    This Cloud Storage bucket stores the data you use to train your model and the training results.

  5. Launch a Compute Engine VM instance.

    $ gcloud compute instances create shapemask-tutorial \
     --zone europe-west4-a \
     --image-project=ml-images \
     --image=debian-9-tf-1-14-v20190910 \
     --network default \
     --machine-type n1-standard-16 \
     --scopes cloud-platform \
     --boot-disk-size=500GB
    
  6. Connect to the VM instance.

    gcloud compute ssh shapemask-tutorial --zone=europe-west4-a
    

As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

Prepare the COCO dataset

  1. Create a variable to store your Cloud Storage bucket location.

    (vm)$ export STORAGE_BUCKET=your-bucket-name
    
  2. Clone the tpu repository.

    (vm)$ git clone -b shapemask https://github.com/tensorflow/tpu/
    
  3. Install the packages needed to pre-process the data.

    (vm)$ sudo apt-get install -y python-tk && \
      pip install --user Cython matplotlib opencv-python-headless pyyaml Pillow && \
      pip install --user 'git+https://github.com/cocodataset/cocoapi#egg=pycocotools&subdirectory=PythonAPI'
    
  4. Create a directory to store the COCO data and navigate to it.

    (vm)$ mkdir ~/data && \
    mkdir ~/data/coco && \
    cd ~/tpu/tools/datasets
    
  5. Run the download_and_preprocess_coco.sh script to convert the COCO dataset into the set of TFRecords (*.tfrecord) that the training application expects. Because the previous step left you in ~/tpu/tools/datasets, you can run the script directly from the cloned repository:

    (vm)$ sudo bash download_and_preprocess_coco.sh ~/data/coco
    

    This installs the required libraries and then runs the preprocessing script. It outputs a number of *.tfrecord files in your local data directory. The COCO download and conversion script takes approximately 1 hour to complete.

  6. After you convert the data into TFRecords, copy them from local storage to your Cloud Storage bucket using the gsutil command. You must also copy the annotation files, which are used to validate the model's performance:

    (vm)$ gsutil -m cp ~/data/coco/*.tfrecord gs://${STORAGE_BUCKET}/coco && \
    gsutil cp ~/data/coco/raw-data/annotations/*.json gs://${STORAGE_BUCKET}/coco
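
    You can sanity-check the conversion output before relying on the copy. The helper below is a sketch, not part of the tpu repository; it assumes the conversion wrote to the directory you pass in (adjust the path to wherever the script placed its output) and checks for the validation annotations and at least one TFRecord shard:

    ```shell
    # Hypothetical helper (not part of the tpu repository): verify that the
    # COCO conversion produced the files the later steps depend on.
    check_coco_output() {
      local data_dir="$1"
      # The evaluation step needs the val2017 annotations JSON.
      if [ ! -f "${data_dir}/raw-data/annotations/instances_val2017.json" ]; then
        echo "missing validation annotations" >&2
        return 1
      fi
      # The training step needs at least one TFRecord shard.
      if ! ls "${data_dir}"/*.tfrecord >/dev/null 2>&1; then
        echo "no TFRecord shards found" >&2
        return 1
      fi
      echo "COCO output looks complete"
    }

    # Usage (after the conversion script finishes), for example:
    #   check_coco_output ~/data/coco
    ```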
    

Training the models

  1. Launch a Cloud TPU resource.

    $ gcloud compute tpus create shapemask-tutorial \
     --range=10.4.27.0 \
     --accelerator-type v3-8 \
     --version nightly \
     --network=default \
     --zone=europe-west4-a
    
  2. Export the following environment variables. The export is required so that child processes see them; in particular, PYTHONPATH must be visible to the Python training script:

    (vm)$ export TPU_NAME=shapemask-tutorial; \
    export MODEL_DIR=gs://${STORAGE_BUCKET}/shapemask_exp/; \
    export RESNET_CHECKPOINT=gs://cloud-tpu-checkpoints/shapemask/retinanet/resnet101-checkpoint-2018-02-24; \
    export TRAIN_FILE_PATTERN=gs://${STORAGE_BUCKET}/coco/train-*; \
    export EVAL_FILE_PATTERN=gs://${STORAGE_BUCKET}/coco/val-*; \
    export VAL_JSON_FILE=gs://${STORAGE_BUCKET}/coco/instances_val2017.json; \
    export SHAPE_PRIOR_PATH=gs://cloud-tpu-checkpoints/shapemask/kmeans_class_priors_91x20x32x32.npy; \
    export PYTHONPATH=${PYTHONPATH}:~/tpu/models
    
  3. Run the training script:

    (vm)$ python ~/tpu/models/official/detection/main.py \
    --model shapemask \
    --use_tpu=True \
    --tpu=${TPU_NAME} \
    --num_cores=8 \
    --model_dir="${MODEL_DIR}" \
    --mode="train" \
    --eval_after_training=True \
    --params_override="{train: {iterations_per_loop: 1000, train_batch_size: 64, total_steps: 1000, learning_rate: {total_steps: 1000, warmup_learning_rate: 0.0067, warmup_steps: 500, init_learning_rate: 0.08, learning_rate_levels: [0.008, 0.0008], learning_rate_steps: [30000, 40000]}, checkpoint: { path: ${RESNET_CHECKPOINT}, prefix: resnet101/ }, train_file_pattern: ${TRAIN_FILE_PATTERN} }, resnet: {resnet_depth: 101}, eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}, eval_samples: 5000 }, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}}, shapemask_parser: {output_size: [1024, 1024]}, }"
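
    The --params_override string above is a single long YAML-like map and is easy to break while editing. The sketch below, assuming a bash shell (check_balanced is a hypothetical helper, and balanced delimiters do not imply the override is semantically valid), counts opening and closing braces and brackets before you launch a job:

    ```shell
    # Hypothetical helper: catches the most common copy-paste mistakes in a
    # long override string by comparing delimiter counts.
    check_balanced() {
      local s="$1" open close
      open=$(printf '%s' "$s" | tr -cd '{' | wc -c)
      close=$(printf '%s' "$s" | tr -cd '}' | wc -c)
      if [ "$open" -ne "$close" ]; then
        echo "unbalanced braces: ${open} open, ${close} close" >&2
        return 1
      fi
      open=$(printf '%s' "$s" | tr -cd '[' | wc -c)
      close=$(printf '%s' "$s" | tr -cd ']' | wc -c)
      if [ "$open" -ne "$close" ]; then
        echo "unbalanced brackets: ${open} open, ${close} close" >&2
        return 1
      fi
      echo "balanced"
    }

    check_balanced "{train: {total_steps: 1000}, eval: {eval_samples: 5000}}"
    # prints "balanced"
    ```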
    

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit
    

    Your prompt should now be user@projectname, showing you are in the Cloud Shell.

  2. In your Cloud Shell, run ctpu delete with the --zone flag you used when you set up the Cloud TPU and the --name flag matching the resources you created, to delete your Compute Engine VM and your Cloud TPU:

    $ ctpu delete --zone=europe-west4-a --name=shapemask-tutorial
    
  3. Run the following command to verify the Compute Engine VM and Cloud TPU have been shut down:

    $ ctpu status --zone=europe-west4-a --name=shapemask-tutorial
    

    The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:

    2018/04/28 16:16:23 WARNING: Setting zone to "europe-west4-a"
    No instances currently exist.
            Compute Engine VM:     --
            Cloud TPU:             --
    
  4. Run gsutil as shown, replacing your-bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://your-bucket-name
    

What's next

Train with different image sizes

You can explore training with a larger input image and a larger backbone network (for example, ResNet-101 instead of ResNet-50). A larger input image and a more powerful backbone yield a slower but more precise model.
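
As an illustration of where those knobs live, the fields involved are the same ones already present in the training command's --params_override string. This is a config fragment only, not a complete command, and the values shown are illustrative assumptions:

```shell
# Illustrative fragment: resnet_depth selects the backbone and
# output_size sets the input image resolution (values are examples).
--params_override="{resnet: {resnet_depth: 50}, shapemask_parser: {output_size: [640, 640]}}"
```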

Use a different basis

Alternatively, you can explore pre-training a ResNet model on your own dataset and using it as a basis for your ShapeMask model. With some more work, you can also swap in an alternative backbone network in place of ResNet. Finally, if you are interested in implementing your own object detection models, this network may be a good basis for further experimentation.
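
If you go that route, the checkpoint section of the training override is the natural place to point at your own weights. The fragment below is hypothetical: the path is a placeholder, and the prefix must match the variable scope used in your checkpoint:

```shell
# Hypothetical fragment: path is a placeholder for your own pre-trained
# ResNet checkpoint; prefix must match its variable naming.
--params_override="{train: {checkpoint: {path: gs://your-bucket-name/pretrained-resnet/model.ckpt, prefix: resnet101/}}}"
```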