Training ShapeMask on Cloud TPU (TF 2.x)


This document demonstrates how to run the ShapeMask model using Cloud TPU with the COCO dataset.

The instructions below assume you are already familiar with running a model on Cloud TPU. If you are new to Cloud TPU, you can refer to the Quickstart for a basic introduction.

If you plan to train on a TPU Pod slice, review Training on TPU Pods to understand parameter changes required for Pod slices.

Objectives

  • Prepare the COCO dataset
  • Create a Cloud Storage bucket to hold your dataset and model output
  • Set up TPU resources for training and evaluation
  • Run training and evaluation on a single Cloud TPU or a Cloud TPU Pod

Costs

In this document, you use the following billable components of Google Cloud:

  • Compute Engine
  • Cloud TPU
  • Cloud Storage

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

Before starting this tutorial, check that your Google Cloud project is correctly set up.

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. This walkthrough uses billable components of Google Cloud. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up resources you create when you've finished with them to avoid unnecessary charges.

Cloud TPU single device training

This section provides information on setting up Cloud Storage, VM, and Cloud TPU resources for single device training.

If you plan to train on a TPU Pod slice, review Training on TPU Pods to understand the changes required to train on Pod slices.

  1. In your Cloud Shell, create a variable for your project's ID.

    export PROJECT_ID=project-id
    
  2. Configure Google Cloud CLI to use the project where you want to create Cloud TPU.

    gcloud config set project ${PROJECT_ID}
    

    The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make GCP API calls with your credentials.

  3. Create a Service Account for the Cloud TPU project.

    gcloud beta services identity create --service tpu.googleapis.com --project $PROJECT_ID
    

    The command returns a Cloud TPU Service Account with the following format:

    service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com
    

Prepare the COCO dataset

This tutorial uses the COCO dataset. The dataset must be in TFRecord format on a Cloud Storage bucket to be used for training.

The bucket location must be in the same region as your virtual machine (VM) and your TPU node. VMs and TPU nodes are located in specific zones, which are subdivisions within a region.
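
To check the location of an existing bucket, you can inspect its metadata with gsutil; the output includes a Location constraint field. Replace bucket-name with your bucket:

    gsutil ls -L -b gs://bucket-name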

The Cloud Storage bucket stores the data you use to train your model and the training results. The gcloud compute tpus execution-groups tool used in this tutorial sets up default permissions for the Cloud TPU Service Account you set up in the previous step. If you want finer-grained permissions, review the access level permissions.
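
For example, one way to grant the Cloud TPU Service Account read and write access to your bucket yourself is with gsutil iam; this is a sketch, substituting the service account returned in the earlier step and your bucket name:

    gsutil iam ch serviceAccount:service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com:roles/storage.objectAdmin gs://bucket-name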

If you already have the COCO dataset prepared on a Cloud Storage bucket that is located in the region you will be using to train the model, you can launch the TPU resources and prepare Cloud TPU for training. Otherwise, use the following steps to prepare the dataset.

  1. In your Cloud Shell, configure gcloud with your project ID.

    export PROJECT_ID=project-id
    gcloud config set project ${PROJECT_ID}
    
  2. In your Cloud Shell, create a Cloud Storage bucket using the following command:

    gsutil mb -p ${PROJECT_ID} -c standard -l europe-west4 gs://bucket-name
    
  3. Launch a Compute Engine VM instance.

    This VM instance is used only to download and preprocess the COCO dataset. Replace instance-name with a name of your choosing.

    $ gcloud compute tpus execution-groups create \
     --vm-only \
     --name=instance-name \
     --zone=europe-west4-a \
     --disk-size=300 \
     --machine-type=n1-standard-16 \
     --tf-version=2.12.0
    

    Command flag descriptions

    vm-only
    Create a VM only. By default the gcloud compute tpus execution-groups command creates a VM and a Cloud TPU.
    name
    The name of the Cloud TPU to create.
    zone
    The zone where you plan to create your Cloud TPU.
    disk-size
    The size of the hard disk in GB of the VM created by the gcloud compute tpus execution-groups command.
    machine-type
    The machine type of the Compute Engine VM to create.
    tf-version
    The version of TensorFlow that gcloud compute tpus execution-groups installs on the VM.
  4. If you are not automatically logged in to the Compute Engine instance, log in by running the following ssh command. When you are logged into the VM, your shell prompt changes from username@projectname to username@vm-name:

      $ gcloud compute ssh instance-name --zone=europe-west4-a
      

  5. Set up two variables, one for the storage bucket you created earlier and one for the directory that holds the training data (DATA_DIR) on the storage bucket.

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/coco
  6. Install the packages needed to pre-process the data.

    (vm)$ sudo apt-get install -y python3-tk && \
      pip3 install --user Cython matplotlib opencv-python-headless pyyaml Pillow && \
      pip3 install --user "git+https://github.com/cocodataset/cocoapi#egg=pycocotools&subdirectory=PythonAPI"
    
  7. Run the download_and_preprocess_coco.sh script to convert the COCO dataset into a set of TFRecords (*.tfrecord) that the training application expects.

    (vm)$ git clone https://github.com/tensorflow/tpu.git
    (vm)$ sudo bash tpu/tools/datasets/download_and_preprocess_coco.sh ./data/dir/coco
    

    This installs the required libraries and then runs the preprocessing script. It outputs a number of *.tfrecord files in your local data directory. The COCO download and conversion script takes approximately 1 hour to complete.
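
    When the script finishes, you can count the generated shards before uploading them; this is a quick sanity check, using the same local directory passed to the script above:

    (vm)$ ls ./data/dir/coco/*.tfrecord | wc -l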

  8. Copy the data to your Cloud Storage bucket

    After you convert the data into TFRecords, copy them from local storage to your Cloud Storage bucket using the gsutil command. You must also copy the annotation files. These files help validate the model's performance.

    (vm)$ gsutil -m cp ./data/dir/coco/*.tfrecord ${DATA_DIR}
    (vm)$ gsutil cp ./data/dir/coco/raw-data/annotations/*.json ${DATA_DIR}
    
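
    To verify the copy, you can list the destination; the output should include the *.tfrecord shards and the *.json annotation files:

    (vm)$ gsutil ls ${DATA_DIR}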
  9. Clean up the VM resources

    Once the COCO dataset has been converted to TFRecords and copied to the DATA_DIR on your Cloud Storage bucket, you can delete the Compute Engine instance.

    Disconnect from the Compute Engine instance:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  10. Delete your Compute Engine instance.

      $ gcloud compute instances delete instance-name \
        --zone=europe-west4-a
      

Launch the TPU resources and train the model

  1. Use the gcloud command to launch the TPU resources. The command you use depends on whether you are using TPU VMs or TPU nodes. For more information on the two VM architectures, see System Architecture.

    TPU VM

    $ gcloud compute tpus tpu-vm create shapemask-tutorial \
    --zone=europe-west4-a \
    --accelerator-type=v3-8 \
    --version=tpu-vm-tf-2.16.1-pjrt
    

    Command flag descriptions

    zone
    The zone where you plan to create your Cloud TPU.
    accelerator-type
    The type of the Cloud TPU to create.
    version
    The Cloud TPU software version.

    TPU Node

    $ gcloud compute tpus execution-groups create  \
     --zone=europe-west4-a \
     --name=shapemask-tutorial \
     --accelerator-type=v3-8 \
     --machine-type=n1-standard-8 \
     --disk-size=300 \
     --tf-version=2.12.0
    

    Command flag descriptions

    zone
    The zone where you plan to create your Cloud TPU.
    name
    The TPU name. If not specified, defaults to your username.
    accelerator-type
    The type of the Cloud TPU to create.
    machine-type
    The machine type of the Compute Engine VM to create.
    disk-size
    The root volume size of your Compute Engine VM (in GB).
    tf-version
    The version of TensorFlow that gcloud installs on the VM.

    For more information on the gcloud command, see the gcloud Reference.

  2. If you are not automatically logged in to the Compute Engine instance, log in by running the following ssh command. When you are logged into the VM, your shell prompt changes from username@projectname to username@vm-name:

    TPU VM

    gcloud compute tpus tpu-vm ssh shapemask-tutorial --zone=europe-west4-a
    

    TPU Node

    gcloud compute ssh shapemask-tutorial --zone=europe-west4-a
    

    As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

  3. Install TensorFlow requirements.

    TPU VM

    (vm)$ pip3 install -r /usr/share/tpu/models/official/requirements.txt
    

    TPU Node

    (vm)$ pip3 install -r /usr/share/models/official/requirements.txt
    
  4. The training script requires an extra package. Install it now:

    TPU VM

    (vm)$ pip3 install --user "tensorflow-model-optimization>=0.1.3"
    

    TPU Node

    (vm)$ pip3 install --user "tensorflow-model-optimization>=0.1.3"
    
  5. Set the storage bucket name variable. Replace bucket-name with the name of your storage bucket:

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    
  6. Set the Cloud TPU name variable.

    TPU VM

    (vm)$ export TPU_NAME=local
    

    TPU Node

    (vm)$ export TPU_NAME=shapemask-tutorial
    
  7. Set the PYTHONPATH environment variable:

    TPU VM

    (vm)$ export PYTHONPATH="/usr/share/tpu/models:${PYTHONPATH}"
    

    TPU Node

    (vm)$ export PYTHONPATH="${PYTHONPATH}:/usr/share/models"
    
  8. Change to the directory that stores the model:

    TPU VM

    (vm)$ cd /usr/share/tpu/models/official/legacy/detection
    

    TPU Node

    (vm)$ cd /usr/share/models/official/legacy/detection
    
  9. Add some required environment variables:

    (vm)$ export RESNET_CHECKPOINT=gs://cloud-tpu-checkpoints/retinanet/resnet50-checkpoint-2018-02-07
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/coco
    (vm)$ export TRAIN_FILE_PATTERN=${DATA_DIR}/train-*
    (vm)$ export EVAL_FILE_PATTERN=${DATA_DIR}/val-*
    (vm)$ export VAL_JSON_FILE=${DATA_DIR}/instances_val2017.json
    (vm)$ export SHAPE_PRIOR_PATH=gs://cloud-tpu-checkpoints/shapemask/kmeans_class_priors_91x20x32x32.npy
    (vm)$ export MODEL_DIR=${STORAGE_BUCKET}/shapemask
    
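
    Before launching training, you can optionally confirm that the pretrained checkpoint and the shape prior file referenced above are readable; a quick check with gsutil:

    (vm)$ gsutil ls "${RESNET_CHECKPOINT}*"
    (vm)$ gsutil ls ${SHAPE_PRIOR_PATH}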
  10. When creating your TPU, if you set the --version parameter to a version ending with -pjrt, set the following environment variables to enable the PJRT runtime:

      (vm)$ export NEXT_PLUGGABLE_DEVICE_USE_C_API=true
      (vm)$ export TF_PLUGGABLE_DEVICE_LIBRARY_PATH=/lib/libtpu.so
    
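
      You can confirm that the libtpu shared library exists at the path set above; a quick check on the TPU VM:

      (vm)$ ls -l /lib/libtpu.so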
  11. Train the ShapeMask model:

    The following command runs a sample training of just 100 steps, which takes approximately 10 minutes to complete on a v3-8 TPU. Training to convergence takes about 22,500 steps and approximately 6 hours on a v3-8 TPU.

    (vm)$ python3 main.py \
      --strategy_type=tpu \
      --tpu=${TPU_NAME} \
      --model_dir=${MODEL_DIR} \
      --mode=train \
      --model=shapemask \
      --params_override="{train: {total_steps: 100, learning_rate: {init_learning_rate: 0.08, learning_rate_levels: [0.008, 0.0008], learning_rate_steps: [15000, 20000], }, checkpoint: { path: ${RESNET_CHECKPOINT},prefix: resnet50}, train_file_pattern: ${TRAIN_FILE_PATTERN}}, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}}, shapemask_parser: {output_size: [640, 640]}}"
    

    Command flag descriptions

    strategy_type
    To train the ShapeMask model on a TPU, you must set strategy_type to tpu.
    tpu
    The name of the Cloud TPU. This is set using the TPU_NAME environment variable.
    model_dir
    The directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints as long as the previous checkpoints were created using a Cloud TPU of the same size and TensorFlow version.
    mode
    Set this to train to train the model or eval to evaluate the model.
    params_override
    A JSON string that overrides default script parameters. For more information on script parameters, see /usr/share/models/official/legacy/detection/main.py.

    When the training completes, a message similar to the following appears:

    Train Step: 100/100  / loss = {'total_loss': 10.815635681152344,
    'loss': 10.815635681152344, 'retinanet_cls_loss': 1.4915691614151,
    'l2_regularization_loss': 4.483549118041992,
    'retinanet_box_loss': 0.013074751943349838,
    'shapemask_prior_loss': 0.17314358055591583,
    'shapemask_coarse_mask_loss': 1.953366756439209,
    'shapemask_fine_mask_loss': 2.216097831726074, 'model_loss': 6.332086086273193,
    'learning_rate': 0.021359999} / training metric = {'total_loss': 10.815635681152344,
    'loss': 10.815635681152344, 'retinanet_cls_loss': 1.4915691614151,
    'l2_regularization_loss': 4.483549118041992,
    'retinanet_box_loss': 0.013074751943349838,
    'shapemask_prior_loss': 0.17314358055591583,
    'shapemask_coarse_mask_loss': 1.953366756439209,
    'shapemask_fine_mask_loss': 2.216097831726074,
    'model_loss': 6.332086086273193, 'learning_rate': 0.021359999}
    
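
    To watch progress during a longer run, you can point TensorBoard at the model directory, since checkpoints and summaries are written to ${MODEL_DIR}; a minimal sketch, assuming TensorBoard is installed (it ships with TensorFlow and can read gs:// paths):

    (vm)$ tensorboard --logdir=${MODEL_DIR}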
  12. Run the script to evaluate the ShapeMask model. This takes about 10 minutes on a v3-8 TPU:

    (vm)$ python3 main.py \
        --strategy_type=tpu \
        --tpu=${TPU_NAME} \
        --model_dir=${MODEL_DIR} \
        --checkpoint_path=${MODEL_DIR} \
        --mode=eval_once \
        --model=shapemask \
        --params_override="{eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}, eval_samples: 5000 }, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}}, shapemask_parser: {output_size: [640, 640]}}"
    

    Command flag descriptions

    strategy_type
    To run the ShapeMask model on a TPU, you must set strategy_type to tpu.
    tpu
    The name of the Cloud TPU. This is set using the TPU_NAME environment variable.
    model_dir
    The directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints as long as the previous checkpoints were created using a Cloud TPU of the same size and TensorFlow version.
    mode
    Set this to train to train the model or eval to evaluate the model.
    params_override
    A JSON string that overrides default script parameters. For more information on script parameters, see /usr/share/models/official/legacy/detection/main.py.

    When the evaluation completes, a message similar to the following appears:

    DONE (t=5.47s).
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
    

    You have now completed single-device training and evaluation. Use the following steps to delete the current single-device TPU resources.

  13. Disconnect from the Compute Engine instance:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  14. Delete the TPU resource.

    TPU VM

    $ gcloud compute tpus tpu-vm delete shapemask-tutorial \
    --zone=europe-west4-a
    

    Command flag descriptions

    zone
    The zone where your Cloud TPU resided.

    TPU Node

    $ gcloud compute tpus execution-groups delete shapemask-tutorial \
    --tpu-only \
    --zone=europe-west4-a
    

    Command flag descriptions

    tpu-only
    Deletes only the Cloud TPU. The VM remains available.
    zone
    The zone that contains the TPU to delete.

    At this point, you can either conclude this tutorial and clean up, or you can continue and explore running the model on Cloud TPU Pods.

Scale your model with Cloud TPU Pods

Training your model on Cloud TPU Pods may require some changes to your training script. For information, see Training on TPU Pods.

TPU Pod training

  1. Open a Cloud Shell window.

    Open Cloud Shell

  2. Create a variable for your project's ID.

    export PROJECT_ID=project-id
    
  3. Configure Google Cloud CLI to use the project where you want to create Cloud TPU.

    gcloud config set project ${PROJECT_ID}
    

    The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make GCP API calls with your credentials.

  4. Create a Service Account for the Cloud TPU project.

    Service accounts allow the Cloud TPU service to access other Google Cloud Platform services.

    gcloud beta services identity create --service tpu.googleapis.com --project $PROJECT_ID
    

    The command returns a Cloud TPU Service Account with the following format:

    service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com
    

  5. Create a Cloud Storage bucket using the following command or use a bucket you created earlier for your project.

    gsutil mb -p ${PROJECT_ID} -c standard -l europe-west4 gs://bucket-name
    
  6. If you previously prepared the COCO dataset and moved it to your storage bucket, you can use it again for Pod training. If you have not yet prepared the COCO dataset, prepare it now and return here to set up the Pod training.

  7. Launch a Cloud TPU Pod

    This tutorial specifies a v3-32 Pod. For other Pod options, see the available TPU types page.

    TPU VM

    $ gcloud compute tpus tpu-vm create shapemask-tutorial \
    --zone=europe-west4-a \
    --accelerator-type=v3-32 \
    --version=tpu-vm-tf-2.16.1-pod-pjrt
    

    Command flag descriptions

    zone
    The zone where you plan to create your Cloud TPU.
    accelerator-type
    The type of the Cloud TPU to create.
    version
    The Cloud TPU software version.

    TPU Node

    $ gcloud compute tpus execution-groups create  \
     --zone=europe-west4-a \
     --name=shapemask-tutorial \
     --accelerator-type=v3-32 \
     --machine-type=n1-standard-8 \
     --disk-size=300 \
     --tf-version=2.12.0
    

    Command flag descriptions

    zone
    The zone where you plan to create your Cloud TPU.
    name
    The TPU name. If not specified, defaults to your username.
    accelerator-type
    The type of the Cloud TPU to create.
    machine-type
    The machine type of the Compute Engine VM to create.
    disk-size
    The root volume size of your Compute Engine VM (in GB).
    tf-version
    The version of TensorFlow that gcloud installs on the VM.
  8. If you are not automatically logged in to the Compute Engine instance, log in by running the following ssh command. When you are logged into the VM, your shell prompt changes from username@projectname to username@vm-name:

    TPU VM

    gcloud compute tpus tpu-vm ssh shapemask-tutorial --zone=europe-west4-a
    

    TPU Node

    gcloud compute ssh shapemask-tutorial --zone=europe-west4-a
    

    As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

  9. Install TensorFlow requirements.

    TPU VM

    (vm)$ pip3 install -r /usr/share/tpu/models/official/requirements.txt
    

    TPU Node

    (vm)$ pip3 install -r /usr/share/models/official/requirements.txt
    
  10. The training script requires an extra package. Install it now:

    TPU VM

    (vm)$ pip3 install --user "tensorflow-model-optimization>=0.1.3"
    

    TPU Node

    (vm)$ pip3 install --user "tensorflow-model-optimization>=0.1.3"
    
  11. Set up the following environment variables, replacing bucket-name with the name of your Cloud Storage bucket:

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    

    The training application expects your training data to be accessible in Cloud Storage. The training application also uses your Cloud Storage bucket to store checkpoints during training.

  12. Update the required training variables. The training command in a later step reads the Cloud TPU name from the TPU_NAME variable, so set it here along with the others; it matches the name you gave the TPU when you created it.

    (vm)$ export TPU_NAME=shapemask-tutorial
    (vm)$ export MODEL_DIR=${STORAGE_BUCKET}/shapemask-pods
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/coco
    (vm)$ export RESNET_CHECKPOINT=gs://cloud-tpu-checkpoints/retinanet/resnet50-checkpoint-2018-02-07
    (vm)$ export TRAIN_FILE_PATTERN=${DATA_DIR}/train-*
    (vm)$ export EVAL_FILE_PATTERN=${DATA_DIR}/val-*
    (vm)$ export VAL_JSON_FILE=${DATA_DIR}/instances_val2017.json
    (vm)$ export SHAPE_PRIOR_PATH=gs://cloud-tpu-checkpoints/shapemask/kmeans_class_priors_91x20x32x32.npy
    
  13. Set some required environment variables:

    TPU VM

    (vm)$ export PYTHONPATH="/usr/share/tpu/models:${PYTHONPATH}"
    (vm)$ export TPU_LOAD_LIBRARY=0
    

    TPU Node

    (vm)$ export PYTHONPATH="${PYTHONPATH}:/usr/share/models"
    
  14. Change to the directory that stores the model:

    TPU VM

    (vm)$ cd /usr/share/tpu/models/official/legacy/detection
    

    TPU Node

    (vm)$ cd /usr/share/models/official/legacy/detection
    
  15. Start the Pod training.

    The sample training runs for just 20 steps and takes approximately 10 minutes to complete on a v3-32 TPU. Training to convergence takes about 11,250 steps and approximately 2 hours on a v3-32 TPU Pod.

    (vm)$ python3 main.py \
     --strategy_type=tpu \
     --tpu=${TPU_NAME} \
     --model_dir=${MODEL_DIR} \
     --mode=train \
     --model=shapemask \
     --params_override="{train: { batch_size: 128, iterations_per_loop: 500, total_steps: 20, learning_rate: {'learning_rate_levels': [0.008, 0.0008], 'learning_rate_steps': [10000, 13000] }, checkpoint: { path: ${RESNET_CHECKPOINT}, prefix: resnet50/ }, train_file_pattern: ${TRAIN_FILE_PATTERN} }, eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}}, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}} }"
    

    Command flag descriptions

    strategy_type
    To train the ShapeMask model on a TPU, you must set strategy_type to tpu.
    tpu
    The name of the Cloud TPU. This is set using the TPU_NAME environment variable.
    model_dir
    The directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints as long as the previous checkpoints were created using a Cloud TPU of the same size and TensorFlow version.
    mode
    Set this to train to train the model or eval to evaluate the model.
    params_override
    A JSON string that overrides default script parameters. For more information on script parameters, see /usr/share/models/official/legacy/detection/main.py.
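
    The command above runs only the 20-step sample. To train to convergence (about 11,250 steps, per the estimate above), you can raise total_steps in the same override; a sketch with all other values unchanged:

    (vm)$ python3 main.py \
     --strategy_type=tpu \
     --tpu=${TPU_NAME} \
     --model_dir=${MODEL_DIR} \
     --mode=train \
     --model=shapemask \
     --params_override="{train: { batch_size: 128, iterations_per_loop: 500, total_steps: 11250, learning_rate: {'learning_rate_levels': [0.008, 0.0008], 'learning_rate_steps': [10000, 13000] }, checkpoint: { path: ${RESNET_CHECKPOINT}, prefix: resnet50/ }, train_file_pattern: ${TRAIN_FILE_PATTERN} }, eval: { val_json_file: ${VAL_JSON_FILE}, eval_file_pattern: ${EVAL_FILE_PATTERN}}, shapemask_head: {use_category_for_mask: true, shape_prior_path: ${SHAPE_PRIOR_PATH}} }"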

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  2. Delete your Cloud TPU and Compute Engine resources. The command you use to delete your resources depends upon whether you are using TPU VMs or TPU Nodes. For more information, see System Architecture.

    TPU VM

    $ gcloud compute tpus tpu-vm delete shapemask-tutorial \
    --zone=europe-west4-a
    

    TPU Node

    $ gcloud compute tpus execution-groups delete shapemask-tutorial \
    --zone=europe-west4-a
    
  3. Verify the resources have been deleted by running the following command. The deletion might take several minutes. The output should not include any of the TPU resources created in this tutorial:

    $ gcloud compute tpus execution-groups list --zone=europe-west4-a
    
  4. Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://bucket-name
    

What's next

Train with different image sizes

You can explore using a larger neural network (for example, ResNet-101 instead of ResNet-50). A larger input image and a more powerful neural network will yield a slower but more precise model.
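
For example, the training and evaluation commands in this tutorial set shapemask_parser: {output_size: [640, 640]}. A hypothetical variant of that fragment with a larger input size (everything else in the override unchanged, indicated by the ellipsis) would look like:

    --params_override="{..., shapemask_parser: {output_size: [1024, 1024]}}"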

Use a different basis

Alternatively, you can explore pre-training a ResNet model on your own dataset and using it as a basis for your ShapeMask model. With some more work, you can also swap in an alternative neural network in place of ResNet. Finally, if you are interested in implementing your own object detection models, this network may be a good basis for further experimentation.