This tutorial shows you how to train the DeepLab-v3 model on Cloud TPU.
The instructions below assume you are already familiar with running a model on Cloud TPU. If you are new to Cloud TPU, you can refer to the Quickstart for a basic introduction.
If you plan to train on a TPU Pod slice, review Training on TPU Pods to understand parameter changes required for Pod slices.
This model is an image semantic segmentation model. Image semantic segmentation models assign a class label to every pixel in an image, identifying and localizing multiple objects in a single image. This type of model is frequently used in machine learning applications such as autonomous driving, geospatial image processing, and medical imaging.
In this tutorial, you'll train a model on the PASCAL VOC 2012 dataset. For more information on this dataset, see The PASCAL Visual Object Classes Homepage.
Objectives
- Create a Cloud Storage bucket to hold your dataset and model output.
- Install the required packages.
- Download and convert the PASCAL VOC 2012 dataset.
- Train the Deeplab model.
- Evaluate the Deeplab model.
Costs
This tutorial uses billable components of Google Cloud, including:
- Compute Engine
- Cloud TPU
- Cloud Storage
Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.
Before you begin
This section provides information on setting up a Cloud Storage bucket and a Compute Engine VM.
Open a Cloud Shell window.
Create a variable for your project's ID.
export PROJECT_ID=project-id
Configure the gcloud command-line tool to use the project where you want to create your Cloud TPU.
gcloud config set project ${PROJECT_ID}
The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make GCP API calls with your credentials.
Create a Service Account for the Cloud TPU project.
gcloud beta services identity create --service tpu.googleapis.com --project $PROJECT_ID
The command returns a Cloud TPU Service Account in the following format:
service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com
Create a Cloud Storage bucket using the following command:
gsutil mb -p ${PROJECT_ID} -c standard -l us-central1 -b on gs://bucket-name
This Cloud Storage bucket stores the data you use to train your model and the training results.
In order for the Cloud TPU to read and write to the storage bucket, the Service Account for your project needs read/write or Admin permissions on it. See the section on storage buckets for how to view and set those permissions.
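For example, one way to grant the Service Account the Storage Admin role on the bucket is with gsutil iam ch. This is only a sketch: replace PROJECT_NUMBER with the project number from the service account address returned earlier, replace bucket-name with your bucket, and consult the storage bucket permissions documentation if your setup needs a narrower role.
$ gsutil iam ch serviceAccount:service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com:roles/storage.admin gs://bucket-name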
Launch a Compute Engine VM using the ctpu up command.
$ ctpu up --project=${PROJECT_ID} \
  --zone=us-central1-b \
  --machine-type=n1-standard-8 \
  --vm-only \
  --tf-version=1.15.5 \
  --name=deeplab-tutorial
Command flag descriptions
project
- Your GCP project ID.
zone
- The zone where you plan to create your Cloud TPU.
machine-type
- The machine type of the Compute Engine VM to create.
vm-only
- Create a VM only. By default, the ctpu up command creates a VM and a Cloud TPU.
tf-version
- The version of TensorFlow ctpu installs on the VM.
name
- The name of the Cloud TPU to create.
The configuration you specified appears. Enter y to approve or n to cancel.
When the ctpu up command has finished executing, verify that your shell prompt has changed from username@projectname to username@vm-name. This change shows that you are now logged into your Compute Engine VM. If you are not connected to the VM, you can connect by running the following command:
gcloud compute ssh deeplab-tutorial --zone=us-central1-b
As you continue these instructions, run each command that begins with (vm)$ in your VM session window.
Install additional packages
For this model, you need to install the following additional packages on your Compute Engine instance:
- jupyter
- matplotlib
- PrettyTable
- tf_slim
(vm)$ pip3 install --user jupyter
(vm)$ pip3 install --user matplotlib
(vm)$ pip3 install --user PrettyTable
(vm)$ pip3 install --user tf_slim
Create environment variables for your storage bucket and TPU name.
(vm)$ export STORAGE_BUCKET=gs://bucket-name
(vm)$ export TPU_NAME=deeplab-tutorial
(vm)$ export DATA_DIR=${STORAGE_BUCKET}/deeplab_data
(vm)$ export MODEL_DIR=${STORAGE_BUCKET}/deeplab_model
(vm)$ export PYTHONPATH=${PYTHONPATH}:/usr/share/models/research:/usr/share/models/research/slim
Prepare the data set
Download and convert the PASCAL VOC 2012 dataset
This model uses the PASCAL VOC 2012 dataset for training and evaluation. Run the following script to download the dataset and convert it to TensorFlow's TFRecord format:
(vm)$ bash /usr/share/models/research/deeplab/datasets/download_and_convert_voc2012.sh
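If you want to confirm that the conversion produced TFRecord files, you can list the output directory. This assumes the script writes its output under ./pascal_voc_seg/tfrecord, which is the path the upload step later in this tutorial copies from; the exact file names may differ.
(vm)$ ls pascal_voc_seg/tfrecord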
Download the pretrained checkpoint
In this step, you download the modified ResNet-101 pretrained checkpoint. To start, download the checkpoint:
(vm)$ wget http://download.tensorflow.org/models/resnet_v1_101_2018_05_04.tar.gz
Then, extract the contents of the tar file:
(vm)$ tar -vxf resnet_v1_101_2018_05_04.tar.gz
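Extracting the archive should create a resnet_v1_101 directory containing the checkpoint files that the --init_checkpoint flag points to later in this tutorial. You can optionally confirm that the checkpoint files are present:
(vm)$ ls resnet_v1_101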
Upload data to your Cloud Storage bucket
You can now upload your data to the Cloud Storage bucket you created earlier:
(vm)$ gsutil -m cp -r pascal_voc_seg/tfrecord ${DATA_DIR}/tfrecord
(vm)$ gsutil -m cp -r resnet_v1_101 ${DATA_DIR}
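To confirm that both the TFRecord files and the pretrained checkpoint were copied, you can list the data directory:
(vm)$ gsutil ls ${DATA_DIR}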
Create Cloud TPU resource
Run the following command to create your Cloud TPU.
(vm)$ ctpu up --project=${PROJECT_ID} \
--tpu-only \
--tf-version=1.15.5 \
--tpu-size=v3-8 \
--name=deeplab-tutorial
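If you want to confirm that the Cloud TPU was created, you can check its state with ctpu status (the same command used in the cleanup section at the end of this tutorial); the exact output format depends on your ctpu version:
(vm)$ ctpu status --project=${PROJECT_ID} --zone=us-central1-b --name=deeplab-tutorial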
Train the model
Run the training script for 2000 training steps. This will take approximately 20
minutes. To run to convergence, remove the --train_steps=2000
flag from the
training script command line. Running to convergence takes about 10 hours.
(vm)$ python3 /usr/share/tpu/models/experimental/deeplab/main.py \
--mode='train' \
--num_shards=8 \
--alsologtostderr=true \
--model_dir=${MODEL_DIR} \
--dataset_dir=${DATA_DIR}/tfrecord \
--init_checkpoint=${DATA_DIR}/resnet_v1_101/model.ckpt \
--model_variant=resnet_v1_101_beta \
--image_pyramid=1. \
--aspp_with_separable_conv=false \
--multi_grid=1 \
--multi_grid=2 \
--multi_grid=4 \
--decoder_use_separable_conv=false \
--train_split='train' \
--train_steps=2000 \
--tpu=${TPU_NAME}
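Training writes checkpoints and event files to ${MODEL_DIR}. If you want to monitor progress, one option is to point TensorBoard at the model directory from a separate SSH session on the VM (this assumes TensorBoard is available, which it normally is when TensorFlow is installed); you then need port forwarding or the Cloud Shell web preview to view it in a browser:
(vm)$ tensorboard --logdir=${MODEL_DIR}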
Evaluate the model on a Cloud TPU device
When the training completes, you can evaluate the model. To do so, change the --mode flag from train to eval:
(vm)$ python3 /usr/share/tpu/models/experimental/deeplab/main.py \
--mode='eval' \
--num_shards=8 \
--alsologtostderr=true \
--model_dir=${MODEL_DIR} \
--dataset_dir=${DATA_DIR}/tfrecord \
--init_checkpoint=${DATA_DIR}/resnet_v1_101/model.ckpt \
--model_variant=resnet_v1_101_beta \
--image_pyramid=1. \
--aspp_with_separable_conv=false \
--multi_grid=1 \
--multi_grid=2 \
--multi_grid=4 \
--decoder_use_separable_conv=false \
--train_split='train' \
--tpu=${TPU_NAME}
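Evaluation uses the same --model_dir as training, so it picks up the checkpoints written during training. If you want to see the checkpoints and event files produced by the training and evaluation runs, you can list the model directory:
(vm)$ gsutil ls ${MODEL_DIR}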
Cleaning up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Disconnect from the Compute Engine VM:
(vm)$ exit
Your prompt should now be username@projectname, showing you are in the Cloud Shell.
In your Cloud Shell, run ctpu delete with the --zone flag you used when you set up the Cloud TPU to delete your Compute Engine VM and your Cloud TPU:
$ ctpu delete --project=${PROJECT_ID} \
  --zone=us-central1-b \
  --name=deeplab-tutorial
Run ctpu status to make sure you have no instances allocated to avoid unnecessary charges for TPU usage. The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:
$ ctpu status --project=${PROJECT_ID} \
  --name=deeplab-tutorial \
  --zone=us-central1-b
2018/04/28 16:16:23 WARNING: Setting zone to "us-central1-b"
No instances currently exist.
Compute Engine VM:     --
Cloud TPU:             --
Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:
$ gsutil rm -r gs://bucket-name
What's next
In this tutorial you have trained the DeepLab-v3 model using a sample dataset. The results of this training are (in most cases) not usable for inference. To use a model for inference, you can train it on a publicly available dataset or your own dataset. Models trained on Cloud TPUs require datasets to be in TFRecord format.
You can use the dataset conversion tool sample to convert an image classification dataset into TFRecord format. If you are not using an image classification model, you will have to convert your dataset to TFRecord format yourself. For more information, see TFRecord and tf.Example.
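As a rough illustration of the format, the following Python sketch writes a single serialized tf.train.Example to a TFRecord file. The feature keys used here (image/encoded and image/class/label) are placeholders chosen for this example; the keys and features your model actually expects depend on its input pipeline.
import tensorflow as tf

def _bytes_feature(value):
    # Wrap a byte string in a tf.train.Feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    # Wrap an integer in a tf.train.Feature.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# Placeholder values; in practice you would read real encoded image bytes and labels.
image_bytes = b"placeholder image bytes"
label = 7

example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": _bytes_feature(image_bytes),
    "image/class/label": _int64_feature(label),
}))

# Write the serialized example to a TFRecord file.
with tf.io.TFRecordWriter("example.tfrecord") as writer:
    writer.write(example.SerializeToString())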
Hyperparameter tuning
To improve the model's performance with your dataset, you can tune the model's hyperparameters. You can find information about hyperparameters common to all TPU-supported models on GitHub. Information about model-specific hyperparameters can be found in the source code for each model. For more information on hyperparameter tuning, see Overview of hyperparameter tuning, Using the Hyperparameter tuning service, and Tune hyperparameters.
Inference
Once you have trained your model, you can use it for inference (also called prediction). AI Platform is a cloud-based solution for developing, training, and deploying machine learning models. Once a model is deployed, you can use the AI Platform Prediction service.
- Learn more about ctpu, including how to install it on a local machine.
- Experiment with more TPU samples.
- Explore the TPU tools in TensorBoard.