Running Inception on Cloud TPU

This tutorial shows you how to train the Inception model on Cloud TPU.

Disclaimer

This tutorial uses a third-party dataset. Google provides no representation, warranty, or other guarantees about the validity of, or any other aspects of, this dataset.

Model description

Inception v3 is a widely used image recognition model that has been shown to attain high accuracy on ImageNet. The model is the culmination of many ideas developed by multiple researchers over the years. It is based on the original paper: "Rethinking the Inception Architecture for Computer Vision" by Szegedy et al.

The model has a mixture of symmetric and asymmetric building blocks, including:

  • convolutions
  • average pooling
  • max pooling
  • concats
  • dropouts
  • fully connected layers

Loss is computed via softmax cross-entropy.

The following picture shows the model at a high level:

[Image: high-level diagram of the Inception v3 model]

You can find more information about the model on GitHub.

The model is built using the high-level Estimator API. This API greatly simplifies model creation by encapsulating most low-level functions, allowing you to focus on model development rather than on the inner workings of the hardware that runs the model.
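
For readers new to this pattern, here is a minimal sketch of the standard (CPU/GPU) Estimator workflow. The toy model_fn and input_fn below are illustrative placeholders written against TensorFlow 1.x, not the actual Inception code:

import tensorflow as tf  # TensorFlow 1.x

def model_fn(features, labels, mode):
  # A toy linear classifier standing in for a real architecture.
  logits = tf.layers.dense(features["x"], units=10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
  train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
  return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

def input_fn():
  # A toy in-memory dataset standing in for a real input pipeline.
  features = {"x": tf.random_uniform([1024, 8])}
  labels = tf.random_uniform([1024], maxval=10, dtype=tf.int32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(32)

estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/toy_model")
estimator.train(input_fn=input_fn, steps=100)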

Before you begin

Before starting this tutorial, check that your Google Cloud Platform project is correctly set up.

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. Select or create a GCP project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. This walkthrough uses billable components of Google Cloud Platform. Check the Cloud TPU pricing page to estimate your costs, and follow the instructions to clean up resources when you've finished with them.

Create a Cloud Storage bucket

You need a Cloud Storage bucket to store the data that you use to train your machine learning model and the results of the training.

  1. Go to the Cloud Storage page on the GCP Console.

    Go to the Cloud Storage page

  2. Create a new bucket, specifying the following options:

    • A unique name of your choosing.
    • Default storage class: Regional
    • Location: us-central1
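
If you prefer to script this step, the following sketch creates an equivalent bucket with the google-cloud-storage Python client (the same library this tutorial later installs with pip; exact keyword arguments can vary between client-library versions):

from google.cloud import storage

client = storage.Client()  # uses your default GCP project and credentials
bucket = client.bucket("YOUR-BUCKET-NAME")  # replace with your unique name
bucket.storage_class = "REGIONAL"
client.create_bucket(bucket, location="us-central1")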

Open Cloud Shell and use the ctpu tool

This guide uses the Cloud TPU Provisioning Utility (ctpu) as a simple tool for setting up and managing your Cloud TPU. The guide runs ctpu from Cloud Shell. For more advanced setup options, see the custom setup guide.

The ctpu tool is pre-installed in your Cloud Shell. Follow these steps to check your ctpu configuration:

  1. Open a Cloud Shell window.

    Open Cloud Shell

  2. Type the following into your Cloud Shell to check your ctpu configuration:

    $ ctpu print-config
    

    You should see a message like this:

    2018/04/29 05:23:03 WARNING: Setting zone to "us-central1-b"
    ctpu configuration:
            name: [your TPU's name]
            project: [your-project-name]
            zone: us-central1-b
    If you would like to change the configuration for a single command invocation, please use the command line flags.
    

  3. Take a look at the ctpu commands:

    $ ctpu

    You should see a usage guide, including a list of subcommands and flags with a brief description of each one.

Create a Compute Engine VM and a Cloud TPU

Run the following command to set up a Compute Engine virtual machine (VM) and a Cloud TPU with associated services. This combination of resources and services is called a Cloud TPU flock:

$ ctpu up [optional: --name --zone]

You should see a message like this:

ctpu will use the following configuration: 
   Name: [your TPU's name]
   Zone: [your project's zone]
   GCP Project: [your project's name]
   TensorFlow Version: 1.9
   VM:
     Machine Type: [your machine type]
     Disk Size: [your disk size]
     Preemptible: [true or false]
   Cloud TPU:
     Size: [your TPU size]
     Preemptible: [true or false]
OK to create your Cloud TPU resources with the above configuration? [Yn]:

Press y to create your Cloud TPU resources.

The ctpu up command performs the following tasks:

  • Enables the Compute Engine and Cloud TPU services.
  • Creates a Compute Engine VM with the latest stable TensorFlow version pre-installed. The default zone is us-central1-b. For reference, Cloud TPU is available in the following zones:

    • United States (US)
      • us-central1-b (the default zone)
    • Europe (EU)
      • europe-west4-a
    • Asia Pacific (APAC)
      • asia-east1-c

  • Creates a Cloud TPU with the corresponding version of TensorFlow, and passes the name of the Cloud TPU to the Compute Engine VM as an environment variable (TPU_NAME).

  • Ensures your Cloud TPU has access to resources it needs from your GCP project, by granting specific IAM roles to your Cloud TPU service account.
  • Performs a number of other checks.
  • Logs you in to your new Compute Engine VM.

You can run ctpu up as often as you like. For example, if you lose the SSH connection to the Compute Engine VM, run ctpu up to restore the connection, specifying --name and --zone if you changed the default values. See the ctpu documentation for details.

From this point on, a prefix of (vm)$ means you should run the command on the Compute Engine VM instance.

Verify your Compute Engine VM

When the ctpu up command has finished executing, verify that your shell prompt has changed from username@project to username@tpuname. This change shows that you are now logged into your Compute Engine VM.

Use the default or change the Cloud Storage access permissions

The ctpu up command set up default permissions for your Cloud TPU service account. If you want finer-grained permissions, review and update the access level permissions.

Get the data

Set up the following environment variable, replacing YOUR-BUCKET-NAME with the name of your Cloud Storage bucket:

(vm)$ export STORAGE_BUCKET=gs://YOUR-BUCKET-NAME

The training application expects your training data to be accessible in Cloud Storage. The training application also uses your Cloud Storage bucket to store checkpoints during training.

There are two datasets you can use: a randomly generated fake dataset or the full ImageNet dataset. The DATA_DIR environment variable, described below, specifies which dataset to train on.

Note that the fake dataset is only useful for understanding how to use a Cloud TPU and for validating end-to-end performance. The accuracy numbers and the saved model will not be meaningful.

The fake dataset is at this location on Cloud Storage:

gs://cloud-tpu-test-datasets/fake_imagenet
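
You can confirm the dataset is readable from your VM with a quick check like the following sketch, which uses TensorFlow's tf.gfile and assumes the shards follow the train-* naming convention shown later in this tutorial:

import tensorflow as tf  # pre-installed on the Compute Engine VM

# Count the fake-ImageNet training shards visible from this machine.
shards = tf.gfile.Glob("gs://cloud-tpu-test-datasets/fake_imagenet/train-*")
print("found %d training shards" % len(shards))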

Set up TensorBoard

Before training the model, start TensorBoard in the background so you can visualize your training program's progress:

(vm)$ tensorboard --logdir=${STORAGE_BUCKET}/inception &

When you ran ctpu up, the tool automatically set up port forwarding for the Cloud Shell environment to make TensorBoard available.

Click the Web preview button in Cloud Shell and open port 8080.

Run the model

You are now ready to train and evaluate the Inception v3 model using ImageNet data.

The Inception v3 model is pre-installed on your Compute Engine VM, in the /usr/share/tpu/models/experimental/inception/ directory.

In the following steps, a prefix of (vm)$ means you should run the command on your Compute Engine VM:

  1. Set up a DATA_DIR environment variable containing one of the following values:

    • If you are using the fake dataset:

      (vm)$ export DATA_DIR=gs://cloud-tpu-test-datasets/fake_imagenet
      

    • If you have uploaded a set of training data to your Cloud Storage bucket:

      (vm)$ export DATA_DIR=${STORAGE_BUCKET}/data
      

  2. Run the Inception v3 model:

    (vm)$ python /usr/share/tpu/models/experimental/inception/inception_v3.py \
        --tpu=$TPU_NAME \
        --learning_rate=0.165 \
        --train_steps=250000 \
        --iterations=500 \
        --use_tpu=True \
        --use_data=real \
        --mode=train_and_eval \
        --train_steps_per_eval=2000 \
        --data_dir=${DATA_DIR} \
        --model_dir=${STORAGE_BUCKET}/inception

    • --tpu specifies the name of the Cloud TPU. Note that ctpu passes this name to the Compute Engine VM as an environment variable (TPU_NAME).
    • --use_data specifies which type of data the program must use during training, either fake or real. The default value is fake.
    • --data_dir specifies the Cloud Storage path for training input. The application ignores this parameter when you're using fake data.
    • --model_dir specifies the directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints.

What to expect

Inception v3 operates on 299x299 images. The default training batch size is 1024, which means that each iteration operates on 1024 of those images.
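
As a back-of-the-envelope check on what these defaults mean, the following sketch estimates epochs from the 1,281,167-image ImageNet training set size reported by the conversion script later in this tutorial:

# Rough epoch math for the default Inception v3 settings above.
images_per_step = 1024           # default training batch size
train_steps = 250000             # --train_steps from the command above
imagenet_train_images = 1281167  # training images in the full ImageNet dataset

steps_per_epoch = imagenet_train_images / float(images_per_step)  # ~1251
epochs = train_steps / steps_per_epoch                            # ~200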

You can use the --mode flag to select one of three modes of operation (train, eval, or train_and_eval):

  • --mode=train or --mode=eval specifies either a training-only or an evaluation-only job.
  • --mode=train_and_eval specifies a hybrid job that does both training and evaluation.

Train-only jobs run for the number of steps defined in train_steps and can go through the entire training set, if desired.

Train_and_eval jobs cycle through training and evaluation segments. Each training cycle runs for train_steps_per_eval and is followed by an evaluation job (using the weights that have been trained up to that point).

The number of training cycles is the floor of train_steps divided by train_steps_per_eval:

floor(train_steps / train_steps_per_eval)

For example, the Inception v3 command shown earlier sets --train_steps=250000 and --train_steps_per_eval=2000, giving floor(250000 / 2000) = 125 training-and-evaluation cycles.

By default, Estimator API-based models report loss values at regular step intervals. The reported lines look like this:

step = 15440, loss = 12.6237

Discussion: TPU-specific modifications to the model

The specific modifications required to get Estimator API-based models ready for TPUs are surprisingly minimal. The program imports the following libraries:

from tensorflow.contrib.tpu.python.tpu import tpu_config
from tensorflow.contrib.tpu.python.tpu import tpu_estimator
from tensorflow.contrib.tpu.python.tpu import tpu_optimizer

The CrossShardOptimizer function wraps the optimizer so that gradients are aggregated across the TPU shards (cores) with an all-reduce before they are applied, as in:

if FLAGS.use_tpu:
  optimizer = tpu_optimizer.CrossShardOptimizer(optimizer)

The function that defines the model returns an Estimator specification using:

return tpu_estimator.TPUEstimatorSpec(
    mode=mode, loss=loss, train_op=train_op, eval_metrics=eval_metrics)

The main function defines an Estimator-compatible configuration using:

run_config = tpu_config.RunConfig(
    master=tpu_grpc_url,
    evaluation_master=tpu_grpc_url,
    model_dir=FLAGS.model_dir,
    save_checkpoints_secs=FLAGS.save_checkpoints_secs,
    save_summary_steps=FLAGS.save_summary_steps,
    session_config=tf.ConfigProto(
        allow_soft_placement=True,
        log_device_placement=FLAGS.log_device_placement),
    tpu_config=tpu_config.TPUConfig(
        iterations_per_loop=iterations,
        num_shards=FLAGS.num_shards,
        per_host_input_for_training=per_host_input_for_training))
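
In this configuration, iterations_per_loop (which appears to be set from the --iterations flag in the training commands above) controls how many training steps run on the TPU before control returns to the host.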

The program uses this defined configuration and a model definition function to create an Estimator object:

inception_classifier = tpu_estimator.TPUEstimator(
    model_fn=inception_model_fn,
    use_tpu=FLAGS.use_tpu,
    config=run_config,
    params=params,
    train_batch_size=FLAGS.train_batch_size,
    eval_batch_size=eval_batch_size,
    batch_axis=(batch_axis, 0))
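
Note that TPUEstimator honors the use_tpu argument: when it is False, the same model runs on CPU or GPU instead of a TPU, which is how the --use_tpu flag in the commands above takes effect.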

Train-only jobs need only call the train function:

inception_classifier.train(
    input_fn=imagenet_train.input_fn, steps=FLAGS.train_steps)

Evaluation-only jobs read weights from available checkpoints, waiting before each evaluation until a new checkpoint becomes available:

for checkpoint in get_next_checkpoint():
  eval_results = inception_classifier.evaluate(
      input_fn=imagenet_eval.input_fn,
      steps=eval_steps,
      hooks=eval_hooks,
      checkpoint_path=checkpoint)

When you choose train_and_eval, the training and evaluation jobs alternate in a loop. During evaluation, trainable variables are loaded from the latest available checkpoint. Training and evaluation cycles repeat as specified in the flags:

for cycle in range(FLAGS.train_steps // FLAGS.train_steps_per_eval):
  inception_classifier.train(
      input_fn=imagenet_train.input_fn, steps=FLAGS.train_steps_per_eval)

  eval_results = inception_classifier.evaluate(
      input_fn=imagenet_eval.input_fn, steps=eval_steps, hooks=eval_hooks)

Clean up

  1. Disconnect from the Compute Engine VM:

    (vm)$ exit
    

    Your prompt should now be username@project, showing you are back in your Cloud Shell.

  2. In your Cloud Shell, run the following command to delete your Compute Engine VM and your Cloud TPU:

    $ ctpu delete
    

  3. To avoid incurring unnecessary charges for TPU usage, run ctpu status to make sure you have no instances allocated. The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:

    2018/04/28 16:16:23 WARNING: Setting zone to "us-central1-b"
    No instances currently exist.
            Compute Engine VM:     --
            Cloud TPU:             --
    

  4. When you no longer need the Cloud Storage bucket you created during this tutorial, use the gsutil command to delete it. Replace YOUR-BUCKET-NAME with the name of your Cloud Storage bucket:

    $ gsutil rm -r gs://YOUR-BUCKET-NAME
    

    See the Cloud Storage pricing guide for free storage limits and other pricing information.

Using the full ImageNet dataset

You need about 300 GB of space available on your local machine or VM to run the script used in this section.

If you decide to process the data on your Compute Engine VM, follow these steps to add disk space to the VM:

  • Follow the Compute Engine guide to add a disk to your VM.
  • Set the disk size to 300 GB or more.
  • Set "When deleting instance" to "Delete disk" to ensure that the disk is removed when you remove the VM.
  • Make a note of the path to your new disk. For example: /mnt/disks/mnt-dir.

To download and convert the full ImageNet dataset, and upload it to Cloud Storage:

  1. Sign up for an ImageNet account. Remember the username and password you used to create the account.

  2. Set up a DATA_DIR environment variable pointing to a path on your Cloud Storage bucket:

    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/data
    

  3. Download the imagenet_to_gcs.py script from GitHub:

    $ wget https://raw.githubusercontent.com/tensorflow/tpu/master/tools/datasets/imagenet_to_gcs.py
    

  4. Set a SCRATCH_DIR variable to contain the script's working files. The variable must specify a location on your local machine or on your Compute Engine VM. For example, on your local machine:

    $ SCRATCH_DIR=./imagenet_tmp_files
    

    Or if you're processing the data on the VM:

    (vm)$ SCRATCH_DIR=/mnt/disks/mnt-dir/imagenet_tmp_files
    

  5. Run the imagenet_to_gcs.py script to download, format, and upload the ImageNet data to the bucket. Replace YOUR-USERNAME and YOUR-PASSWORD with the username and password you used to create your ImageNet account. Note that the command below also references a PROJECT environment variable; set it to the name of your GCP project before running the script.

    $ pip install google-cloud-storage
    $ python imagenet_to_gcs.py \
      --project=$PROJECT \
      --gcs_output_path=$DATA_DIR \
      --local_scratch_dir=$SCRATCH_DIR \
      --imagenet_username=YOUR-USERNAME \
      --imagenet_access_key=YOUR-PASSWORD
    

Note: Downloading and preprocessing the data can take up to half a day, depending on your network and computer speed. Do not interrupt the script.

When the script finishes processing, a message like the following appears:

2018-02-17 14:30:17.287989: Finished writing all 1281167 images in data set.

The script produces a series of files (for both training and validation) of the form:

${DATA_DIR}/train-00000-of-01024
${DATA_DIR}/train-00001-of-01024
 ...
${DATA_DIR}/train-01023-of-01024

and

${DATA_DIR}/validation-00000-of-00128
${DATA_DIR}/validation-00001-of-00128
 ...
${DATA_DIR}/validation-00127-of-00128
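
Once the upload completes, you can verify the shard counts from your VM. The following sketch uses TensorFlow's tf.gfile; the expected counts follow from the file listings above:

import tensorflow as tf  # pre-installed on the Compute Engine VM

data_dir = "gs://YOUR-BUCKET-NAME/data"  # the same path as ${DATA_DIR}
print(len(tf.gfile.Glob(data_dir + "/train-*")))       # expect 1024
print(len(tf.gfile.Glob(data_dir + "/validation-*")))  # expect 128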

Inception v4

The Inception v4 model is a deep neural network model that uses Inception v3 building blocks to achieve higher accuracy than Inception v3. It is described in the paper "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" by Szegedy et al.

The Inception v4 model is pre-installed on your Compute Engine VM, in the /usr/share/tpu/models/experimental/inception/ directory.

In the following steps, a prefix of (vm)$ means you should run the command on your Compute Engine VM:

  1. If TensorBoard is running in your current Cloud Shell tab, open a new Cloud Shell tab and use ctpu there to connect to your Compute Engine VM:

    $ ctpu up

  2. Set up a DATA_DIR environment variable containing one of the following values:

    • If you are using the fake dataset:

      (vm)$ export DATA_DIR=gs://cloud-tpu-test-datasets/fake_imagenet
      

    • If you have uploaded a set of training data to your Cloud Storage bucket:

      (vm)$ export DATA_DIR=${STORAGE_BUCKET}/data
      

  3. Run the Inception v4 model:

    (vm)$ python /usr/share/tpu/models/experimental/inception/inception_v4.py \
        --tpu=$TPU_NAME \
        --learning_rate=0.36 \
        --train_steps=1000000 \
        --iterations=500 \
        --use_tpu=True \
        --use_data=real \
        --train_batch_size=256 \
        --mode=train_and_eval \
        --train_steps_per_eval=2000 \
        --data_dir=${DATA_DIR} \
        --model_dir=${STORAGE_BUCKET}/inception

    • --tpu specifies the name of the Cloud TPU. Note that ctpu passes this name to the Compute Engine VM as an environment variable (TPU_NAME).
    • --use_data specifies which type of data the program must use during training, either fake or real. The default value is fake.
    • --train_batch_size sets the training batch size to 256. Because the Inception v4 model is larger than Inception v3, it must be run with a smaller batch size per TPU core.
    • --data_dir specifies the Cloud Storage path for training input. The application ignores this parameter when you're using fake data.
    • --model_dir specifies the directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints.
