Running MNIST on Cloud TPU (TF 2.x)

This tutorial contains a high-level description of the MNIST model, instructions on downloading the MNIST TensorFlow TPU code sample, and a guide to running the code on Cloud TPU.

Disclaimer

This tutorial uses a third-party dataset. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset.

Model description

The MNIST dataset contains a large number of images of hand-written digits in the range 0 to 9, as well as the labels identifying the digit in each image.

This tutorial trains a machine learning model to classify images of handwritten digits using the MNIST dataset. After training, the model classifies incoming images into 10 categories (0 to 9). You can then send the model an image it has not seen before, and the model identifies the digit based on what it learned during training.

The MNIST dataset has been split into three parts:

  • 60,000 examples of training data
  • 10,000 examples of test data
  • 5,000 examples of validation data

You can find more information about the dataset at the MNIST database site.
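
For a quick look at the raw data, you can load MNIST directly with TensorFlow's built-in Keras dataset. This is independent of the tutorial's training script, which downloads its own copy; note that the built-in copy exposes only the 60,000/10,000 train/test split, with validation examples typically carved out of the training set:

    import tensorflow as tf

    # The built-in copy of MNIST loads as NumPy arrays of 28x28 grayscale
    # images with integer labels in the range 0 to 9.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
    print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)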

The model has a mixture of seven layers:

  • 2 x convolution
  • 2 x max pooling
  • 2 x dense (fully connected)
  • 1 x dropout

Loss is computed using categorical cross-entropy.

This version of the MNIST model uses the Keras API, a recommended way to build and run a machine learning model on a Cloud TPU.

Keras simplifies the model development process by hiding most of the low-level implementation, which also makes it easy to switch between TPUs and other platforms such as GPUs or CPUs.
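
For illustration, the following is a minimal Keras sketch of a seven-layer classifier matching the mixture above. The filter counts, kernel sizes, and dropout rate are assumptions for illustration, not necessarily the exact values in the TPU code sample:

    import tensorflow as tf

    # Two convolutions, two max-pooling layers, two dense layers, and one
    # dropout layer; Flatten only reshapes and is not counted as a layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dropout(0.4),
        tf.keras.layers.Dense(10, activation="softmax"),  # one unit per digit
    ])

    # Categorical cross-entropy loss; the sparse variant accepts integer labels.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])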

Objectives

  • Create a Cloud Storage bucket to hold your dataset and model output.
  • Run the training job.
  • Verify the output results.

Costs

This tutorial uses billable components of Google Cloud, including:

  • Compute Engine
  • Cloud TPU
  • Cloud Storage

Use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial.

Before you begin

This section provides information on setting up a Cloud Storage bucket and a Compute Engine VM.

  1. Open a Cloud Shell window.

    Open Cloud Shell

  2. Create a variable for your project's ID.

    export PROJECT_ID=project-id
    
  3. Configure the gcloud command-line tool to use the project where you want to create your Cloud TPU.

    gcloud config set project ${PROJECT_ID}
    

    The first time you run this command in a new Cloud Shell VM, an Authorize Cloud Shell page is displayed. Click Authorize at the bottom of the page to allow gcloud to make GCP API calls with your credentials.

  4. Create a Service Account for the Cloud TPU project.

    gcloud beta services identity create --service tpu.googleapis.com --project $PROJECT_ID
    

    The command returns a Cloud TPU Service Account in the following format:

    service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com
    

  5. Create a Cloud Storage bucket using the following command:

    gsutil mb -p ${PROJECT_ID} -c standard -l us-central1 -b on gs://bucket-name
    

    This Cloud Storage bucket stores the data you use to train your model and the training results. The ctpu up tool used in this tutorial sets up default permissions for the Cloud TPU Service Account you set up in the previous step. If you want finer-grained permissions, review the access level permissions.

  6. Launch a Compute Engine VM and Cloud TPU using the ctpu up command.

    $ ctpu up --project=${PROJECT_ID} \
     --zone=us-central1-b \
     --tf-version=2.3.1 \
     --name=mnist-tutorial
    

    Command flag descriptions

    project
    Your GCP project ID
    zone
    The zone where you plan to create your Cloud TPU.
    tf-version
    The version of TensorFlow that ctpu installs on the VM.
    name
    The name of the Cloud TPU to create.

    For more information on the CTPU utility, see the CTPU Reference.

  7. The configuration you specified appears. Enter y to approve or n to cancel.

  8. When the ctpu up command has finished executing, verify that your shell prompt has changed from username@projectname to username@vm-name. This change shows that you are now logged into your Compute Engine VM. If you are not connected to the Compute Engine instance, you can connect by running the following command:

    gcloud compute ssh mnist-tutorial --zone=us-central1-b
    

    As you continue these instructions, run each command that begins with (vm)$ in your VM session window.

  9. Create an environment variable for the TPU name.

    (vm)$ export TPU_NAME=mnist-tutorial
    
  10. Install an extra package.

    The MNIST training application requires an extra package. Install it now. The version specifier is quoted so that the shell does not treat the > character as output redirection:

    (vm)$ sudo pip3 install "tensorflow-model-optimization>=0.1.3"
    

Single Cloud TPU device training

The source code for the MNIST TPU model is available on GitHub.

  1. Set the following variables. Replace bucket-name with your bucket name:

    (vm)$ export STORAGE_BUCKET=gs://bucket-name
    
    (vm)$ export MODEL_DIR=${STORAGE_BUCKET}/mnist
    (vm)$ export DATA_DIR=${STORAGE_BUCKET}/data
    (vm)$ export PYTHONPATH="${PYTHONPATH}:/usr/share/models"
    
  2. Change to the directory that stores the model:

    (vm)$ cd /usr/share/models/official/vision/image_classification
    
  3. Run the MNIST training script:

    (vm)$ python3 mnist_main.py \
      --tpu=${TPU_NAME} \
      --model_dir=${MODEL_DIR} \
      --data_dir=${DATA_DIR} \
      --train_epochs=10 \
      --distribution_strategy=tpu \
      --download
    

    Command flag descriptions

    tpu
    The name of the Cloud TPU. If you do not specify a name when setting up the Compute Engine VM and Cloud TPU, it defaults to your username.
    model_dir
    This is the directory that contains the model files. This tutorial uses a folder within the Cloud Storage bucket. You do not have to create this folder beforehand. The script creates the folder if it does not exist.
    data_dir
    The Cloud Storage path for the training input. In this tutorial it points to the data directory in your Cloud Storage bucket, where the script downloads the MNIST dataset.
    train_epochs
    The number of times to train the model using the entire dataset.
    distribution_strategy
    To train the MNIST model on a Cloud TPU, set distribution_strategy to tpu (see the sketch after this list).
    download
    When set to true, the script downloads and preprocesses the MNIST dataset, if it hasn't been downloaded already.
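
    For reference, distribution_strategy=tpu corresponds to TensorFlow's TPUStrategy. The following is an illustrative sketch of how such a strategy is typically constructed in TensorFlow 2.x; the actual setup lives inside mnist_main.py:

      import os
      import tensorflow as tf

      # Resolve the TPU named in the TPU_NAME environment variable and
      # initialize the TPU system before building a distribution strategy.
      resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
          tpu=os.environ["TPU_NAME"])
      tf.config.experimental_connect_to_cluster(resolver)
      tf.tpu.experimental.initialize_tpu_system(resolver)
      strategy = tf.distribute.TPUStrategy(resolver)

      # Any model built and compiled inside this scope is replicated
      # across the TPU cores.
      with strategy.scope():
          model = tf.keras.Sequential([
              tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
              tf.keras.layers.Dense(10, activation="softmax"),
          ])
          model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])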

The training script runs in under 5 minutes on a v3-8 Cloud TPU and displays output similar to:

I1203 03:43:15.936553 140096948798912 mnist_main.py:165]
Run stats: {'loss': 0.11427700750786683, 'training_accuracy_top_1': 0.9657697677612305,
'accuracy_top_1': 0.9730902910232544, 'eval_loss': 0.08600160645114051}

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  2. In your Cloud Shell, run ctpu delete with the --name and --zone flags you used when you set up the Compute Engine VM and Cloud TPU. This deletes both your VM and your Cloud TPU.

    $ ctpu delete --project=${PROJECT_ID} \
      --name=mnist-tutorial \
      --zone=us-central1-b
    
  3. Run ctpu status to make sure you have no instances allocated; you are charged for TPU usage as long as instances remain allocated. The deletion might take several minutes. A response like the one below indicates there are no more allocated instances:

    $ ctpu status --project=${PROJECT_ID} \
      --name=mnist-tutorial \
      --zone=us-central1-b
    
    2018/04/28 16:16:23 WARNING: Setting zone to "us-central1-b"
    No instances currently exist.
        Compute Engine VM:     --
        Cloud TPU:             --
    
  4. Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://bucket-name
    

What's next

In this tutorial you have trained the MNIST model using a sample dataset. The results of this training are (in most cases) not usable for inference. To use a model for inference, you can train it on a publicly available dataset or your own dataset. Models trained on Cloud TPUs require datasets to be in TFRecord format.

You can use the dataset conversion tool sample to convert an image classification dataset into TFRecord format. If you are not using an image classification model, you have to convert your dataset to TFRecord format yourself. For more information, see TFRecord and tf.Example.
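
As a rough sketch of what a conversion involves, the following writes one image/label pair as a serialized tf.Example record. The feature keys image_raw and label are illustrative choices, not a required schema:

    import tensorflow as tf

    def _bytes_feature(value):
        # Wrap raw bytes (for example, an encoded image) in a tf.train.Feature.
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def _int64_feature(value):
        # Wrap an integer class label in a tf.train.Feature.
        return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

    # A placeholder PNG-encoded 28x28 image and label; real code would
    # loop over your dataset.
    image_bytes = tf.io.encode_png(tf.zeros((28, 28, 1), dtype=tf.uint8)).numpy()
    label = 0

    with tf.io.TFRecordWriter("sample.tfrecord") as writer:
        example = tf.train.Example(features=tf.train.Features(feature={
            "image_raw": _bytes_feature(image_bytes),
            "label": _int64_feature(label),
        }))
        writer.write(example.SerializeToString())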

Hyperparameter tuning

To improve the model's performance with your dataset, you can tune the model's hyperparameters. You can find information about hyperparameters common to all TPU-supported models on GitHub. Information about model-specific hyperparameters can be found in the source code for each model. For more information on hyperparameter tuning, see Overview of hyperparameter tuning, Using the Hyperparameter tuning service, and Tune hyperparameters.
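
As a toy illustration of what tuning means in practice, the following sweeps one hyperparameter (the learning rate) by hand and records the validation accuracy for each candidate. Real tuning jobs would use the services linked above rather than a manual loop:

    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0

    # Train a small model once per candidate learning rate and keep the
    # validation accuracy each run achieves.
    results = {}
    for lr in (1e-2, 1e-3, 1e-4):
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        history = model.fit(x_train, y_train, epochs=1,
                            validation_split=0.1, verbose=0)
        results[lr] = history.history["val_accuracy"][-1]

    print(results)  # pick the learning rate with the best validation accuracy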

Inference

Once you have trained your model, you can use it for inference (also called prediction). AI Platform is a cloud-based solution for developing, training, and deploying machine learning models. After a model is deployed, you can use the AI Platform Prediction service.
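
As a minimal sketch of local prediction with a trained Keras model (assuming the model was exported in the SavedModel format; the path below is illustrative, and the actual export location depends on how your training run saved the model):

    import numpy as np
    import tensorflow as tf

    # Load a trained model from a SavedModel directory.
    model = tf.keras.models.load_model("gs://bucket-name/mnist/saved_model")

    # Classify a single 28x28 grayscale image. A real input would be a
    # preprocessed digit image; a zero array serves as a placeholder here.
    image = np.zeros((1, 28, 28, 1), dtype=np.float32)
    probabilities = model.predict(image)
    print("Predicted digit:", int(np.argmax(probabilities, axis=-1)[0]))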