Overview: This quickstart provides a brief introduction to working with Cloud TPU. In this quickstart, you use Cloud TPU to train a model on MNIST, a canonical dataset of handwritten digits that is often used to test new machine learning approaches.

This topic is intended for users new to Cloud TPU. For a more detailed exploration of Cloud TPU, try running one of our Colab notebooks. You can also view one of the many examples in the Tutorials section.

Before you begin

Before starting this tutorial, check that your Google Cloud project is correctly set up.

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Cloud Console, on the project selector page, select or create a Google Cloud project.

    Go to the project selector page

  3. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.

  4. This walkthrough uses billable components of Google Cloud. Check the Cloud TPU pricing page to estimate your costs. Be sure to clean up resources you create when you've finished with them to avoid unnecessary charges.
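As a sketch only, you can confirm the active project from the command line before continuing; the project ID below is a placeholder, not a real project:

```shell
# Sketch: verify which project is active and that it is accessible.
# "my-project-id" is an example value; substitute your own project ID.
gcloud config get-value project          # shows the currently active project
gcloud projects describe my-project-id   # confirms the project exists and you can access it
```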

Set up your resources

This section provides information on setting up a Cloud Storage bucket, a VM, and Cloud TPU resources for tutorials.

Create a Cloud Storage bucket

You need a Cloud Storage bucket to store the data you use to train your model and the training results. The ctpu up tool used in this tutorial sets up default permissions for the Cloud TPU service account. If you want finer-grained permissions, review the access level permissions.

The bucket location must be in the same region as your virtual machine (VM) and your TPU node. VMs and TPU nodes are located in specific zones, which are subdivisions within a region.

  1. Go to the Cloud Storage page on the Cloud Console.

    Go to the Cloud Storage page

  2. Create a new bucket, specifying the following options:

    • A unique name of your choosing.
    • Default storage class: Standard
    • Location: Specify a bucket location in the same region where you plan to create your TPU node. See TPU types and zones to learn where various TPU types are available.
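As a hedged alternative to the console, the same bucket can be created with gsutil mb; the bucket name and region below are example values:

```shell
# Sketch: create a Standard-class bucket in the same region
# where you plan to create the TPU node (example values).
gsutil mb -c standard -l europe-west4 gs://my-mnist-bucket
```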

Use the ctpu tool

This section demonstrates how to use the Cloud TPU provisioning tool (ctpu) to create and manage Cloud TPU project resources. The resources consist of a virtual machine (VM) and a Cloud TPU resource that share the same name. These resources must reside in the same region/zone as the bucket you just created.

You can also set up your VM and TPU resources using gcloud commands or through the Cloud Console. See the creating and deleting TPUs page to learn all the ways you can set up and manage your Compute Engine VM and Cloud TPU resources.
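For reference, a gcloud-based setup might look like the following sketch; the names, zone, accelerator type, and TensorFlow version are assumptions, so check the creating and deleting TPUs page for the exact flags:

```shell
# Sketch of the gcloud alternative (all values are examples).
gcloud compute instances create mnist-vm --zone=europe-west4-a   # the Compute Engine VM
gcloud compute tpus create mnist-tpu \
    --zone=europe-west4-a \
    --accelerator-type=v2-8 \
    --version=1.14   # TensorFlow version; match the version installed on the VM
```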

Run ctpu up to create resources

  1. Open a Cloud Shell window.

    Open Cloud Shell

  2. Run gcloud config set project <Your-Project> to use the project where you want to create Cloud TPU.

  3. Run ctpu up specifying the flags shown for either a Cloud TPU device or Pod slice. Refer to CTPU Reference for flag options and descriptions.

  4. Set up a Cloud TPU device:

    $ ctpu up 
  5. The configuration you specified appears. Enter y to approve or n to cancel.

  6. When the ctpu up command has finished executing, verify that your shell prompt has changed from username@project to username@tpuname. This change shows that you are now logged into your Compute Engine VM. If you connect to the VM later by running gcloud compute ssh instead, TPU_NAME is not set automatically, so export it yourself:

    $ gcloud compute ssh vm-name --zone=europe-west4-a
    (vm)$ export TPU_NAME=tpu-name

As you continue these instructions, run each command that begins with (vm)$ in your VM session window.
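Putting the steps above together, a typical provisioning session might look like this sketch (the project ID, zone, and name are placeholders):

```shell
# Sketch: provision a VM + TPU pair with the same name, then confirm the session.
gcloud config set project my-project-id
ctpu up --zone=europe-west4-a --name=my-tpu   # prompts for confirmation, then logs you in
# After ctpu up completes, the prompt becomes username@my-tpu and
# TPU_NAME is already exported inside the VM:
echo $TPU_NAME
```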

Get the data

The MNIST dataset is hosted on the MNIST database site. Follow the instructions below to download and convert the data to the required format, and to upload the converted data to Cloud Storage.

Download and convert the MNIST data

The convert_to_records.py script downloads the data and converts it to the TFRecord format expected by the example MNIST model.

Use the following commands to run the script and decompress the files:

(vm)$ python /usr/share/tensorflow/tensorflow/examples/how_tos/reading_data/convert_to_records.py --directory=./data
(vm)$ gunzip ./data/*.gz

Upload the data to Cloud Storage

Upload the data to your Cloud Storage bucket so that the TPU server can access it. When setting the variable in the commands below, replace YOUR-BUCKET-NAME with the name of your Cloud Storage bucket:

(vm)$ export STORAGE_BUCKET=gs://YOUR-BUCKET-NAME
(vm)$ gsutil cp -r ./data ${STORAGE_BUCKET}
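After the copy finishes, you can sanity-check the upload by listing the bucket contents; the exact file names depend on what convert_to_records.py produced:

```shell
# Sketch: confirm the converted files reached the bucket.
gsutil ls ${STORAGE_BUCKET}/data
# Expect entries such as train.tfrecords and validation.tfrecords
# (exact names depend on the conversion script).
```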

Run the MNIST TPU model

The MNIST TPU model is pre-installed on your Compute Engine VM in the following directory:

/usr/share/models/official/mnist/
The source code for the MNIST TPU model is also available on GitHub. You can run the model on a Cloud TPU. Alternatively, see how to run the model on a local machine.

Running the model on Cloud TPU

In the following steps, a prefix of (vm)$ means you should run the command on your Compute Engine VM:

  1. Run the MNIST model:

    (vm)$ python /usr/share/models/official/mnist/mnist_tpu.py \
      --tpu=$TPU_NAME \
      --data_dir=${STORAGE_BUCKET}/data \
      --model_dir=${STORAGE_BUCKET}/output \
      --use_tpu=True \
      --iterations=500 \
      --train_steps=2000
    • --tpu specifies the name of the Cloud TPU. Note that ctpu passes this name to the Compute Engine VM as an environment variable (TPU_NAME). However, if you lose your connection to the VM, you can reconnect by running ctpu up again. TPU_NAME will not be set if you connect to the VM by running gcloud compute ssh.
    • --data_dir specifies the Cloud Storage path for training input.
    • --model_dir specifies the directory where checkpoints and summaries are stored during model training. If the folder is missing, the program creates one. When using a Cloud TPU, the model_dir must be a Cloud Storage path (gs://...). You can reuse an existing folder to load current checkpoint data and to store additional checkpoints.
    • --iterations specifies the number of training steps to run on the TPU in each call before returning control to Python. If this number is too small (for example, less than 100), communication overhead can dominate and degrade performance.
    • --train_steps specifies the total number of steps (batches) for training to run.
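The two step flags are related by simple integer arithmetic: with --iterations=500 and 2000 total training steps (the checkpoint step shown in the sample log), the TPU loop returns control to Python train_steps / iterations times:

```shell
# Arithmetic sketch: how many times control returns to Python.
train_steps=2000   # --train_steps
iterations=500     # --iterations
echo $(( train_steps / iterations ))   # prints 4
```

This matches the four enqueue/dequeue cycles visible in the sample training log.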

Running the model on a local (non-TPU) machine

To run the model on a non-TPU machine, omit --tpu and set the following flag:

    --use_tpu=False
This causes the computation to land on a GPU if one is present. If no GPU is present, the computation falls back to the CPU.
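For example, a local CPU/GPU run might look like the following sketch, assuming the same pre-installed script and a local data directory:

```shell
# Sketch: run the same model locally, without a TPU.
python /usr/share/models/official/mnist/mnist_tpu.py \
  --data_dir=./data \
  --model_dir=./output \
  --use_tpu=False \
  --train_steps=2000
```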

What to expect

By default, the tf.estimator model reports loss value and step time in the following format:

    INFO:tensorflow:Calling model_fn.
    INFO:tensorflow:Create CheckpointSaverHook.
    INFO:tensorflow:Done calling model_fn.
    INFO:tensorflow:TPU job name tpu_worker
    INFO:tensorflow:Graph was finalized.
    INFO:tensorflow:Running local_init_op.
    INFO:tensorflow:Done running local_init_op.
    INFO:tensorflow:Init TPU system
    INFO:tensorflow:Start infeed thread controller
    INFO:tensorflow:Starting infeed thread controller.
    INFO:tensorflow:Start outfeed thread controller
    INFO:tensorflow:Starting outfeed thread controller.
    INFO:tensorflow:Enqueue next (500) batch(es) of data to infeed.
    INFO:tensorflow:Dequeue next (500) batch(es) of data from outfeed.
    INFO:tensorflow:Saving checkpoints for 500 into gs://ctpu-mnist-test/output/model.ckpt.
    INFO:tensorflow:loss = 0.08896458, step = 0
    INFO:tensorflow:loss = 0.08896458, step = 0
    INFO:tensorflow:Enqueue next (500) batch(es) of data to infeed.
    INFO:tensorflow:Dequeue next (500) batch(es) of data from outfeed.
    INFO:tensorflow:Enqueue next (500) batch(es) of data to infeed.
    INFO:tensorflow:Dequeue next (500) batch(es) of data from outfeed.
    INFO:tensorflow:global_step/sec: 242.829
    INFO:tensorflow:examples/sec: 248715
    INFO:tensorflow:Enqueue next (500) batch(es) of data to infeed.
    INFO:tensorflow:Dequeue next (500) batch(es) of data from outfeed.
    INFO:tensorflow:Saving checkpoints for 2000 into gs://ctpu-mnist-test/output/model.ckpt.
    INFO:tensorflow:Stop infeed thread controller
    INFO:tensorflow:Shutting down InfeedController thread.
    INFO:tensorflow:InfeedController received shutdown signal, stopping.
    INFO:tensorflow:Infeed thread finished, shutting down.
    INFO:tensorflow:Stop output thread controller
    INFO:tensorflow:Shutting down OutfeedController thread.
    INFO:tensorflow:OutfeedController received shutdown signal, stopping.
    INFO:tensorflow:Outfeed thread finished, shutting down.
    INFO:tensorflow:Shutdown TPU system.
    INFO:tensorflow:Loss for final step: 0.044236258.
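The two throughput lines are related: examples/sec is roughly global_step/sec multiplied by the training batch size. Assuming a batch size of 1024 (an assumption, since the flag is not shown above), the sample numbers are consistent:

```shell
# Arithmetic sketch: examples/sec ≈ global_step/sec × batch size.
steps_per_sec=242.829   # from the log line global_step/sec
batch_size=1024         # assumed training batch size for mnist_tpu.py
awk "BEGIN { printf \"%.0f\n\", $steps_per_sec * $batch_size }"   # prints 248657
```

This is close to the logged 248715; the two counters are sampled at slightly different moments, so they do not match exactly.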

Clean up

To avoid incurring charges to your GCP account for the resources used in this topic:

  1. Disconnect from the Compute Engine VM:

    (vm)$ exit

    Your prompt should now be username@project, showing you are in the Cloud Shell.

  2. In your Cloud Shell, run ctpu delete with the --zone flag you used when you set up the Cloud TPU to delete your Compute Engine VM and your Cloud TPU:

    $ ctpu delete [optional: --zone]
  3. Run ctpu status to verify that no instances remain allocated; TPU usage continues to incur charges until deletion completes, which might take several minutes. A response like the one below indicates there are no more allocated instances:

    $ ctpu status --zone=europe-west4-a
    2018/04/28 16:16:23 WARNING: Setting zone to "--zone=europe-west4-a"
    No instances currently exist.
        Compute Engine VM:     --
        Cloud TPU:             --
  4. Run gsutil as shown, replacing bucket-name with the name of the Cloud Storage bucket you created for this tutorial:

    $ gsutil rm -r gs://bucket-name

What's next

This quickstart provided you with a brief introduction to working with Cloud TPU. At this point, you have the foundation for the following:

  • Learning more about Cloud TPU
  • Setting up Cloud TPU for your own applications

Learning more

MNIST on Keras Try out using Cloud TPU by running the MNIST model in a colab environment.
Product Overview Review the key features and benefits of Cloud TPU.
Cloud Tensor Processing Units (TPUs) Read more about Cloud TPU, its capabilities, and its advantages.
Pricing Review the pricing information for Cloud TPU.

Setting up

Choosing a TPU service Understand different options for working with Cloud TPU, such as Compute Engine, Google Kubernetes Engine, or AI Platform.
TPU types and zones Learn what TPU types are available in each zone.
TPU versions Understand the different TPU versions and learn how to select the right one for your application.