TPU regions and zones

Overview

The main differences between TPU types are price, performance, memory capacity, and zonal availability.

Google Cloud uses regions, subdivided into zones, to define the geographic location of physical compute resources. For example, the us-central1 region is located near the geographic center of the United States. When you create a TPU VM, you specify the zone in which you want to create it. See the Compute Engine Global, regional, and zonal resources document for more information about regional and zonal resources.
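
For example, a minimal TPU VM creation command with the gcloud CLI looks roughly like the following; the TPU name, zone, accelerator type, and software version shown here are placeholders to replace with your own values:

  # Create a TPU VM in a specific zone (all values here are example placeholders).
  gcloud compute tpus tpu-vm create my-tpu \
    --zone=us-central1-b \
    --accelerator-type=v2-8 \
    --version=TPU_SOFTWARE_VERSION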

You can create TPU configurations in the zones shown in the following table.

US

TPU type (v2)   TPU v2 cores   Available zones
v2-8            8              us-central1-b, us-central1-c, us-central1-f
v2-32           32             us-central1-a
v2-128          128            us-central1-a
v2-256          256            us-central1-a
v2-512          512            us-central1-a

TPU type (v3)   TPU v3 cores   Available zones
v3-8            8              us-central1-a, us-central1-b, us-central1-f

TPU type (v4)           TPU v4 chips           Available zones
All v4 configurations   Varies by slice size   us-central2-b

TPU type (v5e)                 TPU v5e chips          Available zones
All v5litepod configurations   Varies by slice size   us-central1-a, us-east5-a, us-east5-b, us-east5-c,
                                                      us-south1-a, us-west1-c, us-west4-a, us-west4-b

TPU type (v5p)           TPU v5p chips          Available zones
All v5p configurations   Varies by slice size   us-east5-a

TPU type (v6e)           TPU v6e chips          Available zones
All v6e configurations   Varies by slice size   us-east1-d, us-east5-b

Europe

TPU type (v2)   TPU v2 cores   Available zones
v2-8            8              europe-west4-a
v2-32           32             europe-west4-a
v2-128          128            europe-west4-a
v2-256          256            europe-west4-a
v2-512          512            europe-west4-a

TPU type (v3)   TPU v3 cores   Available zones
v3-8            8              europe-west4-a
v3-32           32             europe-west4-a
v3-64           64             europe-west4-a
v3-128          128            europe-west4-a
v3-256          256            europe-west4-a
v3-512          512            europe-west4-a
v3-1024         1024           europe-west4-a
v3-2048         2048           europe-west4-a

TPU type (v5e)                 TPU v5e chips          Available zones
v5lite-1                       1                      europe-west4-b
v5lite-4                       4                      europe-west4-b
v5lite-8                       8                      europe-west4-b
All v5litepod configurations   Varies by slice size   europe-west1-b, europe-west4-a, europe-west4-b

TPU type (v5p)           TPU v5p chips          Available zones
All v5p configurations   Varies by slice size   europe-west4-b

TPU type (v6e)           TPU v6e chips          Available zones
All v6e configurations   Varies by slice size   europe-west4-a

Asia Pacific

TPU type (v2)   TPU v2 cores   Available zones
v2-8            8              asia-east1-c

TPU type (v5e)                 TPU v5e chips          Available zones
All v5litepod configurations   Varies by slice size   asia-southeast1-b

TPU type (v6e)           TPU v6e chips          Available zones
All v6e configurations   Varies by slice size   asia-northeast1-b

TPU types with higher numbers of chips or cores are available only in limited quantities. TPU types with lower chip or core counts are more likely to be available.
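
If you have the gcloud CLI installed, you can check current availability directly; this sketch assumes the locations and accelerator-types subcommands of gcloud compute tpus that are present in recent gcloud releases:

  # List the zones that support Cloud TPU for your project.
  gcloud compute tpus locations list

  # List the TPU accelerator types offered in a specific zone.
  gcloud compute tpus accelerator-types list --zone=us-central1-b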

Calculating price and performance tradeoffs

To decide which TPU type to use, run experiments using a Cloud TPU tutorial to train a model that is similar to your application.

Run the tutorial for 5-10% of the number of steps you will use for the full training run on both a v2-8 and a v3-8 TPU type. The result tells you how long it takes to run that number of steps for that model on each TPU type.

Because performance on TPU types scales linearly, if you know how long it takes to run a task on a v2-8 or v3-8 TPU type, you can estimate how much you can reduce task time by running your model on a larger TPU type with more chips or cores.

For example, if a v2-8 TPU type takes 60 minutes to run 10,000 steps, a v2-32 TPU type should take approximately 15 minutes to perform the same task.
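
As a back-of-the-envelope sketch, that linear-scaling estimate reduces to a few lines of shell arithmetic; the 60-minute baseline below is the hypothetical measurement from the example above:

  # Estimate the training time on a larger slice from a measured baseline,
  # assuming performance scales linearly with the number of cores.
  BASELINE_MINUTES=60   # measured: 10,000 steps on a v2-8
  BASELINE_CORES=8      # cores in a v2-8
  TARGET_CORES=32       # cores in a v2-32
  echo $(( BASELINE_MINUTES * BASELINE_CORES / TARGET_CORES ))   # prints 15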

When you know the approximate training time for your model on a few different TPU types, you can weigh the VM/TPU cost against training time to help you decide your best price and performance tradeoff.

To determine the difference in cost between the different TPU types for Cloud TPU and the associated Compute Engine VM, see the TPU pricing page.
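
Once you have an hourly rate from the pricing page and an estimated training time for each TPU type, the comparison is a simple multiplication; the rates below are made-up placeholders, not actual prices:

  # Rough cost comparison; replace the placeholder rates with values from the pricing page.
  V2_8_RATE_PER_HOUR=4.50      # placeholder rate for a v2-8, not a real price
  V2_32_RATE_PER_HOUR=18.00    # placeholder rate for a v2-32, not a real price
  # 60 minutes = 1.00 hour on the v2-8; 15 minutes = 0.25 hour on the v2-32.
  echo "v2-8 cost:  $(echo "1.00 * $V2_8_RATE_PER_HOUR"  | bc)"
  echo "v2-32 cost: $(echo "0.25 * $V2_32_RATE_PER_HOUR" | bc)"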

Specifying the TPU type

Regardless of which framework you are using, you specify a v2 or v3 TPU type with the accelerator-type parameter when you launch a TPU. For a TPU v4 or later, you can specify the type and size using either AcceleratorType or AcceleratorConfig. For more information, see TPU versions. Example commands are shown in Managing TPUs.
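
With the gcloud CLI, the two styles look roughly like the following; the TPU names, zones, and software versions are placeholders, and the --type/--topology form is the AcceleratorConfig style described in the TPU versions page:

  # AcceleratorType style: the type string encodes the version and size (for example, v3-8).
  gcloud compute tpus tpu-vm create my-tpu \
    --zone=europe-west4-a \
    --accelerator-type=v3-8 \
    --version=TPU_SOFTWARE_VERSION

  # AcceleratorConfig style (TPU v4 and later): specify the version and topology separately.
  gcloud compute tpus tpu-vm create my-v4-tpu \
    --zone=us-central2-b \
    --type=v4 \
    --topology=2x2x2 \
    --version=TPU_SOFTWARE_VERSION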

What's next

  • To see pricing for TPUs in each region, see the Pricing page.
  • Learn more about TPU architecture in the System Architecture page.
  • See When to use TPUs to learn about the types of models that are well suited to Cloud TPU.