TPU regions and zones
Overview
The main differences between TPU types are price, performance, memory capacity, and zonal availability.
Google Cloud Platform uses regions, subdivided into zones, to define the geographic location of physical compute resources. For example, the us-central1 region is located near the geographic center of the United States. When you create a TPU node, you specify the zone in which you want to create it. See the Compute Engine Global, regional, and zonal resources document for more information about regional and zonal resources.
You can create v2, v3, or v4 TPU configurations in the zones shown in the following tables.
US
| TPU type (v2) | TPU v2 cores | Available zones |
|---|---|---|
| v2-8 | 8 | us-central1-b, us-central1-c, us-central1-f |
| v2-32 | 32 | us-central1-a |
| v2-128 | 128 | us-central1-a |
| v2-256 | 256 | us-central1-a |
| v2-512 | 512 | us-central1-a |

| TPU type (v3) | TPU v3 cores | Available zones |
|---|---|---|
| v3-8 | 8 | us-central1-a, us-central1-b, us-central1-f |

| TPU type (v4) | TPU v4 chips | Available zones |
|---|---|---|
| All v4 configurations | varies by slice size | us-central2-b |
Europe
| TPU type (v2) | TPU v2 cores | Available zones |
|---|---|---|
| v2-8 | 8 | europe-west4-a |
| v2-32 | 32 | europe-west4-a |
| v2-128 | 128 | europe-west4-a |
| v2-256 | 256 | europe-west4-a |
| v2-512 | 512 | europe-west4-a |

| TPU type (v3) | TPU v3 cores | Available zones |
|---|---|---|
| v3-8 | 8 | europe-west4-a |
| v3-32 | 32 | europe-west4-a |
| v3-64 | 64 | europe-west4-a |
| v3-128 | 128 | europe-west4-a |
| v3-256 | 256 | europe-west4-a |
| v3-512 | 512 | europe-west4-a |
| v3-1024 | 1024 | europe-west4-a |
| v3-2048 | 2048 | europe-west4-a |
Asia Pacific
| TPU type (v2) | TPU v2 cores | Available zones |
|---|---|---|
| v2-8 | 8 | asia-east1-c |
TPU types with higher numbers of chips or cores are available only in limited quantities. TPU types with lower chip or core counts are more likely to be available.
Calculating price and performance tradeoffs
To decide which TPU type to use, run experiments using a Cloud TPU tutorial to train a model that is similar to your application.
Run the tutorial for 5-10% of the number of steps you will use for the full training run on a v2-8 or v3-8 TPU type. The result tells you how long it takes to run that number of steps for that model on each TPU type.
Because performance on TPU types scales linearly, if you know how long it takes to run a task on a v2-8 or v3-8 TPU type, you can estimate how much you can reduce task time by running your model on a larger TPU type with more chips or cores.
For example, if a v2-8 TPU type takes 60 minutes to run 10,000 steps, a v2-32 node should take approximately 15 minutes to perform the same task.
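The linear-scaling estimate above is simple arithmetic; a quick sketch, using the hypothetical numbers from the example (10,000 steps in 60 minutes on a v2-8):

```shell
# Linear-scaling estimate: task time shrinks in proportion to core count.
# Baseline numbers are the hypothetical ones from the example above.
baseline_minutes=60
baseline_cores=8     # v2-8
target_cores=32      # v2-32
estimated_minutes=$(( baseline_minutes * baseline_cores / target_cores ))
echo "Estimated v2-32 time: ${estimated_minutes} minutes"
```

In practice, scaling is rarely perfectly linear (input pipelines and batch-size limits can intervene), so treat the result as a starting estimate to refine with a short test run.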
When you know the approximate training time for your model on a few different TPU types, you can weigh the VM/TPU cost against training time to help you decide your best price/performance tradeoff.
To determine the difference in cost between the different TPU types for Cloud TPU and the associated Compute Engine VM, see the TPU pricing page.
Specifying the TPU type
Regardless of which framework you are using (TensorFlow, PyTorch, or JAX), you specify a v2 or v3 TPU type with the accelerator-type parameter when you launch a TPU. The exact command depends on whether you are using TPU VMs or TPU Nodes. Example commands are shown in Managing TPUs.
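For illustration, a gcloud command for the TPU VM case might look like the following; the TPU name, zone, and runtime version here are placeholders you would replace with values for your own project:

```shell
# Hypothetical example: create a TPU VM with a v3-8 accelerator type.
# "my-tpu", the zone, and the runtime version are placeholders.
gcloud compute tpus tpu-vm create my-tpu \
  --zone=europe-west4-a \
  --accelerator-type=v3-8 \
  --version=tpu-vm-base
```

The accelerator-type value combines the TPU version and the core count (for example, v2-32 or v3-8), and must be available in the zone you specify, per the tables above.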
What's next
- To see pricing for TPUs in each region, see the Pricing page.
- Learn more about TPU architecture in the System Architecture page.
- See When to use TPUs to learn about the types of models that are well suited to Cloud TPU.