Google Cloud Platform
Compute Engine

Regions and Zones

Certain Compute Engine resources live in regions or zones. Instances and persistent disks always live in a zone, while resources such as static external IP addresses live in regions. You choose the region or zone in which to create your resources, giving you control over where your data is stored and used. For example, when you create an instance or disk, you are prompted to select the zone where that resource will live.

Resources that are specific to a zone or a region can only be used by other resources in the same zone or region. For example, disks and instances are both zonal resources. To attach a disk to an instance, both resources must be in the same zone. Similarly, if you want to assign a static IP address to an instance, the instance must be in the same region as the static IP.
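
For example, here is a minimal gcloud compute sketch that creates a disk and an instance in the same zone and then attaches the disk; the names example-disk and example-instance and the zone are placeholders:

$ gcloud compute disks create example-disk --size 500GB --zone us-central1-a
$ gcloud compute instances create example-instance --zone us-central1-a
$ gcloud compute instances attach-disk example-instance --disk example-disk --zone us-central1-a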

Google Cloud Platform resources are hosted in multiple locations worldwide. These locations are composed of regions, and of zones within those regions. Putting resources in different zones in a region provides isolation from many types of infrastructure, hardware, and software failures. Putting resources in different regions provides an even higher degree of failure independence.

The following diagram provides some examples of how regions and zones relate to each other. Notice that each region is independent of other regions and each zone is isolated from other zones in the same region.

[Diagram: zones available in each region]


Identifying regions and zones

Each region in Compute Engine contains one or more zones. To determine which region a zone belongs to, look at the zone's fully qualified name, which has two parts: the first part is the region, and the second part identifies the zone within that region:

  • Region

    Regions are collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region. To deploy fault-tolerant applications with high availability, Google recommends deploying them across multiple zones in a region. This helps protect against unexpected failures of components, up to and including the loss of a single zone.

    Choose a region that makes sense for your scenario. For example, if you only have customers in the US, or if you have specific needs that require your data to live in the US, it makes sense to store your resources in a zone in the us-central1 region.

  • Zone

    A zone is an isolated location within a region. The fully qualified name for a zone is made up of <region>-<zone>. For example, the fully qualified name for zone a in region us-central1 is us-central1-a.

    Depending on how widely you want to distribute your resources, you may choose to create instances across multiple zones in multiple regions.
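
If you want to confirm which region a given zone belongs to, one option is to describe the zone and check the region field in the output (the zone name here is just an example):

$ gcloud compute zones describe us-central1-a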

Available regions & zones

The following is a list of available regions and zones.

Region          Location                         Available zones  Features
Eastern US      Berkeley County, South Carolina  us-east1-b
Central US      Council Bluffs, Iowa             us-central1-a
Western Europe  St. Ghislain, Belgium            europe-west1-b
East Asia       Changhua County, Taiwan          asia-east1-a

Each zone supports one of the Ivy Bridge, Sandy Bridge, or Haswell processor platforms. When you create an instance in a zone, the instance uses the processor platform supported in that zone. For example, if you create an instance in the us-central1-a zone, it uses a Sandy Bridge processor.

To view a list of available zones, you can always run:

$ gcloud compute zones list
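
If you only care about one region, current gcloud releases also accept the standard list filter flag (a sketch; the region name is an example):

$ gcloud compute zones list --filter="region:us-central1"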

To view a list of available regions using gcloud compute, use the regions list command. The command lists all available regions and provides information such as quotas and the status of the region itself. For example:

$ gcloud compute regions list
NAME            CPUS            DISKS_GB     ADDRESSES  RESERVED_ADDRESSES  STATUS
asia-east1      0.00/24.00      0/10240      0/23       0/7                 UP
europe-west1    0.00/24.00      0/10240      0/23       0/7                 UP
us-central1     0.00/24.00      0/10240      0/23       0/7                 UP

To get information about a single region, use the gcloud compute regions describe command:

$ gcloud compute regions describe us-central1
creationTimestamp: '2013-09-06T17:54:12.193-07:00'
description: us-central1
id: '5778272079688511892'
kind: compute#region
name: us-central1
quotas:
- limit: 24.0
  metric: CPUS
  usage: 5.0
- limit: 5120.0
  metric: DISKS_TOTAL_GB
  usage: 650.0
- limit: 7.0
  metric: STATIC_ADDRESSES
  usage: 4.0
- limit: 23.0
  metric: IN_USE_ADDRESSES
  usage: 5.0
- limit: 1024.0
  metric: SSD_TOTAL_GB
  usage: 0.0
status: UP
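
If you only need the quota information, one option in current gcloud releases is to project it out with the standard --format flag (a sketch):

$ gcloud compute regions describe us-central1 --format="yaml(quotas)"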

Transparent maintenance

Google regularly maintains its infrastructure by patching systems with the latest software, performing routine tests and preventative maintenance, and generally ensuring that Google infrastructure is as fast and efficient as Google knows how to make it.

By default, all instances are configured so that these maintenance events are transparent to your applications and workloads. Google uses a combination of datacenter innovations, operational best practices, and live migration technology to move running virtual machine instances out of the way of maintenance that is being performed. Your instance continues to run within the same zone with no action on your part.

By default, all virtual machines are set to live migrate, but you can also set your virtual machines to terminate and reboot. The two options differ in the following ways:

  • Live migrate

    Compute Engine automatically migrates your running instance. The migration process impacts guest performance to some degree, but your instance remains online throughout the migration. The exact impact and duration depend on many factors, but most applications and workloads are not expected to notice.

  • Terminate and reboot

    Compute Engine automatically signals your instance to shut down, waits a short time for it to shut down cleanly, and then restarts it away from the maintenance event.

For more information on how to set the options above for your instances, see Setting Instance Scheduling Options.
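
As a minimal sketch of switching between the two behaviors with gcloud compute (the instance name and zone are placeholders):

$ gcloud compute instances set-scheduling example-instance --zone us-central1-a --maintenance-policy TERMINATE
$ gcloud compute instances set-scheduling example-instance --zone us-central1-a --maintenance-policy MIGRATE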

Zone deprecation

Compute Engine will sometimes decommission an existing zone for a ground-up infrastructure refresh (power, cooling, network fabric, servers, etc.). Infrastructure refreshes are rare and zones typically run for three to five years between refreshes. We are also working to make these refreshes non-disruptive to our users.

On occasion when an infrastructure refresh is necessary, Compute Engine will deprecate the zone and notify users well in advance of when it will go offline so you have ample time to move your virtual machine instances and workloads. For instructions on how to move an instance between zones, see Moving an instance between zones.
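
For simple cases, gcloud provides an instances move command; as a sketch (the instance name and zones are placeholders):

$ gcloud compute instances move example-instance --zone us-central1-a --destination-zone us-central1-b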


Quotas

Certain resources, such as static IPs, images, firewall rules, and networks, have defined project-wide quota limits and per-region quota limits. When you create these resources, the new resource counts toward your total project-wide quota or your per-region quota, if applicable. If any of the affected quota limits are exceeded, you won't be able to add more resources of the same type in that project or region.

To see a comprehensive list of quotas that apply to your project, visit the Quotas page in the Google Developers Console.
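
From the command line, one way to see project-wide quota usage is to describe the project itself (the quotas appear in the output):

$ gcloud compute project-info describe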

For example, if your project-wide target pool quota is 50 and you create 25 target pools in example-region and 25 target pools in example-region2, you reach your project-wide quota and won't be able to create more target pools in any region of your project until you free up quota. Similarly, if you have a per-region quota of 7 reserved IP addresses, you can only reserve up to 7 IP addresses in a single region. After you hit that limit, you will need to either reserve IP addresses in a different region or release some IP addresses.
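
As a sketch of the reserved IP address case (the address name and region are placeholders), you reserve a static address in a region and release it by deleting it:

$ gcloud compute addresses create example-address --region us-central1
$ gcloud compute addresses delete example-address --region us-central1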


Tips for selecting zones

When selecting zones, here are some things to keep in mind:

  • Communication within and across regions incurs different costs.

    Communication within a region is generally cheaper and faster than communication across different regions.

  • Design important systems with redundancy across multiple zones.

    Your instances might experience an unexpected failure at some point. To mitigate the effects of such events, duplicate important systems across multiple zones.

    For example, if you host virtual machine instances in both europe-west1-b and europe-west1-c and europe-west1-b fails unexpectedly, your instances in europe-west1-c will still be available. However, if you host all your instances in europe-west1-b, you will not be able to access any of them while europe-west1-b is offline. For more tips on how to design systems for availability, see Designing Robust Systems.
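
As a minimal sketch of that kind of redundancy (the instance names are placeholders), you might create one instance in each zone:

$ gcloud compute instances create example-instance-b --zone europe-west1-b
$ gcloud compute instances create example-instance-c --zone europe-west1-c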