Regional and multi-region configurations

This page describes instance configurations and the two types that Cloud Spanner offers: regional configurations and multi-region configurations. It also describes the differences and tradeoffs between the two.

Instance configurations

An instance configuration defines the geographic placement and replication of the databases in that instance. When you create an instance, you must configure it as either regional (that is, all the resources are contained within a single Google Cloud region) or multi-region (that is, the resources span more than one region). You make this choice by selecting an instance configuration, which determines where your data is stored for that instance.

You can change the instance configuration by following the instructions at Steps to move an instance. For general information about moving an instance, see Moving an instance to a different configuration.
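
For example, you select an instance configuration at the time you create an instance. The following is a minimal sketch using the google-cloud-spanner Python client; the project ID, instance ID, and display name are placeholders:

    from google.cloud import spanner

    # Placeholder project and instance identifiers.
    client = spanner.Client(project="my-project")

    # The instance configuration determines where the instance's data is stored,
    # for example a regional configuration such as regional-us-central1 or a
    # multi-region configuration such as nam3.
    config_name = f"{client.project_name}/instanceConfigs/regional-us-central1"

    instance = client.instance(
        "my-instance",
        configuration_name=config_name,
        display_name="My Spanner instance",
        node_count=1,
    )

    # Creating an instance is a long-running operation.
    operation = instance.create()
    operation.result(timeout=300)  # Wait until the instance is ready.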

Regional configurations

Google Cloud services are available in locations across North America, South America, Europe, Asia, and Australia. If your users and services are located within a single region, choose a regional instance configuration for the lowest-latency reads and writes.

Available configurations

Cloud Spanner offers the following regional instance configurations:

Region Name | Region Description
Americas
northamerica-northeast1 | Montréal (Low CO2)
northamerica-northeast2 | Toronto
southamerica-east1 | São Paulo (Low CO2)
us-central1 | Iowa (Low CO2)
us-east1 | South Carolina
us-east4 | Northern Virginia
us-west1 | Oregon (Low CO2)
us-west2 | Los Angeles
us-west3 | Salt Lake City
us-west4 | Las Vegas
Europe
europe-central2 | Warsaw
europe-north1 | Finland (Low CO2)
europe-west1 | Belgium (Low CO2)
europe-west2 | London
europe-west3 | Frankfurt
europe-west4 | Netherlands
europe-west6 | Zürich (Low CO2)
Asia Pacific
asia-east1 | Taiwan
asia-east2 | Hong Kong
asia-northeast1 | Tokyo
asia-northeast2 | Osaka
asia-northeast3 | Seoul
asia-south1 | Mumbai
asia-south2 | Delhi
asia-southeast1 | Singapore
asia-southeast2 | Jakarta
australia-southeast1 | Sydney
australia-southeast2 | Melbourne
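
You can also list the instance configurations available to your project programmatically. The following is a minimal sketch with the google-cloud-spanner Python client; the project ID is a placeholder:

    from google.cloud import spanner

    client = spanner.Client(project="my-project")

    # Lists both regional configurations (for example, regional-us-west1) and
    # multi-region configurations (for example, nam3).
    for config in client.list_instance_configs():
        print(config.name, config.display_name)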

For any regional configuration, Cloud Spanner maintains 3 read-write replicas, each within a different Google Cloud zone in that region. Each read-write replica contains a full copy of your operational database that is able to serve read-write and read-only requests. Cloud Spanner uses replicas in different zones so that if a single-zone failure occurs, your database remains available.

Replication

Regional configurations contain exactly 3 read-write replicas. As described in Replication, every Cloud Spanner mutation requires a write quorum that's composed of a majority of voting replicas. Write quorums are formed from two out of the three replicas in regional configurations.

Best practices

For optimal performance, follow these best practices:

  • Design a schema that prevents hotspots and other performance issues.
  • Place critical compute resources within the same region as your Cloud Spanner instance.
  • Provision enough compute capacity to keep high priority total CPU utilization under 65%.

Performance

When you follow the best practices described above, each 1000 processing units (1 node) of compute capacity can provide up to 10,000 queries per second (QPS) of reads or 2,000 QPS of writes (writing single rows at 1 KB of data per row).
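
One way to verify that high-priority CPU utilization stays under the recommended threshold is to read the corresponding metric from Cloud Monitoring. The following is a minimal sketch using the google-cloud-monitoring Python client; the project and instance IDs are placeholders, and the priority label value is an assumption you should verify in Metrics Explorer:

    import time

    from google.cloud import monitoring_v3

    project_id = "my-project"  # Placeholder project ID.
    client = monitoring_v3.MetricServiceClient()

    # Look at the last hour of data points.
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
    )

    # High-priority CPU utilization for a single instance; the "high" label
    # value is an assumption to confirm against your project's metrics.
    metric_filter = (
        'metric.type = "spanner.googleapis.com/instance/cpu/utilization_by_priority" '
        'AND resource.labels.instance_id = "my-instance" '
        'AND metric.labels.priority = "high"'
    )

    results = client.list_time_series(
        request={
            "name": f"projects/{project_id}",
            "filter": metric_filter,
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )

    for series in results:
        for point in series.points:
            # Utilization is reported as a fraction; 0.65 corresponds to 65%.
            print(point.interval.end_time, point.value.double_value)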

Multi-region configurations

As described above, Cloud Spanner regional configurations replicate data between multiple zones within a single region. However, if your application often needs to read data from multiple geographic locations (for example, to serve data to users in both North America and Asia), or if your writes originate from a different location than your reads (for example, if you have large write workloads in North America and large read workloads in Europe), then a regional configuration might not be optimal.

Multi-region configurations allow you to replicate the database's data not just in multiple zones, but in multiple zones across multiple regions, as defined by the instance configuration. These additional replicas enable you to read data with low latency from multiple locations close to or within the regions in the configuration. There are tradeoffs, though: in a multi-region configuration, the quorum (read-write) replicas are spread across more than one region, so they incur additional network latency when they communicate with each other to vote on writes. In other words, multi-region configurations enable your application to achieve faster reads in more places at the cost of a small increase in write latency.

Available configurations

One Continent

Name | Location | Read-Write Regions | Read-Only Regions | Witness Region
asia1 | Asia | Tokyo: asia-northeast1 (L, 2R); Osaka: asia-northeast2 (2R) | None | Seoul: asia-northeast3
eur3 | Europe | Belgium: europe-west1 (L, 2R); Netherlands: europe-west4 (2R) | None | Finland: europe-north1
eur5 | Europe | London: europe-west2 (L, 2R); Belgium: europe-west1 (2R) | None | Netherlands: europe-west4
eur6 | Europe | Netherlands: europe-west4 (L, 2R); Frankfurt: europe-west3 (2R) | None | Zurich: europe-west6
nam3 | North America | Northern Virginia: us-east4 (L, 2R); South Carolina: us-east1 (2R) | None | Iowa: us-central1
nam6 | North America | Iowa: us-central1 (L, 2R); South Carolina: us-east1 (2R) | Oregon: us-west1 (1R); Los Angeles: us-west2 (1R) | Oklahoma: us-central2
nam7 | North America | Iowa: us-central1 (L, 2R); Northern Virginia: us-east4 (2R) | None | Oklahoma: us-central2
nam8 | North America | Los Angeles: us-west2 (L, 2R); Oregon: us-west1 (2R) | None | Salt Lake City: us-west3
nam9 | North America | Northern Virginia: us-east4 (L, 2R); Iowa: us-central1 (2R) | Oregon: us-west1 (2R) | South Carolina: us-east1
nam10 | North America | Iowa: us-central1 (L, 2R); Salt Lake City: us-west3 (2R) | None | Oklahoma: us-central2
nam11 | North America | Iowa: us-central1 (L, 2R); South Carolina: us-east1 (2R) | None | Oklahoma: us-central2
nam12 | North America | Iowa: us-central1 (L, 2R); Northern Virginia: us-east4 (2R) | Oregon: us-west1 (2R) | Oklahoma: us-central2
  • L: default leader region
  • 1R: 1 replica in the region
  • 2R: 2 replicas in the region

Three Continent

Name | Locations | Read-Write Regions | Read-Only Regions | Witness Region
nam-eur-asia1 | North America, Europe, Asia | Iowa: us-central1 (L, 2R); Oklahoma: us-central2 (2R) | Belgium: europe-west1 (2R); Taiwan: asia-east1 (2R) | South Carolina: us-east1
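
You can inspect a configuration's replica topology programmatically, for example to see which regions hold read-write, read-only, or witness replicas. The following is a minimal sketch using the instance admin client from the google-cloud-spanner package; the project ID is a placeholder, and the field names follow the instance admin API's ReplicaInfo message:

    from google.cloud import spanner_admin_instance_v1

    client = spanner_admin_instance_v1.InstanceAdminClient()

    # nam3 is one of the multi-region configurations listed above.
    config = client.get_instance_config(
        name="projects/my-project/instanceConfigs/nam3"
    )

    for replica in config.replicas:
        # replica.type_ is READ_WRITE, READ_ONLY, or WITNESS, and
        # default_leader_location marks the default leader region.
        print(replica.location, replica.type_, replica.default_leader_location)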

Benefits

Multi-region instances offer these primary benefits:

  • 99.999% availability, which is greater than the 99.99% availability that Cloud Spanner regional configurations provide.

  • Data distribution: Cloud Spanner automatically replicates your data between regions with strong consistency guarantees. This allows your data to be stored where it's used, which can reduce latency and improve the user experience.

  • External consistency: Even though Cloud Spanner replicates across geographically distant locations, you can still use Cloud Spanner as if it were a database running on a single machine. Transactions are guaranteed to be serializable, and the order of transactions within the database is the same as the order in which clients observe the transactions to have been committed. External consistency is a stronger guarantee than "strong consistency," which is offered by some other products. Read more about this property in TrueTime and external consistency.

Replication

Each multi-region configuration contains two regions that are designated as read-write regions, each of which contains two read-write replicas. One of these read-write regions is designated as the default leader region, which means that it contains your database's leader replicas. Cloud Spanner also places a witness replica in a third region called a witness region.

Each time a client issues a mutation to your database, a write quorum forms, consisting of one of the replicas from the default leader region and any two of the additional four voting replicas. (The quorum could be formed by replicas from two or three of the regions that make up your configuration, depending on which other replicas participate in the vote.) In addition to these 5 voting replicas, the configuration can also contain read-only replicas for serving low-latency reads. The regions that contain read-only replicas are called read-only regions.

In general, the voting regions in a multi-region configuration are placed geographically close—less than a thousand miles apart—to form a low-latency quorum that enables fast writes. However, the regions are still far enough apart—typically, at least a few hundred miles—to avoid coordinated failures.

The next sections describe each of these region types in more detail and provide guidance for how to place your write and read workloads accordingly.

Region types

Read-write regions

As described above, each multi-region configuration contains two read-write regions, each of which contains two read-write replicas. One of these read-write regions is designated the default leader region. For each split, a leader is chosen from among the replicas in the default leader region. If a leader replica fails, the other replica in the default leader region automatically assumes leadership. Leaders also run health checks on themselves and can preemptively give up leadership if they detect that they are unhealthy. Under normal conditions, when replicas in the default leader region are available, the default leader region contains the leaders and is therefore where writes are first processed.

The second read-write region contains the additional replicas that are eligible to be leaders. In the unlikely event of the loss of all replicas in the default leader region, new leader replicas are chosen from the second read-write region.

You can configure the leader region of a database by following the instructions at Changing the leader region of a DB. For general information about configuring the leader region, see Configuring the default leader region.

Read-only regions

Read-only regions contain read-only replicas, which can serve low-latency reads to clients that are outside of the read-write regions.

Witness regions

A witness region contains a witness replica, which is used to vote on writes. Witnesses become important in the rare event that the read-write regions become unavailable.

Best practices

For optimal performance, follow these best practices:

  • Design a schema that prevents hotspots and other performance issues.
  • For optimal write latency, place compute resources for write-heavy workloads within or close to the default leader region.
  • For optimal read performance outside of the default leader region, use stale reads with a staleness of at least 15 seconds (see the sketch after this list).
  • To avoid single-region dependency for your workloads, place critical compute resources in at least two regions. A good option is to place them next to the two different read-write regions so that a single-region outage does not impact all of your application.
  • Provision enough compute capacity to keep high priority total CPU utilization under 45% in each region.
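
The following is a minimal sketch of such a stale read with the google-cloud-spanner Python client; the instance, database, table, and column names are placeholders:

    import datetime

    from google.cloud import spanner

    client = spanner.Client(project="my-project")
    instance = client.instance("my-instance")
    database = instance.database("my-database")

    # Read data as of 15 seconds ago. Stale reads issued outside the default
    # leader region can be served by a nearby replica without waiting on the leader.
    staleness = datetime.timedelta(seconds=15)
    with database.snapshot(exact_staleness=staleness) as snapshot:
        results = snapshot.execute_sql("SELECT SingerId, FirstName FROM Singers")
        for row in results:
            print(row)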

Performance

Each Cloud Spanner configuration has slightly different performance characteristics based on the replication topology.

When you follow the best practices described above, each 1000 processing units (1 node) of compute capacity can provide the following approximate performance:

Multi-Region Configuration | Approximate Peak Reads (QPS per region) | Approximate Peak Writes (QPS total)
asia1 | 7,000 | 1,800
eur3 | 7,000 | 1,800
eur5 | 7,000 | 1,800
eur6 | 7,000 | 1,800
nam3 | 7,000 | 1,800
nam6 | 7,000 in us-central1 and us-east1; 3,500 in us-west1 and us-west2 [1] | 1,800
nam7 | 7,000 | 1,800
nam8 | 7,000 | 1,800
nam9 | 7,000 | 1,800
nam10 | 7,000 | 1,800
nam11 | 7,000 | 1,800
nam12 | 7,000 | 1,800
nam-eur-asia1 | 7,000 | 1,000
  • [1]: us-west1 and us-west2 provide only half of the QPS performance because they contain one replica per region instead of two.

Note that read guidance is given per region (because reads can be served from anywhere), while write guidance is for the entire configuration. Write guidance assumes that you're writing single rows at 1 KB of data per row.

Moving an instance to a different configuration

You can move your instance from any instance configuration to any other instance configuration, including between regional and multi-regional configurations.

Moving the instance does not cause downtime, and Cloud Spanner continues to provide the usual transaction guarantees during a move, including strong consistency.

Steps to move an instance

  1. Go to the Spanner Instances page in the Cloud Console.

  2. Click the name of the instance you want to move.

    The Cloud Console displays the instance's Overview page.

  3. To schedule moving to a new instance configuration, click contact Google and complete the Cloud Spanner Live Instance Migration Request (Preview) form.

    • You must have the spanner.instances.update permission to request moving to a new instance configuration.
    • After the form is submitted, Google contacts you to schedule the instance configuration change.

Limitations to moving an instance

  • You can only move one instance in a project at a time.
  • Moving an instance does not change the instance's name, ID, or project ID.
  • You should not make changes to the instance during the migration. This includes changing the instance node count, changing database schemas, creating or dropping databases, or creating or deleting backups.
  • Cloud Spanner backups are specific to an instance configuration and are not included when moving an instance. After moving an instance to a new instance configuration, all backups in the old instance configuration are deleted.
  • You cannot currently change the instance configuration of an instance that has any CMEK-enabled databases.

Performance considerations when moving an instance

When an instance is being moved, the instance experiences higher read/write latencies and a higher transaction abort rate.

Once initiated, most migrations should complete within twelve hours. Some migrations may take longer depending on the size and other characteristics of the instance. Google will contact you with a migration time estimate specific to the instance to be migrated.

All databases in the instance are migrated at the same time. Since client applications may make several sequential calls to Cloud Spanner, the increase in application tail latency may be higher than the increase in Cloud Spanner latency.

After migrating an instance, the performance of the instance will vary depending on the details of the instance configuration. For example, multi-region configurations generally have higher write latency and lower read latency than regional configurations.

Configuring the default leader region

You might want to move your database's default leader region closer to the clients that connect to it, which reduces application latency.

You can change the leader region for any Cloud Spanner instance that uses a multi-region configuration. For instructions on changing the location of the leader region, see Changing the leader region of a DB. The only regions eligible to become the default leader region for your database are the read-write regions in your multi-region configuration.

The leader region handles all database writes, so if most of your traffic comes from one geographic region, you might want to move the default leader region there to reduce latency. Updating the default leader region is inexpensive and does not involve any data moves. The new value takes a few minutes to take effect.

Changing the default leader region is a schema change, which uses a long-running operation. If needed, you can Get the status of the long-running operation.
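
Concretely, the default leader is set with an ALTER DATABASE statement issued as a schema change. The following is a minimal sketch with the google-cloud-spanner Python client; the instance, database, and region names are placeholders, and the region must be one of the configuration's read-write regions:

    from google.cloud import spanner

    client = spanner.Client(project="my-project")
    instance = client.instance("my-instance")
    database = instance.database("my-database")

    # Setting the default leader is a schema change and returns a
    # long-running operation.
    operation = database.update_ddl(
        ["ALTER DATABASE `my-database` SET OPTIONS (default_leader = 'us-east4')"]
    )
    operation.result(timeout=300)  # Wait for the schema change to complete.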

Tradeoffs: regional versus multi-region configurations

Configuration | Availability | Latency | Cost | Data Locality
Regional | 99.99% | Lower write latencies within region. | Lower cost; see pricing. | Enables geographic data governance.
Multi-region | 99.999% | Lower read latencies from multiple geographic regions. | Higher cost; see pricing. | Distributes data across multiple regions within the configuration.

What's next