Regional, dual-region, and multi-region configurations

This page describes the different types of instance configurations available in Spanner, and the differences and trade-offs between them.

Instance configurations

A Spanner instance configuration defines the geographic placement and replication of the databases in that instance. When you create an instance, you choose an instance configuration, which must be regional, dual-region, or multi-region. This choice determines where your data is stored for that instance.
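For example, the following minimal sketch creates an instance with a chosen instance configuration using the Python client library; the project, instance, and display names are placeholders, and regional-us-central1 is just one example configuration ID:

```python
from google.cloud import spanner

# Placeholder project and instance IDs.
client = spanner.Client(project="my-project")

# The instance configuration determines where the instance's data is stored,
# for example a regional configuration such as "regional-us-central1" or a
# multi-region configuration such as "nam3".
config_name = f"projects/{client.project}/instanceConfigs/regional-us-central1"

instance = client.instance(
    "my-instance",
    configuration_name=config_name,
    display_name="Example regional instance",
    node_count=1,
)

# create() starts a long-running operation; result() blocks until it completes.
operation = instance.create()
operation.result(timeout=300)
```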

Regions are independent geographic areas that consist of zones. Zones and regions are logical abstractions of underlying physical resources. A region typically consists of three or more zones housed in three or more physical data centers. The Mexico, Osaka, and Montréal regions currently have three zones housed in one or two physical data centers and are in the process of expanding to at least three physical data centers. When you architect your solutions on Google Cloud, consider the guidance in Cloud locations, the Google Cloud Platform SLAs, and the appropriate Google Cloud product documentation.

Instance configurations with fixed regions and replication topologies are referred to as base instance configurations. You can also create custom instance configurations and add optional read-only replicas (available in the Enterprise and Enterprise Plus editions). You can't change the replication topology of base instance configurations. For more information, see Read-only replicas.

You can move your instance from any instance configuration to any other regional, dual-region, or multi-region instance configuration (for example, from us-central1 to nam3). You can also create a custom instance configuration with additional replicas and then move your instance to it. For example, if your instance is in us-central1 and you want to add a read-only replica in us-west1, create a custom instance configuration that uses us-central1 as the base configuration and adds us-west1 as a read-only replica, and then move your instance to that custom instance configuration.
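As an illustration, the following sketch creates such a custom instance configuration through the instance admin API in the Python client, using us-central1 as the base and us-west1 as the optional read-only replica. The project ID and custom configuration ID are placeholders, and the exact request shape (for example, whether the new replica goes in replicas or a separate optional_replicas field, and the type_ field name) is an assumption; check the client library reference for the current definitions:

```python
from google.cloud import spanner
from google.cloud import spanner_admin_instance_v1 as instance_admin

client = spanner.Client(project="my-project")  # placeholder project ID
api = client.instance_admin_api

# Start from the base configuration and append the optional read-only replica.
base = api.get_instance_config(
    name=f"projects/{client.project}/instanceConfigs/regional-us-central1"
)
read_only_replica = instance_admin.ReplicaInfo(
    location="us-west1",
    type_=instance_admin.ReplicaInfo.ReplicaType.READ_ONLY,
)

# Custom instance configuration IDs must start with "custom-".
operation = api.create_instance_config(
    parent=f"projects/{client.project}",
    instance_config_id="custom-us-central1-uswest1-ro",
    instance_config=instance_admin.InstanceConfig(
        name=f"projects/{client.project}/instanceConfigs/custom-us-central1-uswest1-ro",
        display_name="us-central1 plus us-west1 read-only replica",
        base_config=base.name,
        replicas=list(base.replicas) + [read_only_replica],
    ),
)
operation.result(timeout=300)
```

After the custom configuration exists, you would move the instance to it as described in Move an instance.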

Regional configurations

Google Cloud services are available in locations across North America, South America, Europe, Asia, and Australia. If your users and services are located within a single region, choose a regional instance configuration for the lowest-latency reads and writes.

For any base regional configuration, Spanner maintains three read-write replicas, each within a different Google Cloud zone in that region. Each read-write replica contains a full copy of your operational database that is able to serve read-write and read-only requests. Spanner uses replicas in different zones so that if a single-zone failure occurs, your database remains available.
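If you want to confirm the replica topology of a base configuration before you choose it, you can read the configuration back through the instance admin API. A short sketch (the project ID is a placeholder):

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # placeholder project ID

config = client.instance_admin_api.get_instance_config(
    name=f"projects/{client.project}/instanceConfigs/regional-us-central1"
)

# A base regional configuration lists three read-write replicas in the region.
for replica in config.replicas:
    print(replica)
```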

Available configurations

Spanner offers the following base regional instance configurations:

Base configuration name, region, and optional read-only replica regions:

Americas
  • northamerica-northeast1: Montréal (Low CO2)
  • northamerica-northeast2: Toronto (Low CO2)
  • northamerica-south1: Querétaro
  • southamerica-east1: São Paulo (Low CO2)
  • southamerica-west1: Santiago (Low CO2)
  • us-central1: Iowa (Low CO2); optional read-only regions: asia-northeast1 (1-OR), asia-south1 (1-OR), europe-west2 (1-OR), europe-west9 (1-OR), us-west3 (1-OR)
  • us-east1: South Carolina; optional read-only regions: us-central1 (1-OR), us-west1 (1-OR)
  • us-east4: Northern Virginia
  • us-east5: Columbus
  • us-south1: Dallas (Low CO2)
  • us-west1: Oregon (Low CO2)
  • us-west2: Los Angeles
  • us-west3: Salt Lake City
  • us-west4: Las Vegas
Europe
  • europe-central2: Warsaw
  • europe-north1: Finland (Low CO2)
  • europe-southwest1: Madrid (Low CO2)
  • europe-west1: Belgium (Low CO2); optional read-only regions: us-central1 (1-OR), us-west1 (1-OR)
  • europe-west2: London (Low CO2)
  • europe-west3: Frankfurt (Low CO2)
  • europe-west4: Netherlands (Low CO2)
  • europe-west6: Zürich (Low CO2)
  • europe-west8: Milan
  • europe-west9: Paris (Low CO2)
  • europe-west10: Berlin (Low CO2)
  • europe-west12: Turin
Asia Pacific
  • asia-east1: Taiwan
  • asia-east2: Hong Kong
  • asia-northeast1: Tokyo
  • asia-northeast2: Osaka
  • asia-northeast3: Seoul
  • asia-south1: Mumbai
  • asia-south2: Delhi
  • asia-southeast1: Singapore
  • asia-southeast2: Jakarta
  • australia-southeast1: Sydney
  • australia-southeast2: Melbourne
Middle East
  • me-central1: Doha
  • me-central2: Dammam
  • me-west1: Tel Aviv
Africa
  • africa-south1: Johannesburg

Replication

Base regional configurations contain three read-write replicas. Every Spanner mutation requires a write quorum that's composed of a majority of voting replicas. Write quorums are formed from two out of the three replicas in regional configurations. For more information about leader regions and voting replicas, see Replication.

You can create a custom regional instance configuration and add optional read-only replicas. Read-only replicas can help scale reads and support low-latency stale reads. These read-only replicas don't take part in write quorums, and they don't affect the 99.99% availability SLA for regional instances. You can add any location listed in the Optional Region column as an optional read-only replica. If you don't see your chosen read-only replica location, you can request a new optional read-only replica region. For more information, see Read-only replicas.

Performance best practices for regional configurations

For optimal performance, follow these best practices:

  • Design a schema that prevents hotspots and other performance issues.
  • Place critical compute resources within the same region as your Spanner instance.
  • Provision enough compute capacity to keep high-priority total CPU utilization below 65% (see the monitoring sketch after this list).
  • For information about the amount of throughput per Spanner node, see Performance for regional configurations.
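To keep an eye on that CPU threshold, you can read the high-priority CPU utilization from Cloud Monitoring. A sketch, assuming the spanner.googleapis.com/instance/cpu/utilization_by_priority metric and its priority label; the project and instance IDs are placeholders:

```python
import time

from google.cloud import monitoring_v3

project_id = "my-project"    # placeholder
instance_id = "my-instance"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# High-priority CPU utilization for one instance over the last hour.
results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": (
            'metric.type="spanner.googleapis.com/instance/cpu/utilization_by_priority" '
            'AND metric.labels.priority="high" '
            f'AND resource.labels.instance_id="{instance_id}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        # Investigate or add compute capacity if utilization approaches the
        # recommended maximum.
        print(point.interval.end_time, point.value.double_value)
```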

Dual-region configurations

Dual-region configurations let you replicate the database's data in multiple zones across two regions in a single country, as defined by the instance configuration.

Dual-region configurations do the following:

  • Serve reads from two regions in a single country.
  • Provide higher availability and SLAs than regional configurations.
  • Meet data residency requirements.

Spanner offers dual-region configurations in Australia, Germany, India, and Japan.

For information about the amount of throughput per Spanner node, see Performance for dual-region configurations.

Available configurations

Spanner offers the following base dual-region instance configurations:

Base configuration name, resource location, and regions:

  • dual-region-australia1, resource location au (Australia): Sydney australia-southeast1 (L, 2RW+1W); Melbourne australia-southeast2 (2RW+1W)
  • dual-region-germany1, resource location de (Germany): Berlin europe-west10 (L, 2RW+1W); Frankfurt europe-west3 (2RW+1W)
  • dual-region-india1, resource location in (India): Mumbai asia-south1 (L, 2RW+1W); Delhi asia-south2 (2RW+1W)
  • dual-region-japan1, resource location jp (Japan): Tokyo asia-northeast1 (L, 2RW+1W); Osaka asia-northeast2 (2RW+1W)

Benefits

Dual-region instances offer these primary benefits:

  • 99.999% availability: Provides 99.999% availability across two regions in the same country, which is higher than the 99.99% availability that regional configurations provide.

  • Data distribution: Automatically replicates your data between the two regions with strong consistency guarantees.

  • Data residency: Meets data residency requirements in the countries listed in the dual-region Available configurations section.

Replication

A dual-region configuration contains six replicas, three in each region: two read-write replicas and one witness replica per region. One of the regions is designated as the default leader region (listed in the previous table); you can change the leader region of a database. When both regions are healthy, the quorum is established across all six replicas, and a minimum of two replicas in each region is required to form a quorum and commit a transaction.

Failover and failback

After you create a dual-region configuration, you can view the Dual-region quorum availability metric on the System insights dashboard. This metric is only available for dual-region configurations. It shows the health of three quorums:

  • The dual-region quorum, which uses the following codenames:
    • au for Australia
    • de for Germany
    • in for India
    • asia1 for Japan
  • The single region quorum in each region (for example, asia-south1 and asia-south2)

The metric has a Quorum availability drop-down that shows which regions are in healthy or disruption mode.

The Dual-region quorum availability metric helps you decide when to perform a self-managed failover during a regional failure. Self-managed failover usually completes within one minute. To fail over and fail back manually, see Change dual-region quorum. Spanner also supports automatic, Google-managed failovers, which might take up to 45 minutes from the time the failure is first detected.

Consider the following when making failover and failback decisions:

  • If all three quorums are healthy, then no action is needed.

  • If one of the regions shows disruption, then there is probably a regional service disruption. This might reduce the availability of the databases running in your dual-region quorum. Writes might also fail because a quorum can't be established and transactions eventually time out. Using the System insights dashboard, observe the error rates and latency in your database. If error rates or latency increase, we recommend that you fail over, which means changing the quorum from dual-region mode to the region that is still healthy. After the disrupted region is healthy again, you must fail back, changing the quorum from single-region mode back to dual-region. Google automatically performs failover and failback when it detects a regional outage. You can also fail over manually if you detect a disruption; if you do, remember that you must also fail back manually.

  • If the dual-region quorum shows disruption even though both single regions are healthy, then there is a network partitioning issue: the two regions can no longer communicate with each other, so each appears healthy even though the overall system is not. In this scenario, we recommend that you fail over to the default leader region. After the network partition is resolved and the dual-region quorum returns to healthy, you must manually fail back.

Dual-region configurations provide a zero Recovery Point Objective (RPO): no data is lost during a regional outage or a network partition.

To check the mode (single or dual) of your dual-region quorum, see Check dual-region quorum.
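For illustration only, here is a rough sketch of a self-managed failover and failback through the database admin API's ChangeQuorum method as surfaced in the Python client. The message names used here (ChangeQuorumRequest, QuorumType, SingleRegionQuorum, DualRegionQuorum) are assumptions, so verify them against the client library reference, and see Change dual-region quorum for the supported procedure:

```python
from google.cloud import spanner_admin_database_v1 as database_admin

client = database_admin.DatabaseAdminClient()

# Placeholder database name; the serving location below assumes the Japan
# dual-region configuration with Tokyo (asia-northeast1) still healthy.
db_name = "projects/my-project/instances/my-instance/databases/my-db"

# Failover: serve the quorum from the single healthy region.
failover = client.change_quorum(
    request=database_admin.ChangeQuorumRequest(
        name=db_name,
        quorum_type=database_admin.QuorumType(
            single_region=database_admin.SingleRegionQuorum(
                serving_location="asia-northeast1"
            )
        ),
    )
)
failover.result()  # assumes the call returns a long-running operation

# Failback: return to the dual-region quorum once the disruption is resolved.
failback = client.change_quorum(
    request=database_admin.ChangeQuorumRequest(
        name=db_name,
        quorum_type=database_admin.QuorumType(
            dual_region=database_admin.DualRegionQuorum()
        ),
    )
)
failback.result()  # assumes the call returns a long-running operation
```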

Failover and failback best practices

Failover and failback best practices include:

  • Don't fail over to a single region if no regional failure or disruption has occurred. Failing over to a single region increases the possibility of overall system unavailability if that single region fails.
  • Be careful when selecting the region to fail over to. Choosing the wrong region results in database unavailability that can't be recovered until the disrupted region is back online. To verify, you can use a script (for example, a bash script) to check the health of the target region before performing the failover.
  • If the unhealthy region is the default leader region, change the default leader region to the failover region after performing the failover. After you confirm that both regions are healthy again, fail back, and then change the leader region back to your original leader region.
  • Remember to manually fail back if you performed a manual failover.

Limitations

You can't create a custom dual-region instance configuration. You can't add read-only replicas to a dual-region instance configuration.

Multi-region configurations

Spanner regional configurations replicate data between multiple zones within a single region. However, a regional configuration might not be optimal if:

  • Your application often needs to read data from multiple geographic locations (for example, to serve data to users in both North America and Asia).
  • Your writes originate from a different location than your reads (for example, if you have large write workloads in North America and large read workloads in Europe).

Multi-region configurations can:

  • Serve writes from multiple regions.
  • Maintain availability in the case of regional failures.
  • Provide higher availability and SLAs than regional configurations.

Multi-region configurations let you replicate the database's data in multiple zones across multiple regions, as defined by the instance configuration. These additional replicas let you read data with lower latency from multiple locations close to or within the regions in the configuration.

There are trade-offs, though. In a multi-region configuration, the quorum (read-write) replicas are spread across more than one region, so you might notice additional network latency when these replicas communicate with each other to form a write quorum. Reads don't require a quorum. The result is that your application achieves faster reads in more places at the cost of a small increase in write latency. For more information, see The role of replicas in writes and reads.

Available configurations

Spanner offers the following base multi-region instance configurations:

One continent

Each entry lists the base configuration name, resource location, read-write regions, read-only regions (if any), witness region, and optional read-only replica regions (if any):

  • asia1, resource location asia1: read-write Tokyo asia-northeast1 (L, 2R) and Osaka asia-northeast2 (2R); witness Seoul asia-northeast3; optional read-only us-west1 (1-OR) and us-east5 (1-OR)
  • asia2 (A), resource location asia2: read-write Mumbai asia-south1 (L, 2R) and Delhi asia-south2 (2R); read-only Singapore asia-southeast1 (1R)
  • eur3, resource location eur3: read-write Belgium europe-west1 (L, 2R) and Netherlands europe-west4 (2R); witness Finland europe-north1; optional read-only us-central1 (1-OR) and us-east4 (1-OR)
  • eur5, resource location eur5: read-write London europe-west2 (L, 2R) and Belgium europe-west1 (2R); witness Netherlands europe-west4; optional read-only us-central1 (1-OR) and us-east1 (1-OR)
  • eur6, resource location eur6: read-write Netherlands europe-west4 (L, 2R) and Frankfurt europe-west3 (2R); witness Zurich europe-west6; optional read-only us-east1 (2-OR)
  • nam3, resource location nam3: read-write Northern Virginia us-east4 (L, 2R) and South Carolina us-east1 (2R); witness Iowa us-central1; optional read-only us-west2 (1-OR), asia-southeast1 (1-OR), asia-southeast2 (1-OR), europe-west1 (1-OR), and europe-west2 (1-OR)
  • nam6, resource location nam6: read-write Iowa us-central1 (L, 2R) and South Carolina us-east1 (2R); read-only Oregon us-west1 (1R) and Los Angeles us-west2 (1R); witness Oklahoma us-central2
  • nam7, resource location nam7: read-write Iowa us-central1 (L, 2R) and Northern Virginia us-east4 (2R); witness Oklahoma us-central2; optional read-only us-east1 (2-OR), us-south1 (1-OR), and europe-west1 (2-OR)
  • nam8, resource location nam8: read-write Los Angeles us-west2 (L, 2R) and Oregon us-west1 (2R); witness Salt Lake City us-west3; optional read-only asia-southeast1 (2-OR) and europe-west2 (2-OR)
  • nam9, resource location nam9: read-write Northern Virginia us-east4 (L, 2R) and Iowa us-central1 (2R); read-only Oregon us-west1 (2R); witness South Carolina us-east1
  • nam10, resource location nam10: read-write Iowa us-central1 (L, 2R) and Salt Lake City us-west3 (2R); witness Oklahoma us-central2
  • nam11, resource location nam11: read-write Iowa us-central1 (L, 2R) and South Carolina us-east1 (2R); witness Oklahoma us-central2; optional read-only us-west1 (1-OR)
  • nam12, resource location nam12: read-write Iowa us-central1 (L, 2R) and Northern Virginia us-east4 (2R); read-only Oregon us-west1 (2R); witness Oklahoma us-central2
  • nam13, resource location nam13: read-write Oklahoma us-central2 (L, 2R) and Iowa us-central1 (2R); witness Salt Lake City us-west3
  • nam14, resource location nam14: read-write Northern Virginia us-east4 (L, 2R) and Montréal northamerica-northeast1 (2R); witness South Carolina us-east1
  • nam15, resource location nam15: read-write Dallas us-south1 (L, 2R) and Northern Virginia us-east4 (2R); witness Iowa us-central1
  • nam16, resource location us (United States): read-write Iowa us-central1 (L, 2R) and Northern Virginia us-east4 (2R); witness Columbus us-east5; optional read-only us-west2 (2-OR)

Three continents

Each entry lists the base configuration name, resource location, read-write regions, read-only regions, witness region, and optional read-only replica regions (if any):

  • nam-eur-asia1, resource location nam-eur-asia1: read-write Iowa us-central1 (L, 2R) and Oklahoma us-central2 (2R); read-only Belgium europe-west1 (2R) and Taiwan asia-east1 (2R); witness South Carolina us-east1; optional read-only us-west2 (1-OR)
  • nam-eur-asia3, resource location nam-eur-asia3: read-write Iowa us-central1 (L, 2R) and South Carolina us-east1 (2R); read-only Belgium europe-west1 (1R), Netherlands europe-west4 (1R), and Taiwan asia-east1 (2R); witness Oklahoma us-central2

  • L: default leader region. For more information, see Modify the leader region of a database.

  • 1R: one replica in the region.

  • 2R: two replicas in the region.

  • 2RW+1W: two read-write replicas and one witness replica in the region.

  • 1-OR: one optional replica. You can create a custom instance configuration and add one optional read-only replica. For more information, see Create a custom instance configuration.

  • 2-OR: up to two optional replicas. You can create a custom instance configuration and add one or two optional read-only replicas. We recommend adding two (where possible) to help maintain low read latency. For more information, see Create a custom instance configuration.

  • A: This instance configuration is restricted with an allow-list. To get access, reach out to your Technical Account Manager.

The resource location of a multi-region instance configuration determines the disaster recovery zone guarantee for the configuration and defines where data is stored at rest.

Benefits

Multi-region instances offer these primary benefits:

  • 99.999% availability, which is greater than the 99.99% availability that Spanner regional configurations provide.

  • Data distribution: Spanner automatically replicates your data between regions with strong consistency guarantees. This allows your data to be stored where it's used, which can reduce latency and improve the user experience.

  • External consistency: Even though Spanner replicates across geographically distant locations, you can still use Spanner as if it were a database running on a single machine. Transactions are guaranteed to be serializable, and the order of transactions within the database is the same as the order in which clients observe the transactions to have been committed (see the sketch after this list). External consistency is a stronger guarantee than the "strong consistency" offered by some other products. Read more about this property in TrueTime and external consistency.
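As a small sketch of what external consistency means in practice, a strong read that starts after a transaction commits always observes that transaction, regardless of which replica serves the read. The project, instance, database, and Singers table below are hypothetical placeholders:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # placeholders
database = client.instance("my-instance").database("my-db")

# Commit a write in a read-write transaction (hypothetical Singers table).
def insert_singer(transaction):
    transaction.execute_update(
        "INSERT INTO Singers (SingerId, FirstName) VALUES (1, 'Marc')"
    )

database.run_in_transaction(insert_singer)

# Any strong read that starts after the commit returns sees the new row,
# in every region of the configuration.
with database.snapshot() as snapshot:
    rows = list(snapshot.execute_sql("SELECT SingerId, FirstName FROM Singers"))
    print(rows)
```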

Replication

Each base multi-region configuration contains two regions that are designated as read-write regions, each of which contains two read-write replicas. One of these read-write regions is designated as the default leader region, which means that it contains your database's leader replicas. Spanner also places a witness replica in a third region called a witness region.

Each time a client issues a mutation to your database, a write quorum forms, consisting of one of the replicas from the default leader region and any two of the additional four voting replicas. (The quorum could be formed by replicas from two or three of the regions that make up your configuration, depending on which other replicas participate in the vote.) In addition to these five voting replicas, some base multi-region configurations contain read-only replicas for serving low-latency reads. The regions that contain read-only replicas are called read-only regions.

In general, the voting regions in a multi-region configuration are placed geographically close, less than a thousand miles apart, to form a low-latency quorum that enables fast writes. However, the regions are still far enough apart, typically at least a few hundred miles, to avoid coordinated failures. In addition, if your client application is in a non-leader region, Spanner uses leader-aware routing to route read-write transactions dynamically and reduce latency in your database. For more information, see Leader-aware routing.

You can create a custom multi-region instance configuration with optional read-only replicas. Custom read-only replicas aren't included in write quorums. You can add any location listed in the Optional Region column as an optional read-only replica. If you don't see your chosen read-only replica location, you can request a new optional read-only replica region. For more information, see Read-only replicas.

Performance best practices for multi-region configurations

For optimal performance, follow these best practices:

  • Design a schema that prevents hotspots and other performance issues.
  • For optimal write latency, place compute resources for write-heavy workloads within or close to the default leader region.
  • For optimal read performance outside of the default leader region, use stale reads with a staleness of at least 15 seconds (see the sketch after this list).
  • To avoid a single-region dependency for your workloads, place critical compute resources in at least two regions. A good option is to place them near the two read-write regions so that a single-region outage doesn't impact your entire application.
  • Provision enough compute capacity to keep high-priority total CPU utilization below 45% in each region.
  • For information about the amount of throughput per Spanner node, see Performance for multi-region configurations.
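For the staleness recommendation in particular, here is a minimal sketch of a 15-second stale read with the Python client; the project, instance, database, and Singers table are hypothetical placeholders:

```python
import datetime

from google.cloud import spanner

client = spanner.Client(project="my-project")  # placeholders
database = client.instance("my-instance").database("my-db")

# A read-only snapshot at a timestamp 15 seconds in the past can usually be
# served by a nearby replica without a round trip to the leader region.
staleness = datetime.timedelta(seconds=15)
with database.snapshot(exact_staleness=staleness) as snapshot:
    results = snapshot.execute_sql("SELECT SingerId, FirstName FROM Singers")
    for row in results:
        print(row)
```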

Move an instance

You can move your Spanner instance from any instance configuration to any other instance configuration, including between regional and multi-region configurations. Moving your instance does not cause downtime, and Spanner continues to provide the usual transaction guarantees, including strong consistency, during the move.

To learn more about moving an instance, see Move an instance.
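As a rough sketch, the move is initiated through the instance admin API's MoveInstance method; the request shape here (in particular the target_config field) is an assumption, so check the client library reference and the Move an instance guide before relying on it:

```python
from google.cloud import spanner_admin_instance_v1 as instance_admin

client = instance_admin.InstanceAdminClient()
project = "my-project"  # placeholder

# Request a move from the instance's current configuration to nam3.
operation = client.move_instance(
    request=instance_admin.MoveInstanceRequest(
        name=f"projects/{project}/instances/my-instance",
        target_config=f"projects/{project}/instanceConfigs/nam3",
    )
)

# The move runs as a long-running operation; the instance remains available
# while it is in progress.
operation.result()
```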

Configure the default leader region

You can change the leader region for any Spanner instance that uses a dual-region or multi-region configuration, for example to move your database's default leader region closer to connecting clients and reduce application latency. For instructions, see Change the leader region of a database. The only regions eligible to become the default leader region for your database are the read-write regions in your dual-region or multi-region configuration.

The leader region handles all database writes, so if most of your write traffic comes from one geographic region, you can move the leader region there to reduce latency. Updating the default leader region is inexpensive and doesn't involve moving any data. The new value takes a few minutes to take effect.

Changing the default leader region is a schema change, which runs as a long-running operation. If needed, you can get the status of the long-running operation.
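For example, setting the default leader through a DDL statement with the Python client; the project, instance, database, and region names are placeholders:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # placeholders
database = client.instance("my-instance").database("my-db")

# The default leader must be one of the configuration's read-write regions.
operation = database.update_ddl(
    ["ALTER DATABASE `my-db` SET OPTIONS (default_leader = 'us-east4')"]
)
operation.result()  # schema changes run as long-running operations
print("Default leader updated.")
```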

Trade-offs: regional versus dual-region versus multi-region configurations

Each entry lists availability, latency, cost, and data locality:

  • Regional: 99.99% availability; lower write latency within the region; lower cost (see pricing); enables geographic data governance.
  • Dual-region: 99.999% availability; lower read latency from two geographic regions, with a small increase in write latency; higher cost (see pricing); distributes data across two regions in a single country.
  • Multi-region: 99.999% availability; lower read latency from multiple geographic regions, with a small increase in write latency; higher cost (see pricing); distributes data across multiple regions within the configuration.

What's next