This page describes instance configurations: the two types that Cloud Spanner offers, regional and multi-region, and the differences and trade-offs between them.
Instance configurations
An instance configuration defines the geographic placement and replication of the databases in that instance. When you create an instance, you must configure it as either regional (all the resources are contained within a single Google Cloud region) or multi-region (the resources span more than one region). You make this choice by selecting an instance configuration, which determines where your data is stored for that instance.
Instance configurations with fixed regions and replication topologies are referred to as base instance configurations. You can create custom instance configurations that add optional read-only replicas to a base configuration, but you cannot change the replication topology of a base configuration itself. For more information, see Read-only replicas.
You can also move your instance from any instance configuration to any other regional or multi-region instance configuration.
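As a sketch of how the instance configuration is chosen at creation time, the following gcloud commands create one regional and one multi-region instance. The instance names and descriptions are placeholder values.

```shell
# Create an instance in the regional configuration us-central1.
# "example-instance" is a placeholder name.
gcloud spanner instances create example-instance \
  --config=regional-us-central1 \
  --description="Example regional instance" \
  --nodes=1

# A multi-region instance differs only in the configuration you select:
gcloud spanner instances create example-mr-instance \
  --config=nam3 \
  --description="Example multi-region instance" \
  --nodes=1
```

Regional configurations use the `regional-` prefix followed by the region name; multi-region configurations use short names such as `nam3` or `eur3`.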
Regional configurations
Google Cloud services are available in locations across North America, South America, Europe, Asia, and Australia. If your users and services are located within a single region, choose a regional instance configuration for the lowest-latency reads and writes.
For any base regional configuration, Spanner maintains three read-write replicas, each within a different Google Cloud zone in that region. Each read-write replica contains a full copy of your operational database that is able to serve read-write and read-only requests. Spanner uses replicas in different zones so that if a single-zone failure occurs, your database remains available.
Available configurations
Spanner offers the following base regional instance configurations:
| Base Configuration Name | Region Description | Optional Region |
|---|---|---|
| **Americas** | | |
| northamerica-northeast1 | Montréal | |
| northamerica-northeast2 | Toronto | |
| southamerica-east1 | São Paulo | |
| southamerica-west1 | Santiago | |
| us-central1 | Iowa | Read-only: asia-northeast1 1R<br>asia-south1 1R<br>europe-west2 1R<br>europe-west9 1R |
| us-east1 | South Carolina | Read-only: us-central1 1R<br>us-west1 1R |
| us-east4 | Northern Virginia | |
| us-east5 | Columbus | |
| us-south1 | Dallas | |
| us-west1 | Oregon | |
| us-west2 | Los Angeles | |
| us-west3 | Salt Lake City | |
| us-west4 | Las Vegas | |
| **Europe** | | |
| europe-central2 | Warsaw | |
| europe-north1 | Finland | |
| europe-southwest1 | Madrid | |
| europe-west1 | Belgium | |
| europe-west2 | London | |
| europe-west3 | Frankfurt | |
| europe-west4 | Netherlands | |
| europe-west6 | Zürich | |
| europe-west8 | Milan | |
| europe-west9 | Paris | |
| europe-west12 | Turin | |
| **Asia Pacific** | | |
| asia-east1 | Taiwan | |
| asia-east2 | Hong Kong | |
| asia-northeast1 | Tokyo | |
| asia-northeast2 | Osaka | |
| asia-northeast3 | Seoul | |
| asia-south1 | Mumbai | |
| asia-south2 | Delhi | |
| asia-southeast1 | Singapore | |
| asia-southeast2 | Jakarta | |
| australia-southeast1 | Sydney | |
| australia-southeast2 | Melbourne | |
| **Middle East** | | |
| me-central1 | Doha | |
| me-west1 | Tel Aviv | |
Replication
Base regional configurations contain three read-write replicas. Every Spanner mutation requires a write quorum that's composed of a majority of voting replicas. Write quorums are formed from two out of the three replicas in regional configurations. For more information about leader regions and voting replicas, see Replication.
You can create a custom regional instance configuration and add optional read-only replicas, which can help scale reads and support low-latency stale reads. These read-only replicas don't take part in write quorums, and they don't affect the 99.99% availability SLA for regional instances. For more information, see Read-only replicas.
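As an illustrative sketch, a custom instance configuration can be created by cloning a base configuration and adding a read-only replica. The configuration name, display name, and replica location below are placeholder choices.

```shell
# Create a custom instance configuration cloned from the base
# regional configuration us-central1, adding one read-only replica
# in us-east1. Custom configuration names must use the "custom-" prefix.
gcloud spanner instance-configs create custom-us-central1-east1-ro \
  --clone-config=regional-us-central1 \
  --add-replicas=location=us-east1,type=READ_ONLY \
  --display-name="us-central1 plus read-only us-east1"
```

Instances created with this custom configuration keep the base configuration's three voting replicas; the added replica only serves reads.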
Performance best practices for regional configurations
For optimal performance, follow these best practices:
- Design a schema that prevents hotspots and other performance issues.
- Place critical compute resources within the same region as your Spanner instance.
- Provision enough compute capacity to keep high priority total CPU utilization under 65%.
- For the amount of throughput per Spanner node, see performance for regional configurations.
Multi-region configurations
Spanner regional configurations replicate data between multiple zones within a single region. However, a regional configuration might not be optimal in the following situations:
- If your application often needs to read data from multiple geographic locations (for example, to serve data to users in both North America and Asia)
- If your writes originate from a different location than your reads (for example, if you have large write workloads in North America and large read workloads in Europe)
Multi-region configurations provide other benefits as well:
- They maintain availability during a regional failure.
- They provide higher availability and SLAs than regional configurations.
- They can serve writes from multiple regions.
Multi-region configurations replicate your database's data not just in multiple zones, but in multiple zones across multiple regions, as defined by the instance configuration. These additional replicas let you read data with low latency from multiple locations close to or within the configuration's regions. There is a trade-off: because the quorum (read-write) replicas are spread across more than one region, they incur additional network latency when they communicate with each other to vote on writes. In short, multi-region configurations let your application achieve faster reads in more places at the cost of a small increase in write latency.
Available configurations
Spanner offers the following base multi-region instance configurations:
One continent
| Base Configuration Name | Location | Read-Write Regions | Read-Only Regions | Witness Region | Optional Region |
|---|---|---|---|---|---|
| asia1 | Asia | Tokyo: asia-northeast1 L,2R<br>Osaka: asia-northeast2 2R | None | Seoul: asia-northeast3 | |
| eur3 | Europe | Belgium: europe-west1 L,2R<br>Netherlands: europe-west4 2R | None | Finland: europe-north1 | |
| eur5 | Europe | London: europe-west2 L,2R<br>Belgium: europe-west1 2R | None | Netherlands: europe-west4 | |
| eur6 | Europe | Netherlands: europe-west4 L,2R<br>Frankfurt: europe-west3 2R | None | Zurich: europe-west6 | Read-only: us-east1 2R |
| nam3 | North America | Northern Virginia: us-east4 L,2R<br>South Carolina: us-east1 2R | None | Iowa: us-central1 | Read-only: us-west2 1R |
| nam6 | North America | Iowa: us-central1 L,2R<br>South Carolina: us-east1 2R | Oregon: us-west1 1R<br>Los Angeles: us-west2 1R | Oklahoma: us-central2 | |
| nam7 | North America | Iowa: us-central1 L,2R<br>Northern Virginia: us-east4 2R | None | Oklahoma: us-central2 | Read-only: us-east1 2R<br>us-south1 1R<br>europe-west1 2R |
| nam8 | North America | Los Angeles: us-west2 L,2R<br>Oregon: us-west1 2R | None | Salt Lake City: us-west3 | |
| nam9 | North America | Northern Virginia: us-east4 L,2R<br>Iowa: us-central1 2R | Oregon: us-west1 2R | South Carolina: us-east1 | |
| nam10 | North America | Iowa: us-central1 L,2R<br>Salt Lake City: us-west3 2R | None | Oklahoma: us-central2 | |
| nam11 | North America | Iowa: us-central1 L,2R<br>South Carolina: us-east1 2R | None | Oklahoma: us-central2 | Read-only: us-west1 1R |
| nam12 | North America | Iowa: us-central1 L,2R<br>Northern Virginia: us-east4 2R | Oregon: us-west1 2R | Oklahoma: us-central2 | |
| nam13 | North America | Oklahoma: us-central2 L,2R<br>Iowa: us-central1 2R | None | Salt Lake City: us-west3 | |
| nam14 | North America | Northern Virginia: us-east4 L,2R<br>Montréal: northamerica-northeast1 2R | None | South Carolina: us-east1 | |
| nam15 | North America | Dallas: us-south1 L,2R<br>Northern Virginia: us-east4 2R | None | Iowa: us-central1 | |
Three continents
| Base Configuration Name | Locations | Read-Write Regions | Read-Only Regions | Witness Region |
|---|---|---|---|---|
| nam-eur-asia1 | North America<br>Europe<br>Asia | Iowa: us-central1 L,2R<br>Oklahoma: us-central2 2R | Belgium: europe-west1 2R<br>Taiwan: asia-east1 2R | South Carolina: us-east1 |
| nam-eur-asia3 | North America<br>Europe<br>Asia | Iowa: us-central1 L,2R<br>South Carolina: us-east1 2R | Belgium: europe-west1 1R<br>Netherlands: europe-west4 1R<br>Taiwan: asia-east1 2R | Oklahoma: us-central2 |
L: default leader region. See also Modify the leader region of a database.
1R: 1 replica in the region
2R: 2 replicas in the region
Benefits
Multi-region instances offer these primary benefits:
- 99.999% availability: Greater than the 99.99% availability that Spanner regional configurations provide.
- Data distribution: Spanner automatically replicates your data between regions with strong consistency guarantees. This allows your data to be stored where it's used, which can reduce latency and improve the user experience.
- External consistency: Even though Spanner replicates across geographically distant locations, you can still use Spanner as if it were a database running on a single machine. Transactions are guaranteed to be serializable, and the order of transactions within the database is the same as the order in which clients observe the transactions to have been committed. External consistency is a stronger guarantee than "strong consistency," which is offered by some other products. Read more about this property in TrueTime and external consistency.
Replication
Each base multi-region configuration contains two regions that are designated as read-write regions, each of which contains two read-write replicas. One of these read-write regions is designated as the default leader region, which means that it contains your database's leader replicas. Spanner also places a witness replica in a third region called a witness region.
Each time a client issues a mutation to your database, a write quorum forms, consisting of one of the replicas from the default leader region and any two of the additional four voting replicas. (The quorum could be formed by replicas from two or three of the regions that make up your configuration, depending on which other replicas participate in the vote.) In addition to these five voting replicas, some base multi-region configurations contain read-only replicas for serving low-latency reads. The regions that contain read-only replicas are called read-only regions.
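The quorum arithmetic described above can be sketched in a few lines of Python. This is a toy model of majority voting, not a Spanner API:

```python
# Toy model of write quorums: a write commits once a majority of
# voting replicas acknowledge it. The replica counts mirror this
# page's descriptions; the function is illustrative only.

def majority(voters: int) -> int:
    """Smallest number of votes that forms a majority quorum."""
    return voters // 2 + 1

# Regional configuration: 3 read-write (voting) replicas.
assert majority(3) == 2  # any 2 of the 3 replicas commit a write

# Base multi-region configuration: 2 read-write regions x 2 replicas
# plus 1 witness replica = 5 voting replicas.
assert majority(5) == 3  # a leader-region replica plus any 2 of the other 4

print(majority(3), majority(5))
```

This also shows why a multi-region quorum survives the loss of an entire read-write region: losing two of the five voting replicas still leaves a three-vote majority.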
In general, the voting regions in a multi-region configuration are placed geographically close (less than a thousand miles apart) to form a low-latency quorum that enables fast writes. However, the regions are still far enough apart (typically at least a few hundred miles) to avoid coordinated failures.
You can create a custom multi-region instance configuration with optional read-only replicas. Any custom read-only replicas you create cannot be included in write quorums. For more information, see Read-only replicas.
Performance best practices for multi-region configurations
For optimal performance, follow these best practices:
- Design a schema that prevents hotspots and other performance issues.
- For optimal write latency, place compute resources for write-heavy workloads within or close to the default leader region.
- For optimal read performance outside of the default leader region, use staleness of at least 15 seconds.
- To avoid a single-region dependency for your workloads, place critical compute resources in at least two regions. A good option is to place them next to the two read-write regions so that a single-region outage doesn't affect your entire application.
- Provision enough compute capacity to keep high priority total CPU utilization under 45% in each region.
- For the amount of throughput per Spanner node, see performance for multi-region configurations.
Region types
Read-write regions
Each multi-region configuration contains two read-write regions, each of which contains two read-write replicas.
One of these read-write regions is designated the default leader region. A leader is selected from the replicas in the default leader region for each split. In the event of a leader replica failure, the other replica in the default leader region automatically assumes leadership. In fact, leaders run health checks on themselves and can preemptively give up leadership if they detect they are unhealthy. In most cases, when the default leader region returns to a healthy state, it automatically re-assumes the leadership.
Writes are first processed in the default leader region. You can monitor the percentage of database leaders in each region by using the `instance/leader_percentage_by_region` monitoring metric. For more information, see Spanner metrics.
The second read-write region contains additional replicas that serve reads and participate in voting to commit writes. These additional replicas in the second read-write region are eligible to be leaders. In the unlikely event of the loss of all replicas in the default leader region, new leader replicas are chosen from the second read-write region.
You can configure the leader region of a database by following the instructions at Change the leader region of a database. For more information, see Configure the default leader region.
Read-only regions
Read-only regions contain read-only replicas, which can serve low-latency reads to clients that are outside of the read-write regions. Read-only replicas maintain a full copy of your data, which is replicated from read-write replicas. They do not participate in voting to commit writes so they never contribute to any write latency.
Some base multi-region configurations contain read-only replicas. You can also create a custom instance configuration and add read-only replicas to your custom instance configurations to scale reads and support low latency stale reads. All read-only replicas are subject to compute capacity and database storage costs. Furthermore, adding read-only replicas to an instance configuration doesn't change the Spanner SLAs of the instance configuration. For more information, see Read-only replicas.
Witness regions
A witness region contains a witness replica, which is used to vote on writes. Witnesses become important in the rare event that the read-write regions become unavailable.
Move an instance
You can move your Spanner instance from any instance configuration to any other instance configuration, including between regional and multi-region configurations. Moving your instance does not cause downtime, and Spanner continues to provide the usual transaction guarantees, including strong consistency, during the move.
To learn more about moving an instance, see Move an instance.
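As an illustrative sketch (the instance name and target configuration are placeholders, and the exact command may vary by gcloud version), an instance move can be started from the gcloud CLI:

```shell
# Request a move of "example-instance" to the multi-region
# configuration nam3. The move runs as a background operation
# while the instance continues to serve traffic.
gcloud spanner instances move example-instance \
  --target-config=nam3
```

The instance remains fully available for reads and writes while the move is in progress.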
Configure the default leader region
To change the location of your database's default leader region to be closer to connecting clients to reduce application latency, you can change the leader region for any Spanner instance that uses a multi-region configuration. For instructions on changing the location of the leader region, see Change the leader region of a database. The only regions eligible to become the default leader region for your database are the read-write regions in your multi-region configuration.
The leader region handles all database writes. Therefore, if most of your write traffic comes from one geographic region, you can move the default leader region there to reduce latency. Updating the default leader region is inexpensive and doesn't move any data. The new value takes a few minutes to take effect.
Changing the default leader region is a schema change, which uses a long-running operation. If needed, you can Get the status of the long-running operation.
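Because this is a schema change, it is expressed as an `ALTER DATABASE` DDL statement. A sketch using the gcloud CLI follows; the database name, instance name, and target region are placeholders, and the target must be one of the configuration's read-write regions:

```shell
# Set the default leader region of database "exampledb"
# (in instance "example-instance") to us-east4.
gcloud spanner databases ddl update exampledb \
  --instance=example-instance \
  --ddl="ALTER DATABASE exampledb SET OPTIONS (default_leader = 'us-east4')"
```

Like other schema changes, this returns a long-running operation whose status you can poll.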
Trade-offs: regional versus multi-region configurations
Configuration | Availability | Latency | Cost | Data Locality |
---|---|---|---|---|
Regional | 99.99% | Lower write latencies within region. | Lower cost; see pricing. | Enables geographic data governance. |
Multi-region | 99.999% | Lower read latencies from multiple geographic regions. | Higher cost; see pricing. | Distributes data across multiple regions within the configuration. |
What's next
- Learn how to create a Spanner instance.
- Learn more about Google Cloud regions and zones.