Compute capacity, nodes and processing units

This page describes Cloud Spanner compute capacity and the two units of measure used to quantify it: nodes and processing units.

Compute capacity

Compute capacity defines the amount of server and storage resources that are available to the databases in an instance. When you create an instance, you specify its compute capacity as a number of processing units or as a number of nodes, with 1000 processing units equal to 1 node.

Which measurement unit you use does not matter unless you are creating an instance whose compute capacity is smaller than 1000 processing units (1 node); in this case, you must use processing units to specify the compute capacity of the instance.

When defining compute capacity, whether at instance creation or when later increasing or decreasing it, you specify quantities up to 1000 processing units in multiples of 100 processing units (100, 200, 300, and so on). You specify greater quantities in multiples of 1000 processing units (1000, 2000, 3000, and so on) or as nodes (1, 2, 3, and so on).
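These sizing rules can be expressed as a small validation helper. This is an illustrative sketch, not part of any Cloud Spanner API; the function names are invented for this example.

```python
PROCESSING_UNITS_PER_NODE = 1000

def is_valid_compute_capacity(processing_units: int) -> bool:
    """Check a processing-unit count against Cloud Spanner's sizing rules:
    multiples of 100 up to 1000, multiples of 1000 above that."""
    if processing_units <= 0:
        return False
    if processing_units <= 1000:
        return processing_units % 100 == 0
    return processing_units % 1000 == 0

def nodes_to_processing_units(nodes: int) -> int:
    """Convert a node count to the equivalent number of processing units."""
    return nodes * PROCESSING_UNITS_PER_NODE
```

For example, 300 and 2000 processing units are valid capacities, while 1500 is not, because quantities above 1000 must be whole nodes.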

Data storage limits

As detailed in Quotas & limits, to provide high availability and low latency for accessing a database, Cloud Spanner defines storage limits based on the compute capacity of an instance:

  • For instances smaller than 1 node (1000 processing units), Cloud Spanner allots 204.8 GB (≈205 GB) of data storage for every 100 processing units in the instance.
  • For instances of 1 node and larger, Cloud Spanner allots 2 TB of data for each node.

For example, to create an instance for a 300 GB database, you must set its compute capacity to at least 200 processing units. This compute capacity keeps the instance within its limit until the database grows beyond 409.6 GB. After the database reaches that size, you need to add another 100 processing units to allow the database to keep growing; otherwise, writes to the database may be rejected. For more information, see Recommendations for database storage utilization.
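The storage limits above reduce to a simple calculation. The following sketch (with an illustrative function name, not a Cloud Spanner API) computes the approximate storage limit for a given compute capacity:

```python
def max_storage_gb(processing_units: int) -> float:
    """Approximate data-storage limit for an instance:
    204.8 GB per 100 processing units below 1 node (1000 PU),
    2 TB (2048 GB) per node at or above 1 node."""
    if processing_units < 1000:
        return (processing_units // 100) * 204.8
    return (processing_units // 1000) * 2048
```

Applying it to the example above, 200 processing units allow roughly 409.6 GB, so a 300 GB database fits; a 1-node (1000 processing unit) instance allows 2 TB.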


The peak read and write throughput values that a given amount of compute capacity can provide depend on the instance configuration, as well as on schema design and dataset characteristics. Refer to the regional configuration performance and multi-region configuration performance sections for details.

Compute capacity and instance configurations

As described in Regional and multi-region configurations, Cloud Spanner distributes an instance across zones of one or more regions to provide high performance and high availability. Consequently, the server resources provided by the instance's compute capacity are likewise distributed.

Here is a diagram that illustrates this distribution of server resources.

Two instances created in a regional instance configuration

This diagram depicts two instances that have regional configurations:

  • Instance-A shows an instance of 1000 processing units (1 node) with its distributed compute capacity consuming server resources in each of the three zones.
  • Instance-B shows an instance of 2000 processing units (2 nodes) with its distributed compute capacity consuming server resources in each of the three zones.

Note the following in this diagram:

  • For each instance, server resources are allocated in each zone of the regional configuration. Each per-zone server resource uses the data replica in its zone. For information about data replicas in instance configurations, see Regional and multi-region configurations. For information about how Cloud Spanner keeps these data replicas in sync, see Replication.

  • The server resources for Instance-A are shown in simple boxes, while the resources for Instance-B are shown in boxes subdivided into two parts. This difference illustrates that Cloud Spanner allocates server resources differently for different-sized instances:

    • For instances of 1000 processing units (1 node) and smaller, Cloud Spanner allocates server resources in a single server task per zone.
    • For instances larger than 1000 processing units (1 node), Cloud Spanner allocates server resources in multiple server tasks per zone, with one task for each 1000 processing units. Using multiple server tasks per zone improves performance and enables Cloud Spanner to create database splits, which distribute load for even greater throughput.
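The per-zone task allocation described above follows a simple rule, sketched here with an illustrative helper (not a Cloud Spanner API):

```python
def server_tasks_per_zone(processing_units: int) -> int:
    """Instances at or below 1000 processing units (1 node) get a single
    server task per zone; larger instances get one task per 1000 PU."""
    if processing_units <= 1000:
        return 1
    return processing_units // 1000
```

So Instance-A (1000 processing units) runs one task per zone, while Instance-B (2000 processing units) runs two, matching the subdivided boxes in the diagram.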

Increasing and decreasing compute capacity

After you create an instance, you can increase its compute capacity. In most cases, you can also decrease it. You cannot decrease compute capacity in the following cases:

  • Removing compute capacity would require your instance to store more than 2 TB of data per 1000 processing units (1 node).
  • Based on your historic usage patterns, Cloud Spanner has created a large number of splits for your instance's data, and Cloud Spanner would not be able to manage the splits after removing compute capacity.

When removing compute capacity, monitor your CPU utilization and request latencies in Cloud Monitoring to ensure CPU utilization stays below 65% for regional instances and 45% for each region in multi-region instances. You may experience a temporary increase in request latencies while removing compute capacity.
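As a sketch of the downsizing check described above (the function name and shape are illustrative, not a Cloud Monitoring API):

```python
REGIONAL_MAX_CPU = 0.65       # recommended max CPU utilization, regional
MULTI_REGION_MAX_CPU = 0.45   # recommended max per region, multi-region

def safe_to_downsize(cpu_utilization: float, multi_region: bool) -> bool:
    """Return True if observed CPU utilization is below the recommended
    maximum for the instance's configuration type."""
    limit = MULTI_REGION_MAX_CPU if multi_region else REGIONAL_MAX_CPU
    return cpu_utilization < limit
```

In practice you would read the utilization values from Cloud Monitoring metrics before and after each capacity change.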

You can use the Cloud Console, the gcloud command-line tool, or the client libraries to change compute capacity.
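With the gcloud command-line tool, compute capacity is changed with `gcloud spanner instances update` and either the `--processing-units` or the `--nodes` flag (the two flags are mutually exclusive). The helper below builds such an invocation as a string; the helper itself is illustrative and the instance ID is a placeholder:

```python
from typing import Optional

def gcloud_update_command(instance_id: str,
                          processing_units: Optional[int] = None,
                          nodes: Optional[int] = None) -> str:
    """Build a `gcloud spanner instances update` command that changes
    an instance's compute capacity. Exactly one of processing_units
    or nodes must be supplied, mirroring gcloud's mutually exclusive flags."""
    if (processing_units is None) == (nodes is None):
        raise ValueError("specify exactly one of processing_units or nodes")
    if processing_units is not None:
        flag = f"--processing-units={processing_units}"
    else:
        flag = f"--nodes={nodes}"
    return f"gcloud spanner instances update {instance_id} {flag}"
```

For example, `gcloud_update_command("my-instance", processing_units=300)` produces a command that resizes the instance to 300 processing units.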

Cloud Spanner does not have a suspend mode. Cloud Spanner compute capacity is a dedicated resource and, even when you are not running a workload, Cloud Spanner frequently performs background work to optimize and protect your data.

Compute capacity versus replicas

If you need to scale up the server and storage resources in your instance, increase the compute capacity of the instance. Note that increasing compute capacity does not increase the number of replicas (which are fixed for a given instance configuration), but rather increases the resources each replica has in the instance. Increasing compute capacity gives each replica more CPU and RAM, which increases the replica's throughput (that is, more reads and writes per second can occur).

What's next