This page describes production quotas and limits for Cloud Spanner. The difference between a quota and a limit is that you can request an increase to a quota, whereas a limit cannot be adjusted. The terms quota and limit may be used interchangeably in the Google Cloud console; if the console indicates that editing is not allowed for a quota, it is actually a limit and cannot be adjusted.
The quota and limit values are subject to change.
Checking your quotas
To check the current quotas for resources in your project, use the Quotas page in the Google Cloud console.
Increasing your quotas
As your use of Cloud Spanner expands over time, your quotas can increase accordingly. If you expect a notable upcoming increase in usage, make your request a few days in advance to ensure that your quotas are adequately sized. To request a quota increase:
1. Go to the Quotas page in the Google Cloud console.
2. Select Cloud Spanner API in the Service drop-down list. If you do not see Cloud Spanner API, the Cloud Spanner API has not been enabled.
3. Select the quotas you want to change.
4. Click Edit Quotas.
5. Fill in your name, email, and phone number, then click Next.
6. Fill in your quota request and click Submit request.
You will receive a response from the Cloud Spanner team within 48 hours of your request.
Instance and database limits

|Limit|Value|
|---|---|
|Instance ID length|2 to 64 characters|
|Databases per instance| |
|Database ID length|2 to 30 characters|
Note that Cloud Spanner bills for the actual storage used within an instance, not its total available storage.
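The identifier length limits above can be checked client-side before calling the API. The following is an illustrative sketch only; the helper name and the limits table are our own and are not part of any Cloud Spanner client library.

```python
# Identifier length limits from the tables on this page. Illustrative
# sketch; not part of any Cloud Spanner client library.
LIMITS = {
    "instance_id": (2, 64),   # Instance ID length: 2 to 64 characters
    "database_id": (2, 30),   # Database ID length: 2 to 30 characters
}

def id_length_ok(kind: str, value: str) -> bool:
    """Return True if `value` satisfies the documented length limit."""
    lo, hi = LIMITS[kind]
    return lo <= len(value) <= hi

print(id_length_ok("instance_id", "my-instance"))  # True (11 characters)
print(id_length_ok("database_id", "x"))            # False (below the 2-character minimum)
```

Validating lengths locally avoids a round trip to the API for requests that would be rejected anyway.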
Backup and restore limits
|Limit|Value|
|---|---|
|Number of ongoing create backup operations per database|1|
|Number of ongoing restore database operations per instance (in the instance of the restored database, not the backup)|1|
|Maximum retention time of backup|1 year (including the extra day in a leap year)|
Schema limits

|Limit|Value|
|---|---|
|DDL statement size for a single schema change|10 MB|
|DDL statement size for a database's entire schema, as returned by| |
|Tables per database|5,000|
|Table name length|1 to 128 characters|
|Columns per table|1,024|
|Column name length|1 to 128 characters|
|Size of data per column|10 MB|
|Number of columns in a table key|Includes key columns shared with any parent table|
|Table interleaving depth|A top-level table with child tables has depth 1; a top-level table with grandchild tables has depth 2, and so on|
|Total size of a table or index key|Includes the size of all columns that make up the key|
|Total size of non-key columns|Includes the size of all non-key columns for a table|
|Indexes per database|10,000|
|Indexes per table|32|
|Index name length|1 to 128 characters|
|Number of columns in an index key|The number of indexed columns (except for STORING columns) plus the number of primary key columns in the base table|
|Limit|Value|
|---|---|
|Views per database|5,000|
|View name length|1 to 128 characters|
|View nesting depth|A view that refers to another view has nesting depth 1; a view that refers to another view that refers to yet another view has nesting depth 2, and so on|
Query limits

|Limit|Value|
|---|---|
|Columns in a| |
|Nested function calls|75|
|Nested subquery expressions|25|
|Nested subselect statements|60|
|Query statement length|1 million characters|
|Subquery expression children|50|
|Unions in a query|200|
Limits for creating, reading, updating, and deleting data
|Limit|Value|
|---|---|
|Commit size (including indexes and change streams)|100 MB|
|Concurrent reads per session|100|
|Mutations per commit (including indexes)²|20,000|
|Concurrent Partitioned DML statements per database|20,000|
|Administrative actions request size³|1 MB|
|Rate limit for administrative actions⁴|5 per second per project per user (averaged over 100 seconds)|
|Request size other than for commits⁵|10 MB|
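Because a single commit is limited to 20,000 mutations, bulk writes must be split across multiple commits. The sketch below is a hypothetical batching helper (the function name is ours), assuming each inserted row costs one mutation per written column; the real per-row cost also depends on secondary indexes, as described in note 2.

```python
MAX_MUTATIONS_PER_COMMIT = 20_000  # documented limit, including indexes

def batch_rows(rows, mutations_per_row):
    """Yield slices of `rows` small enough that each slice's estimated
    mutation count stays within the per-commit limit. Hypothetical helper;
    assumes a fixed mutation cost per row."""
    rows_per_commit = max(1, MAX_MUTATIONS_PER_COMMIT // mutations_per_row)
    for start in range(0, len(rows), rows_per_commit):
        yield rows[start:start + rows_per_commit]

# 50,000 rows with 5 columns written per row -> batches of 4,000 rows each.
batches = list(batch_rows(list(range(50_000)), mutations_per_row=5))
print(len(batches))  # 13
```

Each batch would then be committed in its own transaction, keeping every commit under the mutation limit.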
Change stream limits
|Limit|Value|
|---|---|
|Change streams per database|10|
|Change streams watching any given column⁶|3|
|Concurrent readers per change stream data partition⁷|5|
1. To provide high availability and low latency for accessing a database, Cloud Spanner defines storage limits based on the compute capacity of the instance:
- For instances smaller than 1 node (1000 processing units), Cloud Spanner allots 409.6 GB of data for every 100 processing units in the database.
- For instances of 1 node and larger, Cloud Spanner allots 4 TB of data for each node.
For example, to create an instance for a 600 GB database, you need to set its compute capacity to 200 processing units. This amount of compute capacity will keep the instance below the limit until the database grows to more than 819.2 GB. After the database reaches this size, you need to add another 100 processing units to allow the database to grow. Otherwise, writes to the database may be rejected. For more information, see Recommendations for database storage utilization.
For a smooth growth experience, add compute capacity before the limit is reached for your database.
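The sizing rule above can be expressed as a small calculation. This is an illustrative sketch of that rule (the function name is ours, not an official API), assuming compute capacity is provisioned in multiples of 100 processing units below 1 node and in whole nodes at 1 node and larger.

```python
import math

GB_PER_100_PU = 409.6  # storage allotted per 100 processing units below 1 node
GB_PER_NODE = 4096.0   # 4 TB of storage allotted per node (1,000 processing units)

def min_processing_units(database_size_gb: float) -> int:
    """Smallest compute capacity whose storage allotment covers the database.
    Illustrative sketch of the sizing rule above, not an official API."""
    hundreds = math.ceil(database_size_gb / GB_PER_100_PU)
    if hundreds <= 10:
        return max(hundreds, 1) * 100
    # 1 node and larger: provision whole nodes of 1,000 processing units.
    return math.ceil(database_size_gb / GB_PER_NODE) * 1000

print(min_processing_units(600))    # 200, matching the example above
print(min_processing_units(819.2))  # 200; past this size, add another 100 units
print(min_processing_units(9000))   # 3000 (3 nodes)
```

Running a calculation like this before loading data makes it easier to provision capacity ahead of the storage limit rather than reacting to rejected writes.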
2. Insert and update operations count with the multiplicity of the number of
columns they affect, and primary key columns are always affected. For example,
inserting a new record counts as five mutations if values are inserted into
five columns. Updating three columns in a record may also count as five
mutations if the record has two primary key columns. Delete and delete range
operations count as one mutation regardless of the number of columns affected.
Deleting a row from a parent table that has the
ON DELETE CASCADE annotation also counts as one mutation regardless of
the number of interleaved child rows present. The exception is when secondary
indexes are defined on the rows being deleted: in that case, the changes to
the secondary indexes are counted individually. For example, if a table has 2
secondary indexes, deleting a range of rows in the table counts as 1 mutation
for the table, plus 2 mutations for each row that is deleted, because the rows
in the secondary index might be scattered over the key space, making it
impossible for Cloud Spanner to use a single delete range operation on the
secondary indexes. Secondary indexes include the
To find the mutation count for a transaction, see Retrieving commit statistics for a transaction.
Change streams do not add any mutations that count towards this limit.
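The counting rules in note 2 reduce to simple arithmetic. The helpers below are a hypothetical sketch (the names are ours, not part of the Cloud Spanner API) for estimating a transaction's mutation count before committing.

```python
def insert_mutations(columns_written: int) -> int:
    """An insert counts one mutation per column written (keys included)."""
    return columns_written

def update_mutations(columns_updated: int, key_columns: int) -> int:
    """An update counts the updated columns plus the primary key columns,
    which are always affected."""
    return columns_updated + key_columns

def delete_range_mutations(rows_deleted: int, secondary_indexes: int) -> int:
    """A delete range is one mutation for the table, plus one mutation per
    secondary index for each deleted row."""
    return 1 + rows_deleted * secondary_indexes

print(insert_mutations(5))                 # 5, as in the insert example above
print(update_mutations(3, key_columns=2))  # 5, as in the update example above
print(delete_range_mutations(100, secondary_indexes=2))  # 201
```

For the authoritative count, use the commit statistics returned for the transaction rather than an estimate like this.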
3. The limit for an administrative action request excludes commits, requests listed in note 5, and schema changes.
4. This rate limit includes all calls to the admin API, which includes calls to poll long-running operations on an instance, database, or backup.
5. This limit includes requests for creating a database, updating a database, reading, streaming reads, executing SQL queries, and executing streaming SQL queries.
6. A change stream that watches an entire table or database implicitly watches every column in that table or database, and therefore counts towards this limit.
7. This limit applies to concurrent readers of the same change streams partition, whether the readers are Dataflow pipelines or direct API queries.