This page describes production limits for Cloud Spanner.
These values are subject to change.
Checking your quotas

To check the current quotas for resources in your project, use the Google Cloud Console:

1. Go to the Quotas page in the Cloud Console.
2. Select Cloud Spanner API in the Service drop-down list. If Cloud Spanner API does not appear in the list, the API has not been enabled for your project.

Increasing your quotas

As your use of Cloud Spanner grows, you can request higher quotas. If you expect a notable upcoming increase in usage, make your request a few days in advance to ensure your quotas are adequately sized. To request a quota increase:

1. Go to the Quotas page in the Cloud Console.
2. Select Cloud Spanner API in the Service drop-down list.
3. Select the quotas you want to change.
4. Click Edit Quotas.
5. Fill in your name, email, and phone number, then click Next.
6. Fill in your quota request and click Submit request.
You will receive a response from the Cloud Spanner team within 48 hours of your request.
| Limit | Value |
|---|---|
| Instance ID length | 2 to 64 characters |
| Databases per instance | 100 |
| Database ID length | 2 to 30 characters |
| Storage size per node | 2 TB ¹ |
Backup and restore limits
| Limit | Value |
|---|---|
| Ongoing create-backup operations per database | 1 |
| Ongoing restore-database operations per instance (counted in the instance of the restored database, not the backup) | 1 |
| Maximum retention time of a backup | 1 year (including the extra day in a leap year) |
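A client that sets a backup's expiration time must keep it within the one-year retention limit above. A minimal validation sketch (the helper name is illustrative, not part of the Spanner API, and the limit is read here as 366 days to cover a leap year):

```python
from datetime import datetime, timedelta, timezone

MAX_RETENTION_DAYS = 366  # one year, including the extra day in a leap year

def validate_expire_time(expire_time: datetime, now: datetime) -> bool:
    """Return True if the requested backup expire time is within the limit."""
    return now < expire_time <= now + timedelta(days=MAX_RETENTION_DAYS)

now = datetime(2024, 1, 15, tzinfo=timezone.utc)
print(validate_expire_time(now + timedelta(days=30), now))   # True: within limit
print(validate_expire_time(now + timedelta(days=400), now))  # False: too far out
```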
Schema limits

| Limit | Value |
|---|---|
| DDL statement size for a single schema change | 10 MB |
| DDL statement size for a database's entire schema, as returned by GetDatabaseDdl | 10 MB |
| Tables per database | 2,048 |
| Table name length | 1 to 128 characters |
| Columns per table | 1,024 |
| Column name length | 1 to 128 characters |
| Size of data per column | 10 MB |
| Number of columns in a table key (including key columns shared with any parent table) | 16 |
| Table interleaving depth (a top-level table with child tables has depth 1; a top-level table with grandchild tables has depth 2, and so on) | 7 |
| Total size of a table or index key (including all columns that make up the key) | 8 KB |
| Indexes per database | 4,096 |
| Indexes per table | 32 |
| Index name length | 1 to 128 characters |
| Number of columns in an index key (the number of indexed columns, excluding STORING columns, plus the number of primary key columns in the base table) | 16 |
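A client can check that a batch of schema-change statements stays under the 10 MB DDL limit before submitting it. A rough sketch; the helper name is illustrative, and measuring size as UTF-8 byte length is an assumption, since the exact accounting is not specified here:

```python
MAX_DDL_BYTES = 10 * 1024 * 1024  # 10 MB limit for a single schema change

def ddl_within_limit(statements) -> bool:
    """Check that a DDL batch fits the 10 MB limit.

    Assumes the size is the UTF-8 byte length of all statements combined.
    """
    total = sum(len(s.encode("utf-8")) for s in statements)
    return total <= MAX_DDL_BYTES

ddl = [
    "CREATE TABLE Singers (SingerId INT64 NOT NULL) PRIMARY KEY (SingerId)",
]
print(ddl_within_limit(ddl))  # True: a short statement is far under the limit
```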
Query limits

| Limit | Value |
|---|---|
| Columns in a GROUP BY clause | 1,000 |
| Nested function calls | 75 |
| Nested subquery expressions | 25 |
| Nested subselect statements | 60 |
| Query statement length | 1 million characters |
| Subquery expression children | 40 |
| Unions in a query | 200 |
Limits for creating, reading, updating, and deleting data
| Limit | Value |
|---|---|
| Commit size (including indexes) | 100 MB |
| Concurrent reads per session | 100 |
| Mutations per commit (including indexes) ² | 20,000 |
| Concurrent Partitioned DML statements per database | 20,000 |
Administrative limits

| Limit | Value |
|---|---|
| Administrative action request size ³ | 1 MB |
| Rate limit for administrative actions ⁴ | 5 per second per project per user (averaged over 100 seconds) |
| Request size other than for commits ⁵ | 10 MB |
1. To provide high availability and low latency for accessing a database, Cloud Spanner requires 1 node for every 2 TB of data in the database. For example, if your instance has 1 database that stores 3.5 TB of data, you need to provision at least 2 nodes. Those nodes will keep the instance below the limit until the database grows to 4 TB. After your database reaches that size, you need to add another node to allow the database to grow. Otherwise, writes to the database will fail. For a smooth growth experience, add nodes before this limit is reached for your database.
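The per-node storage rule in note 1 amounts to a ceiling division. A quick sketch (the helper name is illustrative):

```python
import math

TB_PER_NODE = 2  # Cloud Spanner allows up to 2 TB of data per node

def min_nodes(total_data_tb: float) -> int:
    """Minimum number of nodes needed to stay under the storage limit."""
    return max(1, math.ceil(total_data_tb / TB_PER_NODE))

print(min_nodes(3.5))  # 2: the 3.5 TB example above needs 2 nodes
print(min_nodes(4.1))  # 3: just past 4 TB, a third node is required
```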
2. Insert and update operations count one mutation for each column they affect. For example, inserting values into one key column and four non-key columns counts as five mutations for the insert. Delete and delete-range operations count as one mutation regardless of the number of columns affected.
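Based on the counting rule in note 2, a client could estimate a commit's mutation count before sending it. A minimal sketch; the operation representation is illustrative, and secondary-index mutations, which also count toward the limit, are not modeled here:

```python
# Estimate mutations per the rule above: inserts and updates count one
# mutation per affected column; deletes count as a single mutation each.
def count_mutations(operations):
    total = 0
    for op_type, columns in operations:
        if op_type in ("insert", "update"):
            total += len(columns)  # one mutation per affected column
        elif op_type in ("delete", "delete_range"):
            total += 1             # always one mutation, regardless of columns
    return total

ops = [
    ("insert", ["id", "name", "email", "created_at", "status"]),  # 5 mutations
    ("update", ["id", "status"]),                                 # 2 mutations
    ("delete", ["id"]),                                           # 1 mutation
]
print(count_mutations(ops))  # 8, well under the 20,000-per-commit limit
```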
3. The limit for an administrative action request excludes commits, requests listed in note 5, and schema changes.
4. This rate limit includes all calls to the admin API, which includes calls to poll long-running operations on an instance, database, or backup.
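To stay under the administrative-action rate limit in note 4 (5 per second averaged over 100 seconds, i.e. at most 500 calls in any trailing 100-second window), a client could throttle its admin API calls with a sliding window. A rough sketch, not part of any Google client library:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` events in any trailing `window`-second span."""

    def __init__(self, limit: int = 500, window: float = 100.0):
        self.limit = limit
        self.window = window
        self.timestamps = deque()

    def allow(self, now: float) -> bool:
        # Drop events older than the window, then check remaining capacity.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=500, window=100.0)
# 600 calls fired over 60 seconds: only the first 500 fit in the window.
allowed = sum(limiter.allow(t * 0.1) for t in range(600))
print(allowed)  # 500
```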
5. This limit includes requests for creating a database, updating a database, reading, streaming reads, executing SQL queries, and executing streaming SQL queries.