This page describes production quotas and limits for Spanner. The terms quota and limit might be used interchangeably in the Google Cloud console.
The quota and limit values are subject to change.
Permissions to check and edit quotas
To view your quotas, you must have the serviceusage.quotas.get
Identity and Access Management (IAM) permission.
To change your quotas, you must have the
serviceusage.quotas.update
IAM permission.
These permissions are included by default in the basic IAM roles Owner and Editor, and in the predefined Quota Administrator role.
Check your quotas
To check the current quotas for resources in your project, use the Quotas page in the Google Cloud console.
Increase your quotas
As your use of Spanner expands over time, your quotas can increase accordingly. If you expect a notable upcoming increase in usage, you should make your request a few days in advance to ensure that your quotas are adequately sized.
You might also need to increase your consumer quota override. For more information, see Creating a consumer quota override.
You can increase your current Spanner instance configuration node limit by using the Google Cloud console.
1. Go to the Quotas page.
2. Select Spanner API in the Service drop-down list. If you don't see Spanner API, the Spanner API has not been enabled. For more information, see Enabling APIs.
3. Select the quotas that you want to change.
4. Click Edit Quotas.
5. In the Quota changes panel that appears, enter your new quota limit.
6. Click Done, then Submit request.
If you're unable to increase your node limit to the value you need using the console, click Apply for higher quota. Fill out the form to submit a request to the Spanner team. You will receive a response within 48 hours of your request.
Increase your quota for a custom instance configuration
You can increase the node quota for your custom instance configuration.
Check the node limit of a custom instance configuration by checking the node limit of the base instance configuration.
If you don't know or remember the base configuration of your custom instance configuration, view the instance configuration details, for example with the gcloud spanner instance-configs describe command.
If the node limit required for your custom instance configuration is less than 85, follow the instructions in the previous Increase your quotas section. Use the Google Cloud console to increase the node limit of the base instance configuration associated with your custom instance configuration.
If the node limit required for your custom instance configuration is more than 85, fill out the Request a Quota Increase for your Spanner Nodes form. Specify the ID of your custom instance configuration in the form.
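As a sketch of the lookup step above, a gcloud invocation along these lines surfaces the base configuration (the configuration and project IDs below are placeholders):

```shell
# Describe a custom instance configuration; for custom configurations,
# the output includes a baseConfig field that names the base
# configuration it was created from.
gcloud spanner instance-configs describe custom-example-config \
    --project=example-project
```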
Node limits
Value | Limit |
---|---|
Nodes per instance configuration | Default limits vary by project and instance configuration. To change project quota limits or request a limit increase, see Increase your quotas. |
Instance limits
Value | Limit |
---|---|
Instance ID length | 2 to 64 characters |
Free trial instance limits
A Spanner free trial instance has the following additional limits. To raise or remove these limits, upgrade your free trial instance to a paid instance.
Value | Limit |
---|---|
Storage capacity | 10 GB |
Database limit | Create up to five databases |
Unsupported features | Backup and restore |
SLA | No SLA guarantees |
Trial duration | 90-day free trial period |
Instance configuration limits
Value | Limit |
---|---|
Maximum custom instance configurations per project | 100 |
Custom instance configuration ID length | 8 to 64 characters. A custom instance configuration ID must start with custom- |
Geo-partitioning limits
Value | Limit |
---|---|
Maximum number of partitions per instance | 10 |
Maximum number of placement rows per node in your partition | 100 million |
Database limits
Value | Limit |
---|---|
Databases per instance | |
Roles per database | 100 |
Database ID length | 2 to 30 characters |
Storage size1 | Determined by the compute capacity of the instance; see note 1 |
Backup and restore limits
Value | Limit |
---|---|
Number of ongoing create backup operations per database | 1 |
Number of ongoing restore database operations per instance (counted against the instance that contains the restored database, not the instance that contains the backup) | 10 |
Maximum retention time of backup | 1 year (including the extra day in a leap year) |
Schema limits
DDL statements
Value | Limit |
---|---|
DDL statement size for a single schema change | 10 MB |
DDL statement size for a database's entire schema, as returned by GetDatabaseDdl | 10 MB |
Tables
Value | Limit |
---|---|
Tables per database | 5,000 |
Table name length | 1 to 128 characters |
Columns per table | 1,024 |
Column name length | 1 to 128 characters |
Size of data per cell | 10 MiB |
Size of a STRING cell | 2,621,440 Unicode characters |
Number of columns in a table key | 16 Includes key columns shared with any parent table |
Table interleaving depth | 7 A top-level table with child table(s) has depth 1. A top-level table with grandchild table(s) has depth 2, and so on. |
Total size of a table or index key | 8 KB Includes the size of all columns that make up the key |
Total size of non-key columns | 1,600 MB Includes the size of all non-key columns for a table |
Indexes
Value | Limit |
---|---|
Indexes per database | 10,000 |
Indexes per table | 128 |
Index name length | 1 to 128 characters |
Number of columns in an index key | 16 The number of indexed columns (except for STORING columns) plus the number of primary key columns in the base table |
Views
Value | Limit |
---|---|
Views per database | 5,000 |
View name length | 1 to 128 characters |
Nesting depth | 10 A view that refers to another view has nesting depth 1. A view that refers to another view that refers to yet another view has nesting depth 2, and so on. |
Query limits
Value | Limit |
---|---|
Columns in a GROUP BY clause | 1,000 |
Values in an IN operator | 10,000 |
Function calls | 1,000 |
Joins | 20 |
Nested function calls | 75 |
Nested GROUP BY clauses | 35 |
Nested subquery expressions | 25 |
Nested subselect statements | 60 |
Parameters | 950 |
Query statement length | 1 million characters |
STRUCT fields | 1,000 |
Subquery expression children | 50 |
Unions in a query | 200 |
Limits for creating, reading, updating, and deleting data
Value | Limit |
---|---|
Commit size (including indexes and change streams) | 100 MB |
Concurrent reads per session | 100 |
Mutations per commit (including indexes)2 | 80,000 |
Concurrent Partitioned DML statements per database | 20,000 |
Administrative limits
Value | Limit |
---|---|
Administrative actions request size3 | 1 MB |
Rate limit for administrative actions4 | 5 per second per project per user (averaged over 100 seconds) |
Request limits
Value | Limit |
---|---|
Request size other than for commits5 | 10 MB |
Change stream limits
Value | Limit |
---|---|
Change streams per database | 10 |
Change streams watching any given non-key column6 | 3 |
Concurrent readers per change stream data partition7 | 5 |
Data Boost limits
Value | Limit |
---|---|
Concurrent Data Boost requests per project in us-central1 | 1,000 8 |
Concurrent Data Boost requests per project per region in other regions | 400 8 |
Notes
1. To provide high availability and low latency for accessing a database, Spanner defines storage limits based on the compute capacity of the instance:
- For instances smaller than 1 node (1000 processing units), Spanner allots 409.6 GB of data for every 100 processing units in the database.
- For instances of 1 node and larger, Spanner allots 4 TB of data for each node.
For example, to create an instance for a 600 GB database, you need to set its compute capacity to 200 processing units. This amount of compute capacity will keep the instance below the limit until the database grows to more than 819.2 GB. After the database reaches this size, you need to add another 100 processing units to allow the database to grow. Otherwise, writes to the database may be rejected. For more information, see Recommendations for database storage utilization.
For a smooth growth experience, add compute capacity before the limit is reached for your database.
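The sizing rule above can be sketched as a small helper. This is an illustrative calculation only, assuming binary units (1 node = 1,000 processing units = 4,096 GB); the function and constant names are not part of any Spanner client library:

```python
import math

# Below 1 node (1,000 processing units), each 100 processing units
# allow 409.6 GB of data; at 1 node and above, each node allows 4 TB.
GB_PER_100_PU = 409.6
GB_PER_NODE = 4096

def min_processing_units(database_size_gb: float) -> int:
    """Smallest compute capacity (in processing units) whose storage
    limit covers the given database size."""
    pu = math.ceil(database_size_gb / GB_PER_100_PU) * 100
    if pu < 1000:
        return pu
    # At 1 node and above, capacity is allocated in whole nodes.
    nodes = math.ceil(database_size_gb / GB_PER_NODE)
    return nodes * 1000

assert min_processing_units(600) == 200    # example from the note
assert min_processing_units(819.2) == 200  # exactly at the 200-PU limit
assert min_processing_units(820) == 300    # just past it
assert min_processing_units(5000) == 2000  # 5 TB needs 2 nodes
```

For example, the 600 GB database from the note needs 200 processing units, and crossing 819.2 GB pushes the requirement to 300.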
2. Insert and update operations count with the multiplicity of the number of
columns they affect, and primary key columns are always affected. For example,
inserting a new record counts as five mutations if values are inserted into
five columns. Updating three columns in a record may also count as five
mutations if the record has two primary key columns. Delete and delete range
operations count as one mutation regardless of the number of columns affected.
Deleting a row from a parent table that has the ON DELETE CASCADE annotation
also counts as one mutation regardless of the number of interleaved child rows
present. The exception is when secondary indexes are defined on the rows being
deleted: the changes to the secondary indexes are then counted individually.
For example, if a table has two secondary indexes, deleting a range of rows in
the table counts as one mutation for the table, plus two mutations for each
deleted row, because the rows in the secondary index might be scattered over
the key space, making it impossible for Spanner to execute a single delete
range operation on the secondary indexes. Secondary indexes include the
backing indexes of foreign keys.
To find the mutation count for a transaction, see Retrieving commit statistics for a transaction.
Change streams don't add any mutations that count towards this limit.
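The counting rules in note 2 can be sketched as simple arithmetic. The helper names below are hypothetical, not part of any Spanner client library:

```python
def insert_mutations(num_columns: int) -> int:
    """An insert counts one mutation per column written; primary-key
    columns are always written."""
    return num_columns

def update_mutations(num_updated_columns: int, num_pk_columns: int) -> int:
    """An update counts the updated columns plus the primary-key columns."""
    return num_updated_columns + num_pk_columns

def delete_range_mutations(rows_deleted: int, num_secondary_indexes: int) -> int:
    """A delete or delete-range counts as one mutation for the table,
    plus one mutation per secondary index for each deleted row."""
    return 1 + num_secondary_indexes * rows_deleted

# Examples from the note:
assert insert_mutations(5) == 5             # insert values into 5 columns
assert update_mutations(3, 2) == 5          # update 3 columns, 2 PK columns
assert delete_range_mutations(10, 2) == 21  # 1 for the table + 2 per row
```

Against the 80,000 mutations-per-commit limit, this kind of estimate explains why wide rows or heavily indexed tables force smaller batches.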
3. The limit for an administrative action request excludes commits, requests listed in note 5, and schema changes.
4. This rate limit includes all calls to the admin API, which includes calls to poll long-running operations on an instance, database, or backup.
5. This limit includes requests for creating a database, updating a database, reading, streaming reads, executing SQL queries, and executing streaming SQL queries.
6. A change stream that watches an entire table or database implicitly watches every column in that table or database, and therefore counts towards this limit.
7. This limit applies to concurrent readers of the same change streams partition, whether the readers are Dataflow pipelines or direct API queries.
8. Default limits vary by project and regions. For more information, see Monitor and manage Data Boost quota usage.