Quotas and limits

This document lists the quotas and limits that apply to Bigtable.

A quota restricts how much of a shared Google Cloud resource your Google Cloud project can use, including hardware, software, and network components. Therefore, quotas are a part of a system that does the following:

  • Monitors your use or consumption of Google Cloud products and services.
  • Restricts your consumption of those resources, for reasons that include ensuring fairness and reducing spikes in usage.
  • Maintains configurations that automatically enforce prescribed restrictions.
  • Provides a means to request or make changes to the quota.

When a quota is exceeded, in most cases the system immediately blocks access to the relevant Google Cloud resource, and the task that you're trying to perform fails. Quotas generally apply to each Google Cloud project and are shared across all applications and IP addresses that use that project.

To increase or decrease most quotas, use the Google Cloud console. For more information, see Request a higher quota.

There are also limits on Bigtable resources. These limits are unrelated to the quota system. Limits cannot be changed unless otherwise stated.

Quotas

This section describes default quotas that apply to all of your Bigtable usage.

Admin operation quotas

The following quotas affect the number of Bigtable administrative operations (calls to the admin API) that you can perform within a given time.

In general, you cannot request an increase in admin operation quotas, except where indicated below; exceptions are sometimes granted when strong justification is provided. The number of calls that your application makes to the admin API should not grow as your usage grows. If it does, that is often a sign that your application code is making unnecessary calls to the admin API, and you should fix your application instead of requesting an admin operation quota increase.
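One common source of unnecessary admin API calls is re-reading configuration that rarely changes. A minimal sketch of a client-side TTL cache that avoids this pattern (AdminMetadataCache and fetch_fn are illustrative names, not part of any official client library):

```python
import time

class AdminMetadataCache:
    """Caches the results of admin API reads (for example, table or
    instance configurations) so that repeated lookups do not consume
    admin operation quota."""

    def __init__(self, fetch_fn, ttl_seconds=300):
        self._fetch = fetch_fn        # the real admin API call
        self._ttl = ttl_seconds
        self._entries = {}            # key -> (fetch_time, value)

    def get(self, key):
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]           # still fresh: no admin API call
        value = self._fetch(key)      # cache miss or stale: one real call
        self._entries[key] = (now, value)
        return value
```

With a five-minute TTL, an application that previously issued one admin read per request issues at most one every five minutes per key, which keeps admin call volume flat as traffic grows.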

Daily quotas reset at midnight Pacific Time.

Instances and clusters

Instance and cluster read requests: reading the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or reading a list of instances.

  • Per day per project: 864,000 ops (average of 10 ops/second)
  • Per minute per user: 1,000 ops

Instance and cluster write requests: changing the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or creating a new instance.

  • Per day per project: 500 ops
  • Per minute per user: 100 ops

Application profiles

App profile read requests: reading the configuration for an app profile.

  • Per minute per project: 5,000 ops
  • Per minute per user: 1,000 ops

App profile write requests: changing the configuration for an app profile.

  • Per minute per project: 500 ops
  • Per minute per user: 100 ops

Tables

Table read requests: reading the configuration for a table (for example, details about its column families), or reading a list of tables.

  • Per day per project: 864,000 ops (average of 10 ops/second)
  • Per minute per user: 1,000 ops

Table write requests: changing the configuration for a table (for example, the garbage collection settings for a column family).

  • Per day per project: 5,000 ops
  • Per minute per user: 100 ops

DropRowRange method: deleting a range of rows from a table in a single operation.

  • Per day per project: 5,000 ops
  • Per minute per user: 100 ops

Backups

Backup operations: creating, updating, and deleting a backup.

  • Per day per project: 1,000 ops
  • Per minute per user: 10 ops¹

Backup retrieval requests: getting and listing backups.

  • Per day per project: 864,000 ops

RestoreTable method: restoring a backup to a new table.

  • Per day per project: 5,000 ops
  • Per minute per user: 100 ops

Identity and Access Management

Fine-grained ACL get requests: reading information about the IAM policy for a Bigtable instance, or testing the IAM permissions for an instance.

  • Per day per project: 864,000 ops (average of 10 ops/second)
  • Per minute per user: 1,000 ops

Fine-grained ACL set requests: changing the IAM policy for a Bigtable instance.

  • Per day per project: 864,000 ops (average of 10 ops/second)
  • Per minute per user: 1,000 ops

¹ Eligible for a quota limit increase.

Node quotas

A Google Cloud project contains Bigtable instances, which are containers for clusters. A cluster represents the actual Bigtable service running in a single zone. Clusters contain nodes, which are compute resources that enable Bigtable to manage your data.

The default number of nodes that you can provision per zone in each project depends on the region. You can provision up to the default number of HDD nodes and up to the default number of SSD nodes per zone in a project.

The default node quotas are as follows:

Region SSD HDD
asia-east1 100 100
europe-west1 200 200
us-central1 200 200
us-east1 50 50
us-east4 50 50
us-west1 100 100
All other Bigtable locations 30 30

If you enable autoscaling for a cluster, the configured maximum number of nodes counts toward this limit, even if the cluster is not scaled to that number of nodes. If you need to provision more nodes than the default limits, you can request an increase.

Quotas and node availability

Node quota is the maximum number of nodes that you can provision per zone in each project. Quotas do not guarantee that you are always able to add nodes to a cluster. If a zone is out of nodes, you might not be able to add nodes to a cluster in that zone, even if you have remaining quota in your project.

For example, if you attempt to add 10 SSD nodes to a cluster that already has 20 nodes, but the zone is out of nodes, you are not able to add those 10 nodes, even if the node quota for SSD nodes in that region is 30.

In these situations, we attempt to increase the zone's node resources and grant your request after those resources become available, but with no guarantee of timing or completion.

Nodes that you have provisioned are always guaranteed to be available.

View quota information

To find the number of SSD and HDD nodes that your Google Cloud project already has in each zone, use the Google Cloud console. In the left navigation pane, point to IAM & admin, click Quotas, and then use the Service drop-down to select the Bigtable Admin API service.

The page displays rows showing quotas for each combination of service, node type, and location. Look for the rows that are subtitled SSD nodes per zone or HDD nodes per zone. The Limit column shows the maximum number of nodes allowed for the given node type and location, and the Current usage column shows the number of nodes that currently exist. The difference between those two numbers is the number of nodes you can add without requesting more.

Request a node quota increase

To ensure that there is enough time to process your request, always plan ahead and request additional resources a few days before you might need them. Requests for node quota increases are not guaranteed to be granted. For more information, see Working with quotas.

You must have at least editor-level permissions on the project that contains the instance for which you are requesting the node quota increase.

There is no charge for requesting an increase in node quota. Your costs increase only if you use more resources.

  1. Go to the Quotas page.

    Go to the Quotas page

  2. On the Quotas page, select the quotas you want to change.
  3. Click the Edit Quotas button on the top of the page.
  4. In the right pane, type your name, email, and phone number, then click Next.
  5. Enter the requested new quota limit, then click Next.
  6. Submit your request.

Limits

This section describes limits that apply to your usage of Bigtable. Limits are built into the service and cannot be changed.

App profiles per instance

The maximum number of application profiles each instance can have is 2,000.

Backups

  • Maximum number of backups that can be created: 150 per table per cluster
  • Minimum retention period of a backup: 6 hours after initial creation time
  • Maximum retention period of a backup: 90 days after initial creation date
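The retention bounds above can be validated before a backup is created; a minimal sketch (the function name is illustrative, not part of any client library):

```python
from datetime import datetime, timedelta

# Retention bounds from this document.
MIN_RETENTION = timedelta(hours=6)
MAX_RETENTION = timedelta(days=90)

def is_valid_expire_time(create_time: datetime, expire_time: datetime) -> bool:
    """Return True if a backup expiration time falls within the
    allowed 6-hour to 90-day retention window."""
    retention = expire_time - create_time
    return MIN_RETENTION <= retention <= MAX_RETENTION
```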

Data size within tables

Use these best practices when deciding the size of your data.

Recommended limits

Design your schema to keep the size of your data under these recommended limits.

  • Column families per table: 100
  • A single column qualifier: 16 KB
  • A single value in a table cell: 10 MB
  • All values in a single row: 100 MB

Hard limits

In addition, you must ensure that your data fits within these hard limits:

  • A single row key: 4 KB
  • A single value in a table cell: 100 MB
  • All values in a single row: 256 MB

These size limits are measured in binary kilobytes (KB), where 1 KB is 2^10 bytes, and binary megabytes (MB), where 1 MB is 2^20 bytes. These units of measurement are also known as kibibytes (KiB) and mebibytes (MiB).
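The hard limits above can be checked client-side before a write is attempted; a minimal sketch using the binary units just described (the function name is illustrative):

```python
# Hard limits from this document, in binary units.
MAX_ROW_KEY_BYTES = 4 * 2**10        # 4 KB
MAX_CELL_VALUE_BYTES = 100 * 2**20   # 100 MB
MAX_ROW_TOTAL_BYTES = 256 * 2**20    # 256 MB

def within_hard_limits(row_key: bytes, cell_values: list) -> bool:
    """Return True if a row's key and cell values fit within
    Bigtable's hard size limits."""
    if len(row_key) > MAX_ROW_KEY_BYTES:
        return False
    if any(len(value) > MAX_CELL_VALUE_BYTES for value in cell_values):
        return False
    return sum(len(value) for value in cell_values) <= MAX_ROW_TOTAL_BYTES
```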

Operation limits

When you send multiple mutations to Bigtable as a single batch, the following limits apply:

  • A batch of conditional mutations, which use the CheckAndMutateRow method, can include up to 100,000 true mutations and up to 100,000 false mutations.

  • A batch of any other type of mutation can include no more than 100,000 mutations.
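A large set of mutations can be split client-side so that each batch respects these limits; a minimal sketch (the function name is illustrative):

```python
# Per-batch mutation limit from this document.
MAX_MUTATIONS_PER_BATCH = 100_000

def chunk_mutations(mutations, limit=MAX_MUTATIONS_PER_BATCH):
    """Split a list of mutations into batches that each stay within
    the 100,000-mutation limit."""
    return [mutations[i:i + limit] for i in range(0, len(mutations), limit)]
```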

Regions per instance

A Bigtable instance can have clusters in up to 8 regions where Bigtable is available. You can create one cluster in each zone in a region. For a list of available zones, see Bigtable locations.

Row filters

A row filter cannot exceed 20 KB. If a filter exceeds this limit, Bigtable returns an error, and you should redesign or shorten the filter.

Storage per node

If a cluster does not have enough nodes for its current workload and the amount of data it stores, Bigtable does not have enough CPU resources to manage all of the tablets associated with the cluster, and it cannot perform essential maintenance tasks in the background. As a result, the cluster might not be able to handle incoming requests, and latency increases. See Trade-offs between storage usage and performance for more details.

To prevent these issues, monitor storage utilization for your clusters to make sure they have enough nodes to support the amount of data in the cluster, based on the following limits:

  • SSD clusters: 5 TB per node
  • HDD clusters: 16 TB per node

These values are measured in binary terabytes (TB), where 1 TB is 2^40 bytes. This unit of measurement is also known as a tebibyte (TiB).

As a best practice, add enough nodes to your cluster so you are only using 70% of these limits, which helps accommodate any sudden spikes in storage usage. For example, if you are storing 50 TB of data in a cluster that uses SSD storage, you should provision at least 15 nodes, which will handle up to 75 TB of data. If you are not adding significant amounts of data to the cluster, you can exceed this recommendation and store up to 100% of the limit.
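The 70% sizing rule in the example above can be expressed as a small calculation; a minimal sketch (the function name is illustrative):

```python
import math

# Storage limits per node from this document (binary TB).
SSD_TB_PER_NODE = 5
HDD_TB_PER_NODE = 16
TARGET_UTILIZATION = 0.70  # recommended headroom for spikes

def recommended_nodes(data_tb: float, tb_per_node: float) -> int:
    """Minimum node count that keeps storage at or below 70% of the
    per-node limit."""
    return math.ceil(data_tb / (tb_per_node * TARGET_UTILIZATION))
```

For the example in the text, 50 TB on SSD storage yields at least 15 nodes, which can hold up to 75 TB at the hard limit.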

Tables per instance

Bigtable supports a maximum of 1,000 tables in each instance.

ID length limits

The following are the minimum and maximum ID lengths (number of characters) supported by Bigtable.

  • App profile: 1-50
  • Backup: 1-50
  • Cluster: 6-30
  • Column family: 1-64
  • Instance: 6-33
  • Table: 1-50
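These bounds can be checked before creating a resource; a minimal sketch (the dictionary keys and function name are illustrative):

```python
# Minimum and maximum ID lengths (number of characters) from this document.
ID_LENGTH_LIMITS = {
    "app_profile": (1, 50),
    "backup": (1, 50),
    "cluster": (6, 30),
    "column_family": (1, 64),
    "instance": (6, 33),
    "table": (1, 50),
}

def is_valid_id_length(resource_kind: str, identifier: str) -> bool:
    """Return True if the identifier's length is within the bounds
    for the given resource kind."""
    minimum, maximum = ID_LENGTH_LIMITS[resource_kind]
    return minimum <= len(identifier) <= maximum
```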

Usage policies

The use of this service must adhere to the Terms of Service as well as Google's Privacy Policy.