Quotas & Limits

Quota policy

Cloud Bigtable limits the maximum rate of incoming requests and enforces quotas for each of your Cloud Platform projects. Specific policies vary depending on resource availability, user profile, service usage history, and other factors, and are subject to change without notice.

Operations per Cloud Platform project

By default, the following quotas apply. When multiple operations are sent as a single batch, each operation in the batch counts towards your quota.

  • Limited to 10 operations per second per Cloud Platform project:
    • Reading a table's metadata (also known as "getting a table's descriptor")
    • Listing all tables in a cluster
  • Limited to 5,000 operations per day per Cloud Platform project, at a maximum rate of 1 operation per second:
    • Creating a table
    • Deleting a table
    • Renaming a table
    • Creating a column family
    • Updating a column family
    • Deleting a column family
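Because the table- and column-family operations above are capped at 1 operation per second, a client that issues several of them in a row (for example, creating many column families during setup) should pace its calls. The following is a minimal client-side pacing sketch, not part of any official client library; the class name `AdminOpPacer` and the injectable `clock`/`sleep` parameters are illustrative assumptions.

```python
import time

class AdminOpPacer:
    """Space successive calls at least `min_interval` seconds apart.

    A sketch for staying under a per-second quota (e.g. 1.0 s for the
    1-operation-per-second admin limit). The clock and sleep functions
    are injectable so the behavior can be tested without real delays.
    """

    def __init__(self, min_interval=1.0, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self._clock = clock
        self._sleep = sleep
        self._last = None  # timestamp of the previous call, if any

    def wait(self):
        """Block until at least `min_interval` has passed since the last call."""
        now = self._clock()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                self._sleep(remaining)
        self._last = self._clock()
```

Calling `pacer.wait()` immediately before each admin operation keeps the request rate at or below the quota, regardless of how fast the surrounding loop runs.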

Nodes per zone

A Cloud Platform project can contain multiple Cloud Bigtable instances, each of which is a container for a cluster. A cluster represents the Cloud Bigtable service running in a single zone. Each cluster contains nodes, the individual units of compute power and memory in Cloud Bigtable.

By default, you can provision up to 30 Cloud Bigtable nodes per zone in each Cloud Platform project. Cloud Bigtable performance scales linearly with the number of nodes, and the maximum number of nodes in a zone can be increased upon request using the node request form.

Amount of storage per node

For optimal availability and latency, you should add nodes to your Cloud Bigtable cluster based on the amount of data that you are storing in that cluster. Make sure you have enough nodes so that you do not exceed the following limits. These limits are measured in binary terabytes (TB), where 1 TB is 2^40 bytes. This unit of measurement is also known as a tebibyte (TiB).

  • SSD clusters: 2.5 TB per node
  • HDD clusters: 8 TB per node

You will see more consistent performance if you add nodes to your cluster before you reach these storage limits. As a best practice, add enough nodes to your cluster so that you are only using 70% of these limits. This configuration helps accommodate any sudden spikes in storage usage.

For example, if you are storing 50 TB of data in an SSD cluster, you should provision at least 29 nodes: 50 ÷ (2.5 × 0.7) ≈ 28.6, rounded up. Those 29 nodes can store up to 72.5 TB of data at the hard limit.
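The sizing rule above can be sketched as a small calculation. This is an illustrative helper, not an official tool; the function name `nodes_needed` and the 70% target constant are assumptions drawn from the best practice stated in this document.

```python
import math

# Per-node storage hard limits from the table above, in binary TB (TiB).
LIMIT_TB = {"SSD": 2.5, "HDD": 8.0}

# Best-practice headroom: plan for only 70% of the hard limit.
TARGET_UTILIZATION = 0.70

def nodes_needed(data_tb, storage_type="SSD", target=TARGET_UTILIZATION):
    """Return the minimum node count that keeps storage usage at or
    below `target` of the per-node hard limit."""
    usable_per_node = LIMIT_TB[storage_type] * target
    return math.ceil(data_tb / usable_per_node)

# The worked example from the text: 50 TB on SSD.
print(nodes_needed(50, "SSD"))  # -> 29
```

Note that this sizes the cluster for storage only; you may need additional nodes to meet throughput requirements.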

Size limits within a table

See Size Limits for details about size limits that apply to row keys, column families, column qualifiers, and values within a Cloud Bigtable table.

Requesting additional quota

If you need more resources than the default quotas allow, you can request an increase; for the per-zone node limit, use the node request form described above.

Usage policies

The use of this service must adhere to the Terms of Service as well as Google's Privacy Policy.
