Quotas & Limits

This page describes the quotas and limits that apply to Cloud Bigtable.

Operations per GCP project

By default, the following quotas apply to each GCP project. When multiple operations are sent as a single batch, each operation in the batch counts towards your quota. A sketch of these operations follows the list.

  • Limited to 10 operations per second per GCP project:
    • Reading a table's metadata (also known as "getting a table's descriptor")
    • Listing all tables in a cluster
  • Limited to 5,000 daily operations per GCP project, with a maximum of 1 operation per second:
    • Creating a table
    • Deleting a table
    • Renaming a table
    • Creating a column family
    • Updating a column family
    • Deleting a column family
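
The following is a minimal sketch of the kinds of administrative calls these quotas cover, using the google-cloud-bigtable Python client. The project, instance, and table IDs are placeholders, and the column family settings are illustrative only.

```python
from google.cloud import bigtable
from google.cloud.bigtable import column_family

# admin=True is required for table- and column-family-level operations.
# "my-project", "my-instance", and "my-table" are placeholder IDs.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Listing all tables counts toward the 10-operations-per-second quota.
tables = instance.list_tables()

# Creating a table and its column families counts toward the
# 5,000-operations-per-day quota (maximum 1 operation per second).
table = instance.table("my-table")
table.create(column_families={"cf1": column_family.MaxVersionsGCRule(1)})
```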

Instances, clusters, and nodes

A GCP project contains Cloud Bigtable instances, which are containers for clusters. A cluster represents the actual Cloud Bigtable service running in a single zone. Clusters contain nodes, which are compute resources that enable Cloud Bigtable to manage your data. Learn more about instances, clusters, and nodes.

The following limits apply to nodes within your Cloud Bigtable clusters.

Nodes per project

By default, you can provision up to 30 Cloud Bigtable nodes per zone in each GCP project. If you need to provision more nodes than the default limit, use the node request form.
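
As an illustration of where this limit applies, the sketch below creates a production instance with a single 3-node SSD cluster using the google-cloud-bigtable Python client. The IDs, zone, and label are placeholders; the serve_nodes value you choose counts toward the default limit of 30 nodes per zone in the project.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Placeholder project, instance, and cluster IDs.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance(
    "my-instance",
    instance_type=enums.Instance.Type.PRODUCTION,
    labels={"env": "prod"},
)
cluster = instance.cluster(
    "my-cluster",
    location_id="us-central1-b",
    serve_nodes=3,  # counts toward the 30-nodes-per-zone default limit
    default_storage_type=enums.StorageType.SSD,
)
operation = instance.create(clusters=[cluster])
operation.result(timeout=120)  # Wait for the long-running operation to finish.
```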

Storage utilization per node

If a cluster does not have enough nodes for its current workload and the amount of data it stores, Cloud Bigtable will not have enough CPU resources to manage all of the tablets associated with the cluster, and it will not be able to perform essential maintenance tasks in the background. As a result, the cluster may not be able to keep up with incoming requests, and latency will increase.

To prevent these issues, monitor storage utilization for your clusters to make sure they have enough nodes to support the amount of data in the cluster, based on the following limits. These values are expressed in binary terabytes (TB), where 1 TB is 2^40 bytes. This unit of measurement is also known as a tebibyte (TiB).

  • SSD clusters: 2.5 TB per node
  • HDD clusters: 8 TB per node

As a best practice, add enough nodes to your cluster so you are only using 70% of these limits, which helps accommodate any sudden spikes in storage usage. For example, if you are storing 50 TB of data in a cluster that uses SSD storage, you should provision at least 29 nodes, which will handle up to 72.5 TB of data. If you are not adding significant amounts of data to the cluster, you can exceed this recommendation and store up to 100% of the limit.
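
The arithmetic behind this recommendation can be expressed as a small helper. This is a hypothetical sketch; the function name is illustrative, and the per-node limits are the SSD and HDD values listed above.

```python
import math

# Per-node storage limits in binary TB (TiB), from the list above.
STORAGE_LIMIT_TB = {"SSD": 2.5, "HDD": 8.0}

def min_nodes_for_storage(data_tb, storage_type="SSD", target_utilization=0.70):
    """Return the minimum node count so that stored data stays at or below
    the target share (70% by default) of the per-node storage limit."""
    limit_tb = STORAGE_LIMIT_TB[storage_type]
    return math.ceil(data_tb / (limit_tb * target_utilization))

# Example from the text: 50 TB on SSD storage needs at least 29 nodes,
# which can hold up to 29 * 2.5 = 72.5 TB.
print(min_nodes_for_storage(50, "SSD"))  # -> 29
```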

Size limits within a table

See Size Limits for details about size limits that apply to row keys, column families, column qualifiers, and values within a Cloud Bigtable table.

Requesting additional quota

Usage policies

The use of this service must adhere to the Terms of Service as well as Google's Privacy Policy.
