Quotas & Limits

This page identifies the quotas and limits for Cloud Bigtable.

Quotas

This section describes default quotas that apply to all of your Cloud Bigtable usage.

Operation quotas

The following quotas affect the number of Cloud Bigtable administrative operations that you can perform within a given time. To request an increase for these quotas, contact Google Cloud Platform Support.

Daily quotas reset at midnight Pacific Time.

Instances and clusters

  • Instance and cluster read requests
    Description: Reading the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or reading a list of tables.
    Default quota: 864,000 ops per day per project (an average of 10 ops/second); 1,000 ops per 100 seconds per user.

  • Instance and cluster write requests
    Description: Changing the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or creating a new table.
    Default quota: 500 ops per day per project; 100 ops per 100 seconds per user.

Application profiles

  • App profile read requests
    Description: Reading the configuration for an app profile.
    Default quota: 5,000 ops per 100 seconds per project; 1,000 ops per 100 seconds per user.

  • App profile write requests
    Description: Changing the configuration for an app profile.
    Default quota: 500 ops per 100 seconds per project; 100 ops per 100 seconds per user.

Tables

  • Table read requests
    Description: Reading the configuration for a table (for example, details about its column families).
    Default quota: 864,000 ops per day per project (an average of 10 ops/second); 1,000 ops per 100 seconds per user.

  • Table write requests
    Description: Changing the configuration for a table (for example, the garbage collection settings for a column family).
    Default quota: 5,000 ops per day per project; 100 ops per 100 seconds per user.

  • DropRowRange method
    Description: Deleting a range of rows from a table in a single operation.
    Default quota: 5,000 ops per day per project; 100 ops per 100 seconds per user.

Cloud Identity and Access Management

  • Fine-grained ACL get requests
    Description: Reading information about the Cloud IAM policy for a Cloud Bigtable instance, or testing the Cloud IAM permissions for an instance.
    Default quota: 864,000 ops per day per project (an average of 10 ops/second); 1,000 ops per 100 seconds per user.

  • Fine-grained ACL set requests
    Description: Changing the Cloud IAM policy for a Cloud Bigtable instance.
    Default quota: 864,000 ops per day per project (an average of 10 ops/second); 1,000 ops per 100 seconds per user.
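Because several of these quotas are enforced per 100 seconds, an application that issues bursts of administrative calls can pace itself client-side. The following is a minimal sketch of a token-bucket limiter; the class and parameter names are illustrative and are not part of any Bigtable client library:

```python
import time

class TokenBucket:
    """Simple token-bucket limiter for pacing calls under a per-100-second quota."""

    def __init__(self, rate_per_100s):
        self.rate = rate_per_100s / 100.0        # tokens replenished per second
        self.capacity = float(rate_per_100s)     # maximum burst size
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self, tokens=1):
        """Block until `tokens` are available, then consume them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= tokens:
                self.tokens -= tokens
                return
            time.sleep((tokens - self.tokens) / self.rate)

# Pace table write requests at 100 ops per 100 seconds per user:
limiter = TokenBucket(rate_per_100s=100)
```

Before each administrative call, the application would invoke `limiter.acquire()`; calls beyond the burst capacity then wait rather than trip the quota.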

Node quotas

A GCP project contains Cloud Bigtable instances, which are containers for clusters. A cluster represents the actual Cloud Bigtable service running in a single zone. Clusters contain nodes, which are compute resources that enable Cloud Bigtable to manage your data.

By default, you can provision up to 30 Cloud Bigtable nodes per zone in each GCP project. If you need to provision more nodes than the default limit, use the node request form.

Limits

This section describes limits that apply to your usage of Cloud Bigtable. Limits are built into the service and cannot be changed.

Storage per node

If a cluster does not have enough nodes, based on its current workload and the amount of data it stores, Cloud Bigtable will not have enough CPU resources to manage all of the tablets that are associated with the cluster. Cloud Bigtable will also not be able to perform essential maintenance tasks in the background. As a result, the cluster may not be able to handle incoming requests, and latency will go up.

To prevent these issues, monitor storage utilization for your clusters to make sure they have enough nodes to support the amount of data in the cluster, based on the following limits:

  • SSD clusters: 2.5 TB per node
  • HDD clusters: 8 TB per node

These values are measured in binary terabytes (TB), where 1 TB is 2^40 bytes. This unit of measurement is also known as a tebibyte (TiB).
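As a quick sanity check, the per-node limits work out to the following byte counts:

```python
TIB = 2 ** 40  # 1 binary terabyte (tebibyte) in bytes

ssd_limit_bytes = 2.5 * TIB  # SSD limit per node
hdd_limit_bytes = 8 * TIB    # HDD limit per node

print(ssd_limit_bytes)  # 2748779069440.0
```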

As a best practice, add enough nodes to your cluster so you are only using 70% of these limits, which helps accommodate any sudden spikes in storage usage. For example, if you are storing 50 TB of data in a cluster that uses SSD storage, you should provision at least 29 nodes, which will handle up to 72.5 TB of data. If you are not adding significant amounts of data to the cluster, you can exceed this recommendation and store up to 100% of the limit.
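The sizing arithmetic above can be expressed directly. This sketch computes the minimum node count for a given amount of data at the 70% target; the function name is illustrative:

```python
import math

def recommended_nodes(data_tb, limit_tb_per_node=2.5, target_utilization=0.70):
    """Minimum nodes needed to keep storage at or below the target utilization."""
    return math.ceil(data_tb / (limit_tb_per_node * target_utilization))

# 50 TB on SSD storage at 70% utilization:
print(recommended_nodes(50))  # 29
```

With 29 nodes at 2.5 TB each, the cluster can hold up to 72.5 TB, matching the worked example above.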

Data size within tables

As a best practice, design your schema to keep the size of your data below these recommended limits:

  • A single row key: 4 KB
  • Column families per table: 100
  • A single column qualifier: 16 KB
  • A single value in a table cell: 10 MB
  • All values in a single row: 100 MB

In addition, you must ensure that your data fits within these hard limits:

  • A single value in a table cell: 100 MB
  • All values in a single row: 256 MB
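A client can pre-check a row's data against these hard limits before writing it. The following is a minimal sketch; the constants mirror the limits above, and the function name is illustrative:

```python
MIB = 2 ** 20  # 1 binary megabyte (mebibyte) in bytes

MAX_CELL_VALUE = 100 * MIB  # hard limit for a single value in a table cell
MAX_ROW_SIZE = 256 * MIB    # hard limit for all values in a single row

def check_row(values):
    """Raise ValueError if any value, or the row total, exceeds a hard limit.

    `values` is a list of byte strings destined for one row.
    """
    total = 0
    for v in values:
        if len(v) > MAX_CELL_VALUE:
            raise ValueError("cell value exceeds the 100 MB hard limit")
        total += len(v)
    if total > MAX_ROW_SIZE:
        raise ValueError("row exceeds the 256 MB hard limit")
```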

These size limits are measured in binary kilobytes (KB), where 1 KB is 2^10 bytes, and binary megabytes (MB), where 1 MB is 2^20 bytes. These units of measurement are also known as kibibytes (KiB) and mebibytes (MiB).

Tables per instance

Cloud Bigtable supports a maximum of 1,000 tables in each instance.

Application profiles per instance

Cloud Bigtable supports a maximum of 2,000 application profiles in each instance.

Operation limits

When you send multiple mutations to Cloud Bigtable as a single batch, you can include no more than 100,000 mutations in the batch.
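The 100,000-mutation cap means larger workloads must be split into multiple batches client-side. A simple chunking sketch (not tied to any specific client library):

```python
MAX_MUTATIONS_PER_BATCH = 100_000

def chunked(mutations, size=MAX_MUTATIONS_PER_BATCH):
    """Yield successive slices of at most `size` mutations."""
    for i in range(0, len(mutations), size):
        yield mutations[i:i + size]

# 250,000 queued mutations become three batches: 100k, 100k, and 50k.
batch_sizes = [len(b) for b in chunked(list(range(250_000)))]
print(batch_sizes)  # [100000, 100000, 50000]
```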

Usage policies

The use of this service must adhere to the Terms of Service as well as Google's Privacy Policy.
