Scaling

Scaling a cluster is the process of adding nodes to or removing nodes from a cluster in response to changes in the cluster's workload or data storage needs.

You can scale a Bigtable cluster automatically, by enabling autoscaling, or manually, by setting the number of nodes yourself (manual node allocation).

In most cases, choose autoscaling. When you enable autoscaling for a cluster, Bigtable continuously monitors the cluster and automatically adjusts the number of nodes based on your settings.
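For example, enabling autoscaling on an existing cluster with the gcloud CLI might look like the following sketch. The instance and cluster names are placeholders, and the minimum, maximum, and CPU target values are illustrative only; choose values that fit your workload.

  # Enable autoscaling on an existing cluster (placeholder names and values).
  gcloud bigtable clusters update my-cluster \
      --instance=my-instance \
      --autoscaling-min-nodes=1 \
      --autoscaling-max-nodes=10 \
      --autoscaling-cpu-target=60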

You can also scale your Bigtable cluster programmatically, based on metrics such as the cluster's CPU usage. For example, if your cluster is under heavy load and its CPU utilization is high, you can add nodes to the cluster until its CPU usage drops. You can save money by removing nodes from the cluster when it is not being used heavily.
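As a minimal sketch of this kind of scaling, the following gcloud CLI command sets a new node count on a cluster that uses manual node allocation; my-instance and my-cluster are placeholder names, and in practice you would pick the new count based on a metric, such as CPU utilization, that you monitor yourself.

  # Add nodes to a manually allocated cluster in response to high CPU usage.
  gcloud bigtable clusters update my-cluster \
      --instance=my-instance \
      --num-nodes=6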

Node scaling factor

When you create a Bigtable cluster, you have the option to configure the cluster with a 2x node scaling factor. When you choose this configuration, Bigtable treats two standard nodes as a single, larger compute node, and the cluster is always scaled in increments of two nodes. As a result, there are fewer compute boundaries between nodes in the cluster. Depending on the workload, the benefits of 2x node scaling include the following:

  • Improved throughput and tail latency stability
  • Greater ability to absorb hotspots

You can create a cluster with the 2x node scaling factor enabled by using the Google Cloud console or the gcloud CLI.
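A gcloud CLI command to create such a cluster might look like the following sketch. The instance, cluster, and zone names are placeholders, and the --node-scaling-factor flag and its value are an assumption here; check the gcloud bigtable clusters create reference for the exact spelling.

  # Create a cluster that uses the 2x node scaling factor (flag name assumed).
  gcloud bigtable clusters create my-cluster-2x \
      --instance=my-instance \
      --zone=us-central1-a \
      --num-nodes=2 \
      --node-scaling-factor=node-scaling-factor-2x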

You can configure 2x node scaling with autoscaling or manual node allocation.

For limitations, see Node scaling factor limitations.

Small clusters

2x node scaling is optimal for larger workloads. If you are considering changing from standard (1x) node scaling to 2x node scaling, consider the cost implications. For a smaller workload, such as one that runs on a cluster with one node, using 2x node scaling costs twice as much, because the smallest possible 2x cluster has two nodes. Similarly, using 2x node scaling for a workload that previously ran on a cluster with 3 nodes increases costs by about 33%, because the cluster must be rounded up to 4 nodes.

On the other hand, for a workload that previously ran on a large cluster, such as a cluster with 50 nodes, the effect of a 2x node scaling factor is small relative to the number of nodes.

Bigtable returns an error if you try to create a cluster with a 2x node scaling factor in an unsupported zone.

Limitations

Cluster scaling is subject to node availability, takes time to complete, can't compensate for an inappropriate schema design, and must be done gradually. The following sections describe these limitations, as well as limitations that apply to 2x node scaling.

Node availability

Node quotas apply whether a cluster has manual node allocation or autoscaling enabled. See Quotas and node availability for details.

Delay while nodes rebalance

After you add nodes to a cluster, it can take up to 20 minutes under load before you see a significant improvement in the cluster's performance. As a result, if your workload involves short bursts of high activity, adding nodes to your cluster based on CPU load doesn't improve performance — by the time Bigtable rebalances your data, the short burst of activity will be over.

To plan for this delay, you can add nodes to your cluster, either programmatically or through the Google Cloud console, before you increase the load on the cluster. This approach gives Bigtable time to rebalance your data across the additional nodes before the workload increases. On clusters that use manual node allocation, change the number of nodes. On clusters that use autoscaling, change the minimum number of nodes. After your traffic returns to normal, change your node settings back.
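For example, before an expected traffic spike you might raise the autoscaling minimum, and then restore it afterwards. This is a sketch only; the names and node counts are placeholders.

  # Before the spike: raise the floor so Bigtable provisions nodes early and
  # has time to rebalance data onto them.
  gcloud bigtable clusters update my-cluster \
      --instance=my-instance \
      --autoscaling-min-nodes=20

  # After traffic returns to normal: restore the original minimum.
  gcloud bigtable clusters update my-cluster \
      --instance=my-instance \
      --autoscaling-min-nodes=5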

Latency increases caused by scaling down too quickly

When you decrease the number of nodes in a cluster to scale down, try not to reduce the cluster size by more than 10% in a 10-minute period. Scaling down too quickly can cause performance problems, such as increased latency, if the remaining nodes in the cluster become temporarily overwhelmed.
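One way to respect that guideline on a cluster that uses manual node allocation is to step the node count down gradually, as in the following sketch; the names, node counts, and starting size (100 nodes) are placeholders.

  # Scale down from 100 nodes in steps of no more than 10%, waiting 10 minutes
  # between steps so the remaining nodes aren't temporarily overwhelmed.
  for nodes in 90 81; do
    gcloud bigtable clusters update my-cluster \
        --instance=my-instance \
        --num-nodes="$nodes"
    sleep 600
  done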

Schema design issues

If there are problems with the schema design for your table, adding nodes to your Bigtable cluster may not improve performance. For example, if you have a large number of reads or writes to a single row in your table, all of the reads or writes will go to the same node in your cluster; as a result, adding nodes doesn't improve performance. In contrast, if reads and writes are evenly distributed across rows in your table, adding nodes will generally improve performance.
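To illustrate the difference, the following cbt commands are a sketch (the project, instance, table, row keys, and column family are placeholders). In the first pattern, every write targets the same row, so one node serves all of that traffic regardless of cluster size; in the second, writes are spread across many row keys and can be distributed across nodes.

  # Hotspot: every client writes to the same row key, so adding nodes doesn't help.
  cbt -project=my-project -instance=my-instance \
      set my-table global-counter stats:count=1

  # Better: each write targets a distinct row key (for example, one per user),
  # so Bigtable can spread the load across the cluster's nodes.
  cbt -project=my-project -instance=my-instance \
      set my-table user123#purchase#20250101 stats:count=1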

See Designing Your Schema for details about how to design a schema that lets Bigtable scale effectively.

Node scaling factor limitations

You can't convert a cluster with standard node scaling to use 2x node scaling; you must create a new cluster and enable 2x node scaling at creation time. For more information on adding a cluster to an instance, see Modify an instance.

You can't configure 2x node scaling for an HDD cluster.

You can create clusters configured with 2x node scaling in every Bigtable region, but not in every zone. The following zones can't contain a cluster with 2x node scaling:

  • asia-south1-c
  • europe-central2-c
  • me-central2-b
  • me-central2-c
  • northamerica-northeast1-a
  • northamerica-northeast1-b
  • southamerica-east1-c
  • us-south1-b
  • us-south1-c

What's next