# Scaling

*Last updated: 2025-09-04 (UTC).*

*Scaling* a cluster is the process of adding or removing nodes from a cluster in response to changes in the cluster's workload or data storage needs.

You can scale a Bigtable cluster in the following ways:

- [Autoscaling](/bigtable/docs/autoscaling)
- Manual node allocation

**In most cases, choose autoscaling.** When you enable autoscaling for a cluster, Bigtable continuously monitors the cluster and automatically adjusts the number of nodes based on your settings.

You can scale your Bigtable cluster based on metrics such as the cluster's
CPU usage. For example, if your cluster is under heavy load and its CPU utilization is high, you can add nodes to the cluster until its CPU usage drops. You can also save money by removing nodes from the cluster when it is not being used heavily.

## Node scaling factor

When you create a Bigtable cluster, you have the option to configure the cluster with a **2x node scaling factor**. When you choose this configuration, Bigtable treats two standard nodes as a larger, single compute node, and the cluster is always scaled in increments of two nodes. As a result, there are fewer compute boundaries between nodes in the cluster. Depending on the workload, the benefits of 2x node scaling include the following:

- Improved throughput and tail latency stability
- Greater ability to absorb hotspots

You can create a cluster with the 2x node scaling factor enabled when you use the Google Cloud console or the gcloud CLI.

You can configure 2x node scaling with autoscaling or manual node allocation.

For limitations, see [Node scaling factor limitations](#node-scaling-limitations).

### Small clusters

2x node scaling is optimal for larger workloads. If you are considering changing from standard node scaling (by a factor of one) to 2x node scaling, consider the cost implications. For a smaller workload, such as one that runs on a cluster with one node, using 2x node scaling costs twice as much.
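The cost comparison can be sketched numerically. This sketch assumes, per the description above, that a 2x cluster's node count is always a multiple of two, so a standard node count is rounded up to the next even number; the helper functions are illustrative, not part of any Bigtable API:

```python
import math

def nodes_with_2x_scaling(standard_nodes: int) -> int:
    """Illustrative: a 2x cluster is scaled in increments of two nodes,
    so round the standard node count up to the next multiple of two."""
    return 2 * math.ceil(standard_nodes / 2)

def cost_increase_pct(standard_nodes: int) -> float:
    """Relative cost increase of moving the same workload to 2x scaling,
    assuming cost is proportional to the number of nodes."""
    extra = nodes_with_2x_scaling(standard_nodes) - standard_nodes
    return 100 * extra / standard_nodes

print(cost_increase_pct(1))   # 100.0 -- a 1-node workload costs twice as much
print(cost_increase_pct(3))   # ~33.3 -- a 3-node workload pays for a 4th node
print(cost_increase_pct(50))  # 0.0  -- no change for a large, even node count
```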
Similarly, using 2x node scaling for a workload that previously ran on a cluster with 3 nodes increases costs by 33%.

On the other hand, for a workload that previously ran on a large cluster, such as a cluster with 50 nodes, the effect of a 2x node scaling factor is small relative to the number of nodes.

Bigtable returns an error if you try to create a cluster with a 2x node scaling factor in an unsupported zone.

## Limitations

Cluster scaling is subject to node availability, takes time to complete, can't compensate for an inappropriate schema design, and must be done gradually. The following sections describe these limitations, as well as limitations that apply to 2x node scaling.

### Node availability

Node quotas apply whether a cluster has manual node allocation or autoscaling enabled. See [Quotas and node availability](/bigtable/quotas#availability) for details.

### Delay while nodes rebalance

After you add nodes to a cluster, it can take up to 20 minutes under load before you see a significant improvement in the cluster's performance. As a result, if your workload involves short bursts of high activity, adding nodes to your cluster based on CPU load doesn't improve performance, because by the time Bigtable rebalances your data, the short burst of activity will be over.

To plan for this delay, you can add nodes to your cluster, either programmatically or through the Google Cloud console, *before* you increase the load on the cluster. This approach gives Bigtable time to rebalance your data across the additional nodes before the workload increases. On clusters that use manual node allocation, change the number of nodes. On clusters that use autoscaling, change the [minimum number of nodes](/bigtable/docs/autoscaling#parameters).
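The pre-scaling step described above might look like the following gcloud CLI sketch; the instance name, cluster name, and node counts are placeholders for your own values:

```shell
# Before an anticipated traffic spike: on a cluster that uses manual
# node allocation, raise the node count ahead of time.
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --num-nodes=10

# On a cluster that uses autoscaling, raise the minimum node count
# instead, so the cluster can't scale below the pre-warmed size.
gcloud bigtable clusters update my-cluster \
    --instance=my-instance \
    --autoscaling-min-nodes=10
```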
After your traffic returns to normal, change your node settings back.

### Latency increases caused by scaling down too quickly

When you decrease the number of nodes in a cluster to scale down, try not to reduce the cluster size by more than 10% in a 10-minute period. Scaling down too quickly can cause performance problems, such as increased latency, if the remaining nodes in the cluster become temporarily overwhelmed.

### Schema design issues

If there are problems with the schema design for your table, adding nodes to your Bigtable cluster may not improve performance. For example, if you have a large number of reads or writes to a single row in your table, all of the reads or writes will go to the same node in your cluster; as a result, adding nodes doesn't improve performance. In contrast, if reads and writes are evenly distributed across rows in your table, adding nodes will generally improve performance.

See [Designing Your Schema](/bigtable/docs/schema-design) for details about how to design a schema that lets Bigtable scale effectively.

### Node scaling factor limitations

You can't convert a cluster with standard node scaling to use 2x node scaling; you must create a new cluster and enable 2x node scaling at creation time. For more information on adding a cluster to an instance, see [Modify an instance](/bigtable/docs/modifying-instance).

You can't configure 2x node scaling for an HDD cluster.

You can create clusters configured with 2x node scaling in every Bigtable region, but not in every zone.
The following zones can't contain a cluster with 2x node scaling:

- asia-south1-c
- europe-central2-c
- me-central2-b
- me-central2-c
- northamerica-northeast1-a
- northamerica-northeast1-b
- southamerica-east1-c
- us-south1-b
- us-south1-c

## What's next

- Learn about [Bigtable autoscaling](/bigtable/docs/autoscaling).
- Find out how you can [monitor your instance](/bigtable/docs/monitoring-instance), both programmatically and through the Google Cloud console.