# Replication and performance

*Last updated 2025-09-04 UTC.*

Enabling replication affects the performance of a Bigtable instance. The effect is positive for some metrics and negative for others. You should understand the potential impacts on performance before deciding to enable replication.

## Read throughput

Replication can improve read throughput, especially when you use multi-cluster routing.
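As a toy illustration of why multi-cluster routing helps reads (a sketch, not the Bigtable client API; the cluster names and latency figures are made up), routing each read to the nearest of several replicated clusters bounds read latency by the closest replica:

```python
# Toy model (not the Bigtable API): route each read to the nearest cluster.
# Cluster ID -> simulated round-trip latency in ms from the application.
cluster_latency_ms = {"us-east1-b": 12, "europe-west1-c": 95, "asia-east1-a": 180}

def route_read(latencies):
    """Multi-cluster routing picks the closest cluster; return (cluster, latency)."""
    return min(latencies.items(), key=lambda kv: kv[1])

cluster, latency = route_read(cluster_latency_ms)
print(cluster, latency)  # the nearest replica serves the read
```

With single-cluster routing, by contrast, every request pays the fixed distance to the one pinned cluster, regardless of where the application runs.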
Additionally, replication can reduce read latency by placing your Bigtable data geographically closer to your application's users.

## Write throughput

Although replication can improve availability and read performance, **it does not increase write throughput**. A write to one cluster must be replicated to all other clusters in the instance, so each cluster expends CPU resources pulling changes from the other clusters. Write throughput might actually go down because replication requires each cluster to do this additional work.

For example, suppose you have a single-cluster instance, and the cluster has 3 nodes. If you add nodes to the cluster, the effect on write throughput is different than if you enable replication by adding a second 3-node cluster to the instance.

**Adding nodes to the original cluster**: You can add 3 nodes to the cluster, for a total of 6 nodes. The write throughput for the instance doubles, but the instance's data is available in only one zone.

**With replication**: Alternatively, you can add a second cluster with 3 nodes, for a total of 6 nodes. The instance now writes each piece of data twice: when the write is first received, and again when it is replicated to the other cluster. The write throughput does not increase, and might go down, but you benefit from having your data available in two different zones.

In these examples, the single-cluster instance can handle twice the write throughput that the replicated instance can, even though each instance's clusters have a total of 6 nodes.

## Replication latency

When you use multi-cluster routing, replication for Bigtable is [eventually consistent](/bigtable/docs/replication-overview#consistency-model). As a general rule, it takes longer to replicate data across a greater distance.
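The node-count comparison in the write-throughput example above can be put in rough numbers. This is a sketch with an assumed per-node write rate of 10,000 rows per second, chosen only for illustration; real throughput depends heavily on workload:

```python
PER_NODE_WRITES = 10_000  # assumed rows/sec per node; illustrative only

def instance_write_throughput(nodes_per_cluster, clusters):
    # Every write must be applied once on each cluster, so the instance's
    # total node capacity is divided by the replication factor.
    total_nodes = nodes_per_cluster * clusters
    return total_nodes * PER_NODE_WRITES // clusters

single_6 = instance_write_throughput(6, 1)    # one 6-node cluster
replicated = instance_write_throughput(3, 2)  # two 3-node clusters
print(single_6, replicated)  # 60000 vs 30000: same 6 nodes, half the write capacity
```

The same six nodes yield half the write capacity when split across two replicated clusters, which matches the two-to-one comparison in the example above.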
Replicated clusters in different regions typically have higher replication latency than replicated clusters in the same region.

## Node usage

As explained in [Write throughput](#performance-write-throughput), when an instance uses replication, each cluster in the instance must handle the work of replication in addition to the load it receives from applications. For this reason, a cluster in a multi-cluster instance often needs more nodes than a cluster in a single-cluster instance with similar traffic.

## App profiles and traffic routing

Depending on your use case, you use one or more app profiles to route your Bigtable traffic. Each app profile uses either multi-cluster or single-cluster routing. The choice of routing can affect performance.

Multi-cluster routing can minimize latency. An app profile with multi-cluster routing automatically routes requests to the closest cluster in an instance from the perspective of the application, and the writes are then replicated to the other clusters in the instance. This automatic choice of the shortest distance results in the lowest possible latency.

An app profile that uses single-cluster routing can be optimal for certain use cases, such as separating workloads or providing read-after-write semantics on a single cluster, but it won't reduce latency the way multi-cluster routing does.

To understand how to configure your app profiles for these and other use cases, see [Examples of Replication Settings](/bigtable/docs/replication-settings).

## Dropping row ranges

If possible, [avoid dropping a row range](/bigtable/docs/replication-overview#drop-row-range) in an instance that uses replication, because the operation is slow and CPU usage increases while it runs.

## What's next

- Read about [Failovers](/bigtable/docs/failovers).
- Explore [Routing options](/bigtable/docs/routing).
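For reference, the two routing policies described in the app profiles section above can be configured with `gcloud`. This is a sketch: `my-instance`, `cluster-1`, and the profile names are placeholders, and the flags should be checked against the current `gcloud bigtable app-profiles create` documentation for your SDK version:

```shell
# Multi-cluster routing: requests go to the nearest available cluster.
gcloud bigtable app-profiles create multi-cluster-profile \
    --instance=my-instance \
    --route-any \
    --description="Lowest latency; eventually consistent"

# Single-cluster routing: all requests are pinned to one cluster, for
# example to isolate a batch workload or to get read-your-writes
# semantics on that cluster.
gcloud bigtable app-profiles create batch-profile \
    --instance=my-instance \
    --route-to=cluster-1 \
    --description="Pinned to cluster-1"
```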