Last updated 2025-09-04 (UTC).

# Failovers

If a Bigtable cluster becomes unresponsive, replication makes it possible for incoming traffic to fail over to another cluster in the same instance.
Failovers can be either manual or automatic, depending on the [app profile](/bigtable/docs/app-profiles) an application is using and how the app profile is configured.

This page explains how manual and automatic failovers work in an instance that uses replication. To learn how to complete a failover, see [Managing failovers](/bigtable/docs/managing-failovers).

Before you read this page, you should be familiar with the [overview of Bigtable replication](/bigtable/docs/replication-overview). You should also be familiar with the available [routing options](/bigtable/docs/routing).

## Manual failovers

If an app profile uses single-cluster routing to direct all requests to one cluster, you must use your own judgment to decide when to start failing over to a different cluster.

Here are some signals that might indicate that it would be helpful to fail over to a different cluster:

- The cluster starts to return a large number of transient system errors.
- A large number of requests start timing out.
- The average response latency increases to an unacceptable level.

> **Note:** Bigtable reverts to eventual consistency after a failover. If you fail over to a cluster, and there are writes that have not been replicated to that cluster, some reads will return stale data until replication is complete.

Because these signals can appear for many different reasons, failing over to a different cluster is not guaranteed to resolve the underlying issue. [Monitor your instance](/bigtable/docs/monitoring-instance) before and after the failover to verify that the metrics have improved.

For details about how to complete a manual failover, see [Completing a manual failover](/bigtable/docs/managing-failovers#manual).

## Automatic failovers

If an app profile uses multi-cluster routing, Bigtable handles failovers automatically.
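Which of the two behaviors you get is determined by the app profile's routing policy. As a hedged sketch of the difference (the instance, cluster, and app profile names below are hypothetical, and you should check the flags against the current `gcloud` reference):

```shell
# Hypothetical names: my-instance, cluster-b, my-profile, ha-profile.

# Single-cluster routing: all traffic goes to one cluster, so a manual
# failover is an app profile update that points traffic at another cluster.
gcloud bigtable app-profiles update my-profile \
    --instance=my-instance \
    --route-to=cluster-b \
    --force  # acknowledge the warning about changing routing

# Multi-cluster routing: Bigtable chooses an available cluster and fails
# over automatically, so no manual routing change is needed.
gcloud bigtable app-profiles create ha-profile \
    --instance=my-instance \
    --route-any
```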
When the nearest cluster is unable to handle a request, Bigtable routes traffic to the nearest cluster that is available.

> **Warning:** If a request that contains a SQL statement fails, the request doesn't fail over, even when multi-cluster routing is enabled. This applies to SQL statements sent using a Bigtable client library as well as those run in the Bigtable Studio query editor.

Automatic failovers can occur even if a cluster is unavailable for a very short period of time. For example, if Bigtable routes a request to one cluster, and that cluster is excessively slow to reply or returns a transient error, Bigtable will typically retry that request on another cluster.

If you are using multi-cluster routing and you send a request with a deadline, Bigtable automatically fails over when necessary to help meet the deadline. If the request deadline approaches but the initial cluster has not sent a response, Bigtable reroutes the request to the next closest cluster.

Bigtable uses an internal *last write wins* algorithm to handle any data conflicts that might occur as a result of failover before replication has completed. See [Conflict resolution](/bigtable/docs/writes#conflict-resolution) for more details.

If you are using replication with multi-cluster routing to achieve high availability (HA) for your application, you should locate your client servers or VMs *in or near more than one Google Cloud region*. This recommendation applies even if your application server is not hosted by Google Cloud, because your data enters the Google Cloud network through the Google Cloud region that is closest to your application server. Like any request, a failover completes more quickly over shorter distances.

Many automatic failovers are so brief that you won't notice them.
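To build intuition for the *last write wins* conflict resolution mentioned above, here is a minimal, self-contained sketch. It is illustrative only: Bigtable's actual algorithm is internal, and the cell and timestamp types here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical model of a single cell write; not Bigtable's real types.
@dataclass
class CellWrite:
    value: str
    timestamp_micros: int  # every write carries a timestamp

def merge_last_write_wins(a: CellWrite, b: CellWrite) -> CellWrite:
    """Resolve a conflict between two replicas' writes to the same cell:
    the write with the later timestamp wins."""
    return a if a.timestamp_micros >= b.timestamp_micros else b

# Two clusters accepted conflicting writes to the same cell during a
# failover, before replication caught up:
w1 = CellWrite("balance=100", timestamp_micros=1_700_000_000_000_000)
w2 = CellWrite("balance=250", timestamp_micros=1_700_000_000_000_500)

merged = merge_last_write_wins(w1, w2)
print(merged.value)  # the later write survives
```

The merge is symmetric: regardless of which replica's copy you start from, both converge on the write with the latest timestamp.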
You can check the **Automatic Failovers** graph in the Google Cloud console to see the number of requests that were automatically rerouted over a given period of time: open the [list of instances](https://console.cloud.google.com/bigtable/instances), click the instance name, then click **System insights**.

## What's next

- Learn how to [complete a manual failover](/bigtable/docs/managing-failovers#manual).
- Find out how to [monitor your instance](/bigtable/docs/monitoring-instance).