Overview of Replication

Replication for Cloud Bigtable enables you to increase the availability and durability of your data by copying it across multiple zones. You can also isolate workloads by routing different types of requests to different clusters.

This page explains how replication works in Cloud Bigtable and describes some common use cases for replication. It also explains the consistency model that Cloud Bigtable uses when replication is enabled and describes what happens when one cluster fails over to another.

Before you read this page, you should be familiar with the overview of Cloud Bigtable.

How replication works

To use replication in a Cloud Bigtable instance, create a new instance with 2 clusters, or add a second cluster to an existing instance.

When you create an instance with 2 clusters, Cloud Bigtable immediately starts to synchronize your data between the zones where the clusters are located, creating a separate, independent copy of your data in each zone. Similarly, when you add a new cluster to an existing instance, Cloud Bigtable copies your existing data from the original cluster's zone to the new cluster's zone, then synchronizes changes to your data between the two zones.
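
For example, the following minimal sketch creates an instance with two clusters by using the Python client library (google-cloud-bigtable); the project, instance, cluster, and zone IDs are placeholders.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Admin operations require a client created with admin=True.
client = bigtable.Client(project="my-project", admin=True)

instance = client.instance(
    "my-instance",
    instance_type=enums.Instance.Type.PRODUCTION,
    labels={"env": "prod"},
)

# Define one cluster in each of two zones. After the instance is created,
# Cloud Bigtable keeps the copies of your data in sync automatically.
cluster_a = instance.cluster(
    "my-instance-c1",
    location_id="us-central1-a",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
cluster_b = instance.cluster(
    "my-instance-c2",
    location_id="us-central1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)

operation = instance.create(clusters=[cluster_a, cluster_b])
operation.result(timeout=300)  # Wait for the long-running operation to finish.
```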

Cloud Bigtable replicates any changes to your data automatically, including all of the following types of changes:

  • Updates to the data in existing tables
  • New and deleted tables
  • Added and removed column families
  • Changes to a column family's garbage-collection policy
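
As an illustration, a schema change such as the following garbage-collection policy update only needs to be applied once; Cloud Bigtable propagates it to every cluster in the instance. This is a sketch that uses the Python client with placeholder IDs.

```python
from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
table = instance.table("my-table")

# Keep at most two versions of each cell in the "stats" column family.
# The new garbage-collection policy is replicated to every cluster.
cf = table.column_family("stats", gc_rule=column_family.MaxVersionsGCRule(2))
cf.update()
```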

Cloud Bigtable treats each cluster in your instance as a primary cluster, so you can perform reads and writes in each cluster. You can also set up your instance so that requests from different types of applications are routed to different clusters.

Use cases

This section describes some common use cases for Cloud Bigtable replication. To find the best configuration settings for each use case, as well as implementation tips for other use cases, see Examples of Replication Settings.

Isolate serving applications from batch reads

When you use a single cluster to run a batch analytics job that performs numerous large reads alongside an application that performs a mix of reads and writes, the large batch job can slow things down for the application's users. With replication, you can route batch analytics jobs to one cluster and application traffic to another cluster, so that batch jobs don't affect your application's users. Learn more about implementing this use case.
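
As a sketch of this configuration, the following Python snippet creates two app profiles with single-cluster routing, one per cluster; the profile, cluster, and instance IDs are placeholders.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Serving traffic goes to one cluster...
serving_profile = instance.app_profile(
    "serving",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="my-instance-c1",
    description="Latency-sensitive application traffic",
)
serving_profile.create(ignore_warnings=True)

# ...and batch analytics jobs go to the other.
batch_profile = instance.app_profile(
    "batch-analytics",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="my-instance-c2",
    description="Large batch reads",
)
batch_profile.create(ignore_warnings=True)
```

An application can then select a profile when it opens a table, for example instance.table("my-table", app_profile_id="batch-analytics") in the same client library.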

Improve availability

If an instance has only 1 cluster, your data's durability and availability are limited to the zone where that cluster is located. Replication can improve both durability and availability by storing separate copies of your data in 2 zones and automatically failing over between clusters if needed. Learn more about implementing this use case.
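
One possible configuration, sketched here with the Python client and placeholder IDs, is an app profile that uses multi-cluster routing so that Cloud Bigtable can route each request to the nearest available cluster and fail over automatically if a cluster becomes unavailable.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# ANY (multi-cluster) routing lets Cloud Bigtable send each request to the
# nearest available cluster and fail over automatically if one cluster
# becomes unavailable.
ha_profile = instance.app_profile(
    "high-availability",
    routing_policy_type=enums.RoutingPolicyType.ANY,
    description="Automatic failover between clusters",
)
ha_profile.create(ignore_warnings=True)
```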

Provide near-real-time backup

In some cases—for example, if you can't afford to read stale data—you'll always need to route requests to a single cluster. However, you can still use replication by handling requests with one cluster and keeping another cluster as a near-real-time backup. If the serving cluster becomes unavailable, you can minimize downtime by manually failing over to the backup cluster. Learn more about implementing this use case.

Consistency model

By default, replication for Cloud Bigtable is eventually consistent. This term means that when you write a change to one cluster, you will eventually be able to read that change from the other cluster, but only after the change is replicated between the clusters.

If your instance is healthy, the delay for replication is typically a few seconds or minutes, not hours. However, if you're writing a large amount of data to a cluster, or if a cluster is overloaded or temporarily unavailable, it can take time for replication to catch up. As a result, it's not normally safe to assume that you're always reading the latest value that was written, or that waiting a few seconds after a write gives Cloud Bigtable enough time to replicate the change.

If you need a different consistency guarantee, Cloud Bigtable can also provide read-your-writes consistency when replication is enabled, which ensures that an application will never read data that is older than its most recent writes. To gain read-your-writes consistency for a group of applications, each application in the group must use an app profile that is configured for single-cluster routing, and all of the app profiles must route requests to the same cluster. You can use the second cluster at the same time for other purposes.
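
For example, an application that writes and then reads through the same single-cluster app profile (a hypothetical profile named "serving" here) gets read-your-writes consistency, whereas a reader that uses a different profile routed to the other cluster might briefly see older data. The following sketch uses the Python client with placeholder IDs.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")

# All requests through this table object use the "serving" app profile,
# which is assumed to be configured for single-cluster routing.
table = instance.table("my-table", app_profile_id="serving")

row = table.direct_row(b"user#1234")
row.set_cell("stats", b"city", b"Tokyo")
row.commit()

# Because the write and the read are routed to the same cluster, the read
# reflects the write immediately.
latest = table.read_row(b"user#1234")
print(latest.cells["stats"][b"city"][0].value)
```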

Cloud Bigtable can also provide strong consistency when replication is enabled, which ensures that all of your applications see your data in the same state. To gain strong consistency, you use the app-profile configuration for read-your-writes consistency that is described above, but you must not use the second cluster unless you need to fail over to that cluster.

Application profiles

If an instance uses replication, you use application profiles, or app profiles, to control which clusters handle incoming requests from your applications. App profiles also determine whether you can perform single-row transactions, which include read-modify-write operations (including increments and appends) and check-and-mutate operations (also known as conditional mutations or conditional writes).
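
The following sketch, which assumes a hypothetical "transactional" profile and placeholder table and cluster IDs, creates an app profile that allows single-row transactions and then performs an increment (a read-modify-write) and a conditional write through that profile with the Python client.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums, row_filters

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Single-row transactions require an app profile that uses single-cluster
# routing and explicitly allows transactional writes.
txn_profile = instance.app_profile(
    "transactional",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="my-instance-c1",
    allow_transactional_writes=True,
)
txn_profile.create(ignore_warnings=True)

table = instance.table("my-table", app_profile_id="transactional")

# Read-modify-write: atomically increment a counter cell.
counter = table.append_row(b"user#1234")
counter.increment_cell_value("stats", b"visits", 1)
counter.commit()

# Check-and-mutate: write a cell only if the row has data in "stats".
cond = table.conditional_row(
    b"user#1234", filter_=row_filters.FamilyNameRegexFilter("stats")
)
cond.set_cell("stats", b"flagged", b"1", state=True)
cond.commit()
```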

For details, see Application Profiles. For examples of settings you can use to implement common use cases, see Examples of Replication Settings.

Failovers

If a Cloud Bigtable cluster becomes unresponsive, replication makes it possible for incoming traffic to fail over to the instance's other cluster. Failovers can be either manual or automatic, depending on which app profile an application is using and how the app profile is configured.
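
For an app profile that uses single-cluster routing, a manual failover amounts to repointing the profile at the instance's other cluster. The following is a rough sketch with the Python client and placeholder IDs; the exact update call can vary between client-library versions.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Load the current app profile configuration.
profile = instance.app_profile("serving")
profile.reload()

# Repoint single-cluster routing at the healthy cluster and apply the change.
profile.cluster_id = "my-instance-c2"
profile.update(ignore_warnings=True)
```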

For details, see Failovers.

Dropping row ranges when replication is enabled

The Cloud Bigtable Admin API enables you to drop a contiguous range of rows from a table based on their row keys. In instances that do not use replication, Cloud Bigtable can drop a row range quickly and efficiently. However, when replication is enabled, dropping a row range is significantly slower and much less efficient.
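
For reference, the following sketch drops a range of rows by key prefix with the Python client (placeholder IDs). The call is the same with or without replication, but on a replicated instance you should expect it to take considerably longer.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
table = instance.table("my-table")

# Deletes every row whose key starts with the given prefix. On a replicated
# instance, expect this operation to be much slower.
table.drop_by_prefix(b"user#", timeout=600)
```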
