Replication overview

Replication for Bigtable lets you increase the availability and durability of your data by copying it across multiple regions or multiple zones within the same region. You can also isolate workloads by routing different types of requests to different clusters.

This page explains how replication works in Bigtable and describes some common use cases for replication. It also explains the consistency model that Bigtable uses when replication is enabled and describes what happens when one cluster fails over to another.

Before you read this page, you should be familiar with the overview of Bigtable.

How replication works

To use replication in a Bigtable instance, create a new instance with more than one cluster or add clusters to an existing instance.
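
For example, the following minimal sketch uses the Python client library (google-cloud-bigtable) to create an instance with two clusters, which enables replication from the start. The project, instance, cluster, and zone IDs are placeholders.

```python
# Minimal sketch, assuming the google-cloud-bigtable Python client.
# Project, instance, cluster, and zone IDs below are placeholders.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance", display_name="Replicated instance")

# One cluster per zone; creating the instance with both enables replication.
cluster_a = instance.cluster(
    "cluster-a", location_id="us-central1-a", serve_nodes=3,
    default_storage_type=enums.StorageType.SSD)
cluster_b = instance.cluster(
    "cluster-b", location_id="us-east1-b", serve_nodes=3,
    default_storage_type=enums.StorageType.SSD)

operation = instance.create(clusters=[cluster_a, cluster_b])
operation.result(timeout=300)  # Wait for the long-running operation to finish.
```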

Bigtable instances can have clusters located in up to 8 Bigtable regions, and each zone in a region can contain only one cluster. For example, if you create an instance in 8 regions that each have 3 zones, the instance can have up to 24 clusters.

Having clusters in different zones or regions lets you access your instance's data even if one Google Cloud zone or region becomes unavailable.

When you create an instance with more than one cluster, Bigtable immediately starts to synchronize your data between the clusters, creating a separate, independent copy of your data in each zone where your instance has a cluster. Similarly, when you add a new cluster to an existing instance, Bigtable copies your existing data from the original cluster's zone to the new cluster's zone, then synchronizes changes to your data between the zones.
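
Similarly, here is a hedged sketch of adding a cluster to an existing single-cluster instance to enable replication; all IDs are placeholders.

```python
# Minimal sketch, assuming the google-cloud-bigtable Python client.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Adding a cluster in a new zone triggers the initial copy described above.
new_cluster = instance.cluster(
    "cluster-c", location_id="europe-west1-b", serve_nodes=3,
    default_storage_type=enums.StorageType.SSD)
operation = new_cluster.create()
operation.result(timeout=600)
```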

Bigtable replicates any changes to your data, including all of the following types of changes:

  • Updates to the data in existing tables
  • New and deleted tables
  • Added and removed column families
  • Changes to a column family's garbage collection policy

Replication has some latency, and consistency between replicas is eventual.

Bigtable treats each cluster in your instance as a primary cluster, so you can perform reads and writes in each cluster. You can also set up your instance so that requests from different types of applications are routed to different clusters.

Before you add clusters to an instance, you should be aware of the restrictions that apply when you change garbage collection policies for replicated tables.

Performance

Using replication has performance implications that you should plan for when you create a replicated instance or enable replication by adding a cluster to a single-cluster instance. For example, replicated clusters in different regions typically have higher replication latency than replicated clusters in the same region. Additionally, clusters in instances that have more than one cluster often need more nodes to handle the additional work that replication requires. To learn more, see Understand performance.

Use cases

This section describes some common use cases for Bigtable replication. To find the best configuration settings for each use case, as well as implementation tips for other use cases, see Examples of Replication Settings.

Isolate serving applications from batch reads

When you use a single cluster to run a batch analytics job that performs numerous large reads alongside an application that performs a mix of reads and writes, the large batch job can slow things down for the application's users. With replication, you can use app profiles with single-cluster routing to route batch analytics jobs and application traffic to different clusters, so that batch jobs don't affect your applications' users. Learn more about implementing this use case.
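
As a sketch of this setup (assuming the Python client and placeholder IDs), you might define two app profiles that use single-cluster routing to send each workload to its own cluster:

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Latency-sensitive application traffic is pinned to cluster-a.
serving = instance.app_profile(
    "serving-app",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="cluster-a",
    description="Application reads and writes")
serving.create()

# Batch analytics jobs are pinned to cluster-b, isolating their large reads.
batch = instance.app_profile(
    "batch-analytics",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="cluster-b",
    description="Large batch reads")
batch.create()
```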

Improve availability

If an instance has only one cluster, your data's durability and availability are limited to the zone where that cluster is located. Replication can improve both durability and availability by storing separate copies of your data in multiple zones or regions and automatically failing over between clusters if needed. Learn more about implementing this use case.

Provide near-real-time backup

In some cases—for example, if you can't afford to read stale data—you'll always need to route requests to a single cluster. However, you can still use replication by handling requests with one cluster and keeping another cluster as a near-real-time backup. If the serving cluster becomes unavailable, you can minimize downtime by manually failing over to the backup cluster. Learn more about implementing this use case.
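
A manual failover can be as simple as repointing the serving app profile at the backup cluster. The following is a sketch using the same placeholder IDs as the earlier examples; `ignore_warnings=True` is assumed to be required because rerouting can break read-your-writes guarantees.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Repoint the single-cluster app profile from the failed cluster to the backup.
profile = instance.app_profile("serving-app")
profile.reload()                  # Fetch the profile's current configuration.
profile.cluster_id = "cluster-b"  # The near-real-time backup cluster.
operation = profile.update(ignore_warnings=True)
operation.result(timeout=60)
```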

Ensure your data has a global presence

You can set up replication in locations across the world to put your data closer to your customers. For example, you can create an instance with replicated clusters in the US, Europe, and Asia and set up app profiles to route application traffic to the nearest cluster. Learn more about implementing this use case.

Consistency model

By default, replication for Bigtable is eventually consistent. This means that when you write a change to one cluster, you will eventually be able to read that change from the other clusters in the instance, but only after the change is replicated among the clusters.

If your instance is responsive, the latency for replication is typically a few seconds or minutes, not hours. However, if you're writing a large amount of data to a cluster, or if a cluster is overloaded or temporarily unavailable, it can take time for replication to catch up. Also, replication can take more time if your clusters are far apart. As a result, it's not normally safe to assume that you're always reading the latest value that was written, or that waiting a few seconds after a write gives Bigtable enough time to replicate the change.
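
If your workload needs to know when replication has caught up, the Cloud Bigtable Admin API exposes consistency tokens. Here is a minimal sketch with the Python admin client; the project, instance, and table IDs are placeholders.

```python
# Minimal sketch: block until writes issued before the token are replicated
# to every cluster. Project, instance, and table IDs are placeholders.
import time

from google.cloud import bigtable_admin_v2

admin = bigtable_admin_v2.BigtableTableAdminClient()
table_name = admin.table_path("my-project", "my-instance", "my-table")

# The token captures the table's write state at this moment.
token = admin.generate_consistency_token(name=table_name).consistency_token

# Poll until all clusters have caught up to that state.
while not admin.check_consistency(
        name=table_name, consistency_token=token).consistent:
    time.sleep(5)
```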

If you need a different consistency guarantee, Bigtable can also provide read-your-writes consistency when replication is enabled, which ensures that an application will never read data that is older than its most recent writes. To gain read-your-writes consistency for a group of applications, each application in the group must use an app profile that is configured for single-cluster routing, and all of the app profiles must route requests to the same cluster. You can use the instance's additional clusters at the same time for other purposes.

For some replication use cases, Bigtable can also provide strong consistency, which ensures that all of your applications see your data in the same state. To gain strong consistency, you use the single-cluster routing app-profile configuration for read-your-writes consistency that is described earlier, but you must not use the instance's additional clusters unless you need to fail over to a different cluster. Review the examples of replication settings to see if this is possible for your use case.

Conflict resolution

Each cell value in a Bigtable table is uniquely identified by the four-tuple (row key, column family, column qualifier, timestamp). See Bigtable storage model for more details on these identifiers. In the rare event that two writes with the exact same four-tuple are sent to two different clusters, Bigtable automatically resolves the conflict using an internal last-write-wins algorithm based on server-side time. The Bigtable last-write-wins implementation is deterministic, and when replication catches up, all clusters have the same value for the four-tuple.
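
To make the four-tuple concrete, here is an illustrative sketch (placeholder names) of a write that pins all four identifiers. A second write with the identical four-tuple, even if sent to a different cluster, targets the same cell, and the later arrival by server-side time wins.

```python
import datetime

from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

ts = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)
row = table.direct_row(b"user#1234")
# (row key, column family, column qualifier, timestamp) identifies this cell.
row.set_cell("profile", b"name", b"Alice", timestamp=ts)
row.commit()
```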

Application profiles

If an instance uses replication, you use application profiles, or app profiles, to specify routing policies. App profiles also determine whether you can perform single-row transactions, which include read-modify-write operations (including increments and appends) and check-and-mutate operations (also known as conditional mutations or conditional writes).
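
For illustration, the following sketch shows both kinds of single-row transactions in the Python client. The IDs are placeholders, and `transactional-app` is a hypothetical app profile that uses single-cluster routing and allows transactional writes.

```python
from google.cloud import bigtable
from google.cloud.bigtable.row_filters import ColumnQualifierRegexFilter

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table(
    "my-table", app_profile_id="transactional-app")

# Read-modify-write: atomically increment a counter cell.
append_row = table.append_row(b"user#1234")
append_row.increment_cell_value("stats", b"visits", 1)
append_row.commit()

# Check-and-mutate: apply the mutation only if the filter matches the row.
cond_row = table.conditional_row(
    b"user#1234", filter_=ColumnQualifierRegexFilter(b"visits"))
cond_row.set_cell("stats", b"flagged", b"true", state=True)
cond_row.commit()
```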

For details, see Application Profiles. For examples of settings you can use to implement common use cases, see Examples of Replication Settings.

Routing policies

Every app profile has a routing policy that controls which clusters handle incoming requests from your applications. Options for routing policies include the following (a configuration sketch follows the list):

  • Single-cluster routing: Sends all requests to a single cluster that you specify.
  • Multi-cluster routing to any cluster: Sends requests to the nearest available cluster in the instance.
  • Cluster group routing: Sends requests to the nearest available cluster within a cluster group that you specify in the app profile settings.
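
As a configuration sketch (placeholder IDs, Python client), the first two policies map to the `SINGLE` and `ANY` routing policy types; cluster group routing is assumed to be configured by also supplying the group's cluster IDs in recent client versions.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# Single-cluster routing: every request goes to the cluster you name.
instance.app_profile(
    "single-routing",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    cluster_id="cluster-a").create()

# Multi-cluster routing to any cluster: nearest available cluster.
instance.app_profile(
    "any-routing",
    routing_policy_type=enums.RoutingPolicyType.ANY).create()

# Cluster group routing: nearest available cluster within the named group.
# The multi_cluster_ids parameter is assumed; check your client version.
instance.app_profile(
    "group-routing",
    routing_policy_type=enums.RoutingPolicyType.ANY,
    multi_cluster_ids=["cluster-a", "cluster-b"]).create()
```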

Failovers

If a Bigtable cluster becomes unresponsive, replication makes it possible for incoming traffic to fail over to another cluster in the same instance. Failovers can be either manual or automatic, depending on the app profile an application is using and how the app profile is configured.

For details, see Failovers.

Dropping row ranges when replication is enabled

The Cloud Bigtable Admin API lets you drop a contiguous range of rows from a table based on their row keys. In instances that don't use replication, Bigtable can drop a row range quickly and efficiently. However, when replication is enabled, dropping a row range is significantly slower and much less efficient.
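
For reference, dropping a range by row-key prefix looks like the following sketch (placeholder names); on a replicated instance, expect the call to take much longer.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("my-instance").table("my-table")

# Deletes every row whose key starts with the given prefix.
table.drop_by_prefix(b"user#", timeout=600)
```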

What's next