Routing options
When you send requests from an application to Bigtable, you use an app profile that tells Bigtable how to handle the requests. An app profile specifies the routing policy for the requests. For instances that use replication, the routing policy controls which clusters receive the requests and how failovers are handled.
This document describes the routing policies that are available for a standard app profile.
Routing policies are especially important for workload isolation use cases where you aren't able to use Data Boost (Preview). You can configure them in conjunction with request priorities.
Routing policies don't affect replication, but you should be familiar with how Bigtable replication works before you read this page. You should also read Failovers.
Single-cluster routing
A single-cluster routing policy routes all requests to one cluster in your instance. If that cluster becomes unavailable, you must manually fail over to another cluster.
This is the only routing policy that lets you enable single-row transactions.
A replicated instance normally provides eventual consistency. However, you can achieve read-your-writes consistency for a workload in a replicated instance if you configure an app profile for that workload to use single-cluster routing to send read and write requests to the same cluster. You can route traffic for additional workloads on the replicated instance to other clusters in the instance depending on your workload requirements.
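For example, you can create an app profile that uses single-cluster routing with the gcloud CLI. The following is a minimal sketch; the instance, cluster, and app profile names are placeholders, and the `--transactional-writes` flag is included only because this hypothetical workload needs single-row transactions:

```
# Create an app profile that routes all requests from one workload to a single
# cluster and allows single-row transactions on that cluster.
# my-instance, my-cluster-1, and my-transactional-profile are placeholder names.
gcloud bigtable app-profiles create my-transactional-profile \
    --instance=my-instance \
    --route-to=my-cluster-1 \
    --transactional-writes \
    --description="Single-cluster routing for the transactional workload"
```

A second app profile for a different workload could use `--route-to=my-cluster-2`, so that the two workloads read and write on separate clusters in the same instance.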
Multi-cluster routing
A multi-cluster routing policy routes requests to the cluster in the region nearest to where the request originates. If that cluster becomes unavailable, traffic automatically fails over to the nearest cluster that is available.
This configuration provides eventual consistency. You can't enable single-row transactions with multi-cluster routing, because single-row transactions can cause data conflicts when you use multi-cluster routing. For details, see Single-row transactions.
Use multi-cluster routing if you want high availability (HA). For recommended instance configurations and further details, see Create high availability (HA).
The two types of multi-cluster routing are any cluster and cluster group.
Any cluster routing
Any cluster routing makes every cluster in the instance available to receive requests and for failover.
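As a sketch, an any-cluster app profile can be created with the gcloud CLI. The profile and instance names below are placeholders:

```
# Create an app profile that uses multi-cluster (any cluster) routing:
# requests go to the nearest available cluster, and any cluster in the
# instance can be a failover target.
gcloud bigtable app-profiles create my-ha-profile \
    --instance=my-instance \
    --route-any \
    --description="Any-cluster routing for high availability"
```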
Cluster group routing
If you want to exclude one or more of an instance's clusters from possible failover, you can use cluster group routing. This form of multi-cluster routing lets you specify a subset of clusters that an app profile can send traffic to. This can be helpful if you want to reserve a cluster for a separate workload.
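For example, the following sketch restricts an app profile to two of an instance's clusters, leaving any other cluster (for example, one reserved for a batch analytics workload) out of the group. All names are placeholders:

```
# Create an app profile that uses cluster group routing. Requests from this
# profile can be routed to, and fail over between, my-cluster-1 and
# my-cluster-2 only; other clusters in the instance never receive its traffic.
gcloud bigtable app-profiles create my-serving-profile \
    --instance=my-instance \
    --route-any \
    --restrict-to=my-cluster-1,my-cluster-2 \
    --description="Cluster group routing for the serving workload"
```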
Row-affinity routing
Row-affinity routing automatically routes single-row read and write requests to a specific cluster based on the row key of the request.
If you want multi-cluster routing to achieve a higher rate of read-your-writes consistency, and most of your requests are single-row operations, you can use row-affinity routing (sticky routing). To enable row-affinity routing, use a custom app profile with the --row-affinity flag enabled.
Bigtable uses the row key of the request to automatically determine which cluster to route the request to. You cannot manually set the mapping between the row key and the cluster.
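For example, here is a sketch of enabling row-affinity routing on a new multi-cluster app profile with the gcloud CLI; the profile and instance names are placeholders:

```
# Create a multi-cluster app profile with row-affinity (sticky) routing, so
# that single-row requests for the same row key are routed to the same cluster.
gcloud bigtable app-profiles create my-sticky-profile \
    --instance=my-instance \
    --route-any \
    --row-affinity \
    --description="Row-affinity routing for single-row workloads"
```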
Row-affinity routing can be used only for single-row read or write requests. This includes requests that call ReadRows with one key specified, MutateRow, MutateRows with one key specified, and BulkMutateRow with one key specified.
Read-your-writes consistency is not fully achieved with row-affinity routing in the following cases:
- Adding a cluster to the instance: Row-affinity routing determines which cluster to route to based on the row key. If a cluster is added to or removed from the instance while row-affinity routing is enabled, the row key assignment might change. To ensure that the cluster failover order remains the same despite changes to the instance's cluster list, we recommend using cluster groups by setting the --restrict-to flag, as shown in the example after this list. With cluster groups, you cannot delete a cluster in an instance while it is in use by an app profile. Additionally, any new cluster added to the instance doesn't start receiving requests unless it is explicitly added to the app profile's cluster group.
- Failover: If a cluster is unavailable or unhealthy, requests to the impacted cluster are directed to the next cluster according to the failover order. This rerouting can impact consistency.
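For example, to follow the cluster group recommendation above, you can combine row affinity with a fixed cluster group when you create the app profile. This is a sketch with placeholder instance, cluster, and profile names:

```
# Combine row-affinity routing with a cluster group so that adding or removing
# clusters elsewhere in the instance doesn't change the failover order for
# this profile.
gcloud bigtable app-profiles create my-sticky-group-profile \
    --instance=my-instance \
    --route-any \
    --row-affinity \
    --restrict-to=my-cluster-1,my-cluster-2 \
    --description="Row-affinity routing restricted to a fixed cluster group"
```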
For more information about failovers, see Failovers. To learn how to complete a failover, see Managing failovers.
Single-row transactions
In Bigtable, mutations such as write and delete requests are always atomic at the row level. This includes mutations to multiple columns in a single row, as long as they are included in the same mutation operation. Bigtable does not support transactions that atomically update more than one row.
However, Bigtable supports some write operations that would require a transaction in other databases. In effect, Bigtable uses single-row transactions to complete these operations. These operations include reads and writes, and all of the reads and writes are executed atomically, but the operations are still atomic only at the row level:
- Read-modify-write operations, including increments and appends. A read-modify-write operation reads an existing value; increments or appends to the existing value; and writes the updated value to the table.
- Check-and-mutate operations, also known as conditional mutations or conditional writes. In a check-and-mutate operation, Bigtable checks a row to see if it meets a specified condition. If the condition is met, Bigtable writes new values to the row.
Conflicts between single-row transactions
Every cluster in a Bigtable instance is a primary cluster that accepts both reads and writes. As a result, operations that require single-row transactions can cause problems in replicated instances.
If your use case allows it, you can avoid these conflicts by using aggregates. When you send an add request to an aggregate field, the new value is merged with the existing value. Aggregates let you keep a running sum or counter. For more information, see Aggregate values at write time.
To illustrate the problem that can arise when you don't use aggregates, suppose you have a table that you use to store data for a ticketing system. You use an integer counter to store the number of tickets that have been sold. Each time you sell a ticket, your app sends a read-modify-write operation to increment the counter by 1.
If your instance has one cluster, client apps can sell tickets at the same time and increment the counter without data loss, because the requests are handled atomically, in the order they are received, by that single cluster.
On the other hand, if your instance has multiple clusters and your app profile allows multi-cluster routing, simultaneous requests to increment the counter might be sent to different clusters and then replicated to the other clusters in the instance. If you send two increment requests at the same time and they are routed to different clusters, each finishes its transaction without "knowing" about the other. The counter on each cluster is incremented by one. When the data is replicated to the other cluster, Bigtable has no way to know that you meant to increment by 2.
To help you avoid unintended results, Bigtable does the following:
- Requires each app profile to specify whether it allows single-row transactions.
- Prevents you from enabling single-row transactions in an app profile that uses multi-cluster routing, because there's no safe way to enable both of these features at once.
- Warns you if you enable single-row transactions in two or more different app profiles that use single-cluster routing and point to different clusters. If you choose to create this type of configuration, you must ensure that you don't send conflicting read-modify-write or check-and-mutate requests to different clusters.
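For example, the following sketch creates the kind of configuration that triggers this warning: two app profiles that both enable single-row transactions but route to different clusters. The profile, instance, and cluster names are placeholders. If you create profiles like these, make sure that any given row is only ever modified through one of them:

```
# Two app profiles that both allow single-row transactions but route to
# different clusters. Bigtable warns about this configuration because
# conflicting read-modify-write or check-and-mutate requests sent through the
# two profiles to the same row can produce unexpected results.
gcloud bigtable app-profiles create orders-profile \
    --instance=my-instance \
    --route-to=my-cluster-1 \
    --transactional-writes

gcloud bigtable app-profiles create inventory-profile \
    --instance=my-instance \
    --route-to=my-cluster-2 \
    --transactional-writes
```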
What's next
- Review examples of replication settings.
- Learn how to manage failovers.
- Change an app profile's routing policy.