Override Retry, Backoff, and Idempotency Policies
When it is safe to do so, the library automatically retries requests that fail due to a transient error. The library then uses exponential backoff to wait before trying again. Which operations are considered safe to retry, which errors are treated as transient failures, the details of the exponential backoff algorithm, and how long the library keeps retrying are all configurable via policies.
This document provides examples showing how to override the default policies.
The policies can be set when a DataConnection, BigtableInstanceAdminConnection, or BigtableTableAdminConnection object is created. The library provides default policies for any policy that is not set.
The application can also override some (or all) policies when the corresponding Table, BigtableInstanceAdminClient, or BigtableTableAdminClient object is created. This can be useful if multiple Table (or *Client) objects share the same *Connection object, but you want different retry behavior in some of the clients.
Finally, the application can override some retry policies when calling a specific member function.
The library uses three different options to control the retry loop. The options have per-client names.
Configuring the transient errors and retry duration
The *RetryPolicyOption controls:
- Which errors are treated as transient errors.
- How long the library will keep retrying transient errors.
You can provide your own class for this option. The library also provides two built-in policies:
- *LimitedErrorCountRetryPolicy: stops retrying after a specified number of transient errors.
- *LimitedTimeRetryPolicy: stops retrying after a specified time.
Note that a library may have more than one version of these classes. Their names match the *Client and *Connection objects they are intended to be used with. Some *Client objects treat different error codes as transient errors. In most cases, only kUnavailable is treated as a transient error.
See Also:
- google::cloud::bigtable::DataRetryPolicyOption
- google::cloud::bigtable::DataRetryPolicy
- google::cloud::bigtable::DataLimitedTimeRetryPolicy
- google::cloud::bigtable::DataLimitedErrorCountRetryPolicy
- google::cloud::bigtable_admin::BigtableTableAdminRetryPolicy
- google::cloud::bigtable_admin::BigtableInstanceAdminRetryPolicy
Controlling the backoff algorithm
The *BackoffPolicyOption controls how long the client library waits before retrying a request that failed with a transient error. You can provide your own class for this option.
The only built-in backoff policy is ExponentialBackoffPolicy. This class implements a truncated exponential backoff algorithm with jitter. In summary: the actual backoff time for an RPC is chosen at random, but never exceeds the current backoff. The current backoff is doubled after each failure, but is truncated once it reaches a prescribed maximum.
See Also:
- google::cloud::bigtable::DataBackoffPolicyOption
- google::cloud::bigtable_admin::BigtableTableAdminBackoffPolicyOption
- google::cloud::bigtable_admin::BigtableInstanceAdminBackoffPolicyOption
Controlling which operations are retryable
The *IdempotencyPolicyOption controls which requests are retryable, as some requests are never safe to retry.
Only one built-in idempotency policy is provided by the library. Its name matches the name of the client it is intended for. For example, BigtableTableAdminClient will use BigtableTableAdminConnectionIdempotencyPolicy.
In the case of data operations, only mutations need to be considered. The Table class uses IdempotentMutationPolicy. Mutations that use server-assigned timestamps are not considered idempotent by default. Mutations that use client-assigned timestamps are idempotent by default.
See Also:
- google::cloud::bigtable::IdempotentMutationPolicy
- google::cloud::bigtable::IdempotentMutationPolicyOption
- google::cloud::bigtable_admin::BigtableTableAdminConnectionIdempotencyPolicy
- google::cloud::bigtable_admin::BigtableTableAdminConnectionIdempotencyPolicyOption
- google::cloud::bigtable_admin::BigtableInstanceAdminConnectionIdempotencyPolicy
- google::cloud::bigtable_admin::BigtableInstanceAdminConnectionIdempotencyPolicyOption
Example
For example, this will override the retry policies for bigtable::Table:
```cpp
namespace cbt = google::cloud::bigtable;
[](std::string const& project_id, std::string const& instance_id,
   std::string const& table_id) {
  auto options = google::cloud::Options{}
                     .set<cbt::IdempotentMutationPolicyOption>(
                         cbt::AlwaysRetryMutationPolicy().clone())
                     .set<cbt::DataRetryPolicyOption>(
                         cbt::DataLimitedErrorCountRetryPolicy(3).clone())
                     .set<cbt::DataBackoffPolicyOption>(
                         google::cloud::ExponentialBackoffPolicy(
                             /*initial_delay=*/std::chrono::milliseconds(200),
                             /*maximum_delay=*/std::chrono::seconds(45),
                             /*scaling=*/2.0)
                             .clone());
  auto connection = cbt::MakeDataConnection(options);
  auto const table_name =
      cbt::TableResource(project_id, instance_id, table_id);
  // c1 and c2 share the same retry policies
  auto c1 = cbt::Table(connection, table_name);
  auto c2 = cbt::Table(connection, table_name);
  // You can override any of the policies in a new client. This new client
  // will share the policies from c1 (or c2) *except* for the retry policy.
  auto c3 = cbt::Table(
      connection, table_name,
      google::cloud::Options{}.set<cbt::DataRetryPolicyOption>(
          cbt::DataLimitedTimeRetryPolicy(std::chrono::minutes(5)).clone()));
  // You can also override the policies in a single call. In this case, we
  // allow no retries.
  auto result =
      c3.ReadRow("my-key", cbt::Filter::PassAllFilter(),
                 google::cloud::Options{}.set<cbt::DataRetryPolicyOption>(
                     cbt::DataLimitedErrorCountRetryPolicy(0).clone()));
  (void)result;  // ignore errors in this example
}
```
Follow these links to find examples for other *Client classes: