High availability and replicas

This page explains how the Memorystore for Redis Cluster architecture provides high availability (HA). It also explains recommended configurations that contribute to improved instance performance and stability.

High availability

Memorystore for Redis Cluster is built on a highly available architecture where your clients directly access managed Memorystore for Redis Cluster VMs. Your clients do this by connecting to individual shard network addresses, as described in Connect to a Memorystore for Redis Cluster instance.

Connecting directly to shards provides the following benefits:

  • Direct connection avoids any single point of failure because each shard is designed to fail independently. For example, if traffic from multiple clients overloads a slot (keyspace chunk), the failure is limited to the shard responsible for serving that slot.

  • Direct connection avoids intermediate hops, which minimizes round-trip time (client latency) between your client and the Redis VM.
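To make the slot-to-shard relationship concrete, here is a minimal, illustrative sketch (not part of any Memorystore API) of how OSS Redis Cluster maps a key to one of 16384 slots, using the CRC16 (XMODEM) function described in the Redis Cluster specification:

```python
# Illustrative only: how Redis Cluster assigns a key to a slot.
# HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Return the cluster slot for a key, honoring {hash tag} syntax."""
    # If the key contains a non-empty {...} section, only that part is
    # hashed, which lets related keys land on the same shard.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:1000"))
# True: both keys hash only the tag "user1000", so they share a slot.
print(hash_slot("{user1000}.following") == hash_slot("{user1000}.followers"))
```

Keys that share a `{hash tag}` map to the same slot, which is one way client applications keep related keys on the same shard.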

To enable high availability for your instance, you must provision at least 1 replica node for every shard. You can do this when Creating the instance, or you can Scale the replica count to at least 1 replica per shard. Replicas provide Automatic failover during planned maintenance and unexpected shard failure.

You should configure your client according to the guidance in Redis client best practices. Following these best practices lets your OSS Redis client automatically and gracefully handle role changes (automatic failovers) and slot assignment changes (node replacement, cluster scale out/in) for your cluster without any downtime.

Replicas

A highly available Memorystore for Redis Cluster instance is a regional resource. This means that primary and replica VMs of shards are distributed across multiple zones to safeguard against a zonal outage. Memorystore for Redis Cluster supports instances with 0, 1, or 2 replicas per node.

You can use replicas to increase read throughput by scaling reads. To do this, you must use the READONLY command to establish a connection that allows your client to read from replicas. For more details about reading from replicas, see Scaling reads using replica nodes.
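As a rough illustration of the command flow, the following sketch issues READONLY once on a replica connection before serving reads from it. The class names and the stub connection are hypothetical; real cluster clients such as redis-py handle this internally (for example, via a read-from-replicas option):

```python
# Hypothetical sketch of the READONLY command flow for replica reads.
# A connection must issue READONLY before a replica serves reads for the
# slots it replicates; otherwise the replica redirects to the primary.

class ReplicaReader:
    """Wraps a connection to a replica node and enables replica reads."""

    def __init__(self, conn):
        self.conn = conn          # any object with an execute(*args) method
        self._readonly_sent = False

    def get(self, key):
        if not self._readonly_sent:
            # Switch this connection into read-only mode exactly once.
            self.conn.execute("READONLY")
            self._readonly_sent = True
        return self.conn.execute("GET", key)

# Stub connection used here only to show the command sequence:
class StubConn:
    def __init__(self):
        self.sent = []
    def execute(self, *args):
        self.sent.append(args)
        return "value" if args[0] == "GET" else "OK"

conn = StubConn()
reader = ReplicaReader(conn)
reader.get("user:1000")
reader.get("user:1001")
print(conn.sent[0])  # ('READONLY',) is sent once, before the first read
```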

Cluster shape with 0 replicas per node

A Memorystore for Redis Cluster instance with no replicas that has nodes divided evenly across three zones.

Cluster shape with 1 replica per node

A Memorystore for Redis Cluster instance with one replica per node, and nodes divided evenly across three zones.

Cluster shape with 2 replicas per node

A Memorystore for Redis Cluster instance with two replicas per node, and nodes divided evenly across three zones.

Automatic failover

Automatic failovers within a shard can occur due to maintenance or an unexpected failure of the primary node. During a failover, a replica is promoted to be the primary. In addition to the replicas you configure explicitly, the service can temporarily provision extra replicas during internal maintenance to avoid any downtime.

Automatic failovers prevent data loss during maintenance updates. For details about automatic failover behavior during maintenance, see Automatic failover behavior during maintenance.

Failover and node repair duration

Automatic failovers can take on the order of tens of seconds for unplanned events, such as a primary node process crash or a hardware failure. During this time, the system detects the failure and elects a replica to be the new primary.

Node repair can take on the order of minutes for the service to replace a failed node. This applies to all primary and replica nodes. For instances that aren't highly available (no replicas provisioned), repairing a failed primary node also takes on the order of minutes.

Client behavior during an unplanned failover

Depending on the nature of the failure, client connections are likely to be reset. After automatic recovery completes, you should retry connections with exponential backoff to avoid overloading the primary and replica nodes.
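A minimal retry helper along these lines might look like the following; the function names and parameters are illustrative, not a Memorystore API:

```python
# Illustrative retry helper: reconnect with exponential backoff plus
# jitter after a failover resets connections.
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Call op(); on failure, sleep up to base_delay * 2**attempt (capped)."""
    for attempt in range(max_attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter spreads retries out so clients don't reconnect in
            # lockstep and overload the newly promoted primary.
            time.sleep(random.uniform(0, delay))

# Example: an operation that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("connection reset during failover")
    return "PONG"

print(retry_with_backoff(flaky))  # PONG, after two retried failures
```

Adding jitter (randomizing the sleep) is a deliberate design choice here: without it, all clients that lost connections at the same moment would retry at the same moments too.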

Clients using replicas for read throughput should be prepared for a temporary degradation in capacity until the failed node is automatically replaced.

Lost writes

During a failover resulting from an unexpected failure, acknowledged writes may be lost due to the asynchronous nature of Redis's replication protocol.

Client applications can use the Redis WAIT command to improve real-world data safety. WAIT is a best-effort approach that comes with trade-offs, as explained in the Redis WAIT command documentation.
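The following sketch shows one way a client might use WAIT after a write. The helper function and stub client are hypothetical; in practice you would pass a real Redis connection object:

```python
# Hypothetical sketch of using WAIT to check replication of a write.
# WAIT numreplicas timeout returns the number of replicas that have
# acknowledged all prior writes on this connection; it is best-effort,
# not a durability guarantee.

def write_and_wait(client, key, value, min_replicas=1, timeout_ms=100):
    """Write, then block until min_replicas ack or timeout_ms elapses."""
    client.execute_command("SET", key, value)
    acked = client.execute_command("WAIT", min_replicas, timeout_ms)
    # Fewer acks than requested means the write could still be lost if
    # the primary fails before replication catches up.
    return acked >= min_replicas

# Stub client standing in for a real Redis connection:
class StubClient:
    def execute_command(self, *args):
        return 1 if args[0] == "WAIT" else "OK"

print(write_and_wait(StubClient(), "order:42", "paid"))  # True with 1 ack
```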