High availability and replicas

This page explains how Memorystore for Valkey's cluster architecture supports and provides high availability (HA). It also describes recommended configurations that improve instance performance and stability.

High availability

Memorystore for Valkey is built on a highly available architecture where your clients directly access managed Memorystore for Valkey VMs. Your clients do this by connecting to individual shard network addresses, as described in Connect to a Memorystore for Valkey instance.

Connecting directly to shards provides the following benefits:

  • Direct connection avoids any single point of failure because each shard is designed to fail independently. For example, if traffic from multiple clients overloads a slot (keyspace chunk), the failure is limited to the shard responsible for serving that slot.

  • Direct connection avoids intermediate hops, which minimizes round-trip time (client latency) between your client and the Valkey VM.
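For example, the following sketch uses the open-source redis-py client, which speaks the Valkey cluster protocol; the endpoint address is a placeholder for one of your instance's node addresses. A cluster-aware client bootstraps from a single node, fetches the slot map, and then connects to every shard directly:

    from redis.cluster import RedisCluster

    # Placeholder address of any node in the instance. The client
    # bootstraps from it, fetches the slot-to-shard map, and opens
    # direct connections to every shard -- no intermediate proxy hop.
    client = RedisCluster(host="10.0.0.1", port=6379)

    # Keys hash to slots; the client routes each command to the shard
    # that owns the key's slot, and refreshes its slot map when a node
    # answers with a MOVED redirect (for example, after a failover).
    client.set("user:42", "alice")
    print(client.get("user:42"))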

We recommend creating highly available multi-zone instances rather than single-zone instances because they provide better reliability. However, if you choose to provision an instance without replicas, we recommend a single-zone instance. For more information, see Choose a single-zone instance if your instance doesn't use replicas.

To enable high availability for your instance, you must provision at least 1 replica node for every shard. You can do this when Creating the instance, or you can Scale the replica count to at least 1 replica per shard. Replicas provide Automatic failover during planned maintenance and unexpected shard failure.

You should configure your client according to the guidance in Client best practices. Following recommended best practices lets your client automatically and gracefully handle role changes (automatic failovers) and slot assignment changes (node replacement, cluster scale out/in) for your instance without any downtime.

Replicas

A highly available Memorystore for Valkey instance is a regional resource. This means that primary and replica VMs of shards are distributed across multiple zones to safeguard against a zonal outage. Memorystore for Valkey supports instances with 0, 1, or 2 replicas per node.
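As an illustration (a sketch using redis-py as one Valkey-compatible client; the endpoint is a placeholder), you can ask the client for its view of the topology to see each shard's primary and the replicas you provisioned:

    from redis.cluster import RedisCluster

    client = RedisCluster(host="10.0.0.1", port=6379)  # placeholder endpoint

    # The client's view of the cluster topology: one primary per shard,
    # plus however many replicas per node you provisioned (0, 1, or 2),
    # distributed across zones by the service.
    for node in client.get_primaries():
        print("primary:", node.host, node.port)
    for node in client.get_replicas():
        print("replica:", node.host, node.port)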

You can use replicas to increase read throughput by scaling reads. To do this, you must use the READONLY command to establish a connection that allows your client to read from replicas.
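As a sketch of what this looks like in a client (again redis-py; the read_from_replicas flag is specific to that library and the endpoint is a placeholder), enabling replica reads makes the client issue READONLY on its replica connections and spread read commands across them:

    from redis.cluster import RedisCluster

    # read_from_replicas=True makes redis-py send READONLY on replica
    # connections and load-balance read commands such as GET across a
    # shard's primary and replicas. Writes still go to the primary.
    client = RedisCluster(host="10.0.0.1", port=6379,
                          read_from_replicas=True)

    client.set("counter", 1)       # routed to the slot's primary
    value = client.get("counter")  # may be served by a replica; because
                                   # replication is asynchronous, the
                                   # value can be slightly stale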

Instance shape with 0 replicas per node

A Memorystore for Valkey instance with no replicas that has nodes divided evenly across three zones.

Instance shape with 1 replica per node

A Memorystore for Valkey instance with one replica per node, and nodes divided evenly across three zones.

Instance shape with 2 replicas per node

A Memorystore for Valkey instance with two replicas per node, and nodes divided evenly across three zones.

Automatic failover

Automatic failovers within a shard can occur due to maintenance or an unexpected failure of the primary node. During a failover, a replica is promoted to be the primary. The promoted replica can be one that you configure explicitly, or one that the service temporarily provisions during internal maintenance to avoid any downtime.

Automatic failovers prevent data loss during maintenance updates. For details about automatic failover behavior during maintenance, see Automatic failover behavior during maintenance.

Failover and node repair duration

Automatic failovers can take time on the order of tens of seconds for unplanned events such as a primary node process crash or a hardware failure. During this time, the system detects the failure and elects a replica to be the new primary.

Node repair, in which the service replaces the failed node, can take time on the order of minutes. This is true for all primary and replica nodes, including failed primary nodes in instances that aren't highly available (no replicas provisioned).

Client behavior during an unplanned failover

Client connections are likely to be reset depending on the nature of the failure. After automatic recovery, connections should be retried with exponential backoff to avoid overloading primary and replica nodes.
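For example, a retry wrapper along these lines (a sketch with placeholder tuning values, using redis-py exception types) backs off exponentially with jitter so that reconnecting clients don't stampede the newly promoted primary:

    import random
    import time

    import redis
    from redis.cluster import RedisCluster

    client = RedisCluster(host="10.0.0.1", port=6379)  # placeholder endpoint

    def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
        """Retry a client call with capped exponential backoff and jitter."""
        for attempt in range(max_attempts):
            try:
                return fn()
            except (redis.exceptions.ConnectionError,
                    redis.exceptions.TimeoutError):
                if attempt == max_attempts - 1:
                    raise  # out of attempts; surface the error
                # Sleep a random duration up to base * 2^attempt, capped,
                # so clients don't all retry at the same instant.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(random.uniform(0, delay))

    value = call_with_backoff(lambda: client.get("user:42"))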

Clients using replicas for read throughput should be prepared for a temporary degradation in capacity until the failed node is automatically replaced.

Lost writes

During a failover resulting from an unexpected failure, acknowledged writes may be lost due to the asynchronous nature of Valkey's replication protocol.

Client applications can use the Valkey WAIT command to improve real-world data safety.
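For example (a sketch; the replica count, timeout, and address are placeholders), WAIT blocks the calling connection until the requested number of replicas have acknowledged all of its preceding writes, or until the timeout elapses. WAIT applies to the node that executes it, so it must run on the primary that accepted the write:

    import redis

    # Placeholder address of the shard primary that accepted the write.
    primary = redis.Redis(host="10.0.0.2", port=6379)

    primary.set("order:1001", "confirmed")

    # Block until at least 1 replica acknowledges all prior writes on
    # this connection, or until 500 ms elapse. Returns the number of
    # replicas that actually acknowledged.
    acked = primary.wait(1, 500)
    if acked < 1:
        # The write exists only on the primary and could still be lost
        # if the primary fails before replication catches up.
        print("write not yet replicated")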