This page explains how Memorystore for Valkey's architecture provides high availability (HA). This page also describes recommended configurations that improve instance performance and stability.
High availability
Memorystore for Valkey is built on a highly available architecture where your clients access managed Memorystore for Valkey nodes directly. Your clients do this by connecting to individual endpoints, as described in Connect to a Memorystore for Valkey instance.
Connecting to shards directly provides the following benefits:
- Direct connection avoids intermediate hops, which minimizes the round-trip time (client latency) between your client and the Valkey node.
- In Cluster Mode Enabled, direct connection avoids any single point of failure because each shard is designed to fail independently. For example, if traffic from multiple clients overloads a slot (keyspace chunk), shard failure limits the impact to the shard responsible for serving the slot.
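To make the slot concept concrete, here is a minimal, self-contained sketch of the standard hash-slot calculation (CRC16 modulo 16384, with hash-tag handling) that Valkey clusters use to partition the keyspace. It is illustrative only; real clients compute this for you:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum used for hash slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384


# Keys that share a hash tag land in the same slot, hence the same shard:
assert hash_slot("{user1000}.following") == hash_slot("{user1000}.followers")
```

Because each slot belongs to exactly one shard, an overload or failure affecting one slot is contained to that shard, which is why the independent-failure property above holds.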
Recommended configurations
We recommend creating highly available multi-zone instances as opposed to single-zone instances because of the better reliability they provide. However, if you choose to provision an instance without replicas, we recommend choosing a single-zone instance. For more information, see Choose a single-zone instance if your instance doesn't use replicas.
To enable high availability for your instance, you must provision at least 1 replica node for every shard. You can do this when Creating the instance, or you can Scale the replica count to at least 1 replica per shard. Replicas provide Automatic failover during planned maintenance and unexpected shard failure.
You should configure your client according to the guidance in Client best practices. Following these best practices lets your client automatically handle the following changes for your instance, without any downtime:
- Node role changes (automatic failovers)
- Endpoint changes (node replacement)
- Cluster Mode Enabled-related slot assignment changes (scale out and in)
Replicas
A highly available Memorystore for Valkey instance is a regional resource. This means that Memorystore for Valkey distributes primary and replica nodes of shards across multiple zones to safeguard against a zonal outage. Memorystore for Valkey supports instances with 0, 1, or 2 replicas per node.
You can use replicas to increase read throughput at the cost of potential data staleness.
- Cluster Mode Enabled: Use the READONLY command to establish a connection that allows your client to read from replicas.
- Cluster Mode Disabled: Connect to the reader endpoint to connect to any of the available replicas.
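As a sketch of what the Cluster Mode Enabled flow looks like at the command level, the following illustrative Python issues READONLY once on a connection before serving reads from a replica. ReplicaReader and RecordingConn are hypothetical names used only for this example; real client libraries (for example, redis-py's RedisCluster with read_from_replicas=True) manage this for you:

```python
class ReplicaReader:
    """Issues READONLY once per connection, then reads from a replica.

    `conn` is any transport exposing send_command(); this class is a
    sketch of the protocol flow, not part of any client library.
    """

    def __init__(self, conn):
        self.conn = conn
        # Mark the connection read-only so the replica serves reads
        # instead of redirecting them to the shard's primary.
        self.conn.send_command("READONLY")

    def get(self, key):
        return self.conn.send_command("GET", key)


class RecordingConn:
    """Stub transport used here only to show the command sequence."""

    def __init__(self):
        self.sent = []

    def send_command(self, *args):
        self.sent.append(" ".join(args))
        return None


conn = RecordingConn()
reader = ReplicaReader(conn)
reader.get("mykey")
print(conn.sent)  # ['READONLY', 'GET mykey']
```

The key point is ordering: READONLY must be sent before any reads on that connection, otherwise the replica redirects read requests back to the primary.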
Cluster Mode Enabled instance shapes
The following diagrams illustrate shapes for Cluster Mode Enabled instances:
With 3 shards and 0 replicas per node
With 3 shards and 1 replica per node
With 3 shards and 2 replicas per node
Cluster Mode Disabled instance shapes
The following diagrams illustrate shapes for Cluster Mode Disabled instances:
With 2 replicas
Automatic failover
Automatic failovers within a shard can occur due to maintenance or an unexpected failure of the primary node. During a failover, a replica is promoted to primary. You can configure replicas explicitly. The service can also temporarily provision extra replicas during internal maintenance to avoid any downtime.
Automatic failovers prevent data loss during maintenance updates. For details about automatic failover behavior during maintenance, see Automatic failover behavior during maintenance.
Failover and node repair duration
Automatic failovers can take on the order of tens of seconds for unplanned events such as a primary node process crash or a hardware failure. During this time the system detects the failure and elects a replica as the new primary.
Node repair, in which the service replaces the failed node, can take on the order of minutes. This applies to all primary and replica nodes, including failed primary nodes in instances that aren't highly available (no replicas provisioned).
Client behavior during an unplanned failover
Client connections are likely to be reset depending on the nature of the failure. After automatic recovery, connections should be retried with exponential backoff to avoid overloading primary and replica nodes.
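The reconnect guidance above can be sketched as a small retry helper with capped exponential backoff and jitter. This is an illustrative pattern rather than a specific client API (redis-py, for example, ships its own retry and backoff helpers):

```python
import random
import time


def retry_with_backoff(op, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry op() on connection resets with capped exponential backoff.

    Illustrative sketch: `op` is any callable that raises ConnectionError
    while the failover is in progress.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # Exponential delay, capped, with full jitter so that many
            # clients reconnecting at once don't stampede the new primary.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(random.uniform(0, delay))
```

The jitter matters as much as the exponential growth: without it, every client that lost its connection at the same moment retries at the same moment, overloading the freshly promoted primary.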
Clients using replicas for read throughput should be prepared for a temporary degradation in capacity until the failed node is automatically replaced.
Lost writes
During a failover resulting from an unexpected failure, acknowledged writes may be lost due to the asynchronous nature of Valkey's replication protocol.
Client applications can use the Valkey WAIT command to improve real-world data safety.
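One pattern for using WAIT: after a write, block until a minimum number of replicas acknowledge it. The write_with_ack helper and the client interface below are illustrative (redis-py, for example, exposes set() and wait() methods with similar shapes); note that WAIT narrows, but does not eliminate, the window in which an acknowledged write can be lost:

```python
def write_with_ack(client, key, value, min_replicas=1, timeout_ms=100):
    """SET a key, then issue WAIT to block until at least min_replicas
    replicas acknowledge the write or the timeout elapses.

    Illustrative sketch: `client` is any object exposing set() and
    wait(numreplicas, timeout); WAIT returns the number of replicas
    that acknowledged the preceding writes on this connection.
    """
    client.set(key, value)
    acked = client.wait(min_replicas, timeout_ms)
    if acked < min_replicas:
        # The write reached the primary but was not confirmed on enough
        # replicas; the caller can retry or treat the write as at-risk.
        raise RuntimeError(f"only {acked} replica(s) acknowledged the write")
    return acked
```

Because replication remains asynchronous, WAIT is a durability hint, not a guarantee: a primary can still fail after acknowledging the client but before a replica that later wins the election received the write.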
Keyspace impact of a single zone outage
This section describes the impact of a single zone outage on a Memorystore for Valkey instance.
Multi-zone instances
HA instances: If a zone has an outage, the entire keyspace is available for reads and writes, but because some read replicas are unavailable, the read capacity is reduced. We strongly recommend over-provisioning cluster capacity so that the instance has enough read capacity in the rare event of a single zone outage. Once the outage is over, replicas in the affected zone are restored and the read capacity of the cluster returns to its configured value. For more information, see Patterns for scalable and reliable apps.
Non-HA instances (no replicas): If a zone has an outage, the portion of the keyspace that is provisioned in the affected zone undergoes a data flush, and is unavailable for writes or reads for the duration of the outage. Once the outage is over, primaries in the affected zone are restored and the capacity of the cluster returns to its configured value.
Single-zone instances
- Both HA and Non-HA instances: If the zone that the instance is provisioned in has an outage, the cluster is unavailable and data is flushed. If a different zone has an outage, the cluster continues to serve read and write requests.