This page describes the cluster and node specifications for Memorystore for Redis Cluster instances. For instructions on how to create an instance, see Create instances.
Choosing a node type
The shards in your cluster all use the same node type of your choosing. The best node type for your cluster depends on your requirements for price, performance, and keyspace capacity.
The redis-standard-small node type lets you provision small clusters and grow your cluster in smaller increments, at potentially lower cost than other node types. redis-standard-small also offers the advantage of distributing your keyspace across more nodes with a higher total vCPU count. This offers improved price-performance compared to redis-highmem-medium, as long as the total keyspace capacity of the smaller nodes is sufficient for your data needs.
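As a rough illustration of that trade-off, the following Python sketch (not an official sizing tool; the 50 GB target keyspace is an arbitrary example) compares the shard count and total vCPU count needed to hold the same keyspace on each node type, using the per-node figures from the tables later on this page.

```python
# Compare cluster shapes that hold the same keyspace on two node types.
# Per-node figures come from the node specification tables on this page.

NODE_TYPES = {
    # node type: (default writable keyspace in GB, vCPUs per node)
    "redis-standard-small": (5.2, 2),
    "redis-highmem-medium": (10.4, 2),
}

def shape_for(node_type: str, target_keyspace_gb: float) -> tuple[int, int]:
    """Return (shard count, total vCPUs) needed to hold the target keyspace."""
    per_node_gb, vcpus_per_node = NODE_TYPES[node_type]
    shards = -(-target_keyspace_gb // per_node_gb)  # ceiling division
    return int(shards), int(shards) * vcpus_per_node

for node_type in NODE_TYPES:
    shards, vcpus = shape_for(node_type, target_keyspace_gb=50)
    print(f"{node_type}: {shards} shards, {vcpus} total vCPUs")
# redis-standard-small: 10 shards, 20 total vCPUs
# redis-highmem-medium: 5 shards, 10 total vCPUs
```

The smaller node type reaches the same keyspace with twice as many shards, and therefore twice the total vCPU count, which is the price-performance effect described above.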
We recommend choosing the redis-highmem-xlarge node type only if you need more cluster capacity than redis-highmem-medium provides. Although the redis-highmem-xlarge node type is four times the size of the redis-highmem-medium type, its performance is not four times greater, because Redis performance doesn't scale linearly when vCPUs are added to increasingly larger nodes (scaling up). Instead, to get better price-performance, scale out by adding more nodes to the cluster.
Node type specification
The node capacity and characteristics depend on which of the four available node types you choose:
Keyspace capacity and reserved overhead
| Node type | Default writable keyspace capacity | Total node capacity |
| --- | --- | --- |
| redis-shared-core-nano | 1.12 GB | 1.4 GB |
| redis-standard-small | 5.2 GB | 6.5 GB |
| redis-highmem-medium | 10.4 GB | 13 GB |
| redis-highmem-xlarge | 46.4 GB | 58 GB |
Memorystore automatically sets aside a portion of your instance capacity to help prevent Out Of Memory (OOM) errors. This ensures a smooth experience reading and writing keys. Memory limits and storage details are as follows:
- Customizing your storage: While we recommend using the default settings, you can adjust the amount of reserved storage with the maxmemory configuration. For information about maxmemory, see Supported instance configurations.
- How much storage do you get? Refer to the Default writable keyspace capacity column in the previous table. It shows how much storage is available for your keys by default.
- Maximizing storage: If you want the maximum possible storage, the Total node capacity column shows the storage limit when you set the maxmemory config to 100%. However, we don't recommend choosing a maxmemory value higher than the default setting.
- The redis-shared-core-nano node type has a hard limit of 1.12 GB, which can't be changed with the maxmemory configuration.
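To make the relationship between the two columns concrete, the following Python sketch derives the reserved overhead from the table above. The 80% ratio is computed from the published figures rather than quoted from a configuration reference, so treat it as an observation, not an official constant.

```python
# Derive the reserved overhead from the keyspace capacity table above.
CAPACITY_GB = {
    # node type: (default writable keyspace, total node capacity)
    "redis-shared-core-nano": (1.12, 1.4),
    "redis-standard-small": (5.2, 6.5),
    "redis-highmem-medium": (10.4, 13.0),
    "redis-highmem-xlarge": (46.4, 58.0),
}

for node_type, (writable, total) in CAPACITY_GB.items():
    print(f"{node_type}: {writable / total:.0%} of {total} GB is writable by default")
# Every node type prints 80%, so the default settings leave roughly 20% of
# total node capacity as reserved overhead (for redis-shared-core-nano the
# 1.12 GB limit is fixed and can't be raised).
```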
Node characteristics
| Node type | vCPU count | SLA offered | Max clients | Max memory for clients (maxmemory-clients configuration) |
| --- | --- | --- | --- | --- |
| redis-shared-core-nano | 0.5 | No | 5,000 | 12% |
| redis-standard-small | 2 | Yes | 16,000 (default); maximum is 32,000 | 7% |
| redis-highmem-medium | 2 | Yes | 32,000 (default); maximum is 64,000 | 7% |
| redis-highmem-xlarge | 8 | Yes | 64,000 | 4% |
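As a rough worked example, the sketch below estimates the per-node memory budget for client connection buffers. It assumes, as in open source Redis, that a percentage-valued maxmemory-clients is interpreted relative to the node's maxmemory (here taken as the default writable keyspace capacity); the exact behavior on Memorystore may differ, so treat the numbers as approximations only.

```python
# Approximate per-node client-buffer budget, assuming maxmemory-clients is a
# percentage of the node's maxmemory (default writable keyspace capacity).
NODES = {
    # node type: (default maxmemory in GB, maxmemory-clients fraction)
    "redis-shared-core-nano": (1.12, 0.12),
    "redis-standard-small": (5.2, 0.07),
    "redis-highmem-medium": (10.4, 0.07),
    "redis-highmem-xlarge": (46.4, 0.04),
}

for node_type, (maxmemory_gb, client_fraction) in NODES.items():
    print(f"{node_type}: ~{maxmemory_gb * client_fraction:.2f} GB for client buffers")
# For example, redis-highmem-medium reserves roughly 0.73 GB for client buffers.
```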
Cluster specification
This section shows minimum and maximum cluster capacities given the cluster shape, node type, and replica count.
Minimum writable capacity
Writable capacity is the amount of storage available for writing keys. It is equal to the size of one instance node. Therefore, depending on the node type, the minimum writable capacity is 1.4 GB, 6.5 GB, 13 GB, or 58 GB. The minimum writable capacity isn't affected by the number of replicas you choose.
Maximum writable capacity
| Node type and size | Max capacity given a cluster shape of 250 primary nodes and 0 replicas per node | Max capacity given a cluster shape of 125 primary nodes and 1 replica per node | Max capacity given a cluster shape of 83 primary nodes and 2 replicas per node |
| --- | --- | --- | --- |
| redis-shared-core-nano - 1.4 GB | 350 GB | 175 GB | 116.2 GB |
| redis-standard-small - 6.5 GB | 1,625 GB | 812.5 GB | 539.5 GB |
| redis-highmem-medium - 13 GB | 3,250 GB | 1,625 GB | 1,079 GB |
| redis-highmem-xlarge - 58 GB | 14,500 GB | 7,250 GB | 4,814 GB |
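The maximum values in the table are the per-node capacity multiplied by the number of primary nodes, as the following Python check shows (a sketch of the arithmetic, not an official sizing calculator):

```python
# Reproduce the maximum writable capacity table: capacity scales with the
# number of primary nodes; replicas hold copies, not extra writable keyspace.
NODE_CAPACITY_GB = {
    "redis-shared-core-nano": 1.4,
    "redis-standard-small": 6.5,
    "redis-highmem-medium": 13,
    "redis-highmem-xlarge": 58,
}

# Maximum cluster shapes from the table: (primary node count, replicas per node)
SHAPES = [(250, 0), (125, 1), (83, 2)]

for node_type, capacity in NODE_CAPACITY_GB.items():
    totals = [f"{primaries * capacity:g} GB" for primaries, _ in SHAPES]
    print(f"{node_type}: {totals}")
# redis-highmem-medium: ['3250 GB', '1625 GB', '1079 GB']
```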
Performance
Using the OSS memtier benchmarking tool in the us-central1 region yielded 120,000 to 130,000 operations per second per 2-vCPU node (redis-standard-small and redis-highmem-medium) with microsecond-level latency and a 1 KiB data size.
We recommend that you perform your own benchmarking with real workloads or synthetic workloads that resemble your production traffic. In addition, we recommend that you size your clusters with a buffer (or "headroom") for workload spikes or unexpected traffic. For more guidance, see best practices.
Cluster endpoints
This section explains the two endpoints each instance has.
Discovery endpoint
Each instance has a discovery endpoint to which your client connects. It is a combination of an IP address and port number. For instructions on how to find your cluster's discovery endpoint, see View your cluster's discovery endpoint.
Your client also uses the discovery endpoint for node discovery: it retrieves your instance's cluster topology to bootstrap the OSS Redis cluster client and to keep it updated in steady state. The resulting cluster topology provides Redis node endpoints (IP address and port combinations) that the Redis cluster client caches in memory. Your client then handles updates and redirections automatically, with no other application changes required. For information on client discovery behavior and best practices, see Client discovery.
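For example, with the redis-py client (one of several OSS Redis cluster clients), pointing the client at the discovery endpoint is enough to bootstrap the topology; the IP address and port below are placeholders for your own cluster's discovery endpoint.

```python
# Minimal sketch: bootstrap an OSS Redis cluster client from the discovery
# endpoint. redis-py fetches the topology, caches node endpoints in memory,
# and follows redirections and topology updates automatically.
from redis.cluster import RedisCluster

client = RedisCluster(host="10.0.0.2", port=6379)  # placeholder discovery endpoint

client.set("greeting", "hello")
print(client.get("greeting"))
```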
The discovery endpoint is highly available because it is backed by multiple Redis nodes across multiple zones that serve the cluster topology. Serving topology through the endpoint remains robust even during backend node failures or node updates.
Your discovery endpoint has the following behavior:
- Your cluster's discovery endpoint remains unchanged throughout the lifecycle of the instance, including during maintenance and any other action you take, such as scaling in or out or changing replica counts.
- Redis node endpoints can change and can be recycled as nodes are added and removed over time. Ideally, you should use a Redis cluster client that can handle these changes automatically through topology refreshes and redirections. Examples of Redis cluster clients can be found at Client library code samples. Your application shouldn't have dependencies or assumptions that node endpoints remain unchanged for a given cluster.
Data endpoint
Each instance also has a Private Service Connect data endpoint. You shouldn't connect to this endpoint directly; Memorystore for Redis Cluster uses it to connect your client to cluster nodes.