A Memorystore for Redis instance that is not correctly managed and configured can experience memory pressure, which can impact application performance. This page describes best practices you can use to efficiently manage your instance's memory usage.
In this topic:
- Memory management concepts - key concepts you need to know to help you keep your Memorystore instance healthy.
- Memory intensive operations - operations that can cause memory pressure.
- Monitor your instance's memory usage - metrics that you should monitor to learn how your instance uses memory.
- Resolving an out of memory condition - steps to take when resolving issues related to an out of memory condition.
- Right-size your Memorystore instance - rather than creating an over-provisioned instance, learn how to right-size your instance.
Memory management concepts
This section introduces concepts that you need to understand in order to manage your instance's memory usage.
- The instance capacity is the amount of memory you provision in gigabytes (GB), and what you are charged for. For more details on selecting the right instance capacity, see Right-size your Memorystore instance.
- Maxmemory is a Redis configuration that sets the memory limit at which your eviction policy takes effect. Memorystore for Redis designates this configuration as maxmemory-gb. When you create an instance, maxmemory-gb is set to the instance capacity. Depending on the system memory usage ratio metric, you might need to lower the maxmemory-gb limit to provide memory overhead for workload spikes. For more details, see Manage your system memory usage ratio. To learn how to adjust maxmemory-gb, see Configuring Redis instances.
System memory usage ratio
The system memory usage ratio metric allows you to measure the memory usage of an instance relative to the system memory. System memory is managed automatically by Memorystore to handle memory usage spikes caused by memory intensive operations and memory fragmentation which is common in open source Redis.
If the system memory usage ratio metric exceeds 80%, this indicates that the instance is under memory pressure and you should follow the instructions at Manage your system memory usage ratio. If you do not take action and your memory usage continues to grow, you risk an instance crash due to insufficient memory. Your system memory usage ratio metric might exceed 80% due to memory fragmentation. Alternatively, if the metric spikes quickly to 80% or higher, you might have used one of the memory intensive operations.
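The two failure patterns above (slow growth from fragmentation versus a sudden spike from a memory intensive operation) can be told apart from a series of metric samples. The following is a minimal sketch of that triage logic; the function name and the 20-point spike heuristic are illustrative assumptions, not part of Memorystore.

```python
# Hypothetical helper: classify chronological system memory usage ratio
# samples (0.0-1.0) to suggest which scenario you are in. The 0.8
# threshold matches the 80% guidance in this document.

def classify_system_memory_pressure(samples):
    """Return a suggested action for a chronological list of ratio samples."""
    if not samples or max(samples) < 0.8:
        return "healthy"
    # A jump of more than 20 points between consecutive samples suggests a
    # memory intensive operation (export, scaling) rather than fragmentation.
    spiked = any(b - a > 0.2 for a, b in zip(samples, samples[1:]))
    if spiked:
        return "spike: lower maxmemory-gb in larger increments"
    return "gradual growth: likely fragmentation, turn on activedefrag"

print(classify_system_memory_pressure([0.70, 0.75, 0.82]))
```

In practice you would feed this with readings of redis.googleapis.com/stats/memory/system_memory_usage_ratio pulled from Cloud Monitoring.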
- The used memory metric shows how much data is in your Memorystore instance. The used memory of an instance can grow up to the maxmemory-gb configuration limit. When used memory exceeds your maxmemory-gb limit, your eviction policy takes effect.
- Your instance's eviction policy (also known as the maxmemory policy) determines how Redis evicts keys when your instance data reaches the maxmemory-gb limit. Redis evicts keys as part of the normal cache use case. Key eviction happens as a background process, so keys are not evicted immediately after the maxmemory-gb limit is reached. A high write-rate can outpace key eviction, resulting in an out of memory condition.
The default eviction policy of a Memorystore instance is volatile-lru. If you use a volatile-* eviction policy, make sure you set TTLs on the keys that you want to expire; otherwise Redis has no keys to evict.
For a list of eviction policies, see Maxmemory policies.
To learn how to change your eviction policy, see Configuring Redis instances.
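The point that volatile-* policies can only evict keys with a TTL can be illustrated with a small model. This is not Redis source code, just a sketch of the selection rule; the helper name and the dict-based representation are assumptions for illustration.

```python
# Illustrative model of why volatile-* eviction policies need TTLs:
# only keys that have an expiry set are eviction candidates.

def evictable_keys(keys_with_ttl):
    """keys_with_ttl maps key -> TTL in seconds, or None if no expiry.
    Under a volatile-* policy, only keys with a TTL can be evicted."""
    return [k for k, ttl in keys_with_ttl.items() if ttl is not None]

cache = {"session:1": 3600, "session:2": 1800, "config:site": None}
print(evictable_keys(cache))  # config:site can never be evicted
```

If every key looks like config:site here (no TTL), a volatile-* policy degenerates into having nothing to evict, and writes can push the instance into an out of memory condition.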
- Memory fragmentation can cause your Memorystore instance to run out of memory even when the ratio of used memory to maxmemory-gb is low. Memory fragmentation happens when the operating system allocates memory pages that Redis cannot fully utilize after repeated write and delete operations. The accumulation of such pages can result in the system running out of memory and eventually cause the Redis server to crash. The activedefrag Redis configuration can help reduce fragmentation.
Redis versions 4.0 and higher provide an activedefrag configuration. If possible, you should create your Memorystore instance using Redis 4.0 or higher. Memorystore sets activedefrag to "no" by default. Setting activedefrag to "yes" comes with a CPU trade-off, but it can help mitigate memory fragmentation, which contributes to out of memory issues. If the system memory usage ratio metric indicates memory fragmentation, you should turn on activedefrag; otherwise it remains an optional setting.
Memory intensive operations
The following operations use significant memory, especially when run in conjunction with a high write-rate:
The Memorystore export feature uses the Redis BGSAVE operation, which uses copy-on-write. Depending on the size of the data, the write volume, and the keys touched, the memory required for an export can be double the size of the space your data occupies. Therefore, for exports to succeed, you might need to lower your maxmemory-gb limit to 50% of your instance capacity during exports.
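The 50% guidance follows directly from the copy-on-write overhead: if an export can briefly need up to 2x the memory your data occupies, the data must fit in half the capacity. A back-of-the-envelope sketch (the function name and the fixed 2x factor are assumptions, not an official formula):

```python
# Rough headroom check before an export: with copy-on-write, peak memory
# can reach cow_factor times the data size, so the largest safe
# maxmemory-gb is capacity divided by that factor.

def safe_maxmemory_for_export(instance_capacity_gb, cow_factor=2.0):
    """Largest maxmemory-gb such that the data plus its copy-on-write
    duplicate still fits within the instance capacity."""
    return instance_capacity_gb / cow_factor

print(safe_maxmemory_for_export(10))  # 5.0 GB, i.e. 50% of a 10 GB instance
```

The real overhead depends on write volume and which keys are touched during the export, so treat 2x as the worst case the page describes, not a constant.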
Scaling and version upgrade operations for Standard Tier instances
Scaling or upgrading during periods of high write-load can put memory pressure on your instance due to the memory overhead caused by replication. If a scale or upgrade operation fails due to memory pressure, you should:
- Reduce maxmemory-gb to 50% of your instance capacity before a scale or upgrade operation. If possible, also lower maxmemory during periods of low instance traffic, because doing so reduces the negative impact that lowering maxmemory has on your cache-hit ratio.
- Scale or upgrade during periods of low write traffic.
Monitor your instance's memory usage
Monitor the metrics and set the alerts outlined in this section. These metrics and alerts give you insight into your instance's memory usage. To learn how to view metrics and set alerts, see Monitoring Redis instances.
Memory management related metrics
|Metric|Full metric address|
|---|---|
|Memory usage ratio|redis.googleapis.com/stats/memory/usage_ratio|
|System memory overload duration|redis.googleapis.com/stats/memory/system_memory_overload_duration|
|System memory usage ratio|redis.googleapis.com/stats/memory/system_memory_usage_ratio|
|Cache hit ratio|redis.googleapis.com/stats/memory/cache_hit_ratio|
Memory usage ratio
The memory usage ratio metric indicates how close your working set size is to reaching the maxmemory-gb limit. Unless the eviction policy is set to noeviction, the instance data reaching maxmemory does not always indicate a problem. However, key eviction is a background process that takes time. If you have a high write-rate, you could run out of memory before Redis has time to evict keys to free up space.
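The metric itself is simple arithmetic over two numbers you already know. A minimal sketch, assuming you supply used memory and the maxmemory-gb limit yourself (the helper name is hypothetical; the metric address is the real one from the table above):

```python
# Mirrors redis.googleapis.com/stats/memory/usage_ratio: how close the
# working set is to the maxmemory-gb limit.

def memory_usage_ratio(used_memory_gb, maxmemory_gb):
    return used_memory_gb / maxmemory_gb

# A 4 GB working set against a 5 GB maxmemory-gb limit:
print(memory_usage_ratio(4, 5))  # 0.8
```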
System memory usage ratio
System memory usage ratio is a critical metric to monitor. Always keep enough system memory available so that the instance can support your workload and memory intensive operations.
Set an alert to notify you if the system memory usage ratio metric reaches 80%. If it reaches 80%, you should begin to monitor the system memory usage ratio metric more closely. If system memory usage ratio continues to grow dramatically, you should turn on activedefrag, lower maxmemory, and consider scaling your instance.
Once system memory usage ratio hits 100%, any operation that further increases your instance's memory footprint is blocked and Redis returns the following error:
-OOM command not allowed under OOM prevention.
See Manage your system memory usage ratio for more details.
System memory overload duration
If your memory usage is too high, Memorystore blocks writes to your instance to keep your instance healthy. System memory overload duration tracks how long your instance is in the blocked-writes state.
Set an alert for this metric so that you know when writes to your instance are being blocked. You can also refer back to this metric when troubleshooting the -OOM command not allowed under OOM prevention. error.
Cache-hit ratio
Regularly monitor your cache-hit ratio so that you know what percentage of key lookups are successfully served by keys in your Redis instance. Generally speaking, a higher cache-hit ratio is better than a lower one. Make a note of your cache-hit ratio before you make any large configuration change, such as adjusting the maxmemory-gb limit, changing your eviction policy, or scaling your instance. Then, after you modify your instance, check the cache-hit ratio again to see how your change affected this metric.
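The before/after comparison above is just a ratio of hits to total lookups. A small sketch of that bookkeeping, with made-up numbers purely for illustration:

```python
# Cache-hit ratio: fraction of key lookups served from the instance.
# The helper and the sample counts are illustrative, not a Memorystore API.

def cache_hit_ratio(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

before = cache_hit_ratio(900, 100)  # e.g. before lowering maxmemory-gb
after = cache_hit_ratio(700, 300)   # e.g. after the change
print(f"cache-hit ratio dropped by {before - after:.2f}")
```

A drop like this after lowering maxmemory-gb is the signal, described later in this page, that you may need to scale up the instance instead.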
Expirable keys and expired keys
The Stackdriver metric expirable keys monitors the number of keys that are set to expire. If there are no expirable keys, it can be an indication that you are not setting TTLs on keys. In that case, when your instance data reaches the maxmemory-gb limit, there are no keys to evict, which can result in an out of memory condition.
Another metric you can monitor is expired keys. If the metric shows many expired keys, but you still see memory pressure on your instance, follow the steps in Resolving an out of memory condition.
Resolving an out of memory condition
Below are some best practices you should follow if your instance is experiencing memory pressure or running into out of memory errors.
For instances running Redis 4.0 and higher:
- Turn on activedefrag for your instance. See Active defragmentation for more details.
For all instances:
- Learn how to adjust maxmemory when running Memory intensive operations.
- Consider scaling up your instance capacity.
If you are still encountering out of memory conditions, contact Google Cloud Platform Support.
Right-size your Memorystore instance
This section covers three distinct approaches that help you right-size your instance based on your workload:
- Determine the initial size of a Memorystore instance explains how to estimate your instance size before you create it.
- Monitor your instance's memory usage outlines metrics that provide helpful information regarding how your instance is using memory.
- Manage your system memory usage ratio explains what to do if the system memory usage ratio exceeds 80%.
Determine the initial size of a Memorystore instance
First, you should choose whether you want a Standard Tier or Basic Tier instance. To learn more about Memorystore for Redis tiers, refer to Redis tier capabilities. Once you have selected the right tier for your application, follow these steps to determine what instance size you need:
Determine the size of your data.
- Estimate the number of keys, and the average size of keys, that your application will write to your Redis instance. Multiply these values to get a rough estimate of the instance size you need.
Choose an eviction policy.
- If you use the noeviction maxmemory policy, your instance size must be big enough to hold your peak workload and working set. If you run out of memory with this maxmemory policy, your instance can enter an out of memory condition.
- Other eviction policies do not influence the instance size you should provision.
Provision extra memory for Standard Tier instances
- Unlike Basic Tier instances, Standard Tier instances reserve 10% of instance capacity as a replication buffer. If you choose a Standard Tier instance, make sure you take the data estimation from step 1, and provision an extra 10% for the replication buffer.
Estimate your average and peak write-rate
- If possible, estimate the write-rate, and size of keys that your application will use. The write-rate compared with the rate of key-removal determines how fast your instance will grow over time.
Scale up to reach your desired cache-hit ratio
- Monitor the cache-hit ratio. If you are not getting as many successful cache hits as you want, either increase your instance size, or make sure that your application writes the keys that are being requested but not found to your Memorystore instance.
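The sizing steps above can be sketched as simple arithmetic. All inputs here are illustrative estimates, and the helper is not a Memorystore tool; the 10% replication buffer applies only to Standard Tier, as described in the steps.

```python
# Rough initial-sizing sketch: estimated key count times average key
# size, plus 10% replication buffer for Standard Tier instances.

def estimate_instance_size_gb(num_keys, avg_key_size_bytes, standard_tier=True):
    data_gb = num_keys * avg_key_size_bytes / (1024 ** 3)
    if standard_tier:
        data_gb *= 1.10  # extra 10% for the replication buffer
    return data_gb

# 50 million keys averaging 1 KiB each, on Standard Tier:
print(round(estimate_instance_size_gb(50_000_000, 1024), 2))
```

Treat the result as a starting point only; the write-rate estimate and the cache-hit ratio checks in the later steps determine whether the instance needs to grow from there.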
Determine if your instance is blocking writes due to an out of memory condition
If you get the following error:
-OOM command not allowed under OOM prevention.
Then check if:
- The system memory usage ratio metric exceeded 80% right before your instance started experiencing issues.
- The system memory usage ratio grew very rapidly before issues with your instance occurred.
- The system memory overload duration metric showed values above zero during the same period where you experienced blocked writes.
If so, this likely indicates that the instance is blocking writes due to an out of memory condition.
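The three checks above can be combined into a single predicate over your metric samples. This is a sketch under assumed inputs (you fetch the samples yourself; the function name, the 20-point rapid-growth heuristic, and the parameter names are all illustrative):

```python
# Hypothetical triage predicate for blocked writes: combines the 80%
# threshold, rapid-growth, and overload-duration checks described above.

def likely_oom_blocked_writes(sys_mem_ratio_samples, overload_duration_s):
    exceeded_80 = any(r > 0.8 for r in sys_mem_ratio_samples)
    rapid_growth = any(b - a > 0.2 for a, b in zip(sys_mem_ratio_samples,
                                                   sys_mem_ratio_samples[1:]))
    writes_blocked = overload_duration_s > 0
    return (exceeded_80 or rapid_growth) and writes_blocked

print(likely_oom_blocked_writes([0.7, 0.95], 12))  # True: ratio high, writes blocked
```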
Manage your system memory usage ratio
Set an alert to notify you if the system memory usage ratio metric exceeds 80%. If system memory usage ratio exceeds 80%, you should take appropriate action so that the instance does not run out of memory. Depending on the write volume and key access pattern, system memory usage can potentially increase to 100% quickly. Memorystore provides the following ways to manage the system memory usage ratio:
- Turn on activedefrag for instances running Redis version 4.0 and higher.
- Lower the maxmemory-gb limit of your instance.
- Scale up the instance.
- Choose the appropriate eviction policy.
- Set TTLs on volatile keys.
- Manually delete keys from your instance.
Turn on activedefrag
If the system memory usage ratio exceeds 80%, turn on activedefrag (for instances running Redis version 4.0 and higher). Defragmentation can take hours to release fragmented memory. If write traffic is high, defragmentation alone may not be enough to stop your instance from running out of memory. Therefore, you may also need to implement the following recommendations:
Lower the maxmemory limit of your instance
If the system memory usage ratio exceeds 80%, you should lower your maxmemory-gb limit. Before doing so, view how the system memory usage ratio has changed over time to determine what maxmemory-gb limit to set.
Scenario 1: System memory usage ratio has been gradually and slowly climbing. Fragmentation is a likely issue, and you should lower maxmemory-gb in small increments until the system memory usage ratio stabilizes below 80%.
Scenario 2: System memory usage ratio spiked rapidly, and you see a significant write-load on your instance. A memory intensive operation likely caused the spike. In this situation, you should lower the maxmemory-gb limit in larger increments to ensure that the instance either avoids entering an out of memory condition or recovers from one.
Note that lowering maxmemory can reduce your instance's cache-hit ratio. A much lower cache-hit ratio indicates that you should scale up your instance so that your application can benefit from the advantages of using Redis. To learn how to adjust the maxmemory-gb configuration, see Configuring Redis instances.
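The two scenarios differ only in how aggressively you step maxmemory-gb down. A sketch of that decision, where the exact step sizes (5% versus 25%) are assumptions for illustration, not Memorystore guidance:

```python
# Illustrative step sizes: small steps for gradual fragmentation
# (Scenario 1), larger steps for a rapid spike (Scenario 2).

def next_maxmemory_gb(current_gb, spiked):
    step = 0.25 * current_gb if spiked else 0.05 * current_gb
    return round(current_gb - step, 2)

print(next_maxmemory_gb(8, spiked=False))  # 7.6: small increment
print(next_maxmemory_gb(8, spiked=True))   # 6.0: larger increment
```

After each step, re-check the system memory usage ratio (and the cache-hit ratio) before lowering further.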
Scale up your instance
Follow the instructions at Scaling Redis instances to increase your instance capacity.
Maxmemory scaling example:
If you have a 10 GB instance with maxmemory-gb set to 8 GB, you have 8 GB for storing keys and 2 GB of memory overhead. If you scale the instance to 20 GB, maxmemory-gb is scaled to 16 GB. Therefore, your instance now has 16 GB of memory for storing keys, and 4 GB of overhead.
See Scaling Redis instances for instructions on how to increase or decrease the size of your instance.
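The worked example above can be checked as arithmetic: scaling preserves the maxmemory-gb to capacity ratio (8/10 = 80% in the example). A minimal sketch, with a hypothetical helper name:

```python
# Scaling keeps the maxmemory-gb to capacity ratio constant, so the new
# limit is the new capacity times the old ratio.

def scaled_maxmemory_gb(old_capacity, old_maxmemory, new_capacity):
    return new_capacity * (old_maxmemory / old_capacity)

new_max = scaled_maxmemory_gb(10, 8, 20)
print(new_max, 20 - new_max)  # 16.0 GB for keys, 4.0 GB overhead
```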
Choose the appropriate eviction policy
If you are storing volatile data, choose one of the volatile-* eviction policies. If you are storing data that isn't volatile, choose one of the allkeys-* eviction policies.
Manually delete keys from your instance
You can alleviate an out of memory condition by manually deleting keys from your instance. This is a temporary measure that helps you improve your instance health.