When you deploy an API proxy that includes a caching policy, a short-lived L1 cache is automatically created. This short-lived data is then persisted in a database, where it is available to all message processors deployed in an environment.
In-memory and persistent cache levels
Both the shared and environment caches are built on a two-level system made up of an in-memory level and a persistent level as shown in the following figure. Policies interact with both levels as a combined framework. Apigee manages the relationship between the levels.
- Level 1 is an in-memory cache (L1) for fast access. Each message processing (MP) node has its own in-memory cache (implemented using Caffeine) for the fastest response to requests.
- L1 is a short-lived (1 second), in-memory cache.
- As the memory limit is reached, Apigee removes cache entries from memory (although they are still kept in the L2 persistent cache) to ensure that memory remains available for other processes.
- The short one-second L1 lifetime enables faster lookups for concurrent requests that use the same cache key.
- Level 2 is a persistent cache (L2) beneath the in-memory cache. All message processing nodes share a cache data store (Cassandra) for persisting cache entries.
- Cache entries persist here even after they are removed from L1 cache, such as when in-memory limits are reached.
- Because the persistent cache is shared across message processors (even in different regions), cache entries are available regardless of which node receives a request for the cached data.
- Only entries of a certain size may be cached, and other cache limits apply. See Managing cache limits.
- The cache content in Cassandra is encrypted with the AES-256 algorithm. Data is decrypted before being used by the runtime and encrypted before being written to Cassandra, so the encryption process is transparent to users.
How policies use the cache
The following describes how Apigee handles cache entries as your caching policies do their work.
- When a policy writes a new entry to the cache (PopulateCache or ResponseCache policy):
- Apigee writes the entry to the in-memory L1 cache only on the message processor that handled the request. If the memory limits on the message processor are reached before the entry expires, then Apigee removes the entry from L1 cache.
- Apigee also writes the entry to L2 cache.
- When a policy reads from the cache (LookupCache or ResponseCache policy):
- Apigee looks first for the entry in the in-memory L1 cache of the message processor handling the request.
- If there is no corresponding in-memory entry, Apigee looks for the entry in the L2 persistent cache.
- If the entry is in neither cache, the lookup is a cache miss: LookupCache retrieves no value, and ResponseCache forwards the request to the target and caches the response it receives.
- When a policy updates or invalidates an existing cache entry (InvalidateCache policy, PopulateCache policy, or ResponseCache policy):
- The message processor receiving the request deletes the entry from the one-second in-memory L1 cache and also deletes the entry from the L2 cache.
- After an update or invalidation, other message processors may briefly continue serving the old entry from their in-memory L1 caches.
- Because L1 entries expire after one second, no explicit delete or update event is needed to remove the entry from remote L1 caches; it ages out on its own.
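The write, read, and invalidate flows above correspond to the PopulateCache, LookupCache, and InvalidateCache policies. The following minimal sketches show how each might be configured; the cache resource name (`mycache`), the policy names, and the `client_id` key fragment are illustrative assumptions, not values from this page:

```xml
<!-- Write: stores a flow variable's value under a computed key.
     The entry goes to the local L1 cache and the shared L2 cache. -->
<PopulateCache name="PC-StoreToken">
  <CacheResource>mycache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <ExpirySettings>
    <TimeoutInSeconds>300</TimeoutInSeconds>
  </ExpirySettings>
  <Source>apiproxy.auth_token</Source>
</PopulateCache>

<!-- Read: checks L1 first, then L2; on a hit, assigns the cached
     value to a flow variable. -->
<LookupCache name="LC-ReadToken">
  <CacheResource>mycache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <AssignTo>apiproxy.auth_token</AssignTo>
</LookupCache>

<!-- Invalidate: removes the entry from the local L1 cache and from L2;
     copies in other message processors' L1 caches age out within a second. -->
<InvalidateCache name="IC-RemoveToken">
  <CacheResource>mycache</CacheResource>
  <Scope>Exclusive</Scope>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
</InvalidateCache>
```

A typical pattern attaches LookupCache early in the request flow and conditions the expensive work, along with the PopulateCache step, on a cache miss.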
Managing cache limits
Through configuration, you can manage some aspects of the cache. The overall space available for the in-memory cache is limited by system resources and is not configurable. The following constraints apply to the cache:
- Cache limits: Various cache limits apply, such as name and value size, total number of caches, the number of items in a cache, and expiration.
- In-memory (L1) cache. Memory limits for your cache are not configurable. Limits are set by Apigee for each message processor that hosts caches for multiple customers.
In a hosted cloud environment, where in-memory caches for all customer deployments are hosted across multiple shared message processors, each processor features an Apigee-configurable memory percentage threshold to ensure that caching does not consume all of the application's memory. As the threshold is crossed for a given message processor, cache entries are evicted from memory on a least-recently-used basis. Entries evicted from memory remain in L2 cache until they expire or are invalidated.
- Persistent (L2) cache. Entries evicted from the in-memory cache remain in the persistent cache according to configurable time-to-live settings.
The following table lists settings you can use to optimize cache performance.
| Setting | Description | Default |
| --- | --- | --- |
| Expiration | Specifies the time to live for cache entries. | None |
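Expiration is set per policy through the ExpirySettings element. A minimal fragment, assuming a 600-second TTL (the TTL value and the optional flow-variable reference are illustrative):

```xml
<ExpirySettings>
  <!-- TTL in seconds; if the referenced flow variable is set at runtime,
       its value overrides the literal 600. -->
  <TimeoutInSeconds ref="flow.cache.ttl">600</TimeoutInSeconds>
</ExpirySettings>
```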