
Effective memcache

Jeff Scudder
June 2009; Updated August 2015

This is part five of a five-part series on effectively scaling your App Engine-based apps. To see the other articles in the series, see Related links.

This article explores techniques for using memcache to improve the performance of your application.

What is it?

Memcache is a distributed RAM cache in which you can store transient data using a key-value model. Memcache reads and writes never touch disk, so access to memcached items can be much faster than access to data in persistent storage systems; for example, memcache is generally an order of magnitude faster than Google Cloud Datastore for both reads and writes. The tradeoff is that data held in memcache is transient and is evicted as the system runs out of memcache space. On rare occasions, all memcached data can be evicted at once. But thanks to its faster operation times, using memcache as a lookup layer or transient store in addition to your primary storage can greatly improve the responsiveness of your application.
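As a quick sketch of the key-value model, here is how a value can be stored and retrieved with the Python memcache API (the key, value, and expiry here are illustrative):

from google.appengine.api import memcache

# Store a value under a key, expiring after ten minutes.
memcache.set('greeting', 'hello world', time=600)

# Fetch it back; get() returns None if the key is missing or was evicted.
value = memcache.get('greeting')

# Remove it explicitly once it is no longer valid.
memcache.delete('greeting')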

For more details, see the memcache section in the docs.

How should I use it?

Depending on the design of your application, there are likely several places where memcache could be used to improve performance. Here are a few examples that might spark ideas about where memcache would make sense in your own application:

If you have pages that are requested frequently and show the same data to all users, e.g. the front page of your blog or your website's homepage, caching that data in memcache can improve overall performance and reduce cost.

To expand on the blog example, say that you use a Django template to render the 12 most recent blog posts that are stored in the datastore. Because new blog posts are added infrequently (once a day), there is no need to re-query and re-render the same content over and over. The rendered page could be stored in memcache with an expiration time. When the front page is requested, a cache hit will avoid the cost in time and quota for the datastore query and generation of the HTML from the template.

In Python, using the App Engine memcache API, this technique might look like the following (render_template and front_page_template stand in for your app's template rendering):

from google.appengine.api import memcache

# Inside a webapp request handler's get() method:
html = memcache.get('blog front page')
if html is None:
    # Cache miss: query the datastore and render the template.
    template_values = {'recent_posts': BlogPost.all().order('-date').fetch(12)}
    html = render_template(template_values, front_page_template)
    # Cache the rendered page for five minutes.
    memcache.set('blog front page', html, time=300)
self.response.out.write(html)

This technique could be used for a page that is often sent to an individual user as well, though care must be taken to ensure that the correct cached data is sent to the current user. In these cases, you can use the current user's ID as part of the memcache key.
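For example, a user-specific page could be cached under a key that embeds the user's ID. This sketch assumes a hypothetical render_dashboard() helper that builds the per-user HTML:

from google.appengine.api import memcache, users

user = users.get_current_user()  # assumes a signed-in user
cache_key = 'dashboard:%s' % user.user_id()
html = memcache.get(cache_key)
if html is None:
    html = render_dashboard(user)  # hypothetical rendering helper
    memcache.set(cache_key, html, time=300)
self.response.out.write(html)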

Transient and frequently updated data

The caveat here is that this data may be cleared from memcache at any time. If you are storing something exclusively in memcache, your application should gracefully handle a situation in which this data is cleared.

Example: Page view counter

One example of this type of usage is a page view counter. Each visit could increment a counter, and frequent updates have much higher throughput when written to memcache instead of the datastore. Multiple writes per second to the same datastore entity (or entity group) can lead to contention, and techniques for avoiding contention are explored in more detail in Avoiding Datastore Contention. Instead of writing to the datastore on each increment, the application could increment a counter in memcache and use a cron job to persist the view count. The persistence cron job would run periodically, adding the hits recorded since the last run to the persistent counter in the datastore. The memcache counter is then decremented by the persisted amount and continues recording hits until the next persistence job runs.

In Python, in the application:

# Page view code: atomically bump the counter, creating it at zero if it
# is absent (incr() on a missing key is otherwise a no-op).
memcache.incr(current_page_name + ' hits', initial_value=0)

In the cron job (assuming Counter is a db.Model with an integer count property):

# Cron job request handler: fold the cached hits into the datastore counter.
hits = memcache.get('front page hits')
if hits:
    persistent_counter = Counter.get_by_key_name('front page hits')
    if persistent_counter is None:
        persistent_counter = Counter(key_name='front page hits', count=0)
    persistent_counter.count += hits
    persistent_counter.put()
    # Decrement by the amount persisted rather than resetting to zero, so
    # hits recorded between the get() above and this call are not lost.
    memcache.decr('front page hits', delta=hits)

In this scenario, we do not need the page view counter to be 100% accurate. It is okay if the number of hits is underreported, for example when the counter is evicted from memcache before a persistence run.

Note that the code above keeps the page counter in a single entity and a single memcache key, which can result in a skewed key-usage distribution, with a large percentage of memcache updates and reads concentrated on the one key, 'front page hits'. A better pattern is to shard the counter into multiple entities, write to a different shard on each increment, and aggregate the shard counts to yield total page views. Trading exact counts for fuzzier counts with fewer datastore writes can reduce the cost of these examples and improve scalability.
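One way to apply the same sharding idea to the memcache counter itself is sketched below; the shard count and key format are illustrative assumptions:

import random

from google.appengine.api import memcache

NUM_SHARDS = 20  # illustrative shard count

def incr_hits(page_name):
    # Spread increments across shards so no single key becomes hot.
    shard = random.randint(0, NUM_SHARDS - 1)
    memcache.incr('%s hits:%d' % (page_name, shard), initial_value=0)

def get_hits(page_name):
    # Sum the shard counts; get_multi() fetches all shards in one call.
    keys = ['%s hits:%d' % (page_name, i) for i in range(NUM_SHARDS)]
    return sum(memcache.get_multi(keys).values())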

Read more about sharded counters in Distributing load across the keyspace and the article Sharding counters.

Caching frequently fetched entities

Performing a get against memcache is usually faster than a datastore get on an entity. As of this writing, memcache gets take on the order of one millisecond, while datastore gets are an order of magnitude slower (though these numbers may change in the future). If you have an entity that you plan to fetch frequently by key, key name, or ID, storing a copy of the entity in memcache can speed up access. The design for this type of caching is straightforward: your app should write updates to the entity to both the datastore and memcache. Subsequent reads should check memcache for the entity first, and if the entity is not in the cache, your app should perform a get against the datastore.
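A minimal sketch of this read-through/write-through pattern, assuming an example db.Model named Product whose entities are cached under their key names:

from google.appengine.api import memcache
from google.appengine.ext import db

class Product(db.Model):  # example model for illustration
    name = db.StringProperty()
    price = db.FloatProperty()

def get_product(key_name):
    # Check memcache first; fall back to the datastore on a miss.
    product = memcache.get(key_name)
    if product is None:
        product = Product.get_by_key_name(key_name)
        if product is not None:
            memcache.set(key_name, product)
    return product

def update_product(product):
    # Write through: update the datastore and the cache together.
    product.put()
    memcache.set(product.key().name(), product)

Note that a write-through cache can briefly serve a stale entity if two updates race; for many read-heavy workloads that tradeoff is acceptable.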

Conclusion

When designing your application, take the time to consider which datasets can be cached for future reuse. These could be commonly viewed pages or frequently read datastore entities, to name a few. There may also be data in your application that you would like to share among all instances of your app but that does not need to be persisted forever. In such cases, memcache can improve the scalability of your app by providing a fast and efficient distributed storage system for transient data. Adding memcache logic to your server-side code is often well worth the few extra lines of code.