Best Practices for App Engine Memcache

This article outlines best practices for using the Google App Engine Memcache feature. The main issues discussed are concurrency, performance, and migration. Developers using Memcache will benefit from an awareness of common pitfalls and of the considerations that make code more robust.


Memcache is an App Engine feature that provides in-memory, temporary storage. It is provided primarily as a cache for rapid retrieval of data that is backed by some form of persistent storage, such as Google Cloud Datastore. Ideal use cases for Memcache include caching published data like blogs, news feeds, and other content that is read but not modified. Data stored in Memcache can be evicted at any time, so applications should be structured in a way that does not depend on the presence of entries. Memcache can serve concurrent requests to multiple, remote clients accessing the same data. For this reason, data consistency should be kept in mind. Memcache statistics, including hit rate, are shown on the Google Cloud Console, and can aid in optimizing performance.

There are two classes of Memcache service: shared and dedicated. Shared Memcache is a no-cost, best-effort service, and data stored with this service can be evicted at any time. Dedicated Memcache is a paid service that enables you to reserve memory for an application, which provides developers with a more predictable level of service. However, data may still be evicted if the application stores more data than the space reserved in Memcache, since keys are evicted when the cache is full. In addition, data may be evicted if the Memcache service is restarted for planned or unplanned maintenance. The API signatures and usage are identical for both shared and dedicated Memcache; the difference is a configuration change in the Google Cloud Console. Dedicated Memcache allows a specific quantity of memory to be allocated to the application.


This paper assumes that readers have a basic knowledge of App Engine, including programming in one of the supported languages: Python, Java, Go, or PHP. The examples in this paper are provided in Python. Links to the documents introducing Memcache API for Python in other supported languages are provided in the Additional Resources section.

Code Complexity and Concurrency

Memcache can be used to cache data that's persisted in storage systems like Cloud Datastore or Google Cloud SQL. However, writing code to keep Memcache synchronized with application data that is written and modified in a persistent datastore can be challenging. This section collects some best practices, gained from experience, for managing code complexity and avoiding pitfalls in concurrent access:

  • It is simpler to use Memcache where data is only read because there is no risk of inconsistency between Memcache and the data store.
  • When using Memcache for data that is both read and modified, use the Python NDB Client Library. This is a good practice because the NDB platform code handles concurrent access coordination and error conditions robustly. As a result, your application code will be simplified.


Read-only data is ideal for storing in Memcache. Example use cases for this include serving blogs, RSS feeds, and other published data that is read, but not modified.

Memcache is a global cache service that is shared by multiple frontend instances and requests. Concurrency control is required when application logic requires read consistency for shared data that is updated by multiple clients. Transactional data sources, such as relational databases or Cloud Datastore, coordinate concurrent access by multiple clients; Memcache, however, is not transactional. This is an important point to be aware of when using Memcache. There is a possibility that two clients will read the same data in Memcache at the same time and modify it simultaneously, and as a result the data stored may be incorrect. In contrast, if the application logic does not require read consistency for shared data, there is no need for special code to coordinate concurrent requests. An example of this is updating the timestamp for accessing a document: if two clients try to update the last-accessed date at the same time, the last one to update wins, which is an expected result.

One approach to solving concurrent access to shared data within the context of a simple, local execution environment is to synchronize threads so that only one thread can execute at a time within a critical section. With App Engine, in the context of code executing across multiple instances, the "compare and set" (Client.cas()) function can be used to coordinate concurrent access. However, the disadvantage of this type of function is that, if it fails, the application must be prepared to do the error handling and retry.
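The retry pattern around "compare and set" can be sketched as follows. FakeClient below is a hypothetical, in-memory stand-in for memcache.Client() so that the fragment is self-contained; on App Engine the real client exposes the same gets()/cas() pair, and the retry loop is the part that carries over.

```python
class FakeClient:
    """Hypothetical in-memory stand-in for memcache.Client()."""

    def __init__(self):
        self._data = {}  # key -> (value, version)
        self._seen = {}  # version observed by the last gets() per key

    def set(self, key, value):
        _, version = self._data.get(key, (None, 0))
        self._data[key] = (value, version + 1)

    def gets(self, key):
        value, version = self._data.get(key, (None, 0))
        self._seen[key] = version
        return value

    def cas(self, key, value):
        # Succeed only if nobody has updated the key since our gets().
        _, version = self._data.get(key, (None, 0))
        if self._seen.get(key) != version:
            return False
        self._data[key] = (value, version + 1)
        return True


def bump_counter(client, key, retries=5):
    # Read-modify-write guarded by compare-and-set, retried on contention.
    for _ in range(retries):
        value = client.gets(key) or 0
        if client.cas(key, value + 1):
            return value + 1
    raise RuntimeError('giving up after %d retries' % retries)
```

If another client slips in between gets() and cas(), the cas() call returns False and the loop simply rereads and retries, which is exactly the error handling the paragraph above warns the application must be prepared to do.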

Concurrency problems can be hard to detect. Problems do not usually appear until the application is under load from many users.


The incr() and decr() functions are provided for atomic execution of increment and decrement operations. Use the atomic Memcache functions incr() and decr() where possible, and use the cas() function to coordinate concurrent access. Use the Python NDB Client Library if the application uses Memcache as a way to optimize reading and writing to Cloud Datastore.

Data Serialization and Migration

App Engine comes with the tools needed to seamlessly deploy upgrades to the application without downtime or errors. However, careful planning for migration is still needed.

The App Engine Memcache feature includes language-specific libraries that each have a similar but slightly different flavor. In many cases, the Memcache libraries take care of data serialization in a transparent way. Python is used in this paper to illustrate the points about Memcache, but it is only one of the four languages supported by App Engine, and the APIs differ in how they serialize values. Language-specific serialization, such as pickling in Python, is used to serialize objects of most Python classes. In Java, an implementation of the JCache API, which handles data values as byte arrays, is provided, so it is the responsibility of Java application developers to code for serialization and deserialization of objects at the application level. In contrast, language-independent serialization with protobuf is used in other cases; in particular, protobuf is used for classes extending db.Model in Python.

Changing the versions of the objects stored or upgrading to a new API version may generate errors, depending on how deserialization is handled. Application code should be ready to handle errors gracefully to give users a seamless experience when new code is rolled out.


When rolling out new code, test deserialization of the various serialized formats of old objects. App Engine was first released with Python 2.5 and later upgraded to 2.7. Errors in Python object serialization, called pickling, were a source of migration problems for applications that did not test this point: objects pickled under Python 2.5 remained in Memcache, but after the code was upgraded, the App Engine Python 2.7 runtime could not unmarshal ("unpickle") them, which generated errors. Exceptions like this have been a frequent topic of forum discussions.

Besides migration from one version of Python to the next, similar problems can happen with serialization in other languages as well. Also, the same thing can happen when migrating between different versions of application code. Testing is needed whenever making changes to classes where objects are serialized with an older structure.

There are four ways of avoiding problems like this: (1) make compatible changes to objects, (2) handle errors properly, (3) flush the cache before deploying new code, or (4) use namespaces to isolate data in a multitenant-like way.

If the application uses modules developed with multiple languages, follow the best practices for key and value compatibility discussed in the section Sharing memcache between different programming languages.


Make compatible changes to object structures, handle errors properly when reading objects from Memcache, and flush Memcache when deploying new code with major changes.
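One way to combine the first two recommendations is to treat any deserialization failure as a cache miss. The sketch below assumes the application pickles values itself before caching; cache is a plain dict standing in for the Memcache API, and read_from_persistent_store is a hypothetical loader supplied by the caller.

```python
import pickle


def get_cached(cache, key, read_from_persistent_store):
    blob = cache.get(key)
    if blob is not None:
        try:
            return pickle.loads(blob)
        except Exception:
            # An old or incompatible serialized format: treat it as a miss.
            cache.pop(key, None)
    value = read_from_persistent_store(key)
    cache[key] = pickle.dumps(value)
    return value
```

If the cache holds bytes written by an older class definition (or plain garbage), the except branch discards them and repopulates the entry from the persistent store, so users see a slower request instead of an error.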


This example demonstrates the problems that may occur when migrating code where objects are stored in Memcache. The code below defines the class Person and the function get_or_add_person to retrieve or add a person with the given name to Memcache.

import logging

from google.appengine.api import memcache
from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty(required=True)

def get_or_add_person(name):
    person = memcache.get(name)
    if person is None:
        person = Person(name=name)
        memcache.add(name, person)
    else:
        logging.info('Found in cache: ' + name)
    return person

After running this code in production for some time, a new userid field is added to the Person class. The following code shows the changes:

class Person(ndb.Model):
    name = ndb.StringProperty(required=True)
    userid = ndb.StringProperty(required=True)

def get_or_add_person(name, userid):
    person = memcache.get(name)
    if person is None:
        person = Person(name=name, userid=userid)
        memcache.add(name, person)
    else:
        logging.info('Found in cache: ' + name + ', userid: ' + person.userid)
    return person

The problem created by the upgrade is that objects with the old schema were stored in Memcache and are not compatible with the new structure of the object, which has a required userid field. When tested, the Memcache service successfully unmarshalled Person objects with the old structure, which is surprising because the new userid field is required. In general, do not rely on this kind of behavior when migrating code without testing. The actual error came from the log statement, which referred to the person.userid field that old objects lack. After flushing Memcache from the Google Cloud Console, no more errors were generated.

Java is somewhat different, but the same principle applies. A best practice in Java serialization is the use of a serialVersionUID value in the serialized object. This gives application programmers a clean error when deserializing an older format and provides an opportunity to customize the deserialization.

Sharing memcache between different programming languages

An App Engine app can be factored into one or more modules and versions. Sometimes it is convenient to write modules and versions in different programming languages. You can share the data in your memcache between any of your app's modules and versions. Because the memcache API serializes its parameters, and the API may be implemented differently in different languages, you need to code memcache keys and values carefully if you intend to share them between languages.

Key Compatibility

To ensure language-independence, memcache keys should be bytes:

  • In Python use plain strings (not Unicode strings)
  • In Java use byte arrays (not strings)
  • In Go use byte arrays
  • In PHP use strings

Remember that memcache keys cannot be longer than 250 bytes, and they cannot contain null bytes.
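A small helper, named here purely for illustration, can enforce these constraints at the application boundary before any key reaches the cache:

```python
def validate_memcache_key(key):
    # Enforces the constraints above: byte keys, at most 250 bytes, no nulls.
    if not isinstance(key, bytes):
        raise TypeError('use byte keys for cross-language compatibility')
    if len(key) > 250:
        raise ValueError('memcache keys cannot be longer than 250 bytes')
    if b'\x00' in key:
        raise ValueError('memcache keys cannot contain null bytes')
    return key
```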

Value Compatibility

For memcache values that can be written and read in all languages, these are the types you can use:

  • Byte arrays and ASCII strings.
  • Unicode strings, but you must encode and decode them properly in Go and PHP.
  • Integers in increment and decrement operations, but you must use 32-bit values in PHP.

Avoid using these types for values that you want to pass between languages:

  • Integers (other than in increment and decrement operations) because Go does not directly support integers and PHP cannot handle 64-bit integers.
  • Floating point values and complex types like lists, maps, structs, and classes, because each language serializes them in a different way.

To handle integers, floating point, and complex types, we recommend that you implement your own language-independent serialization that uses a format such as JSON or protocol buffers.
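A minimal sketch of that recommendation, with a plain dict standing in for the Memcache API; the function names are our own:

```python
import json


def set_value(cache, key, value):
    # Encode to UTF-8 JSON bytes so every language sees the same byte string.
    cache[key] = json.dumps(value).encode('utf-8')


def get_value(cache, key):
    blob = cache.get(key)
    return None if blob is None else json.loads(blob.decode('utf-8'))
```

Because the stored value is just bytes, a module written in Java, Go, or PHP can decode it with its own standard JSON library instead of depending on Python pickling.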


The example code below operates on two memcache items in Python, Java, Go, and PHP. It reads and writes an item with the key "who" and increments an item with the key "count". If you create a single app with separate modules using these four code snippets, you will see that the values set or incremented in one language will be read by the other languages.
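The Python snippet is missing from this copy of the article. The sketch below reconstructs what it plausibly does, consistent with the Java, Go, and PHP snippets that follow. On App Engine the memcache module comes from google.appengine.api; here a minimal dict-backed stand-in is defined inline so the fragment is self-contained, and handle_request is a hypothetical name for the request-handler body.

```python
# Minimal stand-in for google.appengine.api.memcache so the sketch runs
# anywhere; on App Engine, import memcache from google.appengine.api instead.
class _FakeMemcache:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def incr(self, key, delta=1, initial_value=None):
        value = self._data.get(key, initial_value)
        if value is None:
            return None
        self._data[key] = value + delta
        return self._data[key]


memcache = _FakeMemcache()


def handle_request(write):
    # Mirrors the Java/Go/PHP snippets: read "who", record this language,
    # and increment the shared "count".
    who = memcache.get('who') or 'nobody'
    write('Previously incremented by %s\n' % who)
    memcache.set('who', 'Python')
    count = memcache.incr('count', 1, initial_value=0)
    write('Count incremented by Python = %s\n' % count)
```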


public class MemcacheBestPracticeServlet extends HttpServlet {

  public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException,
      ServletException {
    String path = req.getRequestURI();
    if (path.startsWith("/favicon.ico")) {
      return; // ignore the request for favicon.ico
    }

    MemcacheService syncCache = MemcacheServiceFactory.getMemcacheService();

    byte[] whoKey = "who".getBytes();
    byte[] countKey = "count".getBytes();

    byte[] who = (byte[]) syncCache.get(whoKey);
    String whoString = who == null ? "nobody" : new String(who);
    resp.getWriter().print("Previously incremented by " + whoString + "\n");
    syncCache.put(whoKey, "Java".getBytes());
    Long count = syncCache.increment(countKey, 1L, 0L);
    resp.getWriter().print("Count incremented by Java = " + count + "\n");
  }
}


w.Header().Set("Content-Type", "text/plain")
c := appengine.NewContext(r)

who := "nobody"
item, err := memcache.Get(c, "who")
if err == nil {
	who = string(item.Value)
} else if err != memcache.ErrCacheMiss {
	http.Error(w, err.Error(), http.StatusInternalServerError)
	return
}
fmt.Fprintf(w, "Previously incremented by %s\n", who)
memcache.Set(c, &memcache.Item{
	Key:   "who",
	Value: []byte("Go"),
})
count, _ := memcache.Increment(c, "count", 1, 0)
fmt.Fprintf(w, "Count incremented by Go = %d\n", count)


$memcache = new Memcached;
$memcache->set('who', $request->get('who'));
return $twig->render('memcache.html.twig', [
    'who' => $request->get('who'),
    'count' => $memcache->increment('count', 1, 0),
    'host' => $request->getHost(),
]);

Other Best Practices

Several other best practices are discussed in this section.

Handling Memcache API Failures Gracefully

Memcache errors must be handled properly for code to remain functionally correct. While the application can fall back on data stored in the Cloud Datastore when data is not found, it must make sure that values stored in Memcache are correct. Failure to update Memcache successfully can result in stale data that, if it is read later on, may result in incorrect program behavior. The following code fragment demonstrates this concept:

if not memcache.set('counter', value):
    logging.error("Memcache set failed")
    # Other error handling here

The set() method returns False if an error is encountered when adding or setting data in Memcache. If the value cannot be set, the application could retry, clear the data from Memcache, flush Memcache, or raise an exception. Retrying may be the most graceful method, but it will depend on the particular case. In many cases, set and retry are not the correct semantics to use. The following is simple pseudo-code for a more robust write sequence that synchronizes Memcache and a persistent store:

memcache.delete(key, seconds)  # delete the cached value; lock add() for `seconds`
# write to persistent datastore
# Do not attempt to put the new value in the cache; the first reader will do that

Pseudo-code for one simple read sequence is shown below.

v = memcache.get(key)
if v is None:
    v = read_from_persistent_store()
    memcache.add(key, v)

Notice that memcache.set() is not used in either sequence, in order to avoid the risk of stale data being left in the cache. However, this code has the risk of a race condition: if a second client reads while the first client is between the memcache.delete() and the completion of the write to the persistent store, the second client may get the old value from the persistent store and re-cache it. The seconds argument to memcache.delete(), also known as the "lock duration", has been added here to reduce the risk of this race condition. In conclusion, when trying to synchronize between Memcache and Cloud Datastore, it is best to use a library like NDB, because robust handling of edge cases is required to keep data in sync.

Use Batch Capability when Possible

The use of batch capabilities is ideal for retrieving multiple related values. These functions avoid retrieving inconsistent values and provide faster performance. The following code fragment is an example of how batch capabilities can be used:

values = {'comment': 'I did not ... ', 'comment_by': 'Bill Holiday'}
if not memcache.set_multi(values):
    logging.error('Unable to set Memcache values')
tvalues = memcache.get_multi(('comment', 'comment_by'))

The code fragment demonstrates setting multiple values with the function set_multi() and getting multiple values with the get_multi() function. Note that when setting multiple values, if two values are intrinsically related, then they may be best stored as a single value.

Distribute Load Across the Keyspace

The dedicated Memcache service can handle up to 10,000 operations per second per gigabyte. (See the Memcache Documentation for details of dedicated Memcache and its limitations.) Developers may need to be mindful of this limit for App Engine applications that handle a large number of requests. The 10k ops/s limit may be large enough to process several hundred HTTP requests per second, but some applications have a skewed distribution pattern for key usage. For example, 80 percent of Memcache access calls may be concentrated on only 20 keys. This may be a problem if the keys are concentrated on particular servers due to poor key name distribution, and can lead to errors and performance slowdowns.

To avoid performance delays, developers may spread data access across multiple keys and aggregate the data after it is read. For instance, when writing to a counter many times, spread the counter across many keys and then add up the values after the counter is read from Memcache. The Memcache viewer in Google Cloud Console displays a list of top keys with QPS over 100 for dedicated memcache users, which can be used to identify bottlenecks.
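The counter-sharding idea can be sketched as follows, again with a plain dict standing in for the Memcache API; on App Engine the write would use the atomic incr() with an initial_value, and the read would batch the shard keys with get_multi(). The shard count and function names are illustrative.

```python
import random

NUM_SHARDS = 20  # more shards spread load wider, at the cost of more reads


def incr_sharded(cache, name):
    # Pick a shard at random so writes spread across NUM_SHARDS keys.
    shard_key = '%s-%d' % (name, random.randrange(NUM_SHARDS))
    cache[shard_key] = cache.get(shard_key, 0) + 1


def read_sharded(cache, name):
    # Aggregate all shards; get_multi() would fetch them in one batch call.
    return sum(cache.get('%s-%d' % (name, i), 0) for i in range(NUM_SHARDS))
```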

The Memcache operation rate varies by item size approximately according to the table on the Memcache documentation page. The maximum rate of operations per second drops off quickly for larger entry sizes. The solution to this is to shard large values into multiple items or to compress the large values.
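The compression option can be sketched with zlib over the UTF-8 bytes of a large text value; the function names are our own, and whether compression pays off depends on how repetitive the payload is.

```python
import zlib


def set_compressed(cache, key, text):
    # Large, repetitive payloads shrink substantially under zlib.
    cache[key] = zlib.compress(text.encode('utf-8'))


def get_compressed(cache, key):
    blob = cache.get(key)
    return None if blob is None else zlib.decompress(blob).decode('utf-8')
```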

Additional Resources

The following links reference additional resources that may also be useful: