App Engine locations
App Engine is regional, which means the infrastructure that runs your apps is located in a specific region, and Google manages it so that it is available redundantly across all of the zones within that region.
Meeting your latency, availability, or durability requirements is the primary factor in selecting the region where your app runs. You can generally select the region nearest to your app's users, but you should consider the locations where App Engine is available as well as the locations of the other Google Cloud products and services that your app uses. Using services across multiple locations can affect your app's latency as well as its pricing.
You cannot change an app's region after you set it.
If you already created an App Engine application, you can view its region by doing one of the following:
- Run the `gcloud app describe` command.
- Open the App Engine Dashboard in the Google Cloud console. The region appears near the top of the page.
Cloud NDB is a client library for Python that replaces App Engine NDB.
App Engine NDB enables Python 2 apps to store and query data in Firestore in Datastore mode (Datastore) databases. Cloud NDB enables Python 3 apps to store and query data in the same databases and uses Datastore as the product that manages those databases. Although the Cloud NDB library can access any data created with App Engine NDB, some structured data types stored using Cloud NDB cannot be accessed with App Engine NDB. For that reason, migrating to Cloud NDB should be considered irreversible.
Newer versions of Cloud NDB only support Python 3. Previous versions supported both Python 2 and Python 3. To migrate your Python 2 application to Python 3, we recommend the following:
- Migrate to Cloud NDB, version 1.11.2. This is the final release which supports Python 2.
- Upgrade your application to Python 3.
- Upgrade to the latest stable version of Cloud NDB.
This incremental approach to migration enables you to maintain a functioning and testable app throughout the migration process.
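As a sketch of the first step, you might pin the final Python 2-compatible release in your `requirements.txt` (assuming you manage dependencies with pip):

```
google-cloud-ndb==1.11.2
```

After the Python 3 upgrade, changing this pin to the latest release completes the last step.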
Cloud NDB is intended for Python developers who are familiar with App Engine NDB and want to modernize their applications. Migrating to Cloud NDB helps prepare your Python 2 app for Python 3 and gives you the option of moving off of App Engine at a later time if desired.
Comparison of App Engine NDB and Cloud NDB
- Cloud NDB supports almost all of the features supported by App Engine NDB, with only minor differences in method syntax.
- App Engine NDB APIs that rely on App Engine Python 2.7 runtime-specific services have either been updated or removed in Cloud NDB.
- New features in Python 3 and Django have eliminated the need for `google.appengine.ext.ndb.django_middleware`. Instead, you can write your own middleware with just a few lines of code.
App Engine NDB required apps and the Datastore database to be in the same Google Cloud project, with App Engine providing credentials automatically. Cloud NDB can access Datastore mode databases in any project, as long as you authenticate your client properly. This is consistent with other Google Cloud APIs and client libraries.
Cloud NDB doesn't use the App Engine Memcache service to cache data. Instead, Cloud NDB can cache data in a Redis in-memory data store managed by Memorystore, Redis Labs, or other systems. While only Redis data stores are currently supported, Cloud NDB has generalized caching in the abstract `GlobalCache` interface, which can support additional concrete implementations.
To access Memorystore for Redis, your app needs to use Serverless VPC Access.
Neither Memorystore for Redis nor Serverless VPC Access provide a free tier, and these products may not be available in your app's region. See Before you start migrating for more information.
The full list of differences is available in the migration notes for the Cloud NDB GitHub project.
Before you start migrating
Before you start migrating:
If you need to cache data, make sure your app's region is supported by Serverless VPC Access and Memorystore for Redis.
Determining if you need to cache data
You should first confirm that your app needs caching, since Memorystore for Redis and Serverless VPC Access don't have a free tier and don't support all Google Cloud regions.
If your app frequently reads the same data, caching could decrease latency.
The more requests your app serves, the bigger impact caching could have.
To see how much you currently rely on cached data, view the Memcache dashboard to see the ratio of cache hits to misses. If the ratio is high, using a data cache is likely to have a big impact on reducing your app's latency.
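As a rough illustration of that judgment call, you can compute the hit ratio from the dashboard's counts (the numbers below are made up for the example):

```python
# Illustrative hit/miss counts, as might be read from the Memcache dashboard.
hits = 9000
misses = 1000

# A high ratio means most reads are served from cache, so a data cache
# is likely to matter for latency.
hit_ratio = hits / (hits + misses)
print("Cache hit ratio: {:.0%}".format(hit_ratio))
```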
Confirming your app's region
If you need to cache data, make sure your app's region is supported by Memorystore for Redis and Serverless VPC Access:
View the region of your app, which appears near the top of the App Engine Dashboard in the Google Cloud console, just below your app's URL.
Confirm that your app is in one of the regions supported by Serverless VPC Access.
Confirm that your app is in one of the regions supported by Memorystore for Redis by visiting the Create connector page and viewing the regions in the Regions list.
If your app is not in a region that is supported by Memorystore for Redis and Serverless VPC Access:
Create a Google Cloud project.
Create a new App Engine app in the project and select a supported region.
Create the Google Cloud services that your app uses in the new project.
Alternatively, you can update your app to use the existing services in your old project, but pricing and resource use may be different when you use services in a different project and region. Refer to the documentation for each service for more information.
Deploy your app to the new project.
Understanding Datastore mode permissions
Every interaction with a Google Cloud service needs to be authorized. For example, to store or query data in a Datastore mode database, your app needs to supply the credentials of an account that is authorized to access the database.
By default, your app supplies the credentials of the App Engine default service account, which is authorized to access databases in the same project as your app.
You will need to use an alternative authentication technique that explicitly provides credentials if any of the following conditions are true:
Your app and the Datastore mode database are in different Google Cloud projects.
You have changed the roles assigned to the default App Engine service account.
For information about alternative authentication techniques, see Setting up Authentication for Server to Server Production Applications.
Overview of the migration process
To migrate to Cloud NDB from App Engine NDB:
Install the Cloud NDB client library.
Update import statements to import modules from Cloud NDB.
Add code that creates a Cloud NDB client. The client can read your app's environment variables and use the data to authenticate with Datastore mode.
Add code that uses the client's runtime context to keep caching and transactions separate between threads.
Remove or update code that uses methods and properties that are no longer supported.
As with any change you make to your app, consider using traffic splitting to slowly ramp up traffic. Monitor the app closely for any database issues before routing more traffic to the updated app.
Updating your Python app
Installing the Cloud NDB library for Python 2 apps
- Create a directory to store your third-party libraries, such as `lib/`.
- Create a `requirements.txt` file in the same folder as your `app.yaml` file and add the name of the client library, `google-cloud-ndb`.
- Use `pip` (version 6 or later) with the `-t <directory>` flag to install the libraries into the folder you created in the previous step. For example:

```
pip install -t lib -r requirements.txt
```
- Specify the RPC and `setuptools` libraries in the `libraries` section of your `app.yaml` file:

```yaml
libraries:
- name: grpcio
  version: 1.0.0
- name: setuptools
  version: 36.6.0
```
- Create an `appengine_config.py` file in the same folder as your `app.yaml` file if you do not already have one. Add the following to your `appengine_config.py` file:

```python
# appengine_config.py
import pkg_resources
from google.appengine.ext import vendor

# Set path to your libraries folder.
path = 'lib'

# Add libraries installed in the path folder.
vendor.add(path)

# Add libraries to pkg_resources working set to find the distribution.
pkg_resources.working_set.add_entry(path)
```
Be sure to use the `pkg_resources` module, which ensures that your app uses the right distribution of the client libraries.

The `appengine_config.py` file in the preceding example assumes that the `lib` folder is located in the current working directory. If you can't guarantee that `lib` will always be in the current working directory, specify the full path to the `lib` folder. For example:
```python
import os

path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'lib')
```
When you deploy your app, App Engine uploads all of the libraries in the directory you specified in the `appengine_config.py` file.
Installing the Cloud NDB library for Python 3 apps
App Engine's Python 3 runtime uses an app's `requirements.txt` file to determine which packages and versions to install for the app. To install the Cloud NDB library in the Python 3 runtime, add the line `google-cloud-ndb` to your app's `requirements.txt` file.
App Engine automatically uploads all libraries in the app's `requirements.txt` file when you deploy the app.
Installing dependencies locally
When developing and testing your app locally, we highly recommend that you use a
virtual environment to isolate your app's dependencies from your system
packages. This ensures that your app loads only the dependencies declared in its `requirements.txt` file, and it keeps dependency versions consistent between your local and production environments.
In Python 2, you can use virtualenv to create a virtual environment.
In Python 3, venv is the recommended way to create a virtual environment.
Updating import statements
The NDB module has moved to `google.cloud.ndb`. Update your app's import statements accordingly: for example, replace `from google.appengine.ext import ndb` with `from google.cloud import ndb`.
Creating a Cloud NDB Client
As with other client libraries that are based on Google Cloud APIs, the first step in using Cloud NDB is to create a `Client` object. The client contains the credentials and other data needed to connect to Datastore mode. For example:
```python
from google.cloud import ndb

client = ndb.Client()
```
In the default authorization scenario described previously, the Cloud NDB client contains credentials from App Engine's default service account, which is authorized to interact with Datastore mode. If you aren't working in this default scenario, see Application Default Credentials (ADC) for information on how to provide credentials.
Using the client's runtime context
In addition to providing the credentials needed to interact with Datastore mode, the Cloud NDB client provides the `context()` method, which returns a runtime context. The runtime context isolates caching and transaction requests from other concurrent Datastore mode interactions.
All interactions with Datastore mode need to occur within an NDB runtime context. Since creating a model definition does not interact with Datastore mode, you can define your model class before creating a Cloud NDB client and retrieving a runtime context, and then use the runtime context in the request handler to get data from the database.
```python
from google.cloud import ndb

class Book(ndb.Model):
    title = ndb.StringProperty()

client = ndb.Client()

def list_books():
    with client.context():
        books = Book.query()
        for book in books:
            print(book.to_dict())
```
The runtime context that the Cloud NDB client returns only applies to a single thread. If your app uses multiple threads for a single request, you need to retrieve a separate runtime context for each thread that will use the Cloud NDB library.
Using a runtime context with WSGI frameworks
If your web app uses a WSGI framework, you can automatically create a new runtime context for every request by creating a middleware object that retrieves the runtime context, and then wrapping the app in the middleware object.
In the following example of using middleware with Flask:

- The `middleware` method creates a WSGI middleware object within the runtime context of the NDB client.
- The Flask app is wrapped in the middleware object.
- Flask then passes each request through the middleware object, which retrieves a new NDB runtime context for each request.
```python
from flask import Flask
from google.cloud import ndb

client = ndb.Client()

def ndb_wsgi_middleware(wsgi_app):
    def middleware(environ, start_response):
        with client.context():
            return wsgi_app(environ, start_response)
    return middleware

app = Flask(__name__)
app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app)  # Wrap the app in middleware.

class Book(ndb.Model):
    title = ndb.StringProperty()

@app.route('/')
def list_books():
    books = Book.query()
    return str([book.to_dict() for book in books])
```
Using a runtime context with Django
The Django middleware provided by the App Engine NDB library (`google.appengine.ext.ndb.django_middleware`) isn't supported by the Cloud NDB library. If you used this middleware in your app, follow these steps to update your app:
Use Django's middleware system to create a new runtime context for every request.
In the following example:

- The `ndb_django_middleware` method creates a Cloud NDB client.
- The `middleware` method creates a middleware object within the runtime context of the NDB client.
```python
from google.cloud import ndb

# Once this middleware is activated in Django settings, NDB calls inside Django
# views will be executed in context, with a separate context for each request.
def ndb_django_middleware(get_response):
    client = ndb.Client()

    def middleware(request):
        with client.context():
            return get_response(request)

    return middleware
```
In the Django `settings.py` file, update the `MIDDLEWARE` setting so it lists the new middleware you created instead of `google.appengine.ext.ndb.django_middleware`. Django will then pass each request through the middleware object you listed in the `MIDDLEWARE` setting, and this object will retrieve a new NDB runtime context for each request.
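A minimal sketch of the updated setting in `settings.py`; the dotted path `myapp.middleware.ndb_django_middleware` is a hypothetical location for your middleware function:

```python
# settings.py (sketch): list your NDB middleware by its dotted path.
# 'myapp.middleware.ndb_django_middleware' is a hypothetical module path.
MIDDLEWARE = [
    'myapp.middleware.ndb_django_middleware',
    'django.middleware.common.CommonMiddleware',
    # ... the rest of your middleware ...
]
```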
Updating code for removed or changed NDB APIs
NDB APIs that rely on App Engine-specific APIs and services have either been updated or removed from the Cloud NDB library.
You will need to update your code if it uses any of the following NDB APIs:
- `google.appengine.ext.ndb.Model` and model properties
Models and Model properties
The following methods from `Model` are not available in the Cloud NDB library because they use App Engine-specific APIs that are no longer available.
The following table describes the specific `Model` properties that have changed in the Cloud NDB library:
All properties with
The classes and methods in the following table are no longer available, because they use App Engine-specific resources that are no longer available.
If you try to create these objects, a
These methods rely on Datastore mode protocol buffers that have changed.
The following methods from `Key` are not available in the Cloud NDB library. These methods were used to pass keys to and from the DB Datastore API, which is no longer supported (DB was the predecessor of App Engine NDB).
In addition, note the following changes:
| App Engine NDB | Cloud NDB |
|---|---|
| Kinds and string IDs must be less than 500 bytes. | Kinds and string IDs must be less than 1500 bytes. |
The value returned by
Like App Engine NDB, Cloud NDB provides a `QueryOptions` class (`google.cloud.ndb.query.QueryOptions`) that enables you to reuse a specific set of query options instead of redefining them for each query. However, `QueryOptions` in Cloud NDB doesn't inherit from `google.appengine.datastore.datastore_rpc.Configuration` and therefore doesn't support that class's methods.

`google.appengine.datastore.datastore_query.Order` has been replaced by `google.cloud.ndb.query.PropertyOrder`. Similar to `Order`, `PropertyOrder` enables you to specify the sort order across multiple queries. The `PropertyOrder` constructor is the same as the constructor for `Order`; only the name of the class has changed.
See the source code for a description of these methods.
The `ndb.utils` module (`google.appengine.ext.ndb.utils`) is no longer available. Most of the methods in that module were internal to App Engine NDB; some have been discarded due to implementation differences in the new ndb, while others have been made obsolete by new Python 3 features. A small number of the methods remain available to aid migration.
For example, the `positional` decorator in the old `utils` module declared that only the first n arguments of a function or method may be positional. In Python 3, you can accomplish the same thing with keyword-only arguments. What used to be written as:
```python
@utils.positional(2)
def function1(arg1, arg2, arg3=None, arg4=None):
    pass
```
Can be written like this in Python 3:
```python
def function1(arg1, arg2, *, arg3=None, arg4=None):
    pass
```
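To illustrate the keyword-only behavior, here is a small self-contained sketch (the argument names are placeholders, as in the example above):

```python
def function1(arg1, arg2, *, arg3=None, arg4=None):
    # Everything after the bare * can only be passed by keyword.
    return (arg1, arg2, arg3, arg4)

result = function1(1, 2, arg3=3)   # keyword call works
print(result)

try:
    function1(1, 2, 3)             # positional arg3 is rejected
except TypeError:
    print("arg3 must be passed by keyword")
```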
Namespaces enable a multitenant application to use separate silos of data for each tenant while still using the same Datastore mode database. That is, each tenant stores data under its own namespace.
Instead of using the App Engine-specific namespace API, you specify a default namespace when you create a Cloud NDB client, and then use the default namespace by calling Cloud NDB methods within the client's runtime context. This follows the same pattern as other Google Cloud APIs that support namespaces.

```python
client = google.cloud.ndb.Client(namespace="my namespace")
with client.context() as context:
    key = ndb.Key("SomeKind", "SomeId")
```

or

```python
key_non_default_namespace = ndb.Key("SomeKind", "AnotherId", namespace="non-default-nspace")
```
Tasklets can now use a standard `return` statement to return a result instead of raising a `Return` exception. For example:
App Engine NDB library:

```python
@ndb.tasklet
def get_cart():
    cart = yield CartItem.query().fetch_async()
    raise Return(cart)
```

Cloud NDB library:

```python
@ndb.tasklet
def get_cart():
    cart = yield CartItem.query().fetch_async()
    return cart
```
Note that you can still return results in Cloud NDB by raising a `Return` exception, but doing so isn't recommended.
In addition, the following Tasklets methods and subclasses are no longer available, mainly because of changes in how an NDB context is created and used in the Cloud NDB library.
Although the exceptions module in the Cloud NDB library contains many of the same exceptions as the App Engine NDB library, not all of the old exceptions are available in the new library. The following table lists the exceptions that are no longer available:
Enabling data caching
Setting up Serverless VPC Access
Your app can only communicate with Memorystore through a Serverless VPC Access connector. To set up a Serverless VPC Access connector:
Setting up Memorystore for Redis
To set up Memorystore for Redis:
Create a Redis instance in Memorystore. When you're creating the instance:
Under Region, select the same region in which your App Engine app is located.
Under Authorized network, select the same network that your Serverless VPC Access connector is using.
Note the IP address and port number of the Redis instance you create. You will use this information when you enable data caching for Cloud NDB.
Be sure to use the `gcloud beta` command to deploy your app updates. Only the beta command can update your app to use a VPC connector.
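For reference, the deployed service points at the connector from `app.yaml`; a sketch, where PROJECT_ID, REGION, and CONNECTOR_NAME are placeholders for your own values:

```yaml
vpc_access_connector:
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
```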
Adding the Redis connection URL
You can connect to the Redis cache by adding the `REDIS_CACHE_URL` environment variable to your app's `app.yaml` file. The value of `REDIS_CACHE_URL` takes the following form:

```
redis://IP address for your instance:port
```
For example, you can add the following lines to your app's `app.yaml` file:

```yaml
env_variables:
  REDIS_CACHE_URL: redis://10.0.0.3:6379
```
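If it helps to see the pieces, the URL form decomposes with the standard library (using the illustrative address from the example):

```python
from urllib.parse import urlparse

# The same illustrative address used in the app.yaml example.
url = urlparse("redis://10.0.0.3:6379")
print(url.scheme, url.hostname, url.port)
```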
Creating and using a Redis cache object
If you've set `REDIS_CACHE_URL` as an environment variable, you can create a `RedisCache` object with a single line of code, then use the cache by passing it to `Client.context()` when you set up the runtime context:
```python
client = ndb.Client()
global_cache = ndb.RedisCache.from_environment()

with client.context(global_cache=global_cache):
    books = Book.query()
    for book in books:
        print(book.to_dict())
```
If you don't set `REDIS_CACHE_URL` as an environment variable, you'll need to construct a Redis client and pass the client to the `RedisCache` constructor. For example:

```python
global_cache = ndb.RedisCache(redis.StrictRedis(host=ip_address, port=redis_port))
```
Note that you don't need to declare a dependency on the Redis client library, since the Cloud NDB library already depends on the Redis client library.
Refer to the Memorystore sample application for an example of constructing a Redis client.
Testing your updates
To set up a test database and run your app locally before deploying it to App Engine:
Run the Datastore mode local emulator to store and retrieve data.
Make sure to follow the instructions for setting environment variables so your app connects to the emulator instead of the production Datastore mode environment.
You can also import data into the emulator if you want to start your test with data pre-loaded into the database.
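A sketch of what the environment setup amounts to in-process; the host and project values below are assumptions for a default local emulator run, and the real values come from the emulator's startup output:

```python
import os

# Assumed values for a default local run; use the values your emulator reports.
os.environ["DATASTORE_EMULATOR_HOST"] = "localhost:8081"
os.environ["GOOGLE_CLOUD_PROJECT"] = "my-test-project"

print(os.environ["DATASTORE_EMULATOR_HOST"])
```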
Use the local development server to run your app.
To make sure the `GOOGLE_CLOUD_PROJECT` environment variable is set correctly during local development, initialize `dev_appserver` using the following parameter:
PROJECT_ID should be your Google Cloud project ID. You can find your project ID by running the `gcloud config list project` command or by looking at your project page in the Google Cloud console.
Deploying your app
Once your app is running in the local development server without errors:
If the app runs without errors, use traffic splitting to slowly ramp up traffic for your updated app. Monitor the app closely for any database issues before routing more traffic to the updated app.