App Engine Modules (or just "Modules" hereafter) let developers factor large applications into logical components that can share stateful services and communicate in a secure fashion. A deployed module behaves like a microservice. By using multiple modules you can deploy your app as a set of microservices, which is a popular design pattern.
For example, an app that handles customer requests might include separate modules to handle tasks such as:
- API requests from mobile devices
- Internal, admin-like requests
- Backend processing such as billing pipelines and data analysis
Modules can be configured to use different runtimes and to operate with different performance settings.
- Application hierarchy
- Scaling types and instance classes
- Using the development server with modules
- Uploading and deleting modules
- Instance states
- Instance uptime
- Background threads
- Monitoring resource usage
- Communication between modules
Application hierarchy

At the highest level, an App Engine application is made up of one or more modules. Each module consists of source code and configuration files. The files used by a module represent a version of the module. When you deploy a module, you always deploy a specific version of the module. For this reason, whenever we speak of a module, it usually means a version of a module.
You can deploy multiple versions of the same module, to account for alternative implementations or progressive upgrades as time goes on.
Each module and each version must have a name. A name can contain numbers, letters, and hyphens. It cannot be longer than 63 characters and cannot start or end with a hyphen. Choose a unique name for each module and each version. Don't reuse names between modules and versions.
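These naming rules can be captured in a small validation helper. The regular expression below is an illustrative sketch of the stated constraints (letters, digits, and hyphens; at most 63 characters; no leading or trailing hyphen), not an official App Engine check:

```python
import re

# Sketch of the naming rules above: 1-63 characters, letters, digits, and
# hyphens only, with no leading or trailing hyphen. Illustrative only.
_NAME_RE = re.compile(r'^[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?$', re.IGNORECASE)

def is_valid_name(name):
    """Return True if `name` satisfies the module/version naming rules."""
    return bool(_NAME_RE.match(name))
```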
While running, a particular module/version has one or more instances. Each instance runs its own separate executable. The number of instances running at any time depends on the module's scaling type and the volume of incoming requests.
Stateful services (such as Memcache, Datastore, and Task Queues) are shared by all the modules in an application. Every module, version, and instance has its own unique URI (for example, v1.my-module.my-app.appspot.com). Incoming user requests are routed to an instance of a particular module/version according to URL addressing conventions and an optional customized dispatch file.
Note: After April 2013, Google does not issue SSL certificates for double-wildcard domains hosted at appspot.com. If you rely on such URLs for HTTPS access to your application, change any application logic to use "-dot-" instead of ".". For example, to access version v1 of application myapp, use https://v1-dot-myapp.appspot.com. The certificate will not match if you use https://v1.myapp.appspot.com, and an error occurs for any User-Agent that expects the URL and certificate to match exactly.
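The "-dot-" substitution is mechanical, so it can be sketched as a tiny helper. This is an illustrative utility (not part of any App Engine SDK) that joins address components with "-dot-" so the hostname stays within the single-wildcard *.appspot.com certificate:

```python
def https_safe_url(*parts):
    """Join address components (e.g. version, module, app ID) with "-dot-"
    so the resulting hostname matches the *.appspot.com SSL certificate.
    Illustrative sketch; the component names are supplied by the caller."""
    return "https://%s.appspot.com" % "-dot-".join(parts)
```

For example, `https_safe_url("v1", "myapp")` produces the certificate-safe form of `https://v1.myapp.appspot.com`.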
Scaling types and instance classes
When you upload a version of a module, the configuration file specifies a scaling type and instance class that apply to every instance of that version. The scaling type controls how instances are created. The instance class determines compute resources and pricing. There are three scaling types: manual, basic, and automatic. Available instance classes depend on the scaling type.
|Scaling Type|Description|Available Instance Classes|
|---|---|---|
|Manual Scaling|A module with manual scaling runs continuously, allowing you to perform complex initialization and rely on the state of its memory over time.|B1, B2, B4, B4_1G, or B8|
|Basic Scaling|A module with basic scaling creates an instance when the application receives a request. The instance is turned down when the app becomes idle. Basic scaling is ideal for work that is intermittent or driven by user activity.|B1, B2, B4, B4_1G, or B8|
|Automatic Scaling|Automatic scaling is based on request rate, response latencies, and other application metrics.|F1, F2, F4, or F4_1G|
This table summarizes the CPU, memory, and hourly billing rate of the various instance classes.
|Instance Class|Memory Limit|CPU Limit|Cost per Hour per Instance|
|---|---|---|---|
|B1|128 MB|600 MHz|$0.05|
|B2|256 MB|1.2 GHz|$0.10|
|B4|512 MB|2.4 GHz|$0.20|
|B4_1G|1024 MB|2.4 GHz|$0.30|
|B8|1024 MB|4.8 GHz|$0.40|
|F1|128 MB|600 MHz|$0.05|
|F2|256 MB|1.2 GHz|$0.10|
|F4|512 MB|2.4 GHz|$0.20|
|F4_1G|1024 MB|2.4 GHz|$0.30|
Instances running in manual and basic scaling modules are billed at hourly rates based on uptime. Billing begins when an instance starts and ends fifteen minutes after a manual instance shuts down, or fifteen minutes after a basic instance finishes processing its last request. Runtime overhead is counted against the instance memory limit; this overhead is higher for Java than for other languages.
Important: When you are billed for instance hours, you will not see any instance classes in your billing line items. Instead, you will see the appropriate multiple of instance hours. For example, if you use an F4 instance for one hour, you do not see "F4" listed, but you will see billing for four instance hours at the F1 rate.
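The billing multiples follow directly from the rate table above: each class bills as a multiple of the base (B1/F1) hourly rate. A small sketch of that arithmetic, with the multiplier table derived from the listed prices:

```python
# Billing multiples implied by the rate table above: each instance class
# is billed as a multiple of the base B1/F1 instance-hour rate ($0.05/hr).
INSTANCE_HOUR_MULTIPLE = {
    'B1': 1, 'B2': 2, 'B4': 4, 'B4_1G': 6, 'B8': 8,
    'F1': 1, 'F2': 2, 'F4': 4, 'F4_1G': 6,
}

def billed_instance_hours(instance_class, hours):
    """Instance hours as they appear on the bill; e.g. one F4 hour shows
    up as four instance hours at the F1 rate."""
    return INSTANCE_HOUR_MULTIPLE[instance_class] * hours
```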
This table compares the performance features of the three scaling types:
|Feature|Automatic Scaling|Manual Scaling|Basic Scaling|
|---|---|---|---|
|Deadlines|60-second deadline for HTTP requests, 10-minute deadline for tasks|Requests can run indefinitely. A manually-scaled instance can choose to handle /_ah/start and execute a program or script for many hours without returning an HTTP response code.|Same as manual scaling.|
|Background Threads|Not allowed|Allowed|Allowed|
|Residence|Instances are evicted from memory based on usage patterns.|Instances remain in memory, and state is preserved across requests. When instances are restarted, an /_ah/stop request invokes any registered shutdown hook.|Instances are evicted based on the idle_timeout parameter; an instance that has been idle longer than idle_timeout is turned down.|
|Startup and Shutdown|Instances are created on demand to handle requests and automatically turned down when idle.|Instances are sent a start request automatically by App Engine in the form of an empty GET request to /_ah/start.|Instances are created on demand to handle requests and automatically turned down when idle, based on the idle_timeout configuration parameter.|
|Instance Addressability|Instances are anonymous.|Instance "i" of version "v" of module "m" is addressable at the URL: http://i.v.m.app_id.appspot.com.|Same as manual scaling.|
|Scaling|App Engine scales the number of instances automatically in response to processing volume. This scaling factors in the automatic_scaling settings provided on a per-version basis when a module version is uploaded.|You configure the number of instances of each module version in that module's configuration file. The number of instances usually corresponds to the size of a dataset being held in memory or the desired throughput for offline work. You can adjust the number of instances of a manually-scaled version very quickly, without stopping instances that are currently running, using the Modules API set_num_instances function.|A basic scaling module version is configured with a maximum number of instances using the basic_scaling setting's max_instances parameter; the number of live instances scales with the processing volume.|
|Free Daily Usage Quota|28 instance-hours|8 instance-hours|8 instance-hours|
Configuration

Each version of a module is defined in a .yaml file, which gives the name of the module and version. The yaml file usually takes the same name as the module it defines, but this is not required. If you are deploying several versions of a module, you can create multiple yaml files in the same directory, one for each version.
Typically, you create a directory for each module, which contains the module's yaml file(s) and associated source code. Optional application-level configuration files (dispatch.yaml, cron.yaml, index.yaml, and queue.yaml) are included in the top-level app directory. For example, an app might have three modules: in module1 the source files are contained in a subdirectory, in module2 they are at the same level as the yaml file, and module3 has yaml files for two versions.
For small, simple projects, all the app's files can live in one directory.
Every yaml file must include a version parameter. To define the default module, you can explicitly include the parameter "module: default" or leave the module parameter out of the file.
Each module's configuration file defines the scaling type and instance class for a specific module/version. Different scaling parameters are used depending on which type of scaling you specify. If you do not specify scaling, automatic scaling is the default. The scaling and instance class settings are described in the App Config section.
For each module you can also specify settings that map URL requests to specific scripts and identify static files for better server efficiency. These settings are also included in the yaml file and are described in the App Config section.
The default module
Every application has a single default module. The default module is defined
by the standard
app.yaml file or by an
app.yaml with the setting
module: default. All configuration parameters relevant to modules can apply
to the default module.
Optional configuration files

These configuration files control optional features that apply to all the modules in an app: dispatch.yaml, cron.yaml, index.yaml, and queue.yaml.
If you would like to update these files automatically during each deployment, put them in the top-level app directory and specify the app directory when you issue the appcfg.py update command.
If you place configuration files inside a module's directory (alongside the
app.yaml file for the module), they will be
updated only when you explicitly name that module's yaml file in the
update command. They will not be updated when you update the root app
directory that contains subfolders for each module.
You can also update configuration files individually using the special update commands (update_dispatch, update_cron, update_indexes, update_queues, and update_dos) and specifying the app directory, or by just naming the files themselves.
Here is an example of how you would configure yaml files for an application that has three modules: a default module that handles web requests, plus two more modules that handle mobile requests and backend processing.
Start by defining a configuration file named
app.yaml that will handle all web-related requests:
```yaml
application: simple-sample
version: uno
runtime: python27
api_version: 1
threadsafe: true
```
This configuration would create a default module with automatic scaling and a public address of http://simple-sample.appspot.com.
Next, assume that you want to create a module to handle mobile web requests. For the sake of the mobile users (in this example), the maximum pending latency will be just one second and at least two instances will always be kept idle. To configure this you would create a mobile-frontend.yaml configuration file with the following contents:
```yaml
application: simple-sample
module: mobile-frontend
version: uno
runtime: python27
api_version: 1
threadsafe: true
automatic_scaling:
  min_idle_instances: 2
  max_pending_latency: 1s
```
The module this file creates would then be reachable at http://mobile-frontend.simple-sample.appspot.com.
Finally, add a module called my-module for handling static backend work. This could be a continuous job that exports data from Datastore to BigQuery. The amount of work is relatively fixed, so you simply need one resident instance at any given time. These jobs will also need to handle a large amount of in-memory processing, so you'll want instances with an increased memory configuration. To configure this you would create a my-module.yaml configuration file with the following contents:
```yaml
application: simple-sample
module: my-module
version: uno
runtime: python27
api_version: 1
threadsafe: true
instance_class: B8
manual_scaling:
  instances: 1
```
The module this file creates would then be reachable at http://my-module.simple-sample.appspot.com. Note the manual_scaling: setting. The instances: parameter tells App Engine how many instances to create for this module.
Using the development server with modules

If you are testing locally with the development server (dev_appserver.py) and your app uses more than one module, you must append the names of all the modules' yaml files at the end of the command, rather than using the project directory. The default module must be the first module in the file list:
```shell
cd simple-sample
dev_appserver.py app.yaml mobile-frontend.yaml my-module.yaml
```
Likewise, if you use the GoogleAppEngineLauncher application, open the Application Settings window and add the list of modules to the Launch Settings Extra Flags field.
Uploading and deleting modules
To deploy the example above, use the appcfg.py update command. If you are uploading the app for the first time, the default module must be uploaded first; if you are listing multiple modules, the default module must be the first module in the file list:
```shell
cd simple-sample
appcfg.py update app.yaml mobile-frontend.yaml my-module.yaml
```
You will receive verification via the command line as each module is successfully deployed.
Once the application has been successfully deployed, you can access it at http://simple-sample.appspot.com. You can also access each of the modules individually; for example, http://mobile-frontend.simple-sample.appspot.com reaches the mobile-frontend module. If you run multiple versions of a module, you can access a specific version by prepending the version name to the URI. For example, http://uno.simple-sample.appspot.com will target version uno of the default module. Routing Requests to Modules explains addressing module instances in detail.
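The addressing conventions above can be summarized in a small helper that composes these hostnames. This is an illustrative sketch, not part of the Modules API (which provides get_hostname for this purpose); remember that for HTTPS you would substitute "-dot-" for the dots:

```python
def module_hostname(app_id, module=None, version=None, instance=None):
    """Compose an appspot hostname per the addressing convention:
    [instance.][version.][module.]app_id.appspot.com. Omitted components
    fall through, so the default module and version need not be named.
    Illustrative sketch only."""
    parts = [p for p in (instance, version, module, app_id) if p]
    return "%s.appspot.com" % ".".join(parts)
```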
To delete a module you must delete every version that's been uploaded. Use the Cloud Platform Console Versions page to delete versions.
Instance states

A manual or basic scaling instance can be in one of two states: running or stopped. All instances of a particular module/version share the same state. You can change the state of all the instances belonging to a module/version using the appcfg command or the Modules API.
Startup

Each module instance is created in response to a start request, which is an empty GET request to /_ah/start. App Engine sends this request to bring an instance into existence; users cannot send a request to /_ah/start. Manual and basic scaling instances must respond to the start request before they can handle another request. The start request can be used for two purposes:
- To start a program that runs indefinitely, without accepting further requests
- To initialize an instance before it receives additional traffic
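Since the Python runtime serves WSGI applications, a start handler can be sketched as a plain WSGI app. This is a minimal illustration, not App Engine's actual handler registration; initialize_state is a hypothetical placeholder for your own warm-up logic:

```python
def initialize_state():
    # Hypothetical placeholder: load datasets, warm caches, open
    # connections, etc., before the instance takes user traffic.
    pass

def app(environ, start_response):
    """Minimal WSGI sketch that answers the empty GET /_ah/start request
    with 200 so the instance is considered successfully started."""
    if environ.get('PATH_INFO') == '/_ah/start':
        initialize_state()
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'started']
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']
```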
Manual, basic, and automatically scaling instances start up differently. When you start a manual scaling instance, App Engine immediately sends a /_ah/start request to each instance. When you start an instance of a basic scaling module, App Engine allows it to accept traffic, but the /_ah/start request is not sent to an instance until it receives its first user request. Multiple basic scaling instances are started only as necessary, in order to handle increased traffic. Automatically scaling instances do not receive any /_ah/start request.
When an instance responds to the /_ah/start request with an HTTP status code of 200–299 or 404, it is considered to have successfully started and can handle additional requests. Otherwise, App Engine terminates the instance. Manual scaling instances are restarted immediately, while basic scaling instances are restarted only when needed for serving traffic.
Shutdown

The shutdown process may be triggered by a variety of planned and unplanned events, such as:
- You manually stop an instance using the appcfg stop command or the Modules API.
- You manually stop an instance from the Cloud Platform Console Instances page.
- You update the module version using appcfg update.
- The instance exceeds the maximum memory for its configured instance_class.
- Your application runs out of Instance Hours quota.
- The machine running the instance is restarted, forcing your instance to move to a different machine.
- App Engine needs to move your instance to a different machine to improve load distribution.
There are two ways for an app to determine that shutdown is imminent. First, the runtime.is_shutting_down() method begins returning true. Second, if you have registered a shutdown hook, it will be called; it's a good idea to register a shutdown hook in your start request. After the shutdown notification is issued, existing requests are given 30 seconds to complete, and new requests immediately return 404.
If an instance is handling a request, App Engine pauses the request and runs the shutdown hook. If there is no active request, App Engine sends an
/_ah/stop request, which runs the shutdown hook. The
/_ah/stop request bypasses normal handling logic and cannot be handled by user code; its sole purpose is to invoke the shutdown hook. If you raise an exception in your shutdown hook while handling another request, it will bubble up into the request, where you can catch it.
If you have enabled concurrent requests by specifying
threadsafe: true in
app.yaml (which is the default), raising an exception from a shutdown hook copies that exception to all threads. The following code sample demonstrates a basic shutdown hook:
```python
from google.appengine.api import apiproxy_stub_map
from google.appengine.api import runtime

def my_shutdown_hook():
    apiproxy_stub_map.apiproxy.CancelApiCalls()
    save_state()
    # May want to raise an exception

runtime.set_shutdown_hook(my_shutdown_hook)
```
Alternatively, the following sample demonstrates how to use the is_shutting_down() method:
```python
while more_work_to_do and not runtime.is_shutting_down():
    do_some_work()
    save_state()
```
Instance uptime

App Engine attempts to keep manual and basic scaling instances running indefinitely. However, at this time there is no guaranteed uptime for manual and basic scaling instances. Hardware and software failures that cause early termination or frequent restarts can occur without prior warning and may take considerable time to resolve; thus, you should construct your application in a way that tolerates these failures. The App Engine team will provide more guidance on expected instance uptime as statistics become available.
Here are some good strategies for avoiding downtime due to instance restarts:
- Use load balancing across multiple instances.
- Configure more instances than are normally required to handle your traffic patterns.
- Write fall-back logic that uses cached results when a manual scaling instance is unavailable.
- Reduce the amount of time it takes for your instances to start up and shut down.
- Duplicate the state information across more than one instance.
- For long-running computations, checkpoint the state from time to time so you can resume it if it doesn't complete.
It's also important to recognize that the shutdown hook is not always able to run before an instance terminates. In rare cases, an outage can occur that prevents App Engine from providing 30 seconds of shutdown time. Thus, we recommend periodically checkpointing the state of your instance and using it primarily as an in-memory cache rather than a reliable data store.
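The checkpointing advice above can be sketched as a work loop that persists progress every few items, so a restarted instance resumes from the last checkpoint instead of starting over. The process and save_checkpoint callbacks here are hypothetical stand-ins; in a real module the checkpoint would be written to a durable store such as Datastore:

```python
def process_with_checkpoints(items, process, save_checkpoint, start=0, interval=5):
    """Work-loop sketch: process `items` from index `start`, calling
    save_checkpoint(next_index) every `interval` items so a restarted
    instance can resume where it left off. `process` and `save_checkpoint`
    are hypothetical callbacks supplied by the caller."""
    for i in range(start, len(items)):
        process(items[i])
        if (i + 1) % interval == 0:
            save_checkpoint(i + 1)  # durable progress marker
    save_checkpoint(len(items))    # final checkpoint: all work done
```

On restart, the app reads the last saved index and passes it as `start`, so at most `interval` items are reprocessed.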
Background threads

Code running on a manual scaling instance can start a background thread that may outlive the request that spawns it. This allows instances to perform arbitrary periodic or scheduled tasks or to continue working in the background after a request has returned to the user.
A background thread's os.environ and logging entries are independent of those of the spawning thread. The BackgroundThread class is like the regular Python threading.Thread class, but can "outlive" the request that spawns it:
```python
from google.appengine.api import background_thread

def f(arg1, arg2, **kwargs):
    pass  # ...something useful...

t = background_thread.BackgroundThread(target=f, args=["foo", "bar"])
t.start()
```
There is also a function, start_new_background_thread(), which creates a background thread and starts it:

```python
from google.appengine.api import background_thread

def f(arg1, arg2, **kwargs):
    pass  # ...something useful...

tid = background_thread.start_new_background_thread(f, ["foo", "bar"])
```

The maximum number of concurrent background threads is 10 per instance.
Monitoring resource usage
The Instances page of the Cloud Platform Console provides visibility into how instances are performing. By selecting your module and version in the dropdowns, you can see the memory and CPU usage of each instance, uptime, number of requests, and other statistics. You can also manually initiate the shutdown process for any instance.
You also can use the Runtime API to access statistics showing the CPU and memory usage of your instances. These statistics help you understand how resource usage responds to requests or work performed, and also how to regulate the amount of data stored in memory in order to stay below the memory limit of your instance class.
You can use the Logs API to access your app's request and application logs. In particular, the logservice.fetch() function allows you to retrieve logs using various filters, such as request ID, timestamp, module ID, and version ID.
Application (user/app-generated) logs are periodically flushed while manual and basic scaling instances handle requests; because a module can spend a long time on a single request, logs may not be flushed for a while. You can tune the flush settings, or force an immediate flush, using the Logs API. When a flush occurs, a new log entry is created at the time of the flush, containing any log messages that have not yet been flushed. These entries show up in the Cloud Platform Console Logs page marked with flush, and include the start time of the request that generated the flush.
Communication between modules
Modules can share state by using the Datastore and Memcache. They can collaborate by assigning work between them using Task Queues. To access these shared services, use the corresponding App Engine APIs. Calls to these APIs are automatically mapped to the application’s namespace.
The Modules API provides functions to retrieve the address of a module, a version, or an instance. This allows an application to send requests from one module, version, or instance to another module, version, or instance. This works in both the development and production environments. The Modules API also provides functions that return information about the current operating environment (module, version, and instance).
The following code sample shows how to get the module name and instance id for a request:
```python
from google.appengine.api import modules

module = modules.get_current_module_name()
instance = modules.get_current_instance_id()
```
The instance ID of an automatically scaled module is returned as a unique base64-encoded value.
You can communicate between modules in the same app by fetching the hostname of the target module:
```python
import urllib2
from google.appengine.api import modules

url = "http://%s/" % modules.get_hostname(module="my-backend")
try:
    result = urllib2.urlopen(url)
    doSomethingWithResult(result)
except urllib2.URLError, e:
    handleError(e)
```
You can also use the URL Fetch service.
To be safe, the receiving module should validate that the request is coming from a valid client. You can check that the Inbound-AppId header or the User-Agent string matches the app ID fetched with the App Identity service, or you can use OAuth to authenticate the request.
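As an illustrative sketch, the header check can be modeled as WSGI middleware. The X-Appengine-Inbound-Appid header name (which appears as HTTP_X_APPENGINE_INBOUND_APPID in the WSGI environ) and the literal expected app ID are assumptions here; in production you would obtain the app ID from the App Identity service rather than hard-coding it:

```python
def require_inbound_app_id(app, expected_app_id):
    """WSGI middleware sketch: reject requests whose inbound app-ID header
    does not match `expected_app_id`. In a real module, expected_app_id
    would come from the App Identity service, not a literal."""
    def guarded(environ, start_response):
        inbound = environ.get('HTTP_X_APPENGINE_INBOUND_APPID')
        if inbound != expected_app_id:
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'forbidden']
        return app(environ, start_response)
    return guarded
```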
You can configure any manual or basic scaling module to accept requests from other modules in your app by restricting its handler to only allow administrator accounts, specifying
login: admin for the appropriate handler in the module's configuration file. With this restriction in place, any URLFetch from any other module in the app will be automatically authenticated by App Engine, and any request that is not from the application will be rejected.
If you want a module to receive requests from anywhere, you must code your own secure solution as you would for any App Engine application. This is usually done by implementing a custom API and authentication mechanism.
The maximum number of modules and versions that you can deploy depends on your app's pricing:
|Limit|Free App|Paid App|
|---|---|---|
|Maximum Modules Per App|5|20|
|Maximum Versions Per App|15|60|
There is also a limit to the number of instances for each module with basic or manual scaling:
|Maximum Instances per Manual/Basic Scaling Version|||
|---|---|---|
|Free App|Paid App US|Paid App EU|