Instances are the basic building blocks of App Engine, providing all the resources needed to successfully host your application. This includes the language runtime, the App Engine APIs, and your application's code and memory. Each instance includes a security layer to ensure that instances cannot inadvertently affect each other.
Instances are the computing units that App Engine uses to automatically scale your application. At any given time, your application can be running on one instance or many instances, with requests being spread across all of them.
Introduction to instances
Instances are resident or dynamic. A dynamic instance starts up and shuts down automatically based on the current needs. A resident instance runs all the time, which can improve your application's performance.
An instance instantiates the code included in an App Engine service.
Configuring your app includes specifying how its services scale, including:
- The initial number of instances for a service.
- How new instances are created or stopped in response to traffic.
- How long an instance is allowed to take to handle a request.
The scaling class that you assign to a service determines the kinds of instances that are created:
- Manual scaling services use resident instances.
- Basic scaling services use dynamic instances.
- Auto scaling services use dynamic instances; however, if you specify a number, N, of minimum idle instances, the first N instances will be resident, and additional dynamic instances will be created as necessary.
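As a sketch, the scaling class is declared per service in its app.yaml; the field names below follow the standard-environment configuration format, and the numeric values are illustrative assumptions:

```yaml
# Automatic scaling: dynamic instances; min_idle_instances keeps the
# first N instances resident (N = 2 here, an illustrative value).
automatic_scaling:
  min_idle_instances: 2
  max_idle_instances: 4

# Alternatively, basic scaling (dynamic instances, stopped when idle):
# basic_scaling:
#   max_instances: 5
#   idle_timeout: 10m

# Or manual scaling (resident instances that run until stopped):
# manual_scaling:
#   instances: 3
```

A service can declare only one of these blocks; the commented-out alternatives show the other two classes.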
App Engine charges for instance usage on an hourly basis. You can track your instance usage on the Google Cloud Platform Console Instances page. If you want to set a limit on incurred instance costs, you can do so by setting a spending limit.
Scaling dynamic instances
App Engine applications are powered by any number of dynamic instances at a given time, depending on the volume of incoming requests. As requests for your application increase, so does the number of dynamic instances.
The App Engine scheduler decides whether to serve each new request with an existing instance (either one that is idle or accepts concurrent requests), put the request in a pending request queue, or start a new instance for that request. The decision takes into account the number of available instances, how quickly your application has been serving requests (its latency), and how long it takes to spin up a new instance.
Each instance has its own queue for incoming requests. App Engine monitors the number of requests waiting in each instance's queue. If App Engine detects that queues for an application are getting too long due to increased load, it automatically creates a new instance of the application to handle that load.
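The decision process above can be sketched as a toy model in Python. This is not App Engine's actual scheduler; the `Instance` class, the queue limit, and the spawn rule are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    busy: bool = False   # currently handling a request
    queue: int = 0       # requests waiting in this instance's queue

MAX_QUEUE = 3  # assumed per-instance pending limit

def route(instances):
    """Decide how to serve one incoming request.

    Returns ('existing', i) to reuse idle instance i, ('queued', i) to
    wait in instance i's queue, or ('new', i) after spawning instance i.
    """
    # Prefer an instance that is idle right now.
    for i, inst in enumerate(instances):
        if not inst.busy:
            inst.busy = True
            return ('existing', i)
    # Otherwise queue on the shortest per-instance queue, if it has room.
    if instances:
        i = min(range(len(instances)), key=lambda j: instances[j].queue)
        if instances[i].queue < MAX_QUEUE:
            instances[i].queue += 1
            return ('queued', i)
    # Queues are too long for the current load: start a new instance.
    instances.append(Instance(busy=True))
    return ('new', len(instances) - 1)
```

The real scheduler also weighs serving latency and instance startup time, which this sketch omits.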
App Engine also scales instances in reverse when request volumes decrease. This scaling helps ensure that all of your application's current instances are being used to optimal efficiency and cost effectiveness.
When an application is not being used at all, App Engine turns off its associated dynamic instances, but readily reloads them as soon as they are needed. Reloading instances can result in loading requests and additional latency for users.
You can specify a minimum number of idle instances. Setting an appropriate number of idle instances for your application based on request volume allows your application to serve every request with little latency, unless you are experiencing abnormally high request volume.
Instance life cycle
An instance of an auto-scaled service is always running. However, an instance of
a manual or basic scaled service can be either running or stopped. All instances
of the same service and version share the same state.
You can change the state of your instances by starting or stopping your
versions, either by using the Versions page in the Cloud Platform Console, the
gcloud app versions start and gcloud app versions stop commands, or the
Modules API.
Startup
Each service instance is created in response to a start request, which is a
GET request to /_ah/start. App Engine sends this request to bring
an instance into existence; users cannot send a request to /_ah/start. Manual
and basic scaling instances must respond to the start request before they can
handle another request. The start request can be used for two purposes:
- To start a program that runs indefinitely, without accepting further requests.
- To initialize an instance before it receives additional traffic.
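As an illustrative sketch of the second purpose, a plain WSGI app can perform one-time setup when it sees the start request; `initialize()` is a hypothetical placeholder for whatever setup your service needs:

```python
def initialize():
    # Hypothetical one-time setup: warm caches, open connections, load config.
    pass

def app(environ, start_response):
    if environ.get('PATH_INFO') == '/_ah/start':
        initialize()
        # A successful response tells App Engine the instance has started.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']
    # Normal request handling follows.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']
```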
Manual, basic, and automatically scaling instances start up differently. When you
start a manual scaling instance, App Engine immediately sends a /_ah/start
request to each instance. When you start an instance of a basic scaling service,
App Engine allows it to accept traffic, but the /_ah/start request is not sent
to an instance until it receives its first user request. Multiple basic scaling
instances are started only as necessary to handle increased traffic.
Automatically scaling instances do not receive any /_ah/start request.
When an instance responds to the /_ah/start request with an HTTP status code of
200–299 or 404, it is considered to have successfully started and can
handle additional requests. Otherwise, App Engine terminates the instance.
Manual scaling instances are restarted immediately, while basic scaling
instances are restarted only when needed for serving traffic.
Shutdown
The shutdown process might be triggered by a variety of planned and unplanned events, such as:
- You manually stop an instance.
- You deploy an updated version to the service.
- The instance exceeds the maximum memory for its configured instance_class.
- Your application runs out of Instance Hours quota.
- Your instance is moved to a different machine, either because the current machine that is running the instance is restarted, or App Engine moved your instance to improve load distribution.
There are two ways for your application to determine whether a manual scaling instance is about to be shut down. First, the runtime.is_shutting_down() method starts returning true. Second (and preferred), you can register a shutdown hook, as described below.
When App Engine begins to shut down an instance, existing requests are given 30
seconds to complete, and new requests immediately return
404. If an instance
is handling a request, App Engine pauses the request and runs the shutdown hook.
If there is no active request, App Engine sends an
/_ah/stop request, which
runs the shutdown hook. The
/_ah/stop request bypasses normal handling logic
and cannot be handled by user code; its sole purpose is to invoke the shutdown
hook. If you raise an exception in your shutdown hook while handling another
request, it will bubble up into the request, where you can catch it.
If you have enabled concurrent requests by specifying
threadsafe: true in
app.yaml (which is the default), raising an exception from a shutdown hook
copies that exception to all threads. The following code sample demonstrates a
basic shutdown hook:
```python
from google.appengine.api import apiproxy_stub_map
from google.appengine.api import runtime

def my_shutdown_hook():
  apiproxy_stub_map.apiproxy.CancelApiCalls()
  save_state()
  # May want to raise an exception

runtime.set_shutdown_hook(my_shutdown_hook)
```
Alternatively, the following sample demonstrates how to use the
runtime.is_shutting_down() method:

```python
while more_work_to_do and not runtime.is_shutting_down():
  do_some_work()
  save_state()
```
Loading requests
When App Engine creates a new instance for your application, the instance must first load any libraries and resources required to handle the request. This happens during the first request to the instance, called a loading request. During a loading request, your application undergoes initialization, which causes the request to take longer.
The following best practices allow you to reduce the duration of loading requests:
- Load only the code needed for startup.
- Access the disk as little as possible.
- In some cases, loading code from a zip or jar file is faster than loading from many separate files.
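For example, the first practice can be applied by deferring an expensive import until a handler actually needs it. This sketch uses json as a stand-in for a heavy library, and all names are illustrative:

```python
_parser = None

def get_parser():
    """Import the expensive module on first use instead of at instance startup."""
    global _parser
    if _parser is None:
        import json  # stand-in for a heavy library not needed at startup
        _parser = json
    return _parser

def handle(payload):
    # Startup code never pays the import cost; only this handler does, once.
    return get_parser().loads(payload)
```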
Warmup requests are a specific type of loading request that load application code into an instance ahead of time, before any live requests are made. To learn more about how to use warmup requests, see Warmup Requests.
Instance uptime
App Engine attempts to keep manual and basic scaling instances running indefinitely. However, at this time there is no guaranteed uptime for manual and basic scaling instances. Hardware and software failures that cause early termination or frequent restarts can occur without prior warning and can take considerable time to resolve; thus, you should construct your application in a way that tolerates these failures. The App Engine team will provide more guidance on expected instance uptime as statistics become available.
Here are some good strategies for avoiding downtime due to instance restarts:
- Use load balancing across multiple instances.
- Configure more instances than are normally required to handle your traffic patterns.
- Write fall-back logic that uses cached results when a manual scaling instance is unavailable.
- Reduce the amount of time it takes for your instances to start up and shut down.
- Duplicate the state information across more than one instance.
- For long-running computations, checkpoint the state from time to time so you can resume it if it doesn't complete.
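The checkpointing strategy in the last item can be sketched as follows. The in-memory store dict stands in for a durable store such as the datastore, and all names are illustrative:

```python
def run_job(items, store, checkpoint_every=2):
    """Sum items, recording progress in store so a restart can resume."""
    start = store.get('next_index', 0)   # resume point left by a previous run
    total = store.get('total', 0)
    for i in range(start, len(items)):
        total += items[i]                # the actual work
        if (i + 1) % checkpoint_every == 0:
            store['next_index'] = i + 1  # checkpoint: safe to restart from here
            store['total'] = total
    store['next_index'] = len(items)
    store['total'] = total
    return total
```

If the instance is terminated mid-run, the next instance passes the same store and continues from the last checkpoint instead of starting over.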
Instance billing
In general, instances are charged per-minute for their uptime in addition to a 15-minute startup fee. Billing begins when the instance starts and ends fifteen minutes after the instance shuts down (see Pricing for details). You will be billed only for idle instances up to the maximum number of idle instances that you set for each service. Runtime overhead is counted against the instance memory; this overhead is higher for Java applications than for Python.
Billing is slightly different in resident and dynamic instances:
- For resident instances, billing ends fifteen minutes after the instance is shut down.
- For dynamic instances, billing ends fifteen minutes after the last request has finished processing.
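As a worked example of this rule (the helper name and the hourly rate are illustrative assumptions, not App Engine's actual prices):

```python
def billed_minutes(uptime_minutes, shutdown_padding=15):
    """Billable time: uptime plus the fifteen minutes after shutdown."""
    return uptime_minutes + shutdown_padding

# A dynamic instance whose last request finished 50 minutes after it started:
minutes = billed_minutes(50)        # 65 billable minutes
cost = minutes / 60.0 * 0.05        # assuming a hypothetical $0.05/instance-hour rate
```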