By default, Cloud Run instances are only allocated CPU during request processing, container startup, and shutdown (refer to the instance lifecycle). You can change this behavior so that CPU is always allocated and available even when there are no incoming requests. Setting CPU to always allocated can be useful for running short-lived background tasks and other asynchronous processing tasks.
Even if CPU is always allocated, Cloud Run autoscaling is still in effect, and may terminate instances if they aren't needed to handle incoming traffic. An instance will never stay idle for more than 15 minutes after processing a request unless it is kept active using minimum instances.
Combining CPU always allocated with minimum instances keeps those instances up and running with full access to CPU resources, enabling background processing use cases. When using this pattern, keep in mind that Cloud Run only applies instance autoscaling while a service is handling requests; the CPU usage of instances that are not handling requests is effectively ignored by Cloud Run.
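For example, assuming an existing service, the following command is one way to combine both settings; the instance count is illustrative, and --no-cpu-throttling (the CPU always allocated setting) is described later on this page:
gcloud run services update SERVICE --no-cpu-throttling --min-instances=2
With this configuration, at least two instances stay up with CPU allocated even while no requests are being served.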
If you use healthcheck probes, CPU is allocated for every probe. See container healthcheck probes for billing details.
Pricing impact
If you choose CPU allocated only during request processing, you are charged per request and only while the instance is processing requests. If you choose CPU always allocated, you are charged for the entire lifetime of the instance. See the Cloud Run pricing tables for details.
Google's Recommender automatically analyzes the traffic received by your Cloud Run service over the past month and recommends switching from CPU allocated only during request processing to CPU always allocated if that is cheaper.
How to choose the appropriate CPU allocation
Choosing the appropriate CPU allocation for your use case depends on several factors, such as traffic patterns, background execution, and cost, each of which is described in the following sections.
Traffic patterns considerations
- CPU only allocated during request processing is recommended when incoming traffic is sporadic, bursty, or spiky.
- CPU always allocated is recommended when incoming traffic is steady or slowly varying.
Background execution considerations
Selecting CPU always allocated allows you to execute short-lived background tasks and other asynchronous processing work after returning responses. For example:
- Using monitoring agents, such as OpenTelemetry, that assume they can run in the background.
- Using Go's Goroutines, Node.js async, Java threads, and Kotlin coroutines.
- Using application frameworks that rely on built-in scheduling/background functionalities.
Idle instances, including those kept warm using minimum instances, can be shut down at any time. If you need to finish outstanding tasks before the container is terminated, you can trap SIGTERM to give an instance 10 seconds of grace time before it is stopped.
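As an illustration only (not taken from this page; the handler and the 9-second shutdown timeout are assumptions, and PORT is the port environment variable Cloud Run sets), a Go service might handle SIGTERM along these lines:
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Illustrative handler; replace with your own application routes.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	srv := &http.Server{Addr: ":" + os.Getenv("PORT")}

	// Serve in a goroutine so the main goroutine can wait for SIGTERM.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Cloud Run sends SIGTERM before stopping the instance.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM)
	<-stop

	// Finish outstanding work within the grace period, then shut down cleanly.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}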
Consider using Cloud Tasks for executing asynchronous tasks. Cloud Tasks automatically retries failed tasks and supports running times up to 30 minutes.
Cost considerations
If you are currently using CPU only allocated during request processing, CPU always allocated is probably more economical if:
- Your Cloud Run service is processing a high number of concurrent requests at a fairly steady rate.
- You do not see a lot of "idle" instances when looking at the instance count metric.
You can use the pricing calculator to estimate cost differences.
Autoscaling considerations
Both CPU only allocated during request processing and CPU always allocated are intended for request-driven services.
Cloud Run will only scale out when CPU utilization during request processing exceeds 60%.
If you select CPU always allocated and perform background activities without requests, Cloud Run does not scale out even if CPU usage is over the 60% threshold, and in some cases an instance might become too busy to accept incoming requests.
Set and update CPU allocation
Any configuration change leads to the creation of a new revision. Subsequent revisions will also automatically get this configuration setting unless you make explicit updates to change it.
If you are choosing the always-allocated CPU option, you must specify at least 512 MiB of memory.
By default, CPU is only allocated during request processing for each container instance. You can change this using the Google Cloud console, the gcloud command line, or a YAML file when you create a new service or deploy a new revision:
Console
In the Google Cloud console, go to Cloud Run:
Click Create Service if you are configuring a new service. If you are configuring an existing service, click the service, then click Edit and deploy new revision.
If you are configuring a new service, fill out the initial service settings page as desired, then click Container(s), volumes, networking, security to expand the service configuration page.
Click the Container tab.
Under CPU allocation and pricing, select the desired CPU allocation: select CPU is only allocated during request processing for instances to receive CPU only while they are processing requests, or select CPU is always allocated to allocate CPU for the entire lifetime of instances.
Click Create or Deploy.
Command line
You can update the CPU allocation. To set CPUs to be always allocated for a given service:
gcloud run services update SERVICE --no-cpu-throttling
Replace SERVICE with the name of your service.
To set CPU allocation only during request processing:
gcloud run services update SERVICE --cpu-throttling
You can also set CPU allocation during deployment. To set CPUs to be always allocated:
gcloud run deploy --image IMAGE_URL --no-cpu-throttling
To set CPU allocation only during request processing:
gcloud run deploy --image IMAGE_URL --cpu-throttling
Replace IMAGE_URL with a reference to the container image, for example, us-docker.pkg.dev/cloudrun/container/hello:latest. If you use Artifact Registry, the repository REPO_NAME must already be created. The URL has the shape REGION-docker.pkg.dev/PROJECT_ID/REPO_NAME/PATH:TAG.
YAML
You can download and view existing service configurations using the gcloud run services describe --format export command, which yields cleaned results in YAML format. You can then modify the fields described below and upload the modified YAML using the gcloud run services replace command. Make sure you only modify fields as documented.
To view and download the configuration:
gcloud run services describe SERVICE --format export > service.yaml
Update the run.googleapis.com/cpu-throttling annotation:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: SERVICE
spec:
  template:
    metadata:
      annotations:
        run.googleapis.com/cpu-throttling: 'BOOLEAN'
      name: REVISION
Replace:
- SERVICE with the name of your Cloud Run service.
- BOOLEAN with true to set CPU allocation only during request processing, or false to set CPU to always allocated.
- REVISION with a new revision name, or delete it (if present). If you supply a new revision name, it must meet the following criteria:
  - Starts with SERVICE-
  - Contains only lowercase letters, numbers, and -
  - Does not end with a -
  - Does not exceed 63 characters
Replace the service with its new configuration using the following command:
gcloud run services replace service.yaml
Terraform
To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.
Add the following to a google_cloud_run_v2_service resource in your Terraform configuration, under template.containers.resources.
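A minimal sketch of such a resource is shown below. The resource name, service name, location, and image are placeholders, and the sketch assumes the cpu_idle attribute of the resources block (false corresponds to CPU always allocated, true to CPU only allocated during request processing); check the provider documentation for the exact attribute names.
resource "google_cloud_run_v2_service" "default" {
  name     = "SERVICE"
  location = "us-central1"

  template {
    containers {
      image = "IMAGE_URL"
      resources {
        # false: CPU always allocated; true: CPU only allocated during request processing.
        cpu_idle = false
      }
    }
  }
}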
View CPU allocation settings
To view the current CPU allocation settings for your Cloud Run service:
Console
In the Google Cloud console, go to Cloud Run:
Click the service you are interested in to open the Service details page.
Click the Revisions tab.
In the details panel at the right, the CPU allocation setting is listed under the Container tab.
Command line
Use the following command:
gcloud run services describe SERVICE
Locate the CPU allocation setting in the returned configuration.
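Alternatively, as a sketch that assumes the run.googleapis.com/cpu-throttling annotation shown in the YAML section above, you can export the configuration and filter for that annotation:
gcloud run services describe SERVICE --format export | grep cpu-throttling
A value of 'false' means CPU is always allocated; 'true' means CPU is only allocated during request processing.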