
A dozen reasons why Cloud Run complies with the Twelve-Factor App methodology

July 22, 2019
Katie McLaughlin

Senior Developer Relations Engineer

With the recent release of Cloud Run, it's now even easier to deploy serverless applications on Google Cloud Platform (GCP) that are automatically provisioned, scaled up, and scaled down. But in a serverless world, being able to ensure your service meets the twelve factors is paramount. 

The Twelve-Factor App denotes a paradigm that, when followed, should make it frictionless for you to scale, port, and maintain web-based software as a service. The more of these factors your environment satisfies, the better.

So, on a scale of 1 to 12, just how twelve-factor compatible is Cloud Run? Let’s take the factors, one by one. 

The Twelve Factors



I. One codebase tracked in revision control, many deploys

Each service you intend to deploy on Cloud Run should live in its own repository, whatever your choice of source control software. When you want to deploy your service, you need to build the container image, then deploy it. 

For building your container image, you can use Docker, or Cloud Build, GCP's own build system. 

You can even supercharge your deployment story by integrating Build Triggers, so any time you, say, merge to master, your service builds, pushes, and deploys to production.

You can also deploy an existing container image, as long as it listens on the port specified by the PORT environment variable, or find one of the many images sporting a shiny Deploy on Cloud Run button. 


II. Explicitly declare and isolate dependencies

Since Cloud Run is a bring-your-own container environment, you can declare whatever you want in it, and the container encapsulates the entire environment. Nothing escapes, so two containers won't conflict with one another. 

You declare your dependencies explicitly in the container image itself, for example in a Dockerfile and a language-specific manifest such as requirements.txt or package.json, so your build never relies on packages implicitly present on the host.
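As an illustrative sketch, a Dockerfile for a small Python service might declare and isolate its dependencies like this (the gunicorn server and the file names are assumptions for the example, not anything Cloud Run requires):

```dockerfile
# Pin the language runtime the service depends on.
FROM python:3.7-slim

WORKDIR /app

# Declare dependencies explicitly and install them inside the
# container, isolated from whatever is on the host machine.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Cloud Run supplies the PORT environment variable at runtime.
CMD exec gunicorn --bind :$PORT main:app
```

Because the image carries its entire environment with it, two services with conflicting dependency trees can run side by side without interfering.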

It is important to note that there are some limitations on what you can put into a Cloud Run container due to environment sandboxing, and on which ports can be used (which we'll cover in Section VII).


III. Store config in the environment

Yes, Cloud Run stores configuration in the environment by default. 

Your code goes in your container, and configuration is captured in the specification of your Service. Configuration includes, for example, the amount of memory, the CPU, and any environment variables. These can be declared when you create the service, under Optional Settings. 

Don't worry if you miss setting environment variables when you create your service. You can always edit them by clicking "+ Deploy New Revision" when viewing your service, or by using the --update-env-vars flag with gcloud beta run deploy.
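On the application side, reading that configuration back out of the environment is straightforward. Here's a minimal Python sketch (the variable names are illustrative, not anything Cloud Run defines):

```python
import os

def get_config():
    """Read service configuration from the environment.

    Defaults keep local development working when a variable is unset;
    on Cloud Run, the values come from the revision's env vars.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }
```

Because the code only ever looks at the environment, the same container image can run unchanged in development, staging, and production.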

Each revision you deploy is immutable: its configuration is frozen, which makes revisions reproducible. To make changes, you must deploy a new revision. 

For bonus points, consider using berglas, which leverages Cloud KMS and Cloud Storage to secure your environment variables. It works out of the box with Cloud Run (and the repo even comes with multiple language examples).


IV. Treat backing services as attached resources

Much like you would connect to any external database in a containerized environment, you can connect to a plethora of different hosts in the GCP universe.

And since your service cannot have any internal state, to have any state you must use a backing service.
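As a sketch of the attached-resource idea, here's a Python example that picks up the location of its datastore from the environment; sqlite3 stands in for a real managed database such as Cloud SQL, and the DATABASE_PATH variable name is an assumption for the example:

```python
import os
import sqlite3

def connect_backing_store():
    """Attach to the backing service named by the environment.

    The database is an attached resource: pointing the service at a
    different datastore is a configuration change, not a code change.
    (sqlite3 stands in here for a real managed database.)
    """
    path = os.environ.get("DATABASE_PATH", ":memory:")
    return sqlite3.connect(path)
```

Swapping staging for production then just means deploying a revision with a different DATABASE_PATH.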


V. Strictly separate build and run stages

Having separate build and run stages is how you deploy in Cloud Run land! If you set up your Continuous Deployment back in Section I, then you’ve already automated that step. 

If you haven't, building a new version of your Cloud Run service is as easy as building your container image: 

gcloud builds submit --tag gcr.io/YOUR_PROJECT/YOUR_IMAGE 

to take advantage of Cloud Build, and deploying the built container image: 

gcloud beta run deploy YOUR_SERVICE --image gcr.io/YOUR_PROJECT/YOUR_IMAGE

Cloud Run creates a new revision of the service, ensures the container starts, and then re-routes traffic to this new revision for you. If for any reason your container image encounters an error, the service is still active with the old version, and no downtime occurs. 

You can also create a continuous deployment by configuring Cloud Run automations using Cloud Build triggers, further streamlining your build, release, and run process. 


VI. Execute the app as one or more stateless processes

Each Cloud Run service runs its own container, and each container should have one process. If you need multiple concurrent processes, separate those out into different services, and use a stateful backing service (Section IV) to communicate between them. 


VII. Export services via port binding

Cloud Run follows modern architectural best practices: each Service must expose itself on a port number, specified by the PORT environment variable.

This is the fundamental design of Cloud Run: any container you want, as long as it listens on port 8080.

Cloud Run does support outbound gRPC and WebSockets, but does not currently work with these protocols inbound.


VIII. Scale out via the process model

Concurrency is a first-class factor in Cloud Run. You declare the maximum number of concurrent requests your container can receive. Using this maximum and other factors, Cloud Run will automatically scale by adding more container instances to handle incoming requests.
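One practical consequence: because a single container instance may be handling many requests at once, any in-process shared state must be guarded. A toy sketch of the kind of care that implies (durable state still belongs in a backing service, per Sections IV and VI):

```python
import threading

class RequestCounter:
    """Toy in-process metric.

    Under Cloud Run's concurrency model, request handlers can run on
    multiple threads at once, so even a simple counter needs a lock.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def increment(self):
        with self._lock:
            self._count += 1
            return self._count
```

If your service can't safely handle concurrent requests, you can also declare a maximum concurrency of 1 and let Cloud Run scale purely by adding instances.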


IX. Maximize robustness with fast startup and graceful shutdown

Since Cloud Run handles scaling for you, it's in your best interest to ensure your services are as efficient as they can be. The faster they start up, the more seamless scaling will be. 

There are a number of tips around how to write effective services, so be sure to consider the size of your containers, the time they take to start up, and how gracefully they handle errors without terminating. 


X. Keep development, staging, and production as similar as possible

A container-based development workflow means that your local machine can be the development environment and Cloud Run can be your production environment. Even on a non-Linux machine, a local Docker container should behave the same way as the same container running elsewhere, so it's always a good idea to test your container locally while developing; it makes for a much tighter iteration loop. To get the same port-binding behavior as Cloud Run in production, make sure you run with a port flag: 

PORT=8080 && docker run -p 8080:${PORT} -e PORT=${PORT} gcr.io/[PROJECT_ID]/[IMAGE]

When testing locally, consider whether you're using any external GCP services, and make sure you point Docker at the appropriate authentication credentials.

Once you've confirmed your service is sound, you can deploy the same container to a staging environment, and after confirming it's working as intended there, to a production environment. 

A GCP Project can host many services, so it's recommended that your staging and production (or green and blue, or however you wish to call your isolated environments) are separate projects. This also ensures isolation between databases across environments. 


XI. Treat logs as event streams

Cloud Run uses Stackdriver Logging out of the box. The "Logs" tab on your Cloud Run service view shows you what's going on under the covers, including log aggregation across all dynamically created instances. Stackdriver Logging automatically captures stdout and stderr, and there may also be a native client for Logging in your preferred programming language. 

In addition, since logs are captured in Stackdriver Logging, you can then use the Stackdriver Logging tools to further work with your logs; for example, exporting them to BigQuery.


XII. Run admin/management tasks as one-off processes

Administration tasks are outside the scope of Cloud Run. If you need to do any project configuration, database administration, or other management changes, you can perform these tasks using the GCP Console, the gcloud CLI, or Cloud Shell.

A near-perfect score, as a matter of fact(or)

With the exception of one factor that falls outside its scope, Cloud Run maps nearly perfectly onto the Twelve-Factor methodology, which means it will map well to scalable, manageable infrastructure for your next serverless deployment. To learn more about Cloud Run, check out this quickstart.
