Security design overview

Learn how Cloud Run implements security best practices to protect your data and explore how to use these features to meet your security requirements.

Architecture

Cloud Run runs on top of Borg in the same environment where Google deploys billions of containers a week, hosting some of the biggest sites in the world, including Gmail and YouTube. Because Cloud Run components share the same infrastructure, they are built to the same security standards as other Google services.

To learn more about our approach to security, read the Google security overview whitepaper.

The Cloud Run architecture contains many different infrastructure components. The following diagram shows how these components respond to requests to your service and Cloud Run Admin API calls:

Figure 1. Diagram of Cloud Run infrastructure components.

Requests to your service

When a request is made to your Cloud Run service either through your custom domain or directly to your run.app URL, the request is handled by the following components:

  • Google Front End (GFE): the Google global infrastructure service that terminates TLS connections and applies protections against DoS attacks for requests made to the run.app URL. Because Cloud Run is a regional service, the GFE forwards each request to Cloud Run in the appropriate region.
  • Google Cloud load balancer: when you set up Cloud Load Balancing to handle your custom domain, it includes the GFE functionality mentioned previously. You can also configure Google Cloud load balancers to perform additional functions, such as traffic management and access control.
  • HTTP proxy: a zonal component that load balances incoming HTTP requests to the instances of your sandboxed applications.
  • Scheduler: selects the app servers to host instances of your sandboxed applications.
  • App server: a zonal and multi-tenant compute node that creates and manages the sandboxes that run the instances of each application's container.
  • Sandbox: isolates user code from the system and other customers. Learn more in the following Compute security section.
  • Storage: exposes a file server interface for container images imported from supported container registries.
  • Metadata server: provides sandbox-specific credentials and metadata (see the sketch after this list).
  • Outbound networking: manages outbound traffic initiated by the sandbox.
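
As an illustration of the metadata server's role, the following minimal Python sketch shows how code running inside an instance might request an access token for its runtime service account. The endpoint and required header are part of the documented metadata server interface; error handling is omitted.

    import json
    import urllib.request

    # The metadata server is reachable only from inside a running instance.
    TOKEN_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                 "instance/service-accounts/default/token")

    request = urllib.request.Request(TOKEN_URL,
                                     headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(request) as response:
        token = json.load(response)

    # The token authenticates outbound calls as the instance's service account.
    print(token["expires_in"], "seconds until the access token expires")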

Cloud Run Admin API calls

When a request is made to the Cloud Run Admin API, the request is handled by the following components:

  • Google Front End (GFE): the Google global infrastructure service that terminates TLS connections and applies protections against DoS attacks.
  • Control plane: validates and writes your application configurations to storage.
  • Config storage: stores application configurations in Spanner and Bigtable for access by other components, such as app server, scheduler, and networking elements.
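
For reference, the following sketch shows one way to issue a Cloud Run Admin API call that flows through these components, using the google-cloud-run Python client library. The project, region, and service names are hypothetical.

    from google.cloud import run_v2  # pip install google-cloud-run

    client = run_v2.ServicesClient()

    # Hypothetical project, region, and service names.
    name = "projects/my-project/locations/us-central1/services/my-service"
    service = client.get_service(name=name)

    # The control plane reads the stored configuration and returns it.
    print(service.uri)
    print(service.latest_ready_revision)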

Compute security

Cloud Run components run on Google's container-management system, Borg. For your containers, Cloud Run offers two execution environments:

  • First generation: Based on the gVisor container security platform, this option has a small codebase, which provides a smaller attack surface. Every change is security-reviewed and most changes are written in a memory-safe language. Further hardening is achieved by using Secure Computing Mode (seccomp) system call filtering.

  • Second generation: Based on Linux microVMs, this option provides more compatibility and performance for custom workloads. Further hardening is achieved by using seccomp system call filtering and Sandbox2 Linux namespaces.

Both of these execution environments use two layers of sandboxing consisting of a hardware-backed layer equivalent to individual VMs (x86 virtualization) and a software kernel layer, as shown in the following diagram:

Figure 2. In both execution environments, the user container is isolated from other workloads through two layers of sandboxing.

If your service makes use of third-party infrastructure for securing containers, use the second generation execution environment.
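
If you need to move an existing service to the second generation environment, the following sketch shows one way to do so with the google-cloud-run Python client library; the resource names are hypothetical, and the same setting can also be changed in the console or with the gcloud CLI.

    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    service = client.get_service(
        name="projects/my-project/locations/us-central1/services/my-service"
    )

    # Request the second generation execution environment for new revisions.
    service.template.execution_environment = (
        run_v2.ExecutionEnvironment.EXECUTION_ENVIRONMENT_GEN2
    )

    # Deploys a new revision with the updated setting.
    operation = client.update_service(service=service)
    operation.result()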

Data encryption and storage

Cloud Run instances are stateless: when an instance is terminated, its state is discarded, and every new instance starts from a clean slate.

If you have stateful data, persist it outside the instance, for example in Cloud Storage or a managed database, as sketched below.
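
As a minimal sketch, the following Python function writes state to Cloud Storage instead of the instance's local file system; the bucket and object names are hypothetical, and the service's identity needs write access to the bucket.

    from google.cloud import storage  # pip install google-cloud-storage

    # Hypothetical bucket; grant the service's identity write access to it.
    BUCKET_NAME = "my-state-bucket"

    def save_state(payload: str) -> None:
        """Persist state outside the stateless instance."""
        client = storage.Client()
        blob = client.bucket(BUCKET_NAME).blob("state/latest.json")
        blob.upload_from_string(payload, content_type="application/json")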

Beyond that, Cloud Run integrates with many other Google Cloud systems, such as Cloud SQL and Firestore, to manage and access your data.

Across Google Cloud, all your data is encrypted at rest.

Cloud Run complies with Google Cloud-wide initiatives for data protection and transparency, including access transparency and data residency.

Network security

Cloud Run and all other Google Cloud services encrypt all traffic in transit. You can incorporate both egress and ingress controls into your Cloud Run services or jobs to add an additional layer of restriction. Organization administrators can also enforce egress and ingress by setting organization policies.

Egress (outbound) traffic

Egress traffic that exits Cloud Run is handled at the transport layer (layer 4, TCP and UDP).

By default, egress traffic takes one of the following paths when exiting Cloud Run:

  • Target destination is in VPC network: traffic travels to a VPC network or Shared VPC network in your project by using Direct VPC egress or a Serverless VPC Access connector. The connector is a regional resource that sits directly on the VPC network.
  • Target destination is not in VPC network: traffic routes directly to the target destination within Google's network or the public internet.
Figure 3. Egress traffic can be proxied through a connector to a VPC network. It can also travel directly to a VPC or to a non-VPC network (Preview).

Controlling egress

For additional control over egress traffic, use the VPC egress setting to route all your traffic to your VPC network by using Direct VPC egress or connectors.
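
The following sketch shows one way to apply that setting with the google-cloud-run Python client library, routing all egress through a Serverless VPC Access connector; the project, service, and connector names are hypothetical, and the same configuration is available in the console and the gcloud CLI.

    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    service = client.get_service(
        name="projects/my-project/locations/us-central1/services/my-service"
    )

    # Send all outbound traffic through a (hypothetical) VPC connector.
    service.template.vpc_access = run_v2.VpcAccess(
        connector=(
            "projects/my-project/locations/us-central1/connectors/my-connector"
        ),
        egress=run_v2.VpcAccess.VpcEgress.ALL_TRAFFIC,
    )
    client.update_service(service=service).result()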

Once it's on the VPC network, you can use VPC tools to manage the traffic, for example VPC firewall rules to restrict destinations or Cloud NAT to control how traffic reaches the internet.

Organization administrators can also enforce egress by setting the Allowed VPC egress settings (Cloud Run) list constraint.

Ingress (inbound) traffic

In contrast to egress, Cloud Run's ingress traffic is at application layer 7 (HTTP).

Cloud Run accepts ingress traffic from the following sources:

  • Public internet: requests are routed directly from public sources to your Cloud Run services, with the option of routing traffic through an external Application Load Balancer.

  • VPC network: you can route traffic from a VPC network to Cloud Run services by using Private Google Access, Private Service Connect, or an internal Application Load Balancer. Traffic of this type always stays within Google's network.

  • Google Cloud services: traffic travels directly to Cloud Run from other Google Cloud services, such as BigQuery or even Cloud Run itself. In some cases, you can also configure these services to route through a VPC network. Traffic of this type always stays within Google's network.

Figure 4. Layer 7 HTTP network ingress (inbound) traffic to Cloud Run.

Cloud Run's network security model includes the following ingress traffic properties:

  • Direct traffic to the run.app URL: the run.app URL always requires HTTPS for traffic to enter Cloud Run. Google's frontend-serving infrastructure terminates TLS and then forwards the traffic to Cloud Run and to your container through an encrypted channel.
  • Traffic to a custom domain associated with your Google Cloud load balancer: For HTTPS traffic, Google Cloud internal and external load balancers terminate TLS and forward traffic to Cloud Run and your container through an encrypted channel. Google Cloud load balancers also let you apply additional security features such as IAP, Google Cloud Armor, and SSL policies.

For more information about configuring VPC network traffic to Cloud Run, see Receive requests from VPC networks.

Controlling ingress

Cloud Run ingress controls manage what traffic enters Cloud Run to ensure that traffic comes only from trusted sources.

For Cloud Run services that serve only internal clients, you can configure the "internal" setting so that only traffic from the following internal sources can enter Cloud Run (see the sketch after this list):

  • VPC networks in your project or VPC Service Controls perimeter, including Cloud Run services that route all their traffic through the VPC network.
  • The Shared VPC network that the Cloud Run service is attached to.
  • Some Google Cloud services, such as BigQuery, that are in your project or VPC Service Controls perimeter.
  • Traffic from on-premises clients that traverses your VPC network to reach Cloud Run.
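
As a minimal sketch, the following Python snippet uses the google-cloud-run client library to switch a service to internal-only ingress; the resource names are hypothetical, and the setting is equally available in the console and the gcloud CLI.

    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    service = client.get_service(
        name="projects/my-project/locations/us-central1/services/my-service"
    )

    # Accept traffic only from internal sources. Use
    # INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER to also allow traffic that
    # arrives through Cloud Load Balancing.
    service.ingress = run_v2.IngressTraffic.INGRESS_TRAFFIC_INTERNAL_ONLY
    client.update_service(service=service).result()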

Organization administrators can also enforce ingress by setting organization policies.

For more information about controlling ingress, see Restricting ingress for Cloud Run.

Access control

Access controls restrict who can access your Cloud Run services and jobs.

Who can manage your service or job

To control who manages your Cloud Run service or job, Cloud Run uses IAM for authorizing users and service accounts.

What your service or job can access

To control what your Cloud Run workloads can reach via the network, you can force all traffic through the VPC network and apply VPC firewall rules, as previously described in the Network security section.

If you're using Direct VPC egress, you can attach network tags to the Cloud Run resource and reference the network tags in the firewall rule. If you're using Serverless VPC Access, you can apply firewall rules to the connector instances.

Use IAM to control what resources your Cloud Run service or job can access. Services and jobs use the Compute Engine default service account by default. For sensitive workloads, use a dedicated service account so that you can grant only the permissions that the workload needs to do its work. Learn more about using per-service identity to manage a dedicated service account. For information about how Cloud Run reminds users to create a dedicated service account, see Secure Cloud Run services with Recommender.
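
As a minimal sketch of per-service identity, the following Python snippet assigns a dedicated service account to a service by using the google-cloud-run client library; the project, service, and service account names are hypothetical.

    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    service = client.get_service(
        name="projects/my-project/locations/us-central1/services/my-service"
    )

    # Run new revisions as a dedicated, least-privilege service account
    # instead of the Compute Engine default service account.
    service.template.service_account = (
        "my-service-sa@my-project.iam.gserviceaccount.com"
    )
    client.update_service(service=service).result()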

Who can invoke your service or execute your job

Cloud Run provides several different options to control who invokes your service or executes your job.

Ingress controls

For managing ingress of Cloud Run services at the networking level, see Controlling ingress in the previous section.

Cloud Run jobs do not serve requests and therefore do not use ingress controls when executing jobs.

IAM for your service

Cloud Run performs an IAM check on every request.

Use the run.routes.invoke permission to configure who can access your Cloud Run service in the following ways:

  • Grant the permission to select service accounts or groups to allow access to the service. All requests must have an HTTP Authorization header containing an OpenID Connect ID token signed by Google for one of the authorized service accounts (see the sketch after this list).

  • Grant the permission to all users to allow unauthenticated access.
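
As a minimal sketch of an authenticated invocation, the following Python snippet fetches a Google-signed ID token with the google-auth library and sends it in the Authorization header; the service URL is hypothetical, and the credentials that google-auth discovers must belong to an identity that holds the invoker permission.

    import urllib.request

    import google.auth.transport.requests
    import google.oauth2.id_token

    # Hypothetical service URL; it also serves as the ID token audience.
    SERVICE_URL = "https://my-service-abc123-uc.a.run.app"

    auth_request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_request, SERVICE_URL)

    request = urllib.request.Request(
        SERVICE_URL, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(request) as response:
        print(response.status)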

To ensure that only members of your organization can invoke a Cloud Run service, an organization administrator can set the Domain restricted sharing organization policy. Organization administrators can also opt out specific Cloud Run services. Learn how to create public Cloud Run services when domain restricted sharing is enforced.

Learn more about common use cases for authentication and how Cloud Run uses access control with IAM.

Load balancer security features for your service

If you configured a Cloud Run service to be a backend to a Google Cloud load balancer, secure this path by restricting the service's ingress so that traffic must arrive through the load balancer, and by applying load balancer features such as IAP, Google Cloud Armor, and SSL policies.

IAM for your job

Use the run.jobs.run permission to configure who can execute your Cloud Run job in the following ways:

  • Grant the permission to select service accounts or groups to allow access to the job. If the job is triggered by another service, such as Cloud Scheduler, the service account that's used must have the run.jobs.run permission on the job (see the sketch after this list).

  • Grant the permission to the logged-in user to execute a job from the Google Cloud console.
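
As a minimal sketch, the following Python snippet starts a job execution with the google-cloud-run client library; the caller's credentials must carry the run.jobs.run permission on the job, and the resource names are hypothetical.

    from google.cloud import run_v2

    client = run_v2.JobsClient()

    # The caller needs run.jobs.run on this job, for example through the
    # Cloud Run Invoker role.
    operation = client.run_job(
        name="projects/my-project/locations/us-central1/jobs/my-job"
    )
    execution = operation.result()  # waits for the execution to finish
    print(execution.name)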

To ensure that only members of your organization can execute a Cloud Run job, an organization administrator can set the Domain restricted sharing constraint. Organization administrators can also opt out specific Cloud Run jobs.

VPC Service Controls

Your Cloud Run services can be part of a VPC Service Controls perimeter so that you can leverage VPC Service Controls to gate access and mitigate exfiltration risk. Learn more about using VPC Service Controls.

Supply chain security

Google Cloud's buildpacks-managed base images

Services that are deployed from source code using Google Cloud's buildpacks are built using Google-provided base images. Google maintains these base images and provides routine patches on a weekly basis. In emergency situations involving critical security vulnerabilities, we're able to make patches available within hours.

Cloud Run internal supply chain security

Because it runs on Borg, Cloud Run implements all of the same supply chain security that's standard across all Google services, such as Gmail and YouTube. Read more about Google internal supply chain practices in the BeyondProd and Binary Authorization for Borg whitepapers.

Binary Authorization

Cloud Run has built-in support for Binary Authorization to ensure that only trusted container images are deployed on Cloud Run. Read more at Set up overview for Cloud Run.
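
As a minimal sketch, the following Python snippet enables the project's default Binary Authorization policy on a service by using the google-cloud-run client library; the resource names are hypothetical, and the same setting can be applied in the console or with the gcloud CLI.

    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    service = client.get_service(
        name="projects/my-project/locations/us-central1/services/my-service"
    )

    # Enforce the project's default Binary Authorization policy on deployments.
    service.binary_authorization = run_v2.BinaryAuthorization(use_default=True)
    client.update_service(service=service).result()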

Software supply chain security insights

Cloud administrators can view security information about the supply chain of their deployed containers directly from a panel in the Google Cloud console. Read more at View software supply chain security insights.

Execution environment security

Cloud Run supports automatic base image updates for compatible containers. Security updates are applied with zero downtime by rebasing the container on an updated base image.

Services that are deployed from source code with Cloud Run use Google Cloud's buildpacks and are compatible with automatic security updates.

Services with automatic security updates enabled are deployed using Google-provided base images. Google maintains these base images and provides routine patches after a period of stability testing. In emergency situations involving critical security vulnerabilities, we're able to make patches available within hours.

To learn more about execution environment security updates, see how to configure security updates.

What's next

For an end-to-end walkthrough of how to set up networking, see the Cloud Run Serverless Networking Guide.