Cloud Load Balancing: A comprehensive solution for secure and private access to Cloud Run services
Anusheel Pareek
Product Manager, Google Cloud
We are excited to announce General Availability of Cloud Run services as backends to Internal HTTP(S) Load Balancers. This new capability allows you to establish private connectivity between Cloud Run services and other services and clients on Google Cloud, on-premises, or on other clouds. Using an internal load balancer with Cloud Run provides benefits that include using custom domain names, migrating traffic from legacy services, guarding access based on identity and context, and configuring a single internal layer-7 load balancer for multiple services.
As part of this launch, we are also announcing General Availability of Cloud Run services as backends to Regional External HTTP(S) Load Balancers.
About Cloud Run
Cloud Run is a fully managed serverless compute environment that runs containers. Cloud Run completely eliminates the need for users to manage infrastructure, including provisioning, configuring, and managing Kubernetes clusters or VMs. As a developer, you can focus on what matters most to your business — writing applications — without worrying about infrastructure. Besides providing a better developer experience, Cloud Run has a number of other key benefits that make it a preferred serverless environment for enterprises.
About Internal HTTP(S) Load Balancing
Google Cloud Load Balancing is a fully managed service that helps your applications reach planet scale, no matter where you deploy your workloads — cloud or on-prem — while supporting your high availability and security requirements.
Internal HTTP(S) Load Balancer is Google Cloud’s proxy-based, Layer 7 load balancer that enables you to run and scale your services behind an internal IP address. This load balancer is one of our next-generation load balancers that support advanced traffic management capabilities out-of-the-box, including traffic mirroring, weight-based traffic splitting, and request/response-based header transformations, giving you fine-grained control over how traffic is handled. Our load balancers are built on the open-source Envoy Proxy, which allows you to extend your traffic management across Google Cloud, other clouds, or on-premises.
The benefits of Internal HTTP(S) Load Balancing for your Cloud Run services
Securely and privately access Cloud Run services via the internal load balancer
Although Cloud Run services do not run inside a VPC, the Internal HTTP(S) Load Balancer lets you grant private access (over private IP addresses) to Cloud Run services from clients in a VPC network, or even from clients in other networks peered with the load balancer's network. As a result, other services running on Google Cloud VMs can reach your Cloud Run service privately. Similarly, by using VPN tunnels or Interconnect attachments, you can allow your on-prem or other-cloud services to access your Cloud Run deployments.
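For example, once the load balancer is in place, a client VM in the same VPC (or in a peered, VPN-connected, or interconnected network) can reach the Cloud Run service through the load balancer's internal IP. The IP address and host name below are hypothetical placeholders:

```shell
# From a client VM with network reachability to the load balancer's VPC,
# send a request to the load balancer's internal IP address.
# 10.1.2.99 and hello.internal.example.com are placeholder values.
curl -H "Host: hello.internal.example.com" http://10.1.2.99/
```

Because the load balancer fronts the service on a private address, no traffic traverses the public internet, and the Cloud Run default URL never needs to be exposed to clients.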
Configure a custom domain name
You can use the Internal HTTP(S) Load Balancer to set up a custom domain that your private clients use to access your Cloud Run service, rather than the default URL that Cloud Run assigns to a deployed service. You specify the custom domain name or URL for your Cloud Run service while configuring the load balancer's routing rules, and can then share it with other service owners or users who need to access your service. Read more about how the load balancer can route traffic based on domain names, URL paths, and more here.
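As a minimal sketch, a host rule on the load balancer's URL map can map a custom internal domain to the backend service fronting your Cloud Run service. The resource names, region, and domain below are hypothetical:

```shell
# Route requests for a custom internal domain to the backend service
# that fronts the Cloud Run service (names/region are placeholders).
gcloud compute url-maps add-path-matcher my-url-map \
    --region=us-central1 \
    --path-matcher-name=run-matcher \
    --default-service=my-run-backend \
    --new-hosts=hello.internal.example.com
```

You would typically pair this with a private Cloud DNS record that resolves the custom domain to the load balancer's internal IP address, so clients can use the friendly name directly.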
Control who has access using identity-aware proxy
Identity-aware proxy (IAP) helps you verify user identity and use context to determine if a user should be granted access to your Cloud Run service. IAP can be enabled at the internal load balancer’s backend service and helps you centralize all access control configurations. Further, by using the restrictive ingress setting and VPC service control perimeters, you can make your Cloud Run service accessible only via the internal load balancer and resources within the secure VPC perimeter.
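One possible shape of this configuration is sketched below; the resource names, region, and OAuth client values are hypothetical placeholders, and enabling IAP also requires granting the appropriate IAM access policies:

```shell
# Enable IAP on the load balancer's backend service so requests are
# authenticated and authorized before they reach Cloud Run.
# CLIENT_ID/CLIENT_SECRET come from a pre-created OAuth client.
gcloud compute backend-services update my-run-backend \
    --region=us-central1 \
    --iap=enabled,oauth2-client-id=CLIENT_ID,oauth2-client-secret=CLIENT_SECRET

# Restrict the Cloud Run service's ingress so it accepts traffic only
# from internal sources and Cloud Load Balancing, not the public URL.
gcloud run services update my-service \
    --region=us-central1 \
    --ingress=internal-and-cloud-load-balancing
```

Combining IAP at the backend service with the restrictive ingress setting ensures the load balancer is the only path into the service, so the access policy cannot be bypassed via the default Cloud Run URL.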
Migrate traffic from your legacy services
Internal HTTP(S) Load Balancing supports weighted traffic splitting, which allows percentage-based traffic splits across multiple backends. For example, you can start sending a small share of your production traffic, say 1%, to a new version of your service on Cloud Run, and continue sending the remaining traffic to the previous version of the service. Once you validate that your Cloud Run service is working as expected, you can gradually and with confidence shift more traffic to it until 100% of the traffic is shifted. You can also use weighted traffic splitting to shift traffic away from your on-prem services and accelerate your cloud migration journey.
In addition to weighted traffic splitting, Internal HTTP(S) Load Balancing also supports other advanced traffic management capabilities such as routing traffic to a specific service using HTTP request parameters, injecting faults to simulate failures, and more. Read more about these capabilities here.
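A weighted split is expressed as route rules on the load balancer's URL map, which you can edit in YAML and import. The sketch below (project, region, and backend names are hypothetical) sends 1% of traffic to a Cloud Run backend and 99% to a legacy backend:

```shell
# Write a URL map with a weighted route rule, then import it.
# All names and the project ID are placeholders.
cat > url-map.yaml <<'EOF'
name: my-url-map
defaultService: projects/PROJECT_ID/regions/us-central1/backendServices/legacy-backend
hostRules:
- hosts: ['*']
  pathMatcher: split
pathMatchers:
- name: split
  defaultService: projects/PROJECT_ID/regions/us-central1/backendServices/legacy-backend
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      - backendService: projects/PROJECT_ID/regions/us-central1/backendServices/legacy-backend
        weight: 99
      - backendService: projects/PROJECT_ID/regions/us-central1/backendServices/cloud-run-backend
        weight: 1
EOF

gcloud compute url-maps import my-url-map \
    --source=url-map.yaml \
    --region=us-central1
```

To advance the rollout, you would re-import the URL map with updated weights (for example 10/90, then 50/50, then 100/0), rolling back simply by restoring the previous weights.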
Use the same load balancer to distribute traffic to multiple service types
Using the layer-7 routing capabilities of Internal HTTP(S) Load Balancing, you can configure multiple services as backends to the same load balancer. These services can run not just in Cloud Run, but also in Compute Engine, Google Kubernetes Engine, or even on-prem. This approach helps you consolidate all your routing rules and policies in one place, and simplifies network management by letting you associate a single set of hostnames, IP addresses, and SSL certificates with multiple services.
Further, using cross-project service referencing you can use one central load balancer and route traffic to hundreds of services distributed across multiple different Google Cloud projects. This allows you to achieve separation of roles for your functional teams — service owners can focus on building services in their own projects, while network teams can provision and maintain load balancers in other projects, as illustrated in the image below.
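As a sketch of how this separation of roles might look (project names, region, and resource names are hypothetical, and the service project must grant the network project's load balancer the required IAM permissions):

```shell
# The network team owns the URL map in "net-project"; the service team
# owns the backend service in "svc-project". The URL map references the
# backend service across projects by its full resource path.
gcloud compute url-maps add-path-matcher shared-url-map \
    --project=net-project \
    --region=us-central1 \
    --path-matcher-name=team-a \
    --default-service=projects/svc-project/regions/us-central1/backendServices/team-a-backend \
    --new-hosts=team-a.internal.example.com
```

Each service team can then manage its own backend services and Cloud Run deployments independently, while the network team maintains a single shared entry point.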
Regional External HTTP(S) Load Balancing for your Cloud Run services
While it has been possible to deploy Cloud Run with Global External HTTP(S) Load Balancing to enable connectivity for external clients, you now have the option of using a regional external load balancer as well. The Regional External load balancer, as the name suggests, is designed to reside in a single region and connect only with workloads in the same region, thus helping you meet your regionalization requirements (e.g., adhering to localization compliance norms for your business). Further, the Regional External load balancer supports the Standard Network Service Tier, which allows you to optimize for cost-sensitive workloads if you're willing to trade off some network performance.
How do I get started?
Cloud Run services can be specified as a load balancer's backends by using serverless network endpoint groups (NEGs).
Read our setup guides to deploy the load balancer for your Cloud Run services:
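To give a flavor of the first steps, the commands below create a serverless NEG for a Cloud Run service and attach it to an internal load balancer's backend service. Names and region are hypothetical, and a complete setup also needs a proxy-only subnet, URL map, target proxy, and forwarding rule, as the setup guides describe:

```shell
# Create a serverless NEG pointing at the Cloud Run service
# (all resource names and the region are placeholders).
gcloud compute network-endpoint-groups create my-run-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-service

# Create a regional backend service for the internal HTTP(S) load balancer.
gcloud compute backend-services create my-run-backend \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTPS \
    --region=us-central1

# Attach the serverless NEG as the backend.
gcloud compute backend-services add-backend my-run-backend \
    --region=us-central1 \
    --network-endpoint-group=my-run-neg \
    --network-endpoint-group-region=us-central1
```

From there, the setup guides walk through wiring the backend service into a URL map, target proxy, and internal forwarding rule to complete the load balancer.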