Setting up an internal HTTP(S) load balancer with Cloud Run

This document shows you how to deploy Internal HTTP(S) Load Balancing with Cloud Run. To set this up, you use a serverless NEG backend for the load balancer.

Before you try this procedure, make sure you are familiar with Internal HTTP(S) Load Balancing and serverless NEGs.

Serverless NEGs let you use Cloud Run services with your load balancer. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the Cloud Run backend.

Before you begin

  1. Install Google Cloud CLI.
  2. Deploy a Cloud Run service.
  3. Configure permissions.

Install the Google Cloud CLI

Install the gcloud command-line tool. See gcloud overview for conceptual and installation information about the tool.

If you haven't run the gcloud command-line tool previously, first run gcloud init to initialize your gcloud directory.
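
For example, a minimal first-time setup might look like the following; PROJECT_ID is a placeholder for your project ID:

# Authenticate and create a default configuration (interactive).
gcloud init

# Or, if gcloud is already initialized, point it at the project you want to use.
gcloud config set project PROJECT_ID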

Note: You cannot use the Cloud console to set up an internal HTTP(S) load balancer with a serverless NEG backend.

Deploy a Cloud Run service

The instructions on this page assume you already have a Cloud Run service running.

For the example on this page, you can use any of the Cloud Run quickstarts to deploy a Cloud Run service.

The serverless NEG, the load balancer, and any client VMs must be in the same region as the Cloud Run service.

To prevent access to the Cloud Run service from the Internet, restrict ingress to internal. Traffic from the internal HTTP(S) load balancer is considered internal traffic.

gcloud run deploy CLOUD_RUN_SERVICE_NAME \
  --platform=managed \
  --allow-unauthenticated \
  --ingress=internal \
  --region=REGION \
  --image=IMAGE_URL

Note the name of the service that you create. The rest of this page shows you how to set up a load balancer that routes requests to this service.
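
To confirm the deployment and note the service's name and region, you can describe the service. This is an optional check that uses the same placeholders as the deploy command above:

gcloud run services describe CLOUD_RUN_SERVICE_NAME \
    --platform=managed \
    --region=REGION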

Configure permissions

To follow this guide, you need to be able to create a serverless NEG and a load balancer in your project. You must be a project Owner or Editor, or you must have the following Compute Engine IAM roles:

  • Create load balancer and networking components: Network Admin
  • Create and modify NEGs: Compute Instance Admin
  • Create and modify SSL certificates: Security Admin
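
If you need to grant one of these roles, a project owner can add an IAM policy binding. For example, a minimal sketch that grants the Network Admin role (PROJECT_ID and USER_EMAIL are placeholders):

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"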

Configure the network and subnets

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network. This example uses an IP address range of 10.1.2.0/24 for the subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION
    
  3. Create a VM. This step is required because there must be at least one VM in the VPC network in which you intend to set up a regional load balancer with a serverless backend. If you already have a VM in the network, you don't need to perform this step.

    gcloud compute instances create test-vm \
    --network=lb-network \
    --subnet=lb-subnet \
    --zone=ZONE
    
  4. If you don't already have one, create a proxy-only subnet for all Envoy-based load balancers (internal HTTP(S) load balancers and regional external HTTP(S) load balancers) in the REGION region of the lb-network network.

    Create the proxy-only subnet with the gcloud compute networks subnets create command.
    This example uses an IP address range of 10.129.0.0/23 for the proxy-only subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=lb-network \
    --range=10.129.0.0/23
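
Optionally, you can verify that the network and subnets exist before you continue. This is only a sanity check using describe commands:

gcloud compute networks describe lb-network
gcloud compute networks subnets describe lb-subnet --region=REGION
gcloud compute networks subnets describe proxy-only-subnet --region=REGION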
    

Create the load balancer

In the following diagram, the load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run service.

Figure: Internal HTTP(S) load balancing architecture for a Cloud Run application.

Traffic going from the load balancer to the serverless NEG backends uses special routes defined outside your VPC that are not subject to firewall rules. Therefore, if your load balancer only has serverless NEG backends, you don't need to create firewall rules to allow traffic from the proxy-only subnet to the serverless backend.

  1. Create a serverless NEG for your Cloud Run service:
    gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
        --region=REGION \
        --network-endpoint-type=serverless  \
        --cloud-run-service=CLOUD_RUN_SERVICE_NAME
    
  2. Create a regional backend service. Set the --protocol to either HTTP or HTTPS.
    gcloud compute backend-services create BACKEND_SERVICE_NAME \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=HTTP \
        --region=REGION
    
  3. Add the serverless NEG as a backend to the backend service:
    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --region=REGION \
        --network-endpoint-group=SERVERLESS_NEG_NAME \
        --network-endpoint-group-region=REGION
    
  4. Create a regional URL map to route incoming requests to the backend service:
    gcloud compute url-maps create URL_MAP_NAME \
        --default-service=BACKEND_SERVICE_NAME \
        --region=REGION
    
    This example URL map only targets one backend service representing a single serverless app, so you don’t need to set up host rules or path matchers.
  5. To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS target proxy. You can create an SSL certificate resource by using a self-managed SSL certificate; Google-managed certificates are not supported. If you don't already have a certificate and key, see the openssl sketch at the end of this procedure. To create a regional self-managed SSL certificate resource:
    gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
        --certificate CRT_FILE_PATH \
        --private-key KEY_FILE_PATH \
        --region=REGION
    
  6. Create a regional target proxy to route requests to the URL map.

    For an HTTP load balancer, create an HTTP target proxy:
    gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
        --url-map=URL_MAP_NAME \
        --region=REGION
    
    For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.
    gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
        --ssl-certificates=SSL_CERTIFICATE_NAME \
        --url-map=URL_MAP_NAME \
        --region=REGION
    
  7. Create a forwarding rule to route incoming requests to the proxy. Do not use the proxy-only subnet for the forwarding rule IP address. You can configure any valid IP address from the subnet (lb-subnet). This example uses `10.1.2.99` for the forwarding rule IP address.

    For an HTTP load balancer:
    gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --target-http-proxy=TARGET_HTTP_PROXY_NAME \
        --target-http-proxy-region=REGION \
        --region=REGION \
        --ports=80
    
    For an HTTPS load balancer:
    gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
        --target-https-proxy-region=REGION \
        --region=REGION \
        --ports=443
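
If you don't already have a certificate and key for the HTTPS configuration in step 5, one way to create a self-signed pair for testing is with openssl. This is only a sketch for test environments; test.example.com matches the hostname used in the testing section later on:

# Generate a self-signed certificate and private key for testing purposes only.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=test.example.com" \
    -keyout key.pem -out cert.pem

# Use the generated files as CRT_FILE_PATH and KEY_FILE_PATH in step 5:
gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
    --certificate=cert.pem \
    --private-key=key.pem \
    --region=REGION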
    

Testing the load balancer

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.

Create a client VM

This example creates a client VM (vm-client) in the same region as the load balancer. The client is used to validate the load balancer's configuration and demonstrate expected behavior.

The client VM can be in any zone in the same REGION as the load balancer, and it can use any subnet in the same VPC network.

gcloud compute instances create vm-client \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --network=lb-network \
    --subnet=lb-subnet \
    --zone=ZONE

Configure the firewall rule

This example requires the following firewall rule for the test client VM:

fw-allow-ssh. An ingress rule, applicable to the test client VM, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to mean any source (0.0.0.0/0).

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
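
If you want to restrict SSH to a known source range instead of allowing any source, the same command with --source-ranges might look like the following; 203.0.113.0/24 is a placeholder documentation range:

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=203.0.113.0/24 \
    --target-tags=allow-ssh \
    --rules=tcp:22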
    

Send traffic to the load balancer

It might take a few minutes for the load balancer configuration to propagate after you first deploy it.

  • Connect via SSH to the client instance.

    gcloud compute ssh vm-client \
      --zone=ZONE
    
  • Verify that the load balancer is serving the Cloud Run service homepage as expected.

    For HTTP testing, run:

    curl 10.1.2.99
    

    For HTTPS testing, run:

    curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:10.1.2.99:443
    

    The -k flag causes curl to skip certificate validation.
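
Because the configuration can take a few minutes to propagate, the first requests might fail. A minimal sketch that polls the HTTP address from the client VM until the load balancer responds (10.1.2.99 is the forwarding rule address configured earlier):

# Retry until the load balancer returns a successful HTTP response.
until curl --silent --fail --output /dev/null http://10.1.2.99; do
  echo "Load balancer not ready yet; retrying in 10 seconds..."
  sleep 10
done
echo "Load balancer is responding."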

Additional configuration options

This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.

Using a URL mask

When creating a serverless NEG, instead of selecting a specific Cloud Run service, you can use a URL mask to point to multiple services serving at the same domain. A URL mask is a template of your URL schema. The serverless NEG will use this template to extract the service name from the incoming request's URL and map the request to the appropriate service.

URL masks are particularly useful if your service is mapped to a custom domain rather than the default address that Google Cloud provides for the deployed service. A URL mask allows you to target multiple services and versions with a single rule even when your application is using a custom URL pattern.

If you haven't already done so, make sure you read the Serverless NEGs overview: URL masks.

Construct a URL mask

To construct a URL mask for your load balancer, start with the URL of your service. This example uses a sample serverless app running at https://example.com/login. This is the URL where the app's login service will be served.

  1. Remove the http:// or https:// from the URL. You are left with example.com/login.
  2. Replace the Cloud Run service name with the placeholder <service>. If the Cloud Run service has a tag associated with it, replace the tag name with the placeholder <tag>. In this example, the URL mask you are left with is example.com/<service>.
  3. (Optional) If the service name can be extracted from the path portion of the URL, the domain can be omitted. The path part of the URL mask is distinguished by the first / character. If a / is not present in the URL mask, the mask is understood to represent the host only. Therefore, for this example, the URL mask can be reduced to /<service>.

    Similarly, if the service name can be extracted from the host part of the URL, you can omit the path altogether from the URL mask.

    You can also omit any host or subdomain components that come before the first placeholder as well as any path components that come after the last placeholder. In such cases, the placeholder captures the required information for the component.

Here are a few more examples that demonstrate these rules:

This table assumes that you have a custom domain called example.com and all your Cloud Run services are being mapped to this domain.

Service and tag | Cloud Run custom domain URL | URL mask
service: login | https://login-home.example.com/web | <service>-home.example.com
service: login | https://example.com/login/web | example.com/<service> or /<service>
service: login, tag: test | https://test.login.example.com/web | <tag>.<service>.example.com
service: login, tag: test | https://example.com/home/login/test | example.com/home/<service>/<tag> or /home/<service>/<tag>
service: login, tag: test | https://test.example.com/home/login/web | <tag>.example.com/home/<service>

Creating a serverless NEG with a URL mask

To create a serverless NEG with a sample URL mask of example.com/<service>:

gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="example.com/<service>"
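
You then add this NEG as a backend to a backend service in the same way as before, with the mask-based NEG name substituted:

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=SERVERLESS_NEG_MASK_NAME \
    --network-endpoint-group-region=REGION \
    --region=REGION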

Deleting a serverless NEG

A network endpoint group cannot be deleted if it is attached to a backend service. Before you delete a NEG, ensure that it is detached from the backend service.

To remove a serverless NEG from a backend service, you must specify the region where the NEG was created.

gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=SERVERLESS_NEG_NAME \
    --network-endpoint-group-region=REGION \
    --region=REGION

To delete the serverless NEG:

gcloud compute network-endpoint-groups delete SERVERLESS_NEG_NAME \
    --region=REGION
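
To confirm the deletion, you can list the remaining network endpoint groups in the project:

gcloud compute network-endpoint-groups list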

What's next