Set up an internal HTTP(S) load balancer with Cloud Run


This document shows you how to set up an internal HTTP(S) load balancer with Cloud Run. To set this up, you use a serverless NEG backend for the load balancer.

Before you try this procedure, make sure that you are familiar with serverless NEGs and Internal HTTP(S) Load Balancing.

Serverless NEGs let you use Cloud Run services with your load balancer. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the Cloud Run backend.

Before you begin

  1. Install Google Cloud CLI.
  2. Deploy a Cloud Run service.
  3. Configure permissions.

Install the Google Cloud CLI

Install the gcloud command-line tool. See gcloud overview for conceptual and installation information about the tool.

If you haven't run the gcloud command-line tool previously, first run gcloud init to initialize your gcloud configuration.

Note: You cannot use the Google Cloud console to set up an internal HTTP(S) load balancer with a serverless NEG backend.

Deploy a Cloud Run service

The instructions on this page assume you already have a Cloud Run service running.

For the example on this page, you can use any of the Cloud Run quickstarts to deploy a Cloud Run service.

The serverless NEG, the load balancer, and any client VMs must be in the same region as the Cloud Run service.

To prevent access to the Cloud Run service from the Internet, restrict ingress to internal. Traffic from the internal HTTP(S) load balancer is considered internal traffic.

gcloud run deploy CLOUD_RUN_SERVICE_NAME \
  --platform=managed \
  --allow-unauthenticated \
  --ingress=internal \
  --region=REGION \
  --image=IMAGE_URL

Note the name of the service that you create. The rest of this page shows you how to set up a load balancer that routes requests to this service.
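To confirm the service deployed with the settings you expect before you build the load balancer, you can read them back with gcloud. This is a sketch that assumes the placeholder names used above:

```shell
# Print the service's default URL to confirm it exists.
# CLOUD_RUN_SERVICE_NAME and REGION are the placeholders used above.
gcloud run services describe CLOUD_RUN_SERVICE_NAME \
    --platform=managed \
    --region=REGION \
    --format='value(status.url)'
```

The same command with --format='value(metadata.annotations)' shows the ingress setting if you want to verify that it is internal.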

Configure permissions

To follow this guide, you need to create a serverless NEG and create a load balancer in a project. You should be either a project owner or editor, or you should have the following Compute Engine IAM roles:

Task | Required role
Create load balancer and networking components | Network Admin
Create and modify NEGs | Compute Instance Admin
Create and modify SSL certificates | Security Admin
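If you need to grant one of these roles, a project owner can do so with gcloud. A sketch, assuming placeholder values for the project ID and user email:

```shell
# Grant the Compute Network Admin role to a user.
# PROJECT_ID and USER_EMAIL are placeholders. Repeat with
# roles/compute.instanceAdmin and roles/compute.securityAdmin as needed.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"
```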

Configure the network and subnets

To configure the network and its subnets, you must first create a VPC network, create a VM instance in a specific subnet, and then create a proxy-only subnet.

Create the VPC network

Create a custom mode VPC network, then the subnets that you want within a region. Finally, define the firewall rules for your network.

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. Click Create VPC network.

  3. In the Name field, enter lb-network.

  4. In the Subnet creation mode field, select Custom.

  5. In the New subnet section, specify the following configuration parameters for a subnet:

    1. Provide a Name for the subnet.
    2. Select a Region.
    3. Enter an IP address range, such as 10.1.2.0/24. For more information, see the primary IPv4 range.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. Click Done.

  6. In the Firewall rules section, select predefined firewall rules in the IPv4 firewall rules tab. These predefined rules address common use cases for connectivity to instances.

    Each predefined rule name starts with the name of the VPC network that you are creating.

    1. Optional: You can edit the lb-network-allow-custom rule. On the right side of the row that contains the rule, click Edit to select subnets, add additional IPv4 ranges, and specify protocols and ports.

    If you add additional subnets later, the lb-network-allow-custom firewall rule is not automatically updated. If you need firewall rules for the new subnets, you must update the firewall configuration to add the rules.

    If you don't select any predefined rules, you can create your own firewall rules after you create the network.

  7. Select Dynamic routing mode for the VPC network. You can change the dynamic routing mode later.

    For more information, see dynamic routing mode.

  8. In the Maximum transmission unit (MTU) field, select 1460 (default) or 1500.

    Review the maximum transmission unit overview before setting the MTU to 1500.

  9. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network. This example uses an IP address range of 10.1.2.0/24 for the subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION
    
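You can verify the network and subnet before moving on. A sketch using the names from this example:

```shell
# List the subnets in lb-network to confirm the range you created
# (10.1.2.0/24 in this example).
gcloud compute networks subnets list --network=lb-network
```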

Create a VM instance in a specific subnet

There must be at least one VM in the VPC network in which you intend to set up a regional load balancer with a serverless backend. If you already have a VM in the network, then you don't have to perform this step.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Select your project and click Continue.

  3. Click Create instance.

  4. In the Name field, enter test-vm. For more information, see Resource naming convention.

  5. Optional: Change the Zone for this VM. Compute Engine randomizes the list of zones within each region to encourage use across multiple zones.

  6. In the Firewall section, select Allow HTTP traffic or Allow HTTPS traffic to permit HTTP or HTTPS traffic to the VM.

    The Google Cloud console adds a network tag to your VM and creates the corresponding ingress firewall rule that allows all incoming traffic on tcp:80 (HTTP) or tcp:443 (HTTPS).

    The network tag associates the firewall rule with the VM. For more information, see Firewall rules overview in the Virtual Private Cloud documentation.

  7. Expand the Networking, disks, security, management, sole tenancy section.

    1. Expand the Networking section.
    2. For Network interfaces, specify the network details:
      1. In the Network field, select the VPC network that contains the subnet you created, such as lb-network.
      2. In the Subnet field, select the subnet for the VM to use, such as lb-subnet.
      3. Click Done.
  8. To create and start the VM, click Create.

gcloud

  1. Create a VM.

    gcloud compute instances create test-vm \
    --network=lb-network \
    --subnet=lb-subnet \
    --zone=ZONE
    
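To confirm the VM landed in the intended subnet, you can check its internal IP address. A sketch, assuming the names used above:

```shell
# Print the VM's internal IP address; it should fall within
# lb-subnet's range (10.1.2.0/24 in this example).
gcloud compute instances describe test-vm \
    --zone=ZONE \
    --format='value(networkInterfaces[0].networkIP)'
```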

Create a proxy-only subnet

Create a proxy-only subnet for all regional Envoy-based load balancers in a specific region of the lb-network network.

Console

  1. In the Google Cloud console, go to the VPC networks page.
    Go to VPC networks
  2. Click the name of the Shared VPC network that you want to add the proxy-only subnet to.
  3. Click Add subnet.
  4. In the Name field, enter proxy-only-subnet.
  5. Select a Region.
  6. Set Purpose to Regional Managed Proxy.
  7. Enter an IP address range as 10.129.0.0/23.
  8. Click Add.

gcloud

  1. Create the proxy-only subnet with the gcloud compute networks subnets create command.
    This example uses an IP address range of 10.129.0.0/23 for the proxy-only subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create proxy-only-subnet \
     --purpose=REGIONAL_MANAGED_PROXY \
     --role=ACTIVE \
     --region=REGION \
     --network=lb-network \
     --range=10.129.0.0/23
    
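You can confirm that the subnet was created with the proxy-only purpose. A sketch, assuming the names used above:

```shell
# The output should be REGIONAL_MANAGED_PROXY.
gcloud compute networks subnets describe proxy-only-subnet \
    --region=REGION \
    --format='value(purpose)'
```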

Create the load balancer

In the following diagram, the load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run service.

Internal HTTP(S) load balancing architecture for a Cloud Run application.

Traffic going from the load balancer to the serverless NEG backends uses special routes defined outside your VPC that are not subject to firewall rules. Therefore, if your load balancer only has serverless NEG backends, you don't need to create firewall rules to allow traffic from the proxy-only subnet to the serverless backend.

Console

Start the configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Under HTTP(S) load balancing, click Start configuration.
  3. Under Internet facing or internal only, select Only between my VMs or serverless services.
  4. Keep the window open to continue.

Configure the frontend

  1. Before you proceed, make sure you have an SSL certificate.
  2. Click Frontend configuration.
  3. Enter a Name.
  4. To configure an internal HTTP(S) load balancer, fill in the fields as follows.
    1. Under Protocol, select HTTPS.
    2. Under Subnetwork, select the subnetwork.
    3. Under IP version, select IPv4.
    4. Under IP address, select Ephemeral.
    5. Under Port, select 443.
    6. Under Certificate, select an existing SSL certificate or create a new certificate.

      To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS proxy. You can create an SSL certificate resource using a self-managed SSL certificate.
      Google-managed certificates are not supported.

    If you want to test this process without setting up an SSL certificate resource, you can set up an HTTP load balancer.

  5. Optional: To create an HTTP load balancer, do the following:
    1. Under Protocol, select HTTP.
    2. Under Subnetwork, select the subnetwork.
    3. Under IP version, select IPv4.
    4. Under IP address, select Ephemeral.
    5. Under Port, select 80.
  6. Click Done.
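If you want to test with an HTTPS frontend, you need a certificate and key to create the self-managed SSL certificate resource. One way to generate them locally is with openssl; this is a sketch for testing only, and test.example.com is a placeholder common name:

```shell
# Generate a self-signed certificate and private key for testing.
# test.example.com is a placeholder; replace it with your test domain.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout key.pem -out cert.pem \
    -subj "/CN=test.example.com"
```

You can then upload cert.pem and key.pem when creating the SSL certificate resource.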

Configure the backend services

  1. Click Backend configuration.
  2. In the Create or select backend services drop-down menu, hold the pointer over Backend services, and then select Create a backend service.
  3. In the Create a backend service window, enter a Name.
  4. Under Backend type, select Serverless network endpoint group (Preview).
  5. Leave Protocol unchanged. This parameter is ignored.
  6. Under Backends > New backend, select Create serverless network endpoint group.
    1. In the Create serverless network endpoint group window, enter a Name.
    2. Under Region, the region of the load balancer is displayed.
    3. From the Serverless network endpoint group type field, select Cloud Run. Cloud Run is the only supported type.
    4. Select Select service name.
    5. From the Service drop-down list, select the Cloud Run service that you want to create a load balancer for.
    6. Click Done.
    7. Click Create.
  7. In the Create backend service window, click Create.

Configure routing rules

Routing rules determine how your traffic is directed. You can direct traffic to a backend service or a Kubernetes service. Any traffic not explicitly matched with a host and path matcher is sent to the default service.

  1. Click Simple host and path rule.
  2. Select a backend service from the Backend drop-down list.

Review the configuration

  1. Click Review and finalize.
  2. Review the values for Backend, Host and Path rules and Frontend.
  3. Click Create. Wait for the load balancer to be created.
  4. Click the name of the load balancer (serverless-lb).
  5. Note the IP address of the load balancer for the next task.

gcloud

  1. Create a serverless NEG for your Cloud Run service:
        gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
            --region=REGION \
            --network-endpoint-type=serverless  \
            --cloud-run-service=CLOUD_RUN_SERVICE_NAME
        
  2. Create a regional backend service. Set the --protocol to either HTTP or HTTPS.
        gcloud compute backend-services create BACKEND_SERVICE_NAME \
            --load-balancing-scheme=INTERNAL_MANAGED \
            --protocol=HTTP \
            --region=REGION
        
  3. Add the serverless NEG as a backend to the backend service:
        gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
            --region=REGION \
            --network-endpoint-group=SERVERLESS_NEG_NAME \
            --network-endpoint-group-region=REGION
        
  4. Create a regional URL map to route incoming requests to the backend service:
        gcloud compute url-maps create URL_MAP_NAME \
            --default-service=BACKEND_SERVICE_NAME \
            --region=REGION
        
    This example URL map only targets one backend service representing a single serverless app, so you don’t need to set up host rules or path matchers.
  5. To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS target proxy. You can create an SSL certificate resource by using a self-managed SSL certificate. Google-managed certificates are not supported. To create a regional self-managed SSL certificate resource:
        gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
            --certificate CRT_FILE_PATH \
            --private-key KEY_FILE_PATH \
            --region=REGION
        
  6. Create a regional target proxy to route requests to the URL map.

    For an HTTP load balancer, create an HTTP target proxy:
        gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
            --url-map=URL_MAP_NAME \
            --region=REGION
        
    For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.
        gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
            --ssl-certificates=SSL_CERTIFICATE_NAME \
            --url-map=URL_MAP_NAME \
            --region=REGION
        
  7. Create a forwarding rule to route incoming requests to the proxy. Do not use the proxy-only subnet for the forwarding rule IP address. You can configure any valid IP address from the subnet (lb-subnet). This example uses `10.1.2.99` for the forwarding rule IP address.

    For an HTTP load balancer:
        gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
            --load-balancing-scheme=INTERNAL_MANAGED \
            --network=lb-network \
            --subnet=lb-subnet \
            --address=10.1.2.99 \
            --target-http-proxy=TARGET_HTTP_PROXY_NAME \
            --target-http-proxy-region=REGION \
            --region=REGION \
            --ports=80
        
    For an HTTPS load balancer:
        gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
            --load-balancing-scheme=INTERNAL_MANAGED \
            --network=lb-network \
            --subnet=lb-subnet \
            --address=10.1.2.99 \
            --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
            --target-https-proxy-region=REGION \
            --region=REGION \
            --ports=443
        

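After the forwarding rule is created, you can read back the address that the load balancer serves on. A sketch, assuming the names used in the steps above:

```shell
# Print the internal IP address of the forwarding rule (10.1.2.99 in
# this example). Use HTTPS_FORWARDING_RULE_NAME for the HTTPS variant.
gcloud compute forwarding-rules describe HTTP_FORWARDING_RULE_NAME \
    --region=REGION \
    --format='value(IPAddress)'
```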
Testing the load balancer

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.

Create a client VM

This example creates a client VM (vm-client) in the same region as the load balancer. The client is used to validate the load balancer's configuration and demonstrate expected behavior.

gcloud

The client VM can be in any zone in the same REGION as the load balancer, and it can use any subnet in the same VPC network.

gcloud compute instances create vm-client \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --network=lb-network \
    --subnet=lb-subnet \
    --zone=ZONE

Configure the firewall rule

This example requires the following firewall rule for the test client VM:

fw-allow-ssh. An ingress rule, applicable to the test client VM, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.

gcloud

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    

Send traffic to the load balancer

It might take a few minutes for the load balancer configuration to propagate after you first deploy it.

  • Connect via SSH to the client instance.

    gcloud compute ssh vm-client \
      --zone=ZONE
    
  • Verify that the load balancer is serving the Cloud Run service homepage as expected.

    For HTTP testing, run:

    curl 10.1.2.99
    

    For HTTPS testing, run:

    curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:10.1.2.99:443
    

    The -k flag causes curl to skip certificate validation.

Additional configuration options

This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.

Using a URL mask

When creating a serverless NEG, instead of selecting a specific Cloud Run service, you can use a URL mask to point to multiple services serving at the same domain. A URL mask is a template of your URL schema. The serverless NEG will use this template to extract the service name from the incoming request's URL and map the request to the appropriate service.

URL masks are particularly useful if your service is mapped to a custom domain rather than the default address that Google Cloud provides for the deployed service. A URL mask allows you to target multiple services and versions with a single rule even when your application is using a custom URL pattern.

If you haven't already done so, make sure you read the Serverless NEGs overview: URL masks.

Construct a URL mask

To construct a URL mask for your load balancer, start with the URL of your service. This example uses a sample serverless app running at https://example.com/login. This is the URL where the app's login service will be served.

  1. Remove the http or https from the URL. You are left with example.com/login.
  2. Replace the service name with a placeholder for the URL mask.
    1. Cloud Run: Replace the Cloud Run service name with the placeholder <service>. If the Cloud Run service has a tag associated with it, replace the tag name with the placeholder <tag>. In this example, the URL mask you are left with is example.com/<service>.
  3. (Optional) If the service name can be extracted from the path portion of the URL, the domain can be omitted. The path part of the URL mask is distinguished by the first / character. If a / is not present in the URL mask, the mask is understood to represent the host only. Therefore, for this example, the URL mask can be reduced to /<service>.

    Similarly, if the service name can be extracted from the host part of the URL, you can omit the path altogether from the URL mask.

    You can also omit any host or subdomain components that come before the first placeholder as well as any path components that come after the last placeholder. In such cases, the placeholder captures the required information for the component.

Here are a few more examples that demonstrate these rules:

This table assumes that you have a custom domain called example.com and all your Cloud Run services are being mapped to this domain.

Service, tag name | Cloud Run custom domain URL | URL mask
service: login | https://login-home.example.com/web | <service>-home.example.com
service: login | https://example.com/login/web | example.com/<service> or /<service>
service: login, tag: test | https://test.login.example.com/web | <tag>.<service>.example.com
service: login, tag: test | https://example.com/home/login/test | example.com/home/<service>/<tag> or /home/<service>/<tag>
service: login, tag: test | https://test.example.com/home/login/web | <tag>.example.com/home/<service>
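The matching idea behind a mask can be sketched locally. This is an illustration only, not the load balancer's implementation, and it handles just the single mask example.com/<service>:

```shell
# Sketch of URL-mask matching for the mask "example.com/<service>":
# strip the scheme and host, then take the first path segment.
extract_service() {
  local url="$1"               # e.g. https://example.com/login/web
  url="${url#*://}"            # example.com/login/web (drop the scheme)
  local path="${url#*/}"       # login/web (drop the host)
  echo "${path%%/*}"           # first path segment = <service>
}

extract_service "https://example.com/login/web"   # prints: login
```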

Creating a serverless NEG with a URL mask

Console

For a new load balancer, you can use the same end-to-end process as described previously in this topic. When configuring the backend service, instead of selecting a specific service, enter a URL mask.

If you have an existing load balancer, you can edit the backend configuration and have the serverless NEG point to a URL mask instead of a specific service.

To add a URL mask-based serverless NEG to an existing backend service, do the following:

  1. In the Google Cloud console, go to the Load balancing page.
    Go to Load balancing
  2. Click the name of the load balancer that has the backend service you want to edit.
  3. On the Load balancer details page, click Edit.
  4. On the Edit HTTP(S) load balancer page, click Backend configuration.
  5. On the Backend configuration page, click Edit for the backend service you want to modify.
  6. Click Add backend.
  7. Select Create Serverless network endpoint group.
    1. For the Name, enter helloworld-serverless-neg.
    2. Under Region, the region of the load balancer is displayed.
    3. Under Serverless network endpoint group type, Cloud Run is displayed. Currently, Cloud Run is the only supported network endpoint group type.
      1. Select Use URL Mask.
      2. Enter a URL mask. For instructions on how to create a URL mask, see Constructing a URL mask.
      3. Click Create.

  8. In the New backend, click Done.
  9. Click Update.

gcloud

To create a serverless NEG with a sample URL mask of example.com/<service>:

gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="example.com/<service>"

Deleting a serverless NEG

A network endpoint group cannot be deleted if it is attached to a backend service. Before you delete a NEG, ensure that it is detached from the backend service.

Console

  1. To make sure the serverless NEG you want to delete is not currently in use by any backend service, go to the Backend services tab in the Load balancing components page.
    Go to Backend services
  2. If the serverless NEG is currently in use, do the following:
    1. Click the name of the backend service using the serverless NEG.
    2. Click Edit.
    3. From the list of Backends, click the delete icon to remove the serverless NEG backend from the backend service.
    4. Click Save.
  3. Go to the Network endpoint group page in the Google Cloud console.
    Go to Network endpoint group
  4. Select the checkbox for the serverless NEG you want to delete.
  5. Click Delete.
  6. Click Delete again to confirm.

gcloud

To remove a serverless NEG from a backend service, you must specify the region where the NEG was created.

gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=SERVERLESS_NEG_NAME \
    --network-endpoint-group-region=REGION \
    --region=REGION

To delete the serverless NEG:

gcloud compute network-endpoint-groups delete SERVERLESS_NEG_NAME \
    --region=REGION

What's next