Set up a regional external Application Load Balancer with Cloud Run

This page shows you how to deploy a regional external Application Load Balancer with a Cloud Run backend. To set this up, you use a serverless NEG backend for the load balancer.

Before you try this procedure, make sure that you are familiar with serverless NEGs and regional external Application Load Balancers.

This guide shows you how to configure an Application Load Balancer that proxies requests to a serverless NEG backend.

Serverless NEGs let you use Cloud Run services with your load balancer. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the Cloud Run backend.

Before you begin

  1. Install Google Cloud CLI.
  2. Deploy a Cloud Run service.
  3. Configure permissions.

Install the Google Cloud CLI

Install the Google Cloud CLI. For conceptual and installation information about the tool, see the gcloud CLI overview.

If you haven't run the gcloud CLI before, first run gcloud init to initialize your gcloud directory.
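
Optionally, you can set a default project and region so that you don't have to repeat them in later commands. This is a convenience sketch, not a required step; PROJECT_ID and REGION are placeholders for your project ID and the region you plan to use:

# Set the default project for subsequent gcloud commands
gcloud config set project PROJECT_ID

# Set the default Compute Engine region
gcloud config set compute/region REGION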

Deploy a Cloud Run service

The instructions on this page assume you already have a Cloud Run service running.

For the example on this page, you can use any of the Cloud Run quickstarts to deploy a Cloud Run service.

The serverless NEG and the load balancer must be in the same region as the Cloud Run service. You can block external requests that are sent directly to the Cloud Run service's default URLs by restricting ingress to internal and cloud load balancing. For example:

gcloud run deploy CLOUD_RUN_SERVICE_NAME \
    --platform=managed \
    --allow-unauthenticated \
    --ingress=internal-and-cloud-load-balancing \
    --region=REGION \
    --image=IMAGE_URL

Note the name of the service that you create. The rest of this page shows you how to set up a load balancer that routes requests to this service.
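
If you need to confirm the service name or look up its default URL later, one way to do so (using the placeholders from the command above) is the following:

# Print the Cloud Run service's default URL
gcloud run services describe CLOUD_RUN_SERVICE_NAME \
    --region=REGION \
    --format="value(status.url)"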

Configure permissions

To follow this guide, you need to create a serverless NEG and a load balancer in a project. You must be a project owner or editor, or you must have the following Compute Engine IAM roles:

Task                                             Required role
Create load balancer and networking components   Network Admin
Create and modify NEGs                           Compute Instance Admin
Create and modify SSL certificates               Security Admin
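
If you aren't a project owner or editor, a project owner can grant these roles with gcloud. The following is a sketch that grants the Network Admin role (roles/compute.networkAdmin); PROJECT_ID and USER_EMAIL are placeholders, and the other roles can be granted the same way:

# Grant the Compute Network Admin role to a user in the project
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"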

Configure the network and subnets

To configure the network and its subnets, you'll perform the following tasks:

  • Create a VPC network and subnet.
  • Create a proxy-only subnet.

Create the VPC network

Create a custom mode VPC network, then the subnets that you want within a region.

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. For Subnet creation mode, select Custom.

  5. In the New subnet section, specify the following configuration parameters for a subnet:

    1. For Name, enter lb-subnet.
    2. Select a Region.
    3. For IP address range, enter 10.1.2.0/24.
    4. Click Done.
  6. Click Create.

gcloud

  1. Create the custom VPC network by using the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network. This example uses an IP address range of 10.1.2.0/24 for the subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION
    

Create a proxy-only subnet

Create a proxy-only subnet for all regional Envoy-based load balancers in a specific region of the lb-network network.

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the network that you want to add the proxy-only subnet to. In this example, click lb-network.

  3. Click Add subnet.

  4. In the Name field, enter proxy-only-subnet.

  5. Select a Region.

  6. Set Purpose to Regional Managed Proxy.

  7. For IP address range, enter 10.129.0.0/23.

  8. Click Add.

gcloud

  1. Create the proxy-only subnet by using the gcloud compute networks subnets create command.

    This example uses an IP address range of 10.129.0.0/23 for the proxy-only subnet. You can configure any valid subnet range.

    gcloud compute networks subnets create proxy-only-subnet \
     --purpose=REGIONAL_MANAGED_PROXY \
     --role=ACTIVE \
     --region=REGION \
     --network=lb-network \
     --range=10.129.0.0/23
    

Create the load balancer

In the following diagram, the load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run service.

Traffic going from the load balancer to the serverless NEG backends uses special routes defined outside your VPC that are not subject to firewall rules. Therefore, if your load balancer only has serverless NEG backends, you don't need to create firewall rules to allow traffic from the proxy-only subnet to the serverless backend.

Figure: Regional external Application Load Balancer architecture for a Cloud Run application.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Public facing (external) and click Next.
  5. For Global or single region deployment, select Best for regional workloads and click Next.
  6. Click Configure.

Basic configuration

  1. For the name of the load balancer, enter serverless-lb.
  2. For Network, select lb-network.
  3. Keep the window open to continue.

Configure the frontend

  1. Before you proceed, make sure you have an SSL certificate.
  2. Click Frontend configuration.
  3. Enter a Name.
  4. To configure a regional external Application Load Balancer, fill in the fields as follows.
    1. Under Protocol, select HTTPS.
    2. Under Network service tier, select Standard.
    3. Under IP version, select IPv4.
    4. Under IP address, select Ephemeral.
    5. Under Port, select 443.
    6. Under Certificate, select an existing SSL certificate or create a new certificate.

      The following example shows you how to create Compute Engine SSL certificates:

    7. Click Create a new certificate.
      1. In the Name field, enter a name.
      2. In the appropriate fields, upload your PEM-formatted files:
        • Certificate
        • Private key
      3. Click Create.

    If you want to test this process without setting up an SSL certificate resource, you can set up an HTTP load balancer.

  5. Optional: To create an HTTP load balancer, do the following:
    1. Under Protocol, select HTTP.
    2. Under Network service tier, select Standard.
    3. Under IP version, select IPv4.
    4. Under IP address, select Ephemeral.
    5. Under Port, select 80.
  6. Click Done.

Configure the backend services

  1. Click Backend configuration.
  2. In the Create or select backend services drop-down menu, hold the pointer over Backend services, and then select Create a backend service.
  3. In the Create a backend service window, enter a Name.
  4. Under Backend type, select Serverless network endpoint group.
  5. Leave Protocol unchanged. This parameter is ignored.
  6. Under Backends > New backend, select Create serverless network endpoint group.
    1. In the Create serverless network endpoint group window, enter a Name.
    2. Under Region, the region of the load balancer is displayed.
    3. From the Serverless network endpoint group type field, select Cloud Run. Cloud Run is the only supported type.
    4. Select Select service name.
    5. From the Service drop-down list, select the Cloud Run service that you want to create a load balancer for.
    6. Click Done.
    7. Click Create.
  7. In the Create backend service window, click Create.

Configure routing rules

Routing rules determine how your traffic is directed. You can direct traffic to a backend service. Any traffic that isn't explicitly matched by a host and path rule is sent to the default service.

  1. Click Simple host and path rule.
  2. Select a backend service from the Backend drop-down list.

Review the configuration

  1. Click Review and finalize.
  2. Review the values for Backend, Host and Path rules and Frontend.
  3. Optional: Click Equivalent Code to view the REST API request that will be used to create the load balancer.
  4. Click Create. Wait for the load balancer to be created.
  5. Click the name of the load balancer (serverless-lb).
  6. Note the IP address of the load balancer for the next task.

gcloud

  1. Reserve a static external IP address for the load balancer.
        gcloud compute addresses create IP_ADDRESS_NAME  \
            --region=REGION \
            --network-tier=STANDARD
        
  2. Create a serverless NEG for your Cloud Run service:
        gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
            --region=REGION \
            --network-endpoint-type=serverless  \
            --cloud-run-service=CLOUD_RUN_SERVICE_NAME
        
  3. Create a regional backend service. Set --protocol to HTTP. This parameter is ignored for serverless NEG backends, but it is required because --protocol otherwise defaults to TCP.
        gcloud compute backend-services create BACKEND_SERVICE_NAME \
            --load-balancing-scheme=EXTERNAL_MANAGED \
            --protocol=HTTP \
            --region=REGION
        
  4. Add the serverless NEG as a backend to the backend service:
        gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
            --region=REGION \
            --network-endpoint-group=SERVERLESS_NEG_NAME \
            --network-endpoint-group-region=REGION
        
  5. Create a regional URL map to route incoming requests to the backend service:
        gcloud compute url-maps create URL_MAP_NAME \
            --default-service=BACKEND_SERVICE_NAME \
            --region=REGION
        
    This example URL map only targets one backend service representing a single serverless app, so you don't need to set up host rules or path matchers.
  6. Optional: Perform this step only if you are using HTTPS between the client and the load balancer. This step is not required for HTTP load balancers.

    You can create either Compute Engine or Certificate Manager certificates. To create certificates by using Certificate Manager, use either of the following methods:

    • Regional self-managed certificates. For information about creating and using regional self-managed certificates, see Deploy a regional self-managed certificate. Certificate maps are not supported.

    • Regional Google-managed certificates. Certificate maps are not supported.

    After you create certificates, attach the certificate directly to the target proxy.

      To create a regional self-managed SSL certificate resource with Compute Engine:
          gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
              --certificate CRT_FILE_PATH \
              --private-key KEY_FILE_PATH \
              --region=REGION
          
  7. Create a regional target proxy to route requests to the URL map.

      For an HTTP load balancer, create an HTTP target proxy:
          gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
              --url-map=URL_MAP_NAME \
              --region=REGION
          
      For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.
          gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
              --ssl-certificates=SSL_CERTIFICATE_NAME \
              --url-map=URL_MAP_NAME \
              --region=REGION
          
  8. Create a forwarding rule to route incoming requests to the proxy. The forwarding rule uses the IP address that you reserved in the first step.

      For an HTTP load balancer:
          gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
              --load-balancing-scheme=EXTERNAL_MANAGED \
              --network-tier=STANDARD \
              --network=lb-network \
              --address=IP_ADDRESS_NAME \
              --target-http-proxy=TARGET_HTTP_PROXY_NAME \
              --target-http-proxy-region=REGION \
              --region=REGION \
              --ports=80
          
      For an HTTPS load balancer:
          gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
              --load-balancing-scheme=EXTERNAL_MANAGED \
              --network-tier=STANDARD \
              --network=lb-network \
              --address=IP_ADDRESS_NAME \
              --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
              --target-https-proxy-region=REGION \
              --region=REGION \
              --ports=443
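
Optionally, before testing, you can confirm that the backend service, URL map, and target proxy were created as expected. This is an informal check that uses the same placeholder names as the commands above:

gcloud compute backend-services describe BACKEND_SERVICE_NAME --region=REGION

gcloud compute url-maps describe URL_MAP_NAME --region=REGION

# Use target-http-proxies and TARGET_HTTP_PROXY_NAME instead for an HTTP load balancer
gcloud compute target-https-proxies describe TARGET_HTTPS_PROXY_NAME --region=REGION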
          

Test the load balancer

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the load balancer you just created.

  3. Note the IP Address of the load balancer.

  4. For an HTTP load balancer, you can test your load balancer using a web browser by going to http://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the Cloud Run service homepage.

  5. For an HTTPS load balancer, you can test your load balancer using a web browser by going to https://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You are directed to the Cloud Run service homepage.
    If you used a self-signed certificate for testing, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate. Click through the warning to see the actual page.
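
You can also test from the command line. The following sketch looks up the forwarding rule's IP address and sends a request to it, assuming the gcloud resource names used earlier in this guide. The curl -k flag skips certificate validation and is only appropriate when testing with a self-signed certificate:

# Look up the load balancer's IP address
# (use HTTP_FORWARDING_RULE_NAME instead for an HTTP load balancer)
IP_ADDRESS=$(gcloud compute forwarding-rules describe HTTPS_FORWARDING_RULE_NAME \
    --region=REGION --format="get(IPAddress)")

# HTTPS load balancer (self-signed test certificate)
curl -k "https://${IP_ADDRESS}"

# HTTP load balancer
curl "http://${IP_ADDRESS}"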

Additional configuration options

This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.

Using a URL mask

When creating a serverless NEG, instead of selecting a specific Cloud Run service, you can use a URL mask to point to multiple services serving at the same domain. A URL mask is a template of your URL schema. The serverless NEG uses this template to extract the service name from the incoming request's URL and map the request to the appropriate service.

URL masks are particularly useful if your service is mapped to a custom domain rather than to the default address that Google Cloud provides for the deployed service. A URL mask lets you target multiple services and versions with a single rule even when your application is using a custom URL pattern.

If you haven't already done so, read the URL masks section of the serverless NEGs overview.

Construct a URL mask

To construct a URL mask for your load balancer, start with the URL of your service. This example uses a sample serverless app running at https://example.com/login. This is the URL where the app's login service is served.

  1. Remove http:// or https:// from the URL. You are left with example.com/login.
  2. Replace the service name with a placeholder for the URL mask.
    • Cloud Run: Replace the Cloud Run service name with the placeholder <service>. If the Cloud Run service has a tag associated with it, replace the tag name with the placeholder <tag>. In this example, the URL mask you are left with is example.com/<service>.
  3. Optional: If the service name can be extracted from the path portion of the URL, the domain can be omitted. The path part of the URL mask is distinguished by the first slash (/) character. If a slash (/) is not present in the URL mask, the mask is understood to represent the host only. Therefore, for this example, the URL mask can be reduced to /<service>.

    Similarly, if <service> can be extracted from the host part of the URL, you can omit the path altogether from the URL mask.

    You can also omit any host or subdomain components that come before the first placeholder as well as any path components that come after the last placeholder. In such cases, the placeholder captures the required information for the component.

Here are a few more examples that demonstrate these rules:

This table assumes that you have a custom domain called example.com and all your Cloud Run services are being mapped to this domain.

Service and tag name        Cloud Run custom domain URL               URL mask
service: login              https://login-home.example.com/web        <service>-home.example.com
service: login              https://example.com/login/web             example.com/<service> or /<service>
service: login, tag: test   https://test.login.example.com/web        <tag>.<service>.example.com
service: login, tag: test   https://example.com/home/login/test       example.com/home/<service>/<tag> or /home/<service>/<tag>
service: login, tag: test   https://test.example.com/home/login/web   <tag>.example.com/home/<service>
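
For example, passing the mask from the third row when you create a serverless NEG with gcloud might look like the following sketch; the NEG name and region are placeholders, and the next section shows the same flag with a simpler mask:

# Serverless NEG whose URL mask extracts both the tag and the service from the host
gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="<tag>.<service>.example.com"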

Creating a serverless NEG with a URL mask

Console

For a new load balancer, you can use the same end-to-end process as described previously in this document. When configuring the backend service, instead of selecting a specific service, enter a URL mask.

If you have an existing load balancer, you can edit the backend configuration and have the serverless NEG point to a URL mask instead of a specific service.

To add a URL mask-based serverless NEG to an existing backend service, do the following:

  1. In the Google Cloud console, go to the Load balancing page.
    Go to Load balancing
  2. Click the name of the load balancer that has the backend service you want to edit.
  3. On the Load balancer details page, click Edit.
  4. On the Edit regional external Application Load Balancer page, click Backend configuration.
  5. On the Backend configuration page, click Edit for the backend service you want to modify.
  6. Click Add backend.
  7. Select Create Serverless network endpoint group.
    1. For the Name, enter helloworld-serverless-neg.
    2. Under Region, the region of the load balancer is displayed.
    3. Under Serverless network endpoint group type, Cloud Run is the only supported network endpoint group type.
      1. Select Use URL Mask.
      2. Enter a URL mask. For information about how to create a URL mask, see Constructing a URL mask.
      3. Click Create.

  8. In the New backend section, click Done.
  9. Click Update.

gcloud

To create a serverless NEG with a sample URL mask of example.com/<service>:

gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="example.com/<service>"
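
As with a service-based serverless NEG, the URL mask-based NEG takes effect only after you attach it to a backend service, for example with the same add-backend command used earlier in this guide:

# Attach the URL mask-based serverless NEG to the backend service
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --region=REGION \
    --network-endpoint-group=SERVERLESS_NEG_MASK_NAME \
    --network-endpoint-group-region=REGION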

Deleting a serverless NEG

A network endpoint group cannot be deleted if it is attached to a backend service. Before you delete a NEG, ensure that it is detached from the backend service.

Console

  1. To make sure the serverless NEG you want to delete is not in use by any backend service, go to the Backend services tab on the Load balancing components page.
    Go to Backend services
  2. If the serverless NEG is in use, do the following:
    1. Click the name of the backend service that is using the serverless NEG.
    2. Click Edit.
    3. In the list of Backends, remove the serverless NEG backend from the backend service.
    4. Click Save.

  3. Go to the Network endpoint group page in the Google Cloud console.
    Go to Network endpoint group
  4. Select the checkbox for the serverless NEG you want to delete.
  5. Click Delete.
  6. Click Delete again to confirm.

gcloud

To remove a serverless NEG from a backend service, you must specify the region where the NEG was created.

gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=SERVERLESS_NEG_NAME \
    --network-endpoint-group-region=REGION \
    --region=REGION

To delete the serverless NEG:

gcloud compute network-endpoint-groups delete SERVERLESS_NEG_NAME \
    --region=REGION
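
If you are removing the entire example deployment, the load balancer's other resources also need to be deleted, working from the forwarding rule back toward the backend service. The following sequence is illustrative and uses the HTTPS placeholder names from this guide; substitute the HTTP names if you created an HTTP load balancer:

# Delete the frontend resources first, then the backend resources
gcloud compute forwarding-rules delete HTTPS_FORWARDING_RULE_NAME --region=REGION

gcloud compute target-https-proxies delete TARGET_HTTPS_PROXY_NAME --region=REGION

gcloud compute url-maps delete URL_MAP_NAME --region=REGION

gcloud compute backend-services delete BACKEND_SERVICE_NAME --region=REGION

# Release the reserved external IP address
gcloud compute addresses delete IP_ADDRESS_NAME --region=REGION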

What's next