Set up a global external Application Load Balancer with Shared VPC

This document shows you a sample configuration for setting up a global external Application Load Balancer with VM instance group backends in a Shared VPC environment.

In the example, the load balancer's frontend and backend components are created in one service project.

If you don't want to use a Shared VPC network, see Set up a global external Application Load Balancer with VM instance group backends.

Before you begin

Permissions required

Setting up a load balancer on a Shared VPC network requires some initial setup and provisioning by an administrator. After the initial setup, a service project owner can do one of the following:

  • Deploy all the load balancer's components and its backends in a service project.
  • Deploy the load balancer's backend components (backend service and backends) in service projects that can be referenced by a URL map in another service or host project.

This section summarizes the permissions required to follow this guide to set up a load balancer on a Shared VPC network.

Set up Shared VPC

The following roles are required for the following tasks:

  1. Perform one-off administrative tasks such as setting up the Shared VPC and enabling a host project.
  2. Perform administrative tasks that must be repeated every time you want to onboard a new service project. This includes attaching the service project, provisioning and configuring networking resources, and granting access to the service project administrator.

These tasks must be performed in the Shared VPC host project. We recommend that the Shared VPC Admin also be the owner of the Shared VPC host project. This automatically grants the Network Admin and Security Admin roles.

  • Set up Shared VPC, enable the host project, and grant access to service project administrators: Shared VPC Admin
  • Create subnets in the Shared VPC host project and grant access to service project administrators: Network Admin
  • Add and remove firewall rules: Security Admin

After the subnets have been provisioned, the host project owner must grant the Network User role in the host project to anyone (typically service project administrators, developers, or service accounts) who needs to use these resources.

  • Use VPC networks and subnets belonging to the host project: Network User

This role can be granted on the project level or for individual subnets. We recommend that you grant the role on individual subnets. Granting the role on the project provides access to all current and future subnets in the VPC network of the host project.

Deploy load balancer and backends

Service project administrators need the following roles in the service project to create load balancing resources and backends. These permissions are granted automatically to the service project owner or editor.

Roles granted in the service project:

  • Create load balancer components: Network Admin
  • Create instances: Instance Admin
  • Create and modify SSL certificates: Security Admin

Prerequisites

In this section, you need to perform the following steps:

  1. Configure the network and subnets in the host project.
  2. Set up Shared VPC in the host project.

The steps in this section do not need to be performed every time you want to create a new load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.

Configure the network and subnets in the host project

You need a Shared VPC network with a subnet for the load balancer's backends.

This example uses the following network, region, and subnet:

  • Network. The network is named lb-network.

  • Subnet for load balancer's backends. A subnet named lb-backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.

Configure the subnet for the load balancer's backends

This step does not need to be performed every time you want to create a new load balancer. You only need to ensure that the service project has access to a subnet in the Shared VPC network.

All the steps in this section must be performed in the host project.

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.
  3. For Name, enter lb-network.
  4. In the Subnets section:

    1. Set the Subnet creation mode to Custom.
    2. In the New subnet section, enter the following information:

      • Name: lb-backend-subnet
      • Region: us-west1

      • IP address range: 10.1.2.0/24

    3. Click Done.

  5. Click Create.

gcloud

  1. Create a VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network in the us-west1 region:

    gcloud compute networks subnets create lb-backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
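To confirm that the backend subnet was created with the expected range, you can describe it from the host project. This is an optional verification sketch; `HOST_PROJECT_ID` is a placeholder for your host project ID:

```shell
# Confirm the backend subnet exists in the host project and print its range.
gcloud compute networks subnets describe lb-backend-subnet \
    --region=us-west1 \
    --project=HOST_PROJECT_ID \
    --format="get(ipCidrRange)"
```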
    

Give service project admins access to the backend subnet

Service project administrators require access to the lb-backend-subnet subnet so that they can provision the load balancer's backends.

A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who deploy resources and backends that use the subnet). For instructions, see Service Project Admins for some subnets.
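As a sketch, the subnet-level grant can be made with `gcloud compute networks subnets add-iam-policy-binding`. The member shown is a placeholder for your service project administrator's account:

```shell
# Run in the host project: grant the Network User role on the backend subnet only.
gcloud compute networks subnets add-iam-policy-binding lb-backend-subnet \
    --region=us-west1 \
    --project=HOST_PROJECT_ID \
    --member="user:service-project-admin@example.com" \
    --role="roles/compute.networkUser"

# Service project administrators can then verify which shared subnets they can use.
gcloud compute networks subnets list-usable \
    --project=SERVICE_PROJECT_ID
```

Granting the role on the subnet rather than the project keeps access scoped to only the resources the service project needs.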

Configure firewall rules in the host project

This example uses the following firewall rule:
  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems in 130.211.0.0/22 and 35.191.0.0/16. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.
Without this firewall rule, the default deny ingress rule blocks incoming traffic to the backend instances.

All the steps in this section must be performed in the host project.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Check TCP and enter 80 for the port number.
      • As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

  3. Click Create.

gcloud

  1. Create the fw-allow-health-check firewall rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers. However, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=130.211.0.0/22,35.191.0.0/16 \
       --target-tags=load-balanced-backend \
       --rules=tcp
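If you prefer to follow the best practice of allowing only the health check's protocol and port, create the rule with `--rules=tcp:80` instead. This is a narrower variant of the same rule:

```shell
# Narrower variant: allow health check probes on TCP port 80 only.
gcloud compute firewall-rules create fw-allow-health-check \
   --network=lb-network \
   --action=allow \
   --direction=ingress \
   --source-ranges=130.211.0.0/22,35.191.0.0/16 \
   --target-tags=load-balanced-backend \
   --rules=tcp:80
```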
    

Set up Shared VPC in the host project

This step entails enabling a Shared VPC host project, sharing subnets of the host project, and attaching service projects to the host project so that the service projects can use the Shared VPC network. To set up Shared VPC in the host project, see Provisioning Shared VPC in the VPC documentation.

The rest of these instructions assume that you have already set up Shared VPC. This includes setting up IAM policies for your organization and designating the host and service projects.

Don't proceed until you have set up Shared VPC and enabled the host and service projects.
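At a high level, enabling the host project and attaching a service project can be sketched with the following commands, run by a Shared VPC Admin (the project IDs are placeholders):

```shell
# Enable the host project for Shared VPC.
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID

# Verify the attachment.
gcloud compute shared-vpc list-associated-resources HOST_PROJECT_ID
```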

Configure a load balancer in one service project

After you have configured the VPC network in the host project and set up Shared VPC, you can switch your attention to the service project, in which you need to create all the load balancing components (backend service, URL map, target proxy, and forwarding rule) and the backends.

This section assumes that you have carried out the prerequisite steps described in the previous section in the host project. In this section, the load balancer's frontend and backend components along with the backends are created in one service project.

The following figure depicts the components of a global external Application Load Balancer in one service project, which is attached to the host project in a Shared VPC network.

Load balancer's frontend and backend components in one service project
Figure 1. Load balancer's frontend and backend components in one service project

These steps should be carried out by the service project administrator (or a developer operating within the service project) and do not require involvement from the host project administrator. The steps in this section are similar to the standard steps to set up a global external Application Load Balancer.

The example on this page explicitly sets a reserved IP address for the global external Application Load Balancer's forwarding rule, rather than allowing an ephemeral IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.

Create a managed instance group backend

Before you create a managed instance group, you must create an instance template, which is a resource that the group uses to create virtual machine (VM) instances. Traffic from clients is load balanced to the VMs in the instance group. The managed instance group provides VMs that run the backend servers of an external Application Load Balancer. In this example, the backends serve their own hostnames.

Console

Create an instance template

  1. In the Google Cloud console, go to the Compute Engine Instance templates page.

    Go to Instance templates

  2. Click Create instance template.

  3. For Name, enter backend-template.

  4. In the Boot disk section, ensure that the boot disk is set to a Debian image, such as Debian GNU/Linux 10 (buster). Click Change to change the image if necessary.

  5. Expand the Advanced options section.

  6. Expand the Networking section, and in the Network tags field, enter load-balanced-backend.

  7. For Network interfaces, select Networks shared with me (from host project: HOST_PROJECT_ID).

  8. In the Shared subnetwork list, select the lb-backend-subnet subnet from the lb-network network.

  9. Expand the Management section, and in the Automation field, specify the following startup script:

     #! /bin/bash
     apt-get update
     apt-get install apache2 -y
     a2ensite default-ssl
     a2enmod ssl
     vm_hostname="$(curl -H "Metadata-Flavor:Google" \
     http://metadata.google.internal/computeMetadata/v1/instance/name)"
     echo "Page served from: $vm_hostname" | \
     tee /var/www/html/index.html
     systemctl restart apache2
    
  10. Click Create.

Create a managed instance group

  1. In the Google Cloud console, go to the Compute Engine Instance groups page.

    Go to Instance groups

  2. Click Create Instance Group.

  3. From the options, select New managed instance group (stateless).

  4. For the name of the instance group, enter lb-backend.

  5. In the Instance template list, select the instance template backend-template that you created in the previous step.

  6. In the Location section, select Single zone, and enter the following values:

    • For Region, select us-west1.

    • For Zone, select us-west1-a.

  7. In the Autoscaling section, enter the following values:

    • For Autoscaling mode, select On: add and remove instances to the group.

    • For Minimum number of instances, select 2.

    • For Maximum number of instances, select 3.

  8. In the Port mapping section, click Add port, and enter the following values:

    • For Port name, enter http.

    • For Port number, enter 80.

  9. Click Create.

gcloud

  1. Create an instance template:

    gcloud compute instance-templates create backend-template \
        --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
        --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-backend-subnet \
        --tags=load-balanced-backend \
        --image-family=debian-10 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://metadata.google.internal/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2' \
        --project=SERVICE_PROJECT_ID
    
  2. Create a managed instance group and select the instance template that you created in the preceding step:

    gcloud compute instance-groups managed create lb-backend \
        --zone=us-west1-a \
        --size=2 \
        --template=backend-template \
        --project=SERVICE_PROJECT_ID
    
  3. Add a named port to the instance group:

    gcloud compute instance-groups set-named-ports lb-backend \
        --named-ports=http:80 \
        --zone=us-west1-a \
        --project=SERVICE_PROJECT_ID
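To confirm that the managed instance group created its instances and that the named port is set, you can run the following verification sketch:

```shell
# List the VMs created by the managed instance group.
gcloud compute instance-groups managed list-instances lb-backend \
    --zone=us-west1-a \
    --project=SERVICE_PROJECT_ID

# Confirm the http:80 named port mapping.
gcloud compute instance-groups get-named-ports lb-backend \
    --zone=us-west1-a \
    --project=SERVICE_PROJECT_ID
```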
    

Create a health check

Health checks are tests that confirm the availability of backends. Create a health check that uses the HTTP protocol and probes on port 80. Later, you'll attach this health check to the backend service referenced by the load balancer.

Console

  1. In the Google Cloud console, go to the Compute Engine Health checks page.

    Go to Health checks

  2. Click Create health check.

  3. For the name of the health check, enter lb-health-check.

  4. Set the protocol to HTTP.

  5. Click Create.

gcloud

Create an HTTP health check.

gcloud compute health-checks create http lb-health-check \
  --use-serving-port \
  --project=SERVICE_PROJECT_ID

Reserve the load balancer's IP address

Reserve a global static external IP address that can be assigned to the forwarding rule of the load balancer.

Console

  1. In the Google Cloud console, go to the VPC IP addresses page.

    Go to IP addresses

  2. Click Reserve external static IP address.

  3. For Name, enter lb-ipv4-1.

  4. Set Network Service Tier to Premium.

  5. Set IP version to IPv4.

  6. Set Type to Global.

  7. Click Reserve.

gcloud

Create a global static external IP address.

gcloud compute addresses create lb-ipv4-1 \
  --ip-version=IPV4 \
  --network-tier=PREMIUM \
  --global \
  --project=SERVICE_PROJECT_ID

Set up an SSL certificate resource

For a load balancer that uses HTTPS as the request-and-response protocol, create an SSL certificate resource. We recommend using a Google-managed certificate.

This example assumes that you have created an SSL certificate named lb-ssl-cert. The SSL certificate is attached to the target proxy that you will create in one of the following steps.
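As a sketch, a Google-managed certificate for a domain that you control might be created as follows. The domain `www.example.com` is a placeholder, and the certificate is provisioned only after the domain's DNS records point at the load balancer's IP address:

```shell
# Create a Google-managed SSL certificate (assumes you control this domain).
gcloud compute ssl-certificates create lb-ssl-cert \
    --domains=www.example.com \
    --global \
    --project=SERVICE_PROJECT_ID
```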

Configure the load balancer

This section shows you how to create the following resources for a global external Application Load Balancer:

  • Backend service with a managed instance group as the backend
  • URL map
  • SSL certificate (required only for HTTPS traffic)
  • Target proxy
  • Forwarding rule

In this example, you can use HTTP or HTTPS as the request-and-response protocol between the client and the load balancer. For HTTPS, you need an SSL certificate resource to configure the proxy. We recommend using a Google-managed certificate.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Public facing (external) and click Next.
  5. For Global or single region deployment, select Best for global workloads and click Next.
  6. For Load balancer generation, select Global external Application Load Balancer and click Next.
  7. Click Configure.

Basic configuration

  1. For the load balancer name, enter l7-xlb-shared-vpc.

Configure the load balancer frontend

For HTTP traffic:

  1. Click Frontend configuration.

  2. For the name of the load balancer frontend, enter http-fw-rule.

  3. For Protocol, select HTTP.

  4. Set IP version to IPv4.

  5. For IP address, select lb-ipv4-1, which is the IP address that you reserved earlier.

  6. Set the Port to 80 to allow HTTP traffic.

  7. To complete the frontend configuration, click Done.

  8. Verify that there is a blue check mark next to Frontend configuration before continuing.

For HTTPS traffic:

  1. Click Frontend configuration.

  2. For the name of the load balancer frontend, enter https-fw-rule.

  3. For Protocol, select HTTPS.

  4. Set IP version to IPv4.

  5. For IP address, select lb-ipv4-1, which is the IP address that you reserved earlier.

  6. Set the Port to 443 to allow HTTPS traffic.

  7. In the Certificate list, select the SSL certificate that you created.

  8. To complete the frontend configuration, click Done.

  9. Verify that there is a blue check mark next to Frontend configuration before continuing.

Configure the backend

  1. Click Backend configuration.

  2. In the Backend services and backend buckets menu, click Create a backend service.

  3. For the name of the backend service, enter lb-backend-service.

  4. For Backend type, select Instance group.

  5. Set Protocol to HTTP.

  6. In the Named port field, enter http. This is the same port name that you entered while creating the managed instance group.

  7. To add backends to the backend service, do the following:

    1. In the Backends section, set the Instance group to lb-backend, which is the managed instance group that you created in an earlier step.

    2. For Port numbers, enter 80.

    3. To add the backend, click Done.

  8. To add a health check, in the Health check list, select lb-health-check, which is the health check that you created earlier.

  9. To create the backend service, click Create.

  10. Verify that there is a blue check mark next to Backend configuration before continuing.

Configure the routing rules

  • Click Routing rules. Ensure that lb-backend-service is the default backend service for any unmatched host and any unmatched path.

For information about traffic management, see Set up traffic management.

Review and finalize the configuration

  1. Click Review and finalize.

  2. Review the frontend and backend settings of the load balancer to ensure that it is configured as desired.

  3. Click Create, and then wait for the load balancer to be created.

gcloud

  1. Create a backend service to distribute traffic among backends:

    gcloud compute backend-services create lb-backend-service \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --protocol=HTTP \
        --port-name=http \
        --health-checks=lb-health-check \
        --global \
        --project=SERVICE_PROJECT_ID
    
  2. Add your instance group as the backend to the backend service:

    gcloud compute backend-services add-backend lb-backend-service \
        --instance-group=lb-backend \
        --instance-group-zone=us-west1-a \
        --global \
        --project=SERVICE_PROJECT_ID
    
  3. Create a URL map to route incoming requests to the backend service:

    gcloud compute url-maps create lb-map \
        --default-service=lb-backend-service \
        --global \
        --project=SERVICE_PROJECT_ID
    
  4. Create a target proxy.

    For HTTP traffic, create a target HTTP proxy to route requests to the URL map:

    gcloud compute target-http-proxies create http-proxy \
        --url-map=lb-map \
        --global \
        --project=SERVICE_PROJECT_ID
    

    For HTTPS traffic, create a target HTTPS proxy to route requests to the URL map. The proxy is the part of the load balancer that holds the SSL certificate for an HTTPS load balancer, so you also load your SSL certificate in this step:

    gcloud compute target-https-proxies create https-proxy \
        --url-map=lb-map \
        --ssl-certificates=lb-ssl-cert \
        --global \
        --project=SERVICE_PROJECT_ID
    
  5. Create a forwarding rule.

    For HTTP traffic, create a global forwarding rule to route incoming requests to the target proxy:

    gcloud compute forwarding-rules create http-fw-rule \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --address=lb-ipv4-1 \
        --global \
        --target-http-proxy=http-proxy \
        --ports=80 \
        --project=SERVICE_PROJECT_ID
    

    For HTTPS traffic, create a global forwarding rule to route incoming requests to the target proxy:

    gcloud compute forwarding-rules create https-fw-rule \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --address=lb-ipv4-1 \
        --global \
        --target-https-proxy=https-proxy \
        --ports=443 \
        --project=SERVICE_PROJECT_ID
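After the forwarding rule is created, you can confirm that it uses the reserved address. This is an optional verification sketch:

```shell
# Confirm that the forwarding rule's IP address matches the reserved address.
gcloud compute forwarding-rules describe http-fw-rule \
    --global \
    --project=SERVICE_PROJECT_ID \
    --format="get(IPAddress)"
```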
    

Test the load balancer

When the load balancing service is running, you can send traffic to the forwarding rule and watch the traffic being distributed across the backend instances.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the load balancer that you just created.

  3. Note the load balancer's IP address. This IP address is referred to as LB_IP_ADDRESS in the following steps.

  4. In the Backend section, confirm that the VMs are healthy.

    The Healthy column should be populated, indicating that the VMs are healthy. For example, if two instances are created, you should see 2 of 2 with a green check mark next to it. If not, first try reloading the page; it can take a few minutes for the Google Cloud console to report that the VMs are healthy. If the backends still don't appear healthy after a few minutes, review the firewall configuration and the network tag assigned to your backend VMs.

  5. After the Google Cloud console shows that the backend instances are healthy, you can test your load balancer by pointing your web browser to https://LB_IP_ADDRESS (or http://LB_IP_ADDRESS). Replace LB_IP_ADDRESS with the load balancer's IP address.

  6. If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.

  7. Your browser should render a page with content showing the name of the instance that served the page (for example, Page served from: lb-backend-example-xxxx). If your browser doesn't render this page, review the configuration settings in this guide.

gcloud

Note the IP address that was reserved:

gcloud compute addresses describe lb-ipv4-1 \
    --format="get(address)" \
    --global \
    --project=SERVICE_PROJECT_ID

You can test your load balancer by pointing your web browser to https://LB_IP_ADDRESS (or http://LB_IP_ADDRESS). Replace LB_IP_ADDRESS with the load balancer's IP address.

If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.

Your browser should render a page with minimal information about the backend instance. If your browser doesn't render this page, review the configuration settings in this guide.
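You can also check backend health and exercise the load balancer from the command line. The following is a sketch; replace LB_IP_ADDRESS with the address returned by the previous command:

```shell
# Confirm that the backends report HEALTHY.
gcloud compute backend-services get-health lb-backend-service \
    --global \
    --project=SERVICE_PROJECT_ID

# Send repeated requests; the served hostname should vary across backends.
for i in $(seq 1 10); do
  curl -s http://LB_IP_ADDRESS/
done
```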

What's next