Setting up Internal HTTP(S) Load Balancing with Shared VPC

This document shows you two sample configurations for setting up Internal HTTP(S) Load Balancing in a Shared VPC environment:

  • The first example creates all the load balancer components and backends in the service project.
  • The second example creates the load balancer's frontend components and URL map in a service project, while the load balancer's backend service and backends are created in a different service project. This type of deployment, where the URL map references a backend service in another project, is referred to as Cross-project service referencing.

Both examples require the same upfront configuration to grant permissions and set up Shared VPC before you can start creating load balancers.

These are not the only Shared VPC configurations supported by Internal HTTP(S) Load Balancing. For other valid Shared VPC architectures, see Shared VPC architectures.

If you don't want to use a Shared VPC network, see Setting up Internal HTTP(S) Load Balancing.

Before you begin

  1. Read Shared VPC overview.
  2. Read Internal HTTP(S) Load Balancing overview, including the Shared VPC architectures section.

Permissions

Setting up an internal HTTP(S) load balancer for Shared VPC requires some up-front setup and provisioning by an administrator. After the initial setup, a service project owner can either deploy all of the load balancer's components and backends in a single service project, or deploy the load balancer's backend components (backend service and backends) in service projects that can be referenced by a URL map in another service project or in the host project.

This section summarizes the permissions required to follow this guide to set up an internal HTTP(S) load balancer on a Shared VPC network.

Set up Shared VPC

The following roles are required to:

  1. Perform one-off administrative tasks such as setting up the Shared VPC and enabling a host project.
  2. Perform administrative tasks that must be repeated every time you want to onboard a new service project. This includes attaching the service project, provisioning and configuring networking resources, and granting access to the service project administrator.

These tasks must be performed in the Shared VPC host project. We recommend that the Shared VPC Admin also be the owner of the Shared VPC host project. This automatically grants the Network Admin and Security Admin roles.

Task | Required role
Set up Shared VPC, enable host project, and grant access to service project administrators | Shared VPC Admin
Create subnets in the Shared VPC and grant access to service project administrators | Network Admin
Add and remove firewall rules | Security Admin
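
For reference, a minimal sketch of granting the Shared VPC Admin role (roles/compute.xpnAdmin) at the organization level might look like the following; ORG_ID and SHARED_VPC_ADMIN are placeholder values, not names used elsewhere in this guide:

# Grant the Shared VPC Admin role at the organization level (one-time task).
gcloud organizations add-iam-policy-binding ORG_ID \
    --member="user:SHARED_VPC_ADMIN" \
    --role="roles/compute.xpnAdmin"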

Once the subnets have been provisioned, the host project owner must grant the Network User role in the host project to anyone (typically service project administrators or developers) who needs to use these resources.

Task | Required role
Use VPCs and subnets belonging to the host project | Network User

This role can be granted at the project level or on individual subnets. We recommend that you grant the role on individual subnets; granting it at the project level provides access to all current and future subnets in the host project's VPC network.

Deploy and update internal HTTP(S) load balancer and backends

Service project administrators need the following roles in the service project to create load balancing resources and backends. The permissions granted by these roles are granted automatically if you are a service project owner or editor.

Roles granted in the service project
Task | Required role
Create load balancer components | Network Admin
Create instances | Instance Admin
Create and modify SSL certificates | Security Admin
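
For illustration, a hedged sketch of granting two of these roles to a service project administrator follows; SERVICE_PROJECT_ADMIN is a placeholder email, and your role choices may differ:

# Grant Network Admin and Instance Admin in the service project.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
    --member="user:SERVICE_PROJECT_ADMIN" \
    --role="roles/compute.networkAdmin"

gcloud projects add-iam-policy-binding SERVICE_PROJECT_ID \
    --member="user:SERVICE_PROJECT_ADMIN" \
    --role="roles/compute.instanceAdmin"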

Prerequisites

The steps in this section do not need to be performed every time you want to create an internal HTTP(S) load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.

Set up Shared VPC with a host and service project

  1. Set up Shared VPC.
  2. Enable a host project.
  3. Attach a service project.

The rest of these instructions assume that you have already set up Shared VPC. This involves setting up IAM policies for your organization, and designating the host and service projects.

Do not proceed until you have set up Shared VPC and enabled the host and service projects.
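
For reference, a minimal gcloud sketch of these one-time steps, assuming you hold the Shared VPC Admin role:

# Enable the host project for Shared VPC (one-time task).
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID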

Configure the network and subnets in the host project

You need a Shared VPC network with two subnets: one for the load balancer's frontend and backends, and the other for the load balancer's proxies.

This example uses the following network, region, and subnets:

  • Network. The network is named lb-network.

  • Subnet for load balancer frontend and backends. A subnet named lb-frontend-and-backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.

  • Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.

Configure the subnet for the load balancer's frontend and backends

This step does not need to be performed every time you want to create an internal HTTP(S) load balancer. You only need to ensure that the service project has access to a subnet in the Shared VPC network (in addition to the proxy-only subnet).

Cloud console

  1. In the Google Cloud console, go to the VPC networks page.
  2. Click Create VPC network.
  3. For the Name, enter lb-network.
  4. In the Subnets section:
    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: lb-frontend-and-backend-subnet
      • Region: us-west1
      • IP address range: 10.1.2.0/24
    • Click Done.
  5. Click Create.

gcloud

  1. Create a VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network in the us-west1 region:

    gcloud compute networks subnets create lb-frontend-and-backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
    

Configure the proxy-only subnet

The proxy-only subnet is used by all internal HTTP(S) load balancers in the us-west1 region, in the lb-network VPC network. There can only be one active proxy-only subnet per region, per network.

Do not perform this step if there is already a proxy-only subnet reserved for internal HTTP(S) load balancers in the us-west1 region in this network.
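
To check whether one already exists, you can list the network's subnets from the host project and look for a subnet whose purpose is REGIONAL_MANAGED_PROXY; a minimal sketch:

# List subnets in lb-network in us-west1 and show each subnet's purpose.
gcloud compute networks subnets list \
    --network=lb-network \
    --regions=us-west1 \
    --project=HOST_PROJECT_ID \
    --format="table(name,region,purpose,ipCidrRange)"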

Cloud console

  1. In the Google Cloud console, go to the VPC networks page.
  2. Click the name of the Shared VPC network: lb-network.
  3. Click Add subnet.
  4. For the Name, enter proxy-only-subnet.
  5. For the Region, select us-west1.
  6. Set Purpose to Regional Managed Proxy.
  7. For the IP address range, enter 10.129.0.0/23.
  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --region=us-west1 \
  --network=lb-network \
  --range=10.129.0.0/23

Give service project admins access to the backend subnet

Service project administrators require access to the lb-frontend-and-backend-subnet subnet so they can provision the load balancer's backends.

A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who will deploy resources and backends that use the subnet). For instructions, see Service project admins for some subnets.
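
For illustration, a hedged sketch of a subnet-level grant run by a Shared VPC Admin against the host project; SERVICE_PROJECT_ADMIN is a placeholder email:

# Grant the Network User role on the backend subnet only.
gcloud compute networks subnets add-iam-policy-binding lb-frontend-and-backend-subnet \
    --region=us-west1 \
    --member="user:SERVICE_PROJECT_ADMIN" \
    --role="roles/compute.networkUser" \
    --project=HOST_PROJECT_ID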

Configure firewall rules in the host project

This step does not need to be performed every time you want to create an internal HTTP(S) load balancer. This is a one-off step that must be performed by the host project administrator, in the host project.

This example uses the following firewall rules:

  • fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the virtual machines (VMs) to which the firewall rule applies.

  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the instances to which it should apply.

  • fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal HTTP(S) load balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Cloud console

  1. In the Google Cloud console, go to the Firewall rules page.
  2. Click Create firewall rule to create the rule to allow incoming SSH connections:
    • Name: fw-allow-ssh
    • Network: lb-network
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Check tcp and type 22 for the port number.
  3. Click Create.
  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Check tcp and enter 80.
        As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
  5. Click Create.
  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
    • Name: fw-allow-proxies
    • Network: lb-network
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Check tcp and type 80, 443, 8080 for the port numbers.
  7. Click Create.

gcloud

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  2. Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend \
        --rules=tcp
    
  3. Create the fw-allow-proxies rule to allow the internal HTTP(S) load balancer's proxies to connect to your backends.

    gcloud compute firewall-rules create fw-allow-proxies \
      --network=lb-network \
      --action=allow \
      --direction=ingress \
      --source-ranges=10.129.0.0/23 \
      --target-tags=load-balanced-backend \
      --rules=tcp:80,tcp:443,tcp:8080
    

Configure the internal HTTP(S) load balancer in the service project

This example creates an internal HTTP(S) load balancer where all the load balancing components (forwarding rule, target proxy, URL map, and backend service) and backends are created in the service project.

The internal HTTP(S) load balancer's networking resources such as the proxy-only subnet and the subnet for the backend instances are created in the host project. The firewall rules for the backend instances are also created in the host project.

Internal HTTP(S) Load Balancing on Shared VPC

This section shows you how to set up the load balancer and backends. These steps should be carried out by the service project administrator (or a developer operating within the service project) and do not require involvement from the host project administrator. The steps in this section are largely similar to the standard steps to set up Internal HTTP(S) Load Balancing.

The example on this page explicitly sets a reserved internal IP address for the internal HTTP(S) load balancer's forwarding rule, rather than allowing an ephemeral internal IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
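
The gcloud steps that follow set the IP address directly on the forwarding rule. If you prefer to create a standalone reserved address first (as the console flow does with l7-ilb-ip), a sketch along these lines could be used; you can then pass either the reservation name or the literal IP to the forwarding rule's --address flag:

# Reserve 10.1.2.99 in the Shared VPC backend subnet.
gcloud compute addresses create l7-ilb-ip \
    --region=us-west1 \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --addresses=10.1.2.99 \
    --project SERVICE_PROJECT_ID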

Create a managed instance group

This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example internal HTTP(S) load balancer. Traffic from clients is load balanced to these backend servers. For demonstration purposes, backends serve their own hostnames.

Cloud console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.


    1. Click Create instance template.
    2. For Name, enter l7-ilb-backend-template.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 9 (stretch). These instructions use commands that are only available on Debian, such as apt-get. If you need to change the Boot disk, click Change.
      1. Under Operating System, select Debian.
      2. Under Version, select one of the available Debian images such as Debian GNU/Linux 9 (stretch).
      3. Click Select.
    4. Under Management, security, disks, networking, sole tenancy, on the Management tab, insert the following script into the Startup script field.

       #! /bin/bash
       sudo apt-get update
       sudo apt-get install apache2 -y
       sudo a2ensite default-ssl
       sudo a2enmod ssl
       sudo vm_hostname="$(curl -H "Metadata-Flavor:Google" \
       http://169.254.169.254/computeMetadata/v1/instance/name)"
       sudo echo "Page served from: $vm_hostname" | \
       tee /var/www/html/index.html
       sudo systemctl restart apache2

    5. Under Networking, select Networks shared with me (from host project: HOST_PROJECT_ID).

    6. Select lb-network as the Network, and for the Subnet, select lb-frontend-and-backend-subnet.

    7. Add the following network tags: allow-ssh and load-balanced-backend.

    8. Click Create.

  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.


    1. Click Create instance group.
    2. Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For Name, enter l7-ilb-backend-example.
    4. Under Location, select Single zone.
    5. For Region, select us-west1.
    6. For Zone, select us-west1-a.
    7. Under Instance template, select l7-ilb-backend-template.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    9. Click Create.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template that runs an HTTP server with the gcloud compute instance-templates create command.

    gcloud compute instance-templates create l7-ilb-backend-template \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo a2ensite default-ssl
    sudo a2enmod ssl
    sudo vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    sudo echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sudo systemctl restart apache2' \
    --project SERVICE_PROJECT_ID
    
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create l7-ilb-backend-example \
        --zone=us-west1-a \
        --size=2 \
        --template=l7-ilb-backend-template \
        --project SERVICE_PROJECT_ID
    

Configure the load balancer

This section shows you how to create the following internal HTTP(S) load balancer resources:

  • HTTP health check
  • Backend service with a managed instance group as the backend
  • A URL map
  • SSL certificate (required only for HTTPS)
  • Target proxy
  • Forwarding rule

Proxy availability

Depending on the number of service projects that are using the same Shared VPC network, you might reach quotas or limits more quickly than in the network deployment model where each Google Cloud project hosts its own network.

For example, sometimes Google Cloud regions don't have enough proxy capacity for a new internal HTTP(S) load balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:

  • Wait for the capacity issue to be resolved.
  • Contact your Google Cloud sales team to increase these limits.

Cloud console

Select a load balancer type

  1. In the Google Cloud console, go to the Load balancing page.
  2. Under HTTP(S) Load Balancing, click Start configuration.
  3. Under Internet facing or internal only, select Only between my VMs. This setting means that the load balancer is internal.
  4. Click Continue.

Prepare the load balancer

  1. For the Name of the load balancer, enter l7-ilb-shared-vpc.
  2. For the Region, select us-west1.
  3. For the Network, select Networks shared with me (from host project: HOST_PROJECT_ID).
    1. From the dropdown, select lb-network.
      If you see a Proxy-only subnet required in Shared VPC network warning, confirm that the host project admin has created the proxy-only-subnet in the us-west1 region in the lb-network Shared VPC network. Load balancer creation will succeed even if you do not have permission to view the proxy-only subnet on this page.
  4. Keep the window open to continue.

Configure the backend service

  1. Click Backend configuration.
  2. From the Create or select backend services menu, select Create a backend service.
  3. Set the Name of the backend service to l7-ilb-backend-service.
  4. Set the Backend type to Instance groups.
  5. In the New backend section:
    1. Set the Instance group to l7-ilb-backend-example.
    2. Set the Port numbers to 80.
    3. Set the Balancing mode to Utilization.
    4. Click Done.
  6. In the Health check section, choose Create a health check with the following parameters:
    1. Name: l7-ilb-basic-check
    2. Protocol: HTTP
    3. Port: 80
  7. Click Save and Continue.
  8. Click Create.

Configure the URL map

Click Routing rules. Ensure that the l7-ilb-backend-service is the only backend service for any unmatched host and any unmatched path.

For information about traffic management, see Setting up traffic management.

Configure the frontend

For HTTP:

  1. Click Frontend configuration.
  2. Click Add frontend IP and port.
  3. Set the Name to l7-ilb-forwarding-rule.
  4. Set the Protocol to HTTP.
  5. Set the Subnetwork to lb-frontend-and-backend-subnet.
    Don't select the proxy-only subnet for the frontend even if it is an option in the dropdown list.
  6. Under Internal IP, select Reserve a static internal IP address.
  7. In the panel that appears, provide the following details:
    1. Name: l7-ilb-ip
    2. In the Static IP address section, select Let me choose.
    3. In the Custom IP address section, enter 10.1.2.99.
    4. Click Reserve.
  8. Set the Port to 80.
  9. Click Done.

For HTTPS:

If you are using HTTPS between the client and the load balancer, you need one or more SSL certificate resources to configure the proxy. For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal HTTP(S) load balancers.

  1. Click Frontend configuration.
  2. Click Add frontend IP and port.
  3. In the Name field, enter l7-ilb-forwarding-rule.
  4. In the Protocol field, select HTTPS (includes HTTP/2).
  5. Set the Subnetwork to lb-frontend-and-backend-subnet.
    Don't select the proxy-only subnet for the frontend even if it is an option in the dropdown list.
  6. Under Internal IP, select Reserve a static internal IP address.
  7. In the panel that appears provide the following details:
    1. Name: l7-ilb-ip
    2. In the Static IP address section, select Let me choose.
    3. In the Custom IP address section, enter 10.1.2.99.
    4. Click Reserve.
  8. Ensure that the Port is set to 443, to allow HTTPS traffic.
  9. Click the Certificate drop-down list.
    1. If you already have a self-managed SSL certificate resource you want to use as the primary SSL certificate, select it from the drop-down menu.
    2. Otherwise, select Create a new certificate.
      1. Fill in a Name of l7-ilb-cert.
      2. In the appropriate fields upload your PEM-formatted files:
        • Public key certificate
        • Certificate chain
        • Private key
      3. Click Create.
  10. To add certificate resources in addition to the primary SSL certificate resource:
    1. Click Add certificate.
    2. Select a certificate from the Certificates list or click Create a new certificate and follow the instructions above.
  11. Click Done.

Review and finalize the configuration

Click Create.

gcloud

  1. Define the HTTP health check with the gcloud compute health-checks create http command.

    gcloud compute health-checks create http l7-ilb-basic-check \
       --region=us-west1 \
       --use-serving-port \
       --project SERVICE_PROJECT_ID
    
  2. Define the backend service with the gcloud compute backend-services create command.

    gcloud compute backend-services create l7-ilb-backend-service \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=l7-ilb-basic-check \
      --health-checks-region=us-west1 \
      --region=us-west1 \
      --project SERVICE_PROJECT_ID
    
  3. Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend l7-ilb-backend-service \
      --balancing-mode=UTILIZATION \
      --instance-group=l7-ilb-backend-example \
      --instance-group-zone=us-west1-a \
      --region=us-west1 \
      --project SERVICE_PROJECT_ID
    
  4. Create the URL map with the gcloud compute url-maps create command.

    gcloud compute url-maps create l7-ilb-map \
      --default-service=l7-ilb-backend-service \
      --region=us-west1 \
      --project SERVICE_PROJECT_ID
    
  5. Create the target proxy.

    For HTTP:

    For an internal HTTP load balancer, create the target proxy with the gcloud compute target-http-proxies create command.

    gcloud compute target-http-proxies create l7-ilb-proxy \
      --url-map=l7-ilb-map \
      --url-map-region=us-west1 \
      --region=us-west1 \
      --project SERVICE_PROJECT_ID
    

    For HTTPS:

    For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal HTTP(S) load balancers.

    Assign your filepaths to variable names.

     export LB_CERT=PATH_TO_CERTIFICATE

     export LB_PRIVATE_KEY=PATH_TO_PRIVATE_KEY
    

    Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

     gcloud compute ssl-certificates create l7-ilb-cert \
       --certificate=$LB_CERT \
       --private-key=$LB_PRIVATE_KEY \
       --region=us-west1 \
       --project SERVICE_PROJECT_ID
    

    Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

    gcloud compute target-https-proxies create l7-ilb-proxy \
      --url-map=l7-ilb-map \
      --region=us-west1 \
      --ssl-certificates=l7-ilb-cert \
      --project SERVICE_PROJECT_ID
    
  6. Create the forwarding rule.

    For custom networks, you must reference the subnet in the forwarding rule.

    For the forwarding rule's IP address, use the lb-frontend-and-backend-subnet. If you try to use the proxy-only subnet, forwarding rule creation fails.

    For HTTP:

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
      --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
      --address=10.1.2.99 \
      --ports=80 \
      --region=us-west1 \
      --target-http-proxy=l7-ilb-proxy \
      --target-http-proxy-region=us-west1 \
      --project SERVICE_PROJECT_ID
    

    For HTTPS:

    Create the forwarding rule with the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
      --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
      --address=10.1.2.99 \
      --ports=443 \
      --region=us-west1 \
      --target-https-proxy=l7-ilb-proxy \
      --target-https-proxy-region=us-west1 \
      --project SERVICE_PROJECT_ID
    

Testing

Create a VM instance to test connectivity

Clients can be located in either the host project or any connected service project. In this example, you test that the load balancer is working by deploying a client VM in a service project. The client must use the same Shared VPC network and be in the same region as the load balancer.

gcloud compute instances create l7-ilb-client-us-west1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --zone=us-west1-a \
    --tags=allow-ssh \
    --project SERVICE_PROJECT_ID

Send traffic to the load balancer

Log in to the instance that you just created, and test that HTTP(S) services on the backends are reachable through the internal HTTP(S) load balancer's forwarding rule IP address and that traffic is being load balanced across the backend instances.

  1. Connect to the client instance with SSH.

     gcloud compute ssh l7-ilb-client-us-west1-a \
       --zone=us-west1-a \
       --project SERVICE_PROJECT_ID
    
  2. Verify that the IP address is serving its hostname.

    curl 10.1.2.99
    

    For HTTPS testing, replace curl with the following:

     curl -k -s 'https://10.1.2.99:443'
    

     The -k flag causes curl to skip certificate validation. For a way to test with validation enabled, see the sketch after this list.

  3. Run 100 requests and confirm that they are load balanced.

    For HTTP:

    {
    RESULTS=
    for i in {1..100}
    do
        RESULTS="$RESULTS:$(curl --silent 10.1.2.99)"
    done
    echo "***"
    echo "*** Results of load-balancing to 10.1.2.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
    }
    

    For HTTPS:

    {
    RESULTS=
    for i in {1..100}
    do
        RESULTS="$RESULTS:$(curl -k -s 'https://10.1.2.99:443')"
    done
    echo "***"
    echo "*** Results of load-balancing to 10.1.2.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
    }
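
The -k flag in the HTTPS tests above disables certificate validation. If you want curl to validate your self-managed certificate instead, a hedged sketch like the following could work; ca-cert.pem and test.example.com are hypothetical stand-ins for your CA file and the hostname in the certificate:

# Map the certificate's hostname to the load balancer IP without DNS,
# and validate the chain against the CA file (hypothetical names).
curl -s --cacert ca-cert.pem \
    --resolve test.example.com:443:10.1.2.99 \
    https://test.example.com/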
    

Configure cross-project service referencing

The previous example on this page shows you how to set up a Shared VPC deployment where all the load balancer components and its backends are created in the service project.

Internal HTTP(S) Load Balancing also lets you configure Shared VPC deployments where a URL map in one host or service project can reference backend services (and backends) located across multiple service projects in Shared VPC environments. This is referred to as cross-project service referencing.

You can use the steps in this section as a reference to configure any of the supported combinations listed here:

  • Forwarding rule, target proxy, and URL map in the host project, with the backend service in a service project
  • Forwarding rule, target proxy, and URL map in a service project, with the backend service in another service project

Setup requirements

This example configures a sample load balancer with its frontend and backend in two different service projects.

If you haven't already done so, you must complete all of the prerequisite steps to set up Shared VPC and configure the network, subnets, and firewall rules required for this example. For instructions, see the following sections:

  • Set up Shared VPC with a host and service project
  • Configure the network and subnets in the host project
  • Configure firewall rules in the host project

Load balancer frontend and backend in different service projects

Create the backends and backend service in service project B

All the commands in this section must be executed in service project B.

  1. Create a VM instance template with an HTTP server with the gcloud compute instance-templates create command.

    gcloud compute instance-templates create BACKEND_IG_TEMPLATE \
        --region=us-west1 \
        --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
        --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-9 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" \
        http://169.254.169.254/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | \
        tee /var/www/html/index.html
        systemctl restart apache2' \
        --project SERVICE_PROJECT_B_ID
    
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create BACKEND_MIG \
        --zone=us-west1-a \
        --size=2 \
        --template=BACKEND_IG_TEMPLATE \
        --project=SERVICE_PROJECT_B_ID
    
  3. Define the HTTP health check with the gcloud compute health-checks create http command.

    gcloud compute health-checks create http HTTP_HEALTH_CHECK_NAME \
       --region=us-west1 \
       --use-serving-port \
       --project=SERVICE_PROJECT_B_ID
    
  4. Define the backend service with the gcloud compute backend-services create command.

    gcloud compute backend-services create BACKEND_SERVICE_NAME \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=HTTP_HEALTH_CHECK_NAME \
      --health-checks-region=us-west1 \
      --region=us-west1 \
      --project=SERVICE_PROJECT_B_ID
    
  5. Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
      --balancing-mode=UTILIZATION \
      --instance-group=BACKEND_MIG \
      --instance-group-zone=us-west1-a \
      --region=us-west1 \
      --project=SERVICE_PROJECT_B_ID
    

Grant permissions to the Load Balancer Admin to use the backend service

If you want load balancers to reference backend services in other service projects, the Load Balancer Admin must have the compute.backendServices.use permission. To grant this permission, you can use the predefined IAM role roles/compute.loadBalancerServiceUser. This role must be granted by a Service Project Admin, and it can be applied at the project level or at the level of an individual backend service.

In this example, a Service Project Admin from project B must run one of the following commands to grant the compute.backendServices.use permission to a Load Balancer Admin from service project A.

At the project level:

gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser"

At the backend service level, Service Project Admins can use either of the following commands to grant the role.

Either a project-level binding with a condition that limits it to a single backend service:

gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --condition='expression=resource.name=="projects/SERVICE_PROJECT_B_ID/regions/us-west1/backend-services/BACKEND_SERVICE_NAME",title=Shared VPC condition'

Or a resource-level binding on the backend service itself:

gcloud beta compute backend-services add-iam-policy-binding BACKEND_SERVICE_NAME \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --project=SERVICE_PROJECT_B_ID \
    --region=us-west1

You can also configure IAM permissions so that they only apply to a subset of regional backend services by using conditions and specifying condition attributes.
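
For instance, a hedged sketch of a condition that covers every backend service in us-west1 rather than a single named service could use the resource.name.startsWith function:

# Scope the grant to all us-west1 backend services in project B.
gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --condition='expression=resource.name.startsWith("projects/SERVICE_PROJECT_B_ID/regions/us-west1/backend-services/"),title=us-west1 backend services'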

Create the load balancer frontend and URL map in service project A

All the commands in this section must be executed in service project A.

  1. Create the URL map and set the default service to the backend service created in service project B.

    gcloud compute url-maps create URL_MAP_NAME \
        --default-service=projects/SERVICE_PROJECT_B_ID/regions/us-west1/backendServices/BACKEND_SERVICE_NAME \
        --region=us-west1 \
        --project=SERVICE_PROJECT_A_ID
    

    URL map creation fails if the load balancer administrator does not have the compute.backendServices.use permission for the backend service in service project B.

  2. Create the target proxy.

    For HTTP:

    gcloud compute target-http-proxies create HTTP_TARGET_PROXY_NAME \
      --url-map=URL_MAP_NAME \
      --url-map-region=us-west1 \
      --region=us-west1 \
      --project SERVICE_PROJECT_A_ID
    

    For HTTPS:

    Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

    gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
      --certificate=PATH_TO_CERTIFICATE \
      --private-key=PATH_TO_PRIVATE_KEY \
      --region=us-west1 \
      --project=SERVICE_PROJECT_A_ID
    

    Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

    gcloud compute target-https-proxies create HTTPS_TARGET_PROXY_NAME \
      --url-map=URL_MAP_NAME \
      --region=us-west1 \
      --ssl-certificates=SSL_CERTIFICATE_NAME \
      --project=SERVICE_PROJECT_A_ID
    
  3. Create the forwarding rule. For the forwarding rule's IP address, use the lb-frontend-and-backend-subnet. If you try to use the proxy-only subnet, forwarding rule creation fails.

    For HTTP:

    gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
      --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
      --address=10.1.2.99 \
      --ports=80 \
      --region=us-west1 \
      --target-http-proxy=HTTP_TARGET_PROXY_NAME \
      --target-http-proxy-region=us-west1 \
      --project=SERVICE_PROJECT_A_ID
    

    For HTTPS:

    gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
      --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
      --address=10.1.2.99 \
      --ports=443 \
      --region=us-west1 \
      --target-https-proxy=HTTPS_TARGET_PROXY_NAME \
      --target-https-proxy-region=us-west1 \
      --project=SERVICE_PROJECT_A_ID
    
  4. To test the load balancer, use the steps described in the Testing section earlier on this page.

What's next