Setting up an external HTTP(S) load balancer with hybrid connectivity

This page illustrates how to deploy an external HTTP(S) load balancer to load-balance traffic to network endpoints that are on-premises or in other public clouds and are reachable via hybrid connectivity.

After you complete these tasks, you can optionally explore enabling additional services (such as Cloud CDN and Google Cloud Armor) and advanced traffic management features.

If you haven't already done so, review the Hybrid load balancing overview to understand the network requirements to set up hybrid load balancing.

Setup overview

The example on this page sets up the following sample deployment:

External HTTP(S) load balancer example for hybrid connectivity.

You must configure hybrid connectivity before you attempt to set up a hybrid load balancing deployment. This topic does not include the hybrid connectivity setup.

Depending on your choice of hybrid connectivity product, either Cloud VPN or Cloud Interconnect (Dedicated or Partner), use the relevant product documentation to configure it.

Permissions

You must have the following permissions to set up hybrid load balancing:

  • On Google Cloud

    • Permission to establish hybrid connectivity between Google Cloud and your on-premises or other cloud environments. For the list of permissions needed, see the relevant Network connectivity product documentation.
  • On your on-premises or other cloud environment

    • Permission to configure network endpoints that allow services on your on-premises or other cloud environments to be reachable from Google Cloud via an IP:Port combination. Contact your environment's network administrator for details.
    • Permission to create firewall rules on your on-premises or other cloud environments to allow Google's health check probes to reach the endpoints.

Additionally, to follow the instructions on this page, you create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.

You should be a project owner or editor, or have the following Compute Engine IAM roles.

  • Create and modify load balancer components: Network Admin
  • Create and modify NEGs: Compute Instance Admin
  • Add and remove firewall rules: Security Admin

Establish hybrid connectivity

Your Google Cloud and on-premises or other cloud environments must be connected through hybrid connectivity, using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router. We recommend you use a high availability connection.

A Cloud Router with global dynamic routing enabled learns routes to your on-premises or other cloud endpoints over BGP and programs them into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.

The Google Cloud VPC network that you use to configure either Cloud Interconnect or Cloud VPN is the same network you use to configure the hybrid load balancing deployment. Ensure that your VPC network's subnet CIDR ranges do not conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
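If your VPC network still uses regional dynamic routing, one way to switch it to global is shown in the following sketch; it assumes the network NETWORK already exists and that changing its routing mode is acceptable for your deployment:

    gcloud compute networks update NETWORK \
        --bgp-routing-mode=global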

For instructions, see the Cloud Interconnect or Cloud VPN documentation.

Do not proceed with the instructions on this page until you have set up hybrid connectivity between your environments.

Set up your on-premises or other cloud environment

Perform the following steps to set up your on-premises or other cloud environment for hybrid load balancing:

  • Configure network endpoints to expose on-premises services to Google Cloud (IP:Port).
  • Configure firewall rules on your on-premises or other cloud environment.
  • Configure Cloud Router to advertise certain required routes to your private environment.

Set up network endpoints

After you have set up hybrid connectivity, you configure one or more network endpoints within your on-premises or other cloud environments that are reachable via Cloud Interconnect or Cloud VPN using an IP:port combination. This IP:port combination will be configured as one or more endpoints for the hybrid connectivity NEG that will be created in Google Cloud later on in this process.

If there are multiple paths to the IP endpoint, routing will follow the behavior described in the Cloud Router overview.
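As an illustration only, the following sketch shows how a Linux host in your on-premises environment might expose such an endpoint. The IP address 10.1.2.3 and port 80 are hypothetical values; the address must be routable from Google Cloud over your Cloud VPN tunnels or Cloud Interconnect VLAN attachments:

    # Hypothetical on-premises endpoint: install a web server and verify that
    # it answers on the IP:port that you will later add to the hybrid NEG.
    sudo apt-get update && sudo apt-get install -y apache2
    echo "Page served from: on-prem-host-1" | sudo tee /var/www/html/index.html
    curl http://10.1.2.3:80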

Set up firewall rules

The following firewall rules must be created on your on-premises or other cloud environment:

  • Ingress allow firewall rules to allow traffic from Google's health-checking probes to reach your endpoints. For external HTTP(S) load balancers, internal HTTP(S) load balancers, TCP proxy load balancers, and SSL proxy load balancers, the ranges to allow are 35.191.0.0/16 and 130.211.0.0/22. For more details, see Probe IP ranges and firewall rules. An example firewall configuration is sketched after this list.
  • Ingress allow firewall rules to allow traffic that is being load-balanced to reach the endpoints.
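For example, on a Linux-based endpoint that uses iptables (your environment's firewall tooling may differ), rules along the following lines would admit the health check probes on a hypothetical service port 80:

    # Allow Google's health check probe ranges to reach the service port.
    sudo iptables -A INPUT -p tcp --dport 80 -s 35.191.0.0/16 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 80 -s 130.211.0.0/22 -j ACCEPT
    # Also allow the load-balanced traffic itself, per the second rule above.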

Configure Cloud Router to advertise the following routes to your on-premises or other cloud environment:

  • The ranges used by Google's health check probes: 35.191.0.0/16 and 130.211.0.0/22.
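One way to advertise these ranges is to use custom route advertisement on the Cloud Router. The following sketch assumes a Cloud Router named ROUTER_NAME in REGION and keeps advertising all subnet routes in addition to the health check ranges:

    gcloud compute routers update ROUTER_NAME \
        --region=REGION \
        --advertisement-mode=custom \
        --set-advertisement-groups=all_subnets \
        --set-advertisement-ranges=35.191.0.0/16,130.211.0.0/22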

Set up Google Cloud environment

For the following steps, make sure you use the same VPC network that was used to configure hybrid connectivity between the environments.

Create firewall rule

In this example, you create the following firewall rule:

  • fw-allow-health-check: An ingress rule, applicable to the Google Cloud instances being load balanced, that allows traffic from the load balancer and Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the backend VMs to which it should apply.

Console

  1. Go to the Firewalls page in the Google Cloud Console.
    Go to the Firewalls page
  2. Click Create firewall rule:
    1. Enter a Name of fw-allow-health-check.
    2. Under Network, select NETWORK.
    3. Under Targets, select Specified target tags.
    4. Populate the Target tags field with allow-health-check.
    5. Set Source filter to IP ranges.
    6. Set Source IP ranges to 130.211.0.0/22 and 35.191.0.0/16.
    7. Under Protocols and ports, select Specified protocols and ports.
    8. Select the checkbox next to tcp and type 80 for the port numbers.
    9. Click Create.

gcloud

  1. Create the fw-allow-health-check rule to allow the load balancer and Google Cloud health checks to communicate with backend instances on TCP port 80.

    Replace NETWORK with the name of your VPC network.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp:80
    

Set up the zonal NEG

For Google Cloud-based backends, we recommend you configure multiple zonal NEGs in the same region where you configured hybrid connectivity.

For this example, we set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION region. First create the VMs. Then create a zonal NEG and add the VMs' network endpoints to the NEG.

Create VMs

Console

  1. Go to the VM instances page in the Google Cloud Console.
    Go to the VM instances page
  2. Click Create instance.
  3. Set the Name to vm-a1.
  4. For the Region, choose REGION, and choose any Zone.
  5. In the Boot disk section, ensure that the boot disk image is set to Debian GNU/Linux 10 (buster). Click Choose to change the image if necessary.
  6. Click Management, security, disks, networking, sole tenancy and make the following changes:

    • Click Networking and add the following Network tags: allow-ssh and allow-health-check.
    • Click Edit under Network interfaces and make the following changes then click Done:
      • Network: NETWORK
      • Subnet: SUBNET
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
    • Click Management. In the Startup script field, copy and paste the following script contents. The script contents are identical for both VMs:

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
      
  7. Click Create.

  8. Repeat these steps to create a second VM, using the following name and zone combination:

    • Name: vm-a2, zone: ZONE

gcloud

Create the VMs by running the following command two times, using these combinations for VM_NAME and ZONE. The script contents are identical for both VMs.

  • VM_NAME of vm-a1 and any ZONE of your choice
  • VM_NAME of vm-a2 and the same ZONE

    gcloud compute instances create VM_NAME \
        --zone=ZONE \
        --image-family=debian-10 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=SUBNET \
        --metadata=startup-script='#! /bin/bash
         apt-get update
         apt-get install apache2 -y
         vm_hostname="$(curl -H "Metadata-Flavor:Google" \
         http://169.254.169.254/computeMetadata/v1/instance/name)"
         echo "Page served from: $vm_hostname" | \
         tee /var/www/html/index.html
         systemctl restart apache2'
    

Create the zonal NEG

gcloud

  1. Create a zonal NEG (with GCE_VM_IP_PORT endpoints) using the gcloud compute network-endpoint-groups create command:

    gcloud compute network-endpoint-groups create GCP_NEG_NAME \
        --network-endpoint-type=GCE_VM_IP_PORT \
        --zone=ZONE \
        --network=NETWORK \
        --subnet=SUBNET \
        [--default-port=DEFAULT_PORT]
    

    You can either specify a DEFAULT_PORT while creating the NEG, or specify a port number for each endpoint in the next step.

  2. Add endpoints to GCP_NEG_NAME.

    gcloud compute network-endpoint-groups update GCP_NEG_NAME \
        --zone=ZONE \
        --add-endpoint='instance=vm-a1,[port=PORT_VM_A1]' \
        --add-endpoint='instance=vm-a2,[port=PORT_VM_A2]'
    

Set up the hybrid connectivity NEG

When creating the NEG, use a ZONE that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the europe-west3-a Google Cloud zone when you create the NEG.

Moreover, if you're using Cloud Interconnect, the ZONE used to create the NEG should be in the same region where the hybrid connectivity Cloud Interconnect VLAN attachment was configured.

For the available regions and zones, see the Compute Engine documentation: Available regions and zones.
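For example, one way to list the zones available in the region that contains your VLAN attachment (REGION, as used elsewhere on this page) is:

    gcloud compute zones list --filter="region:REGION"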

gcloud

  1. Create a hybrid connectivity NEG using the gcloud compute network-endpoint-groups create command.

    gcloud compute network-endpoint-groups create ON_PREM_NEG_NAME \
        --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
        --zone=ZONE \
        --network=NETWORK
    
  2. Add endpoints to ON_PREM_NEG_NAME:

    gcloud compute network-endpoint-groups update ON_PREM_NEG_NAME \
        --zone=ZONE \
        --add-endpoint="ip=ON_PREM_IP_ADDRESS_1,port=PORT_1" \
        --add-endpoint="ip=ON_PREM_IP_ADDRESS_2,port=PORT_2"
    

You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat --add-endpoint as many times as needed.

You can repeat these steps to create multiple hybrid NEGs if needed.
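To confirm that the endpoints were added, you can list the NEG's contents (assuming the same ZONE used when the hybrid NEG was created):

    gcloud compute network-endpoint-groups list-network-endpoints ON_PREM_NEG_NAME \
        --zone=ZONE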

Configure the load balancer

gcloud

  1. Create a global static external IP address to which external clients send traffic.
        gcloud compute addresses create LB_IP_ADDRESS_NAME \
            --global
        
  2. Create a health check for the backends.
      gcloud compute health-checks create http HTTP_HEALTH_CHECK_NAME \
          --use-serving-port
      
  3. Create a backend service for Google Cloud-based backends.
      gcloud compute backend-services create BACKEND_SERVICE_GCP \
          --health-checks=HTTP_HEALTH_CHECK_NAME \
          --global
      
  4. Add the zonal NEG as a backend to the backend service. NEG backends require the RATE balancing mode; replace MAX_RATE_PER_ENDPOINT with the maximum number of HTTP requests per second to send to a single endpoint:
      gcloud compute backend-services add-backend BACKEND_SERVICE_GCP \
          --global \
          --balancing-mode=RATE \
          --max-rate-per-endpoint=MAX_RATE_PER_ENDPOINT \
          --network-endpoint-group=GCP_NEG_NAME \
          --network-endpoint-group-zone=ZONE
  5. Create a backend service for the on-premises backend.
      gcloud compute backend-services create BACKEND_SERVICE_ON_PREM \
          --health-checks=HTTP_HEALTH_CHECK_NAME \
          --global
      
  6. Add the hybrid NEG as a backend to the backend service, again using the RATE balancing mode:
    gcloud compute backend-services add-backend BACKEND_SERVICE_ON_PREM \
        --global \
        --balancing-mode=RATE \
        --max-rate-per-endpoint=MAX_RATE_PER_ENDPOINT \
        --network-endpoint-group=ON_PREM_NEG_NAME \
        --network-endpoint-group-zone=ZONE
    
  7. Create a URL map to route incoming requests to the backend services. For example, the following URL map uses BACKEND_SERVICE_GCP as the default service.
      gcloud compute url-maps create URL_MAP_NAME \
          --default-service BACKEND_SERVICE_GCP
      
    Configure the URL map so that requests are directed to both on-prem and Google Cloud backend services. For example, you can create a path matcher so that requests matching the /on-prem-service path are sent to BACKEND_SERVICE_ON_PREM and all other requests are sent to BACKEND_SERVICE_GCP.
      gcloud compute url-maps add-path-matcher URL_MAP_NAME \
          --default-service BACKEND_SERVICE_GCP \
          --path-matcher-name PATH_MATCHER \
          --path-rules="/on-prem-service=BACKEND_SERVICE_ON_PREM"
     
    You can also direct traffic to specific backend services based on the host component of the HTTP(S) request. For details, see Using URL maps.
  8. Perform this step only if you want to create an HTTPS load balancer. This is not required for HTTP load balancers.
    To create an HTTPS load balancer, you must have an SSL certificate resource to use in the HTTPS target proxy. You can create an SSL certificate resource using either a Google-managed SSL certificate or a self-managed SSL certificate. Using Google-managed certificates is recommended because Google Cloud obtains, manages, and renews these certificates automatically.

    To create a Google-managed certificate, you must have a domain. If you do not have a domain, you can use a self-signed SSL certificate for testing.

    To create a Google-managed SSL certificate resource:
    gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
        --domains DOMAIN
    
    To create a self-managed SSL certificate resource:
    gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
        --certificate CRT_FILE_PATH \
        --private-key KEY_FILE_PATH
    
  9. Create a target HTTP(S) proxy to route requests to your URL map.

    For an HTTP load balancer, create an HTTP target proxy:
    gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
        --url-map=URL_MAP_NAME
    
    For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.
    gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
        --ssl-certificates=SSL_CERTIFICATE_NAME \
        --url-map=URL_MAP_NAME
    
  10. Create a forwarding rule to route incoming requests to the proxy.

    For an HTTP load balancer:
    gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
        --address=LB_IP_ADDRESS_NAME \
        --target-http-proxy=TARGET_HTTP_PROXY_NAME \
        --global \
        --ports=80
    
    For an HTTPS load balancer:
    gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
        --address=LB_IP_ADDRESS_NAME \
        --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
        --global \
        --ports=443
    

Testing the load balancer

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.
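One way to look up that address from the command line (using the LB_IP_ADDRESS_NAME created earlier) is shown below; the Console steps that follow locate the same address:

   gcloud compute addresses describe LB_IP_ADDRESS_NAME \
       --global \
       --format="get(address)"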

  1. Go to the Load balancing page in the Google Cloud Console.
    Go to the Load balancing page
  2. Click on the load balancer you just created.
  3. Note the IP Address of the load balancer.
  4. If you created an HTTP load balancer, you can test your load balancer using a web browser by going to http://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the service you have exposed through the endpoint.

    If you created an HTTPS load balancer, you can test your load balancer by using curl as follows. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the service you have exposed through the endpoint.

   curl -k https://IP_ADDRESS
   

To confirm that the non-Google Cloud endpoints are responding, run:

   curl -k https://IP_ADDRESS/on-prem-service
   

If that does not work and you are using a Google-managed certificate, confirm that your certificate resource's status is ACTIVE. For more information, see Google-managed SSL certificate resource status.
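One way to check the certificate status from the command line (using the SSL_CERTIFICATE_NAME created earlier) is:

   gcloud compute ssl-certificates describe SSL_CERTIFICATE_NAME \
       --global \
       --format="get(managed.status)"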

If you are using a Google-managed certificate, test the domain pointing to the load balancer's IP address. For example:

   curl -s 'https://test.example.com' --resolve test.example.com:443:IP_ADDRESS
   

What's next