Set up an internal regional TCP proxy load balancer with hybrid connectivity


The internal regional TCP proxy load balancer is a proxy-based regional Layer 4 load balancer that enables you to run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same Virtual Private Cloud (VPC) network or clients connected to your VPC network. If you want to make the service available to clients in other VPC networks, you can use Private Service Connect to publish the service.

This page describes how to configure an internal regional TCP proxy load balancer to load balance traffic to backends on-premises or in other cloud environments that are connected by using hybrid connectivity. Configuring hybrid connectivity to connect your networks to Google Cloud is not in scope for this page.

Overview

In this example, you use the load balancer to distribute TCP traffic across backends located on-premises or in other cloud environments. You configure the following deployment:

Internal regional TCP proxy load balancer example configuration with hybrid NEG backends.

The internal regional TCP proxy load balancer is a regional load balancer. All load balancer components (the hybrid NEG backends, backend service, target proxy, and forwarding rule) must be in the same region.

Permissions

You must have the following permissions to set up hybrid load balancing:

  • On Google Cloud

    • Permission to establish hybrid connectivity between Google Cloud and your on-premises or other cloud environments. For the list of permissions needed, see the relevant Network Connectivity product documentation.
    • Additionally, to follow the instructions on this page, you need permissions to create a hybrid connectivity NEG and the load balancer. The Compute Load Balancer Admin role (roles/compute.loadBalancerAdmin) contains the permission required to perform the tasks described in this guide.
  • On your on-premises or other non-Google Cloud cloud environment

    • Permission to configure network endpoints that allow services on your on-premises or other cloud environments to be reachable from Google Cloud by using an IP:Port combination. Contact your environment's network administrator for details.
    • Permission to create firewall rules on your on-premises or other cloud environments to allow Google's health check probes to reach the endpoints.

To follow the instructions on this page, you should be either a project owner or editor, or you should have the following Compute Engine IAM roles:

Task | Required role
Create and modify load balancer components | Network Admin
Create and modify NEGs | Compute Instance Admin
Add and remove firewall rules | Security Admin
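If you need one of these roles granted, a project owner can add the binding with gcloud. The following is a sketch with placeholder values; PROJECT_ID and USER_EMAIL are not defined elsewhere on this page:

```shell
# Grant the Network Admin role in the project.
# PROJECT_ID and USER_EMAIL are placeholders for your project and user.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"
```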

Establish hybrid connectivity

Your Google Cloud and on-premises or other cloud environments must be connected through hybrid connectivity, using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router. We recommend you use a high availability connection.

A Cloud Router enabled with global dynamic routing learns about your remote endpoints through BGP and programs them into your Google Cloud VPC network. Regional dynamic routing and static routes are not supported.

The Google Cloud VPC network that you use to configure either Cloud Interconnect or Cloud VPN is the same network you use to configure the hybrid load balancing deployment. Ensure that your VPC network's subnet CIDR ranges do not conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.

For instructions, see the Cloud Interconnect and Cloud VPN documentation.

Do not proceed with the instructions on this page until you have set up hybrid connectivity between your environments.

Set up your environment that is outside Google Cloud

Perform the following steps to set up your on-premises or other cloud environment for hybrid load balancing:

  • Configure network endpoints to expose on-premises services to Google Cloud (IP:Port).
  • Configure firewall rules on your on-premises or other cloud environment.
  • Configure Cloud Router to advertise certain required routes to your private environment.

Set up network endpoints

After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises or other cloud environments that are reachable through Cloud Interconnect or Cloud VPN by using an IP:port combination. Later in this process, you configure this IP:port combination as one or more endpoints of the hybrid connectivity NEG that you create in Google Cloud.

If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.

Set up firewall rules

The following firewall rules must be created on your on-premises or other cloud environment:

  • Create an ingress allow firewall rule to allow traffic from Google's health-checking probes to your endpoints. The source IP address ranges to be allowed are 35.191.0.0/16 and 130.211.0.0/22. For more details, see Probe IP ranges and firewall rules.

    Currently, health check probes for hybrid NEGs originate from Google's centralized health checking mechanism. If you cannot allow traffic that originates from the Google health check ranges to reach your hybrid endpoints and would prefer to have the health check probes originate from private IP addresses instead, speak to your Google account representative to get your project allowlisted for distributed Envoy health checks.

  • Create an ingress allow firewall rule to allow traffic that is being load-balanced to reach the endpoints.

  • Create an ingress allow firewall rule to allow traffic from the region's proxy-only subnet to reach the endpoints.

Configure Cloud Router to advertise the following custom IP ranges to your on-premises or other cloud environment:

  • The ranges used by Google's health check probes: 35.191.0.0/16 and 130.211.0.0/22.
  • The range of the region's proxy-only subnet.

Set up your Google Cloud environment

For the following steps, make sure you use the same VPC network (called NETWORK in this procedure) that was used to configure hybrid connectivity between the environments. You can select any subnet from this network to reserve the load balancer's IP address and create the load balancer. This subnet is referred to as LB_SUBNET in this procedure.

Additionally, make sure the region used (called REGION in this procedure) is the same as that used to create the Cloud VPN tunnel or Cloud Interconnect VLAN attachment.

Configure the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

The proxy-only subnet is used by all Envoy-based regional load balancers in the REGION region of the NETWORK VPC network.

There can only be one active proxy-only subnet per region, per VPC network. You can skip this step if there's already a proxy-only subnet in this region.
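To check whether a proxy-only subnet already exists in the region, you can list subnets with that purpose; for example:

```shell
# List proxy-only subnets for this network in the target region.
gcloud compute networks subnets list \
    --network=NETWORK \
    --filter="purpose=REGIONAL_MANAGED_PROXY AND region:REGION"
```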

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.
    Go to VPC networks
  2. Go to the network that was used to configure hybrid connectivity between the environments.
  3. Click Add subnet.
  4. Enter a Name: PROXY_ONLY_SUBNET_NAME.
  5. Select a Region: REGION.
  6. Set Purpose to Regional Managed Proxy.
  7. Enter an IP address range: PROXY_ONLY_SUBNET_RANGE.
  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create PROXY_ONLY_SUBNET_NAME \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=NETWORK \
    --range=PROXY_ONLY_SUBNET_RANGE

Reserve the load balancer's IP address

By default, one IP address is used for each forwarding rule. You can reserve a shared IP address, which lets you use the same IP address with multiple forwarding rules. However, if you want to publish the load balancer by using Private Service Connect, do not use a shared IP address for the forwarding rule.

Console

You can reserve a standalone internal IP address using the Google Cloud console.

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the network that was used to configure hybrid connectivity between the environments. In this case, NETWORK.
  3. Click Static internal IP addresses and then click Reserve static address.
  4. Enter a Name: LB_IP_ADDRESS.
  5. Select a Subnet: LB_SUBNET.
  6. If you want to specify which IP address to reserve, under Static IP address, select Let me choose, and then fill in a Custom IP address. Otherwise, the system automatically assigns an IP address in the subnet for you.
  7. If you want to use this IP address with multiple forwarding rules, under Purpose, choose Shared.
  8. Click Reserve to finish the process.

gcloud

Reserve a regional internal IP address for the load balancer's forwarding rule.

gcloud compute addresses create LB_IP_ADDRESS \
    --region=REGION \
    --subnet=LB_SUBNET

If you want to use the same IP address with multiple forwarding rules, specify --purpose=SHARED_LOADBALANCER_VIP.

Create firewall rules

In this example, you create the following firewall rules:

  • fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the load balancer and Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the backend VMs to which it should apply.
  • fw-allow-ssh: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
  • fw-allow-proxy-only-subnet: An ingress rule that allows connections from the proxy-only subnet to reach the backends.

Console

  1. In the Google Cloud console, go to the Firewalls page.
    Go to Firewalls
  2. Click Create firewall rule to create the rule to allow traffic from health check probes:
    1. Enter a Name of fw-allow-health-check.
    2. Under Network, select the network that was used to configure hybrid connectivity between the environments. In this case, NETWORK.
    3. Under Targets, select Specified target tags.
    4. Populate the Target tags field with allow-health-check.
    5. Set Source filter to IPv4 ranges.
    6. Set Source IPv4 ranges to 130.211.0.0/22 and 35.191.0.0/16.
    7. Under Protocols and ports, select Specified protocols and ports.
    8. Select the TCP checkbox and then enter 80 for the port number.
    9. Click Create.
  3. Click Create firewall rule again to create the rule to allow incoming SSH connections:
    1. Name: fw-allow-ssh
    2. Network: NETWORK
    3. Priority: 1000
    4. Direction of traffic: ingress
    5. Action on match: allow
    6. Targets: Specified target tags
    7. Target tags: allow-ssh
    8. Source filter: IPv4 ranges
    9. Source IPv4 ranges: 0.0.0.0/0
    10. Protocols and ports: Choose Specified protocols and ports, and then enter tcp:22.
    11. Click Create.
  4. Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet:
    1. Name: fw-allow-proxy-only-subnet
    2. Network: NETWORK
    3. Priority: 1000
    4. Direction of traffic: ingress
    5. Action on match: allow
    6. Targets: Specified target tags
    7. Target tags: allow-proxy-only-subnet
    8. Source filter: IPv4 ranges
    9. Source IPv4 ranges: PROXY_ONLY_SUBNET_RANGE
    10. Protocols and ports: Choose Specified protocols and ports, and then enter tcp:80.
    11. Click Create.

gcloud

  1. Create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the backend instances on TCP port 80:

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp:80
    
  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  3. Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80:

    gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-proxy-only-subnet \
        --source-ranges=PROXY_ONLY_SUBNET_RANGE \
        --rules=tcp:80
    

Set up the hybrid connectivity NEG

When creating the NEG, use a ZONE that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the europe-west3-a Google Cloud zone when you create the NEG.

Moreover, the ZONE used to create the NEG should be in the same region where the Cloud VPN tunnel or Cloud Interconnect VLAN attachment was configured for hybrid connectivity.

For the available regions and zones, see the Compute Engine documentation: Available regions and zones.

Console

To create a hybrid connectivity NEG:

  1. Go to the Network Endpoint Groups page in the Google Cloud console.
    Go to Network endpoint groups
  2. Click Create network endpoint group.
  3. Enter a Name for the hybrid NEG, referred to as HYBRID_NEG_NAME in this procedure.
  4. Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).
  5. Select the Network: NETWORK.
  6. Select the Subnet: LB_SUBNET.
  7. Select the Zone: HYBRID_NEG_ZONE.
  8. Enter the Default port.
  9. Click Create.

Add endpoints to the hybrid connectivity NEG:

  1. Go to the Network Endpoint Groups page in the Google Cloud console.
    Go to the Network Endpoint Groups page
  2. Click the Name of the network endpoint group created in the previous step (HYBRID_NEG_NAME). You see the Network endpoint group details page.
  3. In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
  4. Enter the IP address of the new network endpoint.
  5. Select the Port type.
    1. If you select Default, the endpoint uses the default port for all endpoints in the network endpoint group.
    2. If you select Custom, you can enter a different Port number for the endpoint to use.
  6. To add more endpoints, click Add network endpoint and repeat the previous steps.
  7. After you add all the non-Google Cloud endpoints, click Create.

gcloud

  1. Create a hybrid connectivity NEG using the gcloud compute network-endpoint-groups create command.

    gcloud compute network-endpoint-groups create HYBRID_NEG_NAME \
       --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
       --zone=HYBRID_NEG_ZONE \
       --network=NETWORK
    
  2. Add the on-premises IP:Port endpoint to the hybrid NEG:

    gcloud compute network-endpoint-groups update HYBRID_NEG_NAME \
        --zone=HYBRID_NEG_ZONE \
        --add-endpoint="ip=ENDPOINT_IP_ADDRESS,port=ENDPOINT_PORT"
    

You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat --add-endpoint as many times as needed.

You can repeat these steps to create multiple hybrid NEGs if needed.
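To confirm that the endpoints were added as expected, you can list the endpoints currently attached to the NEG:

```shell
# Show the IP:port endpoints currently attached to the hybrid NEG.
gcloud compute network-endpoint-groups list-network-endpoints HYBRID_NEG_NAME \
    --zone=HYBRID_NEG_ZONE
```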

Configure the load balancer

Console

Start the configuration

  1. Go to the Load balancing page in the Google Cloud console.
    Go to the Load balancing page
  2. Click Create load balancer.
  3. Under TCP load balancing, click Start configuration.
  4. For Internet facing or internal only, select Only between my VMs.
  5. For Multiple regions or single region, select Single region.
  6. For Load balancer type, select Proxy.
  7. Click Continue.
  8. Enter a Name for the load balancer.
  9. Select the Region: REGION.
  10. Select the Network: NETWORK.

Reserve a proxy-only subnet

To reserve a proxy-only subnet:

  1. Click Reserve subnet.
  2. Enter the Name: PROXY_ONLY_SUBNET_NAME.
  3. Enter an IP address range: PROXY_ONLY_SUBNET_RANGE.
  4. Click Add.

Backend configuration

  1. Click Backend configuration.
  2. For Backend type, select Hybrid connectivity network endpoint group (Zonal).
  3. For Protocol, select TCP.
  4. Under New backend, select the hybrid NEG created previously: HYBRID_NEG_NAME. Or, you can click Create a network endpoint group to create the hybrid NEG now. For guidance on configuring the NEG, see Set up the hybrid NEG.
  5. Retain the remaining default values and click Done.
  6. Configure the health check:
    1. Under Health check, select Create a health check.
    2. Enter a Name for the health check.
    3. For Protocol, select TCP.
    4. For Port, enter 80.
  7. Retain the remaining default values and click Save.
  8. In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.

Frontend configuration

  1. Click Frontend configuration.
  2. Enter a Name for the forwarding rule.
  3. For Subnetwork, select LB_SUBNET.
  4. For IP address, select LB_IP_ADDRESS.
  5. For Port number, enter any port number from 1-65535. The forwarding rule only forwards packets with a matching destination port.
  6. Enable Proxy Protocol only if it works with the service running on your on-premises or other cloud endpoints. For example, the PROXY protocol doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.
  7. Click Done.
  8. In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.

Review and finalize

  1. Click Review and finalize.
  2. Double-check your settings.
  3. Click Create.

gcloud

  1. Create a regional health check for the backends.

    gcloud compute health-checks create tcp TCP_HEALTH_CHECK_NAME \
        --region=REGION \
        --use-serving-port
    
  2. Create a backend service.

    gcloud compute backend-services create BACKEND_SERVICE_NAME \
       --load-balancing-scheme=INTERNAL_MANAGED \
       --protocol=TCP \
       --region=REGION \
       --health-checks=TCP_HEALTH_CHECK_NAME \
       --health-checks-region=REGION
    
  3. Add the hybrid NEG backend to the backend service.

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
       --network-endpoint-group=HYBRID_NEG_NAME \
       --network-endpoint-group-zone=HYBRID_NEG_ZONE \
       --region=REGION \
       --balancing-mode=CONNECTION \
       --max-connections=MAX_CONNECTIONS
    

    For MAX_CONNECTIONS, enter the maximum concurrent connections that the backend should handle.

  4. Create the target TCP proxy.

    gcloud compute target-tcp-proxies create TARGET_TCP_PROXY_NAME \
       --backend-service=BACKEND_SERVICE_NAME \
       --region=REGION
    
  5. Create the forwarding rule.

    Create the forwarding rule using the gcloud compute forwarding-rules create command.

    Replace FWD_RULE_PORT with a single port number from 1-65535. The forwarding rule only forwards packets with a matching destination port.

    gcloud compute forwarding-rules create FORWARDING_RULE \
       --load-balancing-scheme=INTERNAL_MANAGED \
       --network=NETWORK \
       --subnet=LB_SUBNET \
       --address=LB_IP_ADDRESS \
       --ports=FWD_RULE_PORT \
       --region=REGION \
       --target-tcp-proxy=TARGET_TCP_PROXY_NAME \
       --target-tcp-proxy-region=REGION
    
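After the load balancer is created, you can check whether the hybrid endpoints are passing their health checks. Endpoints typically report as unhealthy until the firewall rules described earlier on this page are in place in your on-premises or other cloud environment:

```shell
# Report the health status of each endpoint in the backend service.
gcloud compute backend-services get-health BACKEND_SERVICE_NAME \
    --region=REGION
```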

Test the load balancer

To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.

Create a client VM

Create a client VM (client-vm) in the same region as the load balancer.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set Name to client-vm.

  4. Set Zone to CLIENT_VM_ZONE.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: NETWORK
      • Subnet: LB_SUBNET
  7. Click Create.

gcloud

The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. In this example, the client VM uses the load balancer's subnet, LB_SUBNET.

gcloud compute instances create client-vm \
    --zone=CLIENT_VM_ZONE \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=LB_SUBNET

Send traffic to the load balancer

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.

  1. Connect via SSH to the client instance.

    gcloud compute ssh client-vm \
      --zone=CLIENT_VM_ZONE
    
  2. Verify that the load balancer is serving backend hostnames as expected.

    1. Use the compute addresses describe command to view the load balancer's IP address:

      gcloud compute addresses describe LB_IP_ADDRESS \
        --region=REGION
      

      Make a note of the IP address.

    2. Send traffic to the load balancer on the IP address and port specified when creating the load balancer forwarding rule. Testing whether the hybrid NEG backends are responding to requests depends on the service running on the non-Google Cloud endpoints.
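What a meaningful test looks like depends on that service. As a sketch, if the hybrid endpoints serve HTTP you could use curl, and for an arbitrary TCP service netcat can at least confirm that the forwarded port accepts connections. LB_IP is a placeholder for the address you noted in the previous step:

```shell
# If the hybrid endpoints serve HTTP (placeholder address and port):
curl http://LB_IP:FWD_RULE_PORT/

# For any TCP service, check that the port accepts connections:
nc -zv LB_IP FWD_RULE_PORT
```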

Optional: Publish the service by using Private Service Connect

An internal regional TCP proxy load balancer with hybrid connectivity lets you make a service that is hosted in on-premises or other cloud environments available to clients in your VPC network.

If you want to make the hybrid service available in other VPC networks, you can use Private Service Connect to publish the service. By placing a service attachment in front of your internal regional TCP proxy load balancer, you can let clients in other VPC networks reach the hybrid services running in on-premises or other cloud environments.

Using Private Service Connect to publish a hybrid service (click to enlarge).

What's next