Internal TCP/UDP Load Balancing and connected networks

This page describes scenarios for accessing an internal load balancer in your Virtual Private Cloud (VPC) network from a connected network. Before reviewing the information on this page, you should already be familiar with the concepts in the VPC Network Peering documentation.

Using VPC Network Peering

When you use VPC Network Peering to connect your VPC network to another network, Google Cloud shares subnet routes between the networks. The subnet routes allow traffic from the peer network to reach internal load balancers in your network. Access is possible when the following conditions are met:

  • For internal TCP/UDP load balancers, you must create ingress firewall rules to allow traffic from client VMs in the peer network (see the example firewall rule after this list). Google Cloud firewall rules aren't shared among networks when using VPC Network Peering.
  • Client virtual machine (VM) instances in the peer network must be located in the same region as your internal load balancer, unless (for internal TCP/UDP load balancers only) you configure global access. With global access configured, client VM instances from any region of the peered VPC network can access your internal TCP/UDP load balancer. Global access isn't supported for Internal HTTP(S) Load Balancing.

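For example, a minimal ingress allow rule might look like the following gcloud sketch. The rule name, network name, port, and source range are placeholders rather than values defined on this page; replace them with the protocols and ports your load balancer accepts and with the CIDR ranges of the client subnets in the peer network.

  # Allow traffic from client VMs in the peer network to reach the backend VMs.
  # All names, the port, and the source range are placeholders.
  gcloud compute firewall-rules create fw-allow-peer-clients \
      --network=lb-network \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp:80 \
      --source-ranges=10.2.0.0/16
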
You cannot selectively share only some internal TCP/UDP load balancers or internal HTTP(S) load balancers using VPC Network Peering. All internal load balancers are shared automatically. You can limit access to the load balancer's backends in the following ways:

  • For internal TCP/UDP load balancers, you can use ingress firewall rules applicable to backend VM instances.

  • For internal HTTP(S) load balancers, you can configure the backend VMs or endpoints to control access using HTTP headers (for example, X-Forwarded-For).

Using Cloud VPN and Cloud Interconnect

You can access an internal load balancer from a peer network that is connected through a Cloud VPN tunnel or VLAN attachment for a Dedicated Interconnect connection or Partner Interconnect. The peer network can be an on-premises network, another Google Cloud VPC network, or a virtual network hosted by a different cloud provider.

Access through Cloud VPN tunnels

You can access an internal load balancer through a Cloud VPN tunnel when all of the following conditions are met.

In the internal load balancer's network

  • Both the Cloud VPN gateway and the tunnel are located in the same region as the internal load balancer's components (unless, for internal TCP/UDP load balancers only, you configure global access).

  • Routes provide paths for egress traffic back to the peer (client) network. If you're using Cloud VPN tunnels with dynamic routing, consider the dynamic routing mode of the load balancer's VPC network. The dynamic routing mode determines which custom dynamic routes are available to the backends (see the example after this list).

  • For internal TCP/UDP load balancers, you've configured firewall rules so that on-premises clients can communicate with the load balancer's backend VMs.

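One way to confirm which return routes to the peer (client) network are in place is to inspect the Cloud Router that manages the tunnel's BGP sessions; its status output includes the dynamic routes it has learned. This is a sketch, and the router name vpn-router and the region us-west1 are assumptions, not values from this page:

  # Show the Cloud Router's status, including the dynamic routes learned over
  # its BGP sessions (assumed router name and region).
  gcloud compute routers get-status vpn-router \
      --region=us-west1
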
In the peer network

The peer network must have at least one Cloud VPN tunnel with routes to the subnet where the internal load balancer is defined. For a VPC peer network, see the example after the following list.

If the peer network is another Google Cloud VPC network:

  • The peer network's Cloud VPN gateway can be located in any region.

  • For Cloud VPN tunnels that use dynamic routing, the dynamic routing mode of the VPC network determines which routes are available to clients in each region. To provide a consistent set of custom dynamic routes to clients in all regions, use global dynamic routing mode.

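If the peer network is a VPC network, one way to verify the requirement above is to list its routes and look for a destination that covers the load balancer's subnet. This is a sketch; the network name peer-network is an assumption, and 10.1.2.0/24 is the example load balancer subnet used later on this page:

  # List routes in the peer VPC network; look for a subnet or custom route
  # whose destination range covers the load balancer's subnet, for example
  # 10.1.2.0/24. The network name is a placeholder.
  gcloud compute routes list \
      --filter="network:peer-network"
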
The following diagram highlights key concepts when accessing an internal load balancer by way of a Cloud VPN gateway and its associated tunnel. Cloud VPN securely connects your on-premises network to your Google Cloud VPC network using Cloud VPN tunnels.

Diagram: Internal TCP/UDP Load Balancing and Cloud VPN

Note the following configuration elements associated with this example:

  • In the lb-network, a Cloud VPN tunnel that uses dynamic routing has been configured. The VPN tunnel, gateway, and Cloud Router are all located in us-west1, the same region where the internal load balancer's components are located.
  • Ingress allow firewall rules have been configured to apply to the backend VMs in the instance groups ig-a and ig-c so that they can receive traffic from source IP addresses in the VPC network (10.1.2.0/24) and in the on-premises network (192.168.1.0/24). No egress deny firewall rules have been created, so the implied allow egress rule applies.
  • Packets sent from clients in the on-premises network (for example, from 192.168.1.0/24) to the IP address of the internal load balancer, 10.1.2.99, are delivered directly to a healthy backend VM, such as vm-a2, according to the configured session affinity.
  • Replies sent from the backend VMs (such as vm-a2) are delivered through the VPN tunnel to the on-premises clients.

To troubleshoot Cloud VPN, see Cloud VPN troubleshooting.

Access through Cloud Interconnect

You can access an internal load balancer from an on-premises peer network that is connected to the load balancer's VPC network when all the following conditions are met in the internal load balancer's network:

  • Both the VLAN attachment and its Cloud Router are located in the same region as the load balancer's components, unless, for internal TCP/UDP load balancers only, you configure global access (see the example after this list).

  • On-premises routers share appropriate routes that provide return paths for responses from backend VMs to the on-premises clients. Interconnect attachments (VLANs) for both Dedicated Interconnect and Partner Interconnect use Cloud Routers. The set of custom dynamic routes available to the backends depends on the dynamic routing mode of the load balancer's VPC network.

  • For internal TCP/UDP load balancers, you've configured firewall rules so that on-premises clients can communicate with the load balancer's backend VMs.

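As a quick check for the first condition, you can describe the VLAN attachment to confirm its region and the Cloud Router it uses. This is a sketch; the attachment name my-attachment and the region us-west1 are assumptions, not values from this page:

  # Show the VLAN attachment's configuration, including its region and router
  # (assumed attachment name and region).
  gcloud compute interconnects attachments describe my-attachment \
      --region=us-west1
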
In the on-premises network, routes whose destinations include the subnet where the internal load balancer is defined must be available through at least one VLAN attachment. Egress firewall rules in the on-premises network must be appropriately configured as well.

Global access for Internal TCP/UDP Load Balancing

When you configure global access for Internal TCP/UDP Load Balancing (see the example after the following list), the following resources can be located in any region:

  • Cloud Routers
  • Cloud VPN gateways and tunnels
  • Cloud Interconnect attachments (VLANs)

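Global access is a setting on the load balancer's internal forwarding rule. The following sketch assumes an existing forwarding rule named ilb-forwarding-rule in us-east1 (the frontend region used in the diagram); the rule name is a placeholder:

  # Allow clients in any region, including clients reached through tunnels or
  # VLAN attachments in any region, to connect to the frontend.
  gcloud compute forwarding-rules update ilb-forwarding-rule \
      --region=us-east1 \
      --allow-global-access
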
In the diagram:

  • The Cloud Router is in the europe-west1 region.
  • The internal TCP/UDP load balancer's frontend and backends are in the us-east1 region.
  • The Cloud Router peers with the on-premises VPN router.
  • The Border Gateway Protocol (BGP) peering session can be established through Cloud VPN or through a Cloud Interconnect VLAN attachment (Dedicated Interconnect or Partner Interconnect).

Diagram: Internal TCP/UDP Load Balancing with global access

The VPC network's dynamic routing mode is set to global to enable the Cloud Router in europe-west1 to advertise the subnet routes for subnets in any region of the internal TCP/UDP load balancer's VPC network.

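As a sketch of that setting, assuming the network is the lb-network used in the earlier examples (the diagram itself doesn't name the network):

  # Let Cloud Routers in any region, including the one in europe-west1,
  # advertise subnet routes from all regions of the network.
  gcloud compute networks update lb-network \
      --bgp-routing-mode=global
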
Multiple egress paths

In production environments, you should use multiple Cloud VPN tunnels or Cloud Interconnect attachments (VLANs) for redundancy. This section discusses requirements when using multiple tunnels or VLANs.

In the following diagram, two Cloud VPN tunnels connect lb-network to an on-premises network. Although Cloud VPN tunnels are used here, the same principles apply to Cloud Interconnect.

Diagram: Internal TCP/UDP Load Balancing and multiple Cloud VPN tunnels

You must configure each tunnel or each Cloud Interconnect attachment (VLAN) in the same region as the internal load balancer (unless, for Internal TCP/UDP Load Balancing, you've enabled global access). Multiple tunnels or VLANs can provide additional bandwidth or can serve as standby paths for redundancy.

Keep in mind the following points:

  • If the on-premises network has two routes with the same priorities, each with a destination of 10.1.2.0/24 and a next hop corresponding to a different VPN tunnel in the same region as the internal load balancer, traffic can be sent from the on-premises network (192.168.1.0/24) to the load balancer by using equal-cost multipath (ECMP).
  • After packets are delivered to the VPC network, the internal load balancer distributes them to backend VMs according to the configured session affinity.
  • If the lb-network has two routes, each with the destination 192.168.1.0/24 and a next hop corresponding to a different VPN tunnel, responses from backend VMs can be delivered over either tunnel according to the priority of the routes in the network. If different route priorities are used, one tunnel can serve as a backup for the other. If the same priorities are used, responses are delivered by using ECMP (see the example after this list).
  • Replies sent from the backend VMs (such as vm-a2) are delivered directly to the on-premises clients through the appropriate tunnel. From the perspective of lb-network, if routes or VPN tunnels change, traffic might egress by using a different tunnel. This might result in TCP session resets if an in-progress connection is interrupted.

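To make the return-path behavior described above concrete, here is a sketch of two equal-priority static routes in lb-network, each pointing at a different tunnel. It assumes Classic VPN tunnels with static routing; with dynamic routing, the equivalent routes are learned over BGP rather than created manually. The route and tunnel names are placeholders.

  # Two routes with the same destination and the same priority but different
  # next-hop tunnels: replies to on-premises clients are balanced across the
  # tunnels by using ECMP. To make one tunnel a standby instead, give its
  # route a larger (lower-precedence) priority value.
  gcloud compute routes create route-to-onprem-via-tunnel-1 \
      --network=lb-network \
      --destination-range=192.168.1.0/24 \
      --next-hop-vpn-tunnel=tunnel-1 \
      --next-hop-vpn-tunnel-region=us-west1 \
      --priority=1000

  gcloud compute routes create route-to-onprem-via-tunnel-2 \
      --network=lb-network \
      --destination-range=192.168.1.0/24 \
      --next-hop-vpn-tunnel=tunnel-2 \
      --next-hop-vpn-tunnel-region=us-west1 \
      --priority=1000
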
What's next