Internal TCP/UDP Load Balancing and Connected Networks

Overview

This page describes scenarios for accessing an internal TCP/UDP load balancer in your VPC network from a connected network. Before reviewing the information on this page, you should already be familiar with the Internal TCP/UDP Load Balancing Concepts.

Using VPC Network Peering

When you use VPC Network Peering to connect your VPC network to another network, GCP shares subnet routes between the networks. The subnet routes allow traffic from the peer network to reach internal TCP/UDP load balancers in your network. Access is allowed if:

  • Client VMs in the peer network are located in the same region as your internal TCP/UDP load balancer.
  • Ingress firewall rules allow traffic from client VMs in the peer network. GCP firewall rules are not shared among networks when using VPC Network Peering.

You cannot selectively share only some of your internal TCP/UDP load balancers using VPC Network Peering. However, you can limit access to the load balancer's backend VM instances with firewall rules.
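
For example, assuming the load balancer's network is named lb-network (as in the examples later on this page), an ingress allow rule like the following sketch would let clients in a hypothetical peer-network range of 10.8.0.0/24 reach backend VMs tagged lb-backends on TCP port 80. The rule name, source range, target tag, and port are illustrative; substitute your own values:

    gcloud compute firewall-rules create allow-peer-clients \
        --network=lb-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=10.8.0.0/24 \
        --target-tags=lb-backends

Because firewall rules are not shared across the peering, a rule like this must exist in the load balancer's own network; narrowing --source-ranges is how you limit which peer clients can reach the backends.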

Using Cloud VPN and Cloud Interconnect

You can access an internal TCP/UDP load balancer from a peer network that is connected through a Cloud VPN tunnel or VLAN attachment for a Dedicated Interconnect or Partner Interconnect. The peer network can be an on-premises network, another GCP VPC network, or a virtual network hosted by a different cloud provider.

Access through Cloud VPN tunnels

You can access an internal TCP/UDP load balancer through a Cloud VPN tunnel when all of the following conditions are met in the internal TCP/UDP load balancer's network:

  • Both the Cloud VPN gateway and tunnel are located in the same region as the internal TCP/UDP load balancer's components.

  • Appropriate routes provide paths for egress traffic back to the clients from which the load balancer traffic originated. Cloud VPN tunnels using dynamic routing rely on a Cloud Router to manage custom dynamic routes that serve this purpose. Cloud VPN tunnels using static routing require custom static routes, which are created automatically if you create tunnels using the GCP Console.

  • Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backends to send responses back to the on-premises clients.

  • In the peer network, at least one Cloud VPN tunnel exists with appropriate routes whose destinations include the subnet where the internal TCP/UDP load balancer is defined and whose next hop is the VPN tunnel. Specifically:

    • If the peer network is another GCP VPC network, its Cloud VPN tunnel and gateway can be located in any region.

    • Clients in the peer network can connect to a Cloud VPN gateway located in any region, provided that the gateway's tunnel has a route to the network in which the load balancer resides. For Cloud VPN tunnels using dynamic routing, make sure that the VPC network uses global dynamic routing mode so that the custom dynamic routes learned by the Cloud Router for the tunnel are available to VMs in all regions (see the example command after this list).
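
For example, if the peer network is another GCP VPC network named peer-network (an illustrative name), the following sketch switches it to global dynamic routing mode so that client VMs in any region can use the routes learned by the tunnel's Cloud Router:

    gcloud compute networks update peer-network \
        --bgp-routing-mode=global

In regional dynamic routing mode, routes learned by a Cloud Router are available only to resources in that Cloud Router's region, which is why global mode matters when clients are spread across regions.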

The following diagram highlights key concepts when accessing an internal TCP/UDP load balancer by way of a Cloud VPN gateway and its associated tunnel. Cloud VPN securely connects your on-premises network to your GCP VPC network through an IPsec VPN connection.

Diagram: Internal TCP/UDP Load Balancing and Cloud VPN.

Note the following configuration elements associated with this example:

  • In the lb-network, a Cloud VPN tunnel using dynamic routing has been configured. The VPN tunnel, gateway, and Cloud Router are all located in us-west1, the same region where the internal TCP/UDP load balancer's components are located.
  • Ingress allow firewall rules have been configured to apply to the backend VMs in both instance groups, ig-a and ig-c, so that they can receive traffic from IP addresses in the VPC network (10.1.2.0/24) and from the on-premises network (192.168.1.0/24); a sketch of such a rule follows this list. No egress deny firewall rules have been created, so the implied allow egress rule applies.
  • Packets sent from clients in the on-premises network, including from 192.168.1.0/24, to the IP address of the internal TCP/UDP load balancer, 10.1.2.99, are delivered directly to a healthy backend VM, such as vm-a2, according to the configured session affinity. See the Internal TCP/UDP Load Balancing concepts page for more information about session affinity.
  • Replies sent from the backend VMs (such as vm-a2) are delivered through the VPN tunnel to the on-premises clients.
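
The ingress allow rule described above might look like the following sketch. The rule name, the lb-backends target tag, and tcp:80 are illustrative; use the tags and ports that your backends actually serve:

    gcloud compute firewall-rules create fw-allow-lb-and-onprem \
        --network=lb-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80 \
        --source-ranges=10.1.2.0/24,192.168.1.0/24 \
        --target-tags=lb-backends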

Access through Cloud Interconnect

You can access an internal TCP/UDP load balancer from an on-premises peer network connected to the load balancer's VPC network when all of the following conditions are met in the load balancer's network:

  • Both the VLAN attachment and its Cloud Router are located in the same region as the internal TCP/UDP load balancer's components.

  • On-premises routers share appropriate routes that provide return paths for responses from backend VMs to the on-premises clients. VLAN attachments for both Dedicated Interconnect and Partner Interconnect use Cloud Routers to manage the routes they learn from on-premises routers.

  • Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backend VMs to send responses.

Routing and firewall rules in the on-premises peer network must allow communication to and from the backend VMs in your VPC network. Clients in the peer network that access the VLAN attachment and Cloud Router, through which the peer network reaches the internal TCP/UDP load balancer, don't need to be located in the same region as the load balancer itself. The same concepts for accessing the load balancer through Cloud VPN tunnels apply.
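
To illustrate the first condition, the following sketch creates a Cloud Router and a VLAN attachment for Dedicated Interconnect in the same region as the load balancer's components (us-west1 in the earlier example). The router name, attachment name, interconnect name, and ASN are placeholders:

    gcloud compute routers create onprem-router \
        --network=lb-network \
        --region=us-west1 \
        --asn=65001

    gcloud compute interconnects attachments dedicated create onprem-attachment \
        --region=us-west1 \
        --router=onprem-router \
        --interconnect=my-interconnect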

Multiple egress paths

The previous sections showed a single tunnel and a single VLAN attachment. In production environments, you'll probably be using multiple tunnels or attachments for redundancy. This section discusses the requirements and recommendations when using multiple Cloud VPN tunnels or VLAN attachments for Cloud Interconnect to connect your VPC network to a peer network.

In the following diagram, two Cloud VPN tunnels connect lb-network to an on-premises network. Though Cloud VPN tunnels are used here, the same principles apply to Cloud Interconnect.

Diagram: Internal TCP/UDP Load Balancing and Multiple Cloud VPN Tunnels.

If you configure each tunnel or each VLAN attachment in the same region as the internal TCP/UDP load balancer, as described in this document, you can provide multiple simultaneously accessible paths for traffic to and from your load balancer. Multiple paths can provide additional bandwidth or can serve as standby paths for redundancy.

Keep in mind the following points:

  • Cloud VPN tunnels are always associated with a specific Cloud VPN gateway; you can't create a tunnel without a gateway. Similarly, VLAN attachments are always associated with a specific Cloud Interconnect connection. See VLAN attachments and VPC networks.

  • If the on-premises network has two routes with the same priority, each with a destination of 10.1.2.0/24 and a next hop corresponding to a different VPN tunnel in the same region as the internal TCP/UDP load balancer, traffic can be sent from the on-premises network (192.168.1.0/24) to the load balancer using ECMP.

  • After packets are delivered to the VPC network, the internal TCP/UDP load balancer distributes them to backend VMs according to the configured session affinity.

  • For ingress traffic to the internal TCP/UDP load balancer, the VPN tunnels must be in the same region as the load balancer.

  • If the lb-network has two routes, each with the destination 192.168.1.0/24 and a next hop corresponding to a different VPN tunnel, responses from backend VMs can be delivered over either tunnel according to the priority of the routes in the network. If different route priorities are used, one tunnel can serve as a backup for the other. If the same priorities are used, responses are delivered using ECMP. A sketch of such routes follows this list.

  • Replies sent from the backend VMs (such as vm-a2) are delivered directly to the on-premises clients, through the appropriate tunnel. From the perspective of lb-network, if routes or VPN tunnels change, traffic might egress using a different tunnel. This might result in TCP session resets if an in-progress connection is interrupted.
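
The two return routes described above might be created as in the following sketch, which gives both routes equal priority so that responses use ECMP. The route names, tunnel names, and region are illustrative:

    gcloud compute routes create route-onprem-via-tunnel-1 \
        --network=lb-network \
        --destination-range=192.168.1.0/24 \
        --next-hop-vpn-tunnel=tunnel-1 \
        --next-hop-vpn-tunnel-region=us-west1 \
        --priority=1000

    gcloud compute routes create route-onprem-via-tunnel-2 \
        --network=lb-network \
        --destination-range=192.168.1.0/24 \
        --next-hop-vpn-tunnel=tunnel-2 \
        --next-hop-vpn-tunnel-region=us-west1 \
        --priority=1000

Because a lower priority value means a more preferred route, raising the --priority value of one route would turn its tunnel into a backup path instead of an ECMP peer.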

What's next
