This page describes scenarios for accessing an internal TCP/UDP load balancer in your Virtual Private Cloud (VPC) network from a connected network. Before reviewing the information on this page, you should already be familiar with the concepts in the Internal TCP/UDP Load Balancing overview.
Using VPC Network Peering
When you use VPC Network Peering to connect your VPC network to another network, Google Cloud shares subnet routes between the networks. The subnet routes allow traffic from the peer network to reach internal TCP/UDP load balancers in your network. Access is allowed if the following is true:
- Client virtual machine (VM) instances in the peer network are located in the same region as your internal TCP/UDP load balancer—unless you configure global access (beta). With global access configured, client VM instances from any region of the peered VPC network can access your internal TCP/UDP load balancer.
- Ingress firewall rules allow traffic from client VMs in the peer network. Google Cloud firewall rules are not shared among networks when using VPC Network Peering.
You cannot selectively share only some of your internal TCP/UDP load balancers by using VPC Network Peering. However, you can limit access to the load balancer's backend VM instances by using firewall rules.
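As a sketch, the peering requirements above translate into two pieces of configuration: a firewall rule that admits traffic from the peered network, and (optionally) global access on the load balancer's forwarding rule. The network name, rule names, region, and source range below are hypothetical placeholders, not values from this page.

```shell
# Allow ingress from client VMs in the peered network; firewall rules
# are not shared across a peering, so this rule must exist in your network.
# (lb-network, 10.20.0.0/24, and the resource names are placeholders.)
gcloud compute firewall-rules create allow-from-peer \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=10.20.0.0/24

# Optionally enable global access so that client VMs in any region of the
# peered network can reach the internal load balancer.
gcloud compute forwarding-rules update int-lb-forwarding-rule \
    --region=us-west1 \
    --allow-global-access
```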
Using Cloud VPN and Cloud Interconnect
You can access an internal TCP/UDP load balancer from a peer network that is connected through a Cloud VPN tunnel or interconnect attachment (VLAN) for a Dedicated Interconnect connection or Partner Interconnect connection. The peer network can be an on-premises network, another Google Cloud VPC network, or a virtual network hosted by a different cloud provider.
Access through Cloud VPN tunnels
You can access an internal TCP/UDP load balancer through a Cloud VPN tunnel when all of the following conditions are met in the internal TCP/UDP load balancer's network:
- Both the Cloud VPN gateway and tunnel are located in the same region as the internal TCP/UDP load balancer's components.
- Appropriate routes provide paths for egress traffic back to the clients from which the load balancer traffic originated. Cloud VPN tunnels that use dynamic routing rely on a Cloud Router to manage custom dynamic routes that serve this purpose. Cloud VPN tunnels that use static routing require custom static routes, which are created automatically if you create tunnels by using the Google Cloud Console.
- Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backends to send responses back to the on-premises clients.
- In the peer network, at least one Cloud VPN tunnel exists with appropriate routes whose destinations include the subnet where the internal TCP/UDP load balancer is defined and whose next hop is the VPN tunnel. Specifically:
  - If the peer network is another Google Cloud VPC network, its Cloud VPN tunnel and gateway can be located in any region.
  - Clients in the peer network can connect to a Cloud VPN gateway located in any region, provided that there is a route for the Cloud VPN tunnel on that gateway to the network in which the load balancer resides. For Cloud VPN tunnels that use dynamic routing, make sure that your VPC network uses global dynamic routing mode so that the custom dynamic routes learned by the Cloud Router for the tunnel are available to VMs in all regions.
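The global dynamic routing requirement in the last condition can be met with a single command; `lb-network` is a placeholder network name.

```shell
# Set the VPC network's dynamic routing mode to global so that custom
# dynamic routes learned by the Cloud Router are available to VMs in
# all regions, not only the Cloud Router's region.
gcloud compute networks update lb-network --bgp-routing-mode=global
```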
The following diagram highlights key concepts when accessing an internal TCP/UDP load balancer by way of a Cloud VPN gateway and its associated tunnel. Cloud VPN securely connects your on-premises network to your Google Cloud VPC network through an IPsec VPN connection.
Note the following configuration elements associated with this example:
- In the `lb-network`, a Cloud VPN tunnel that uses dynamic routing has been configured. The VPN tunnel, gateway, and Cloud Router are all located in `us-west1`, the same region where the internal TCP/UDP load balancer's components are located.
- Ingress allow firewall rules have been configured to apply to the backend VMs in the instance group `ig-c` so that they can receive traffic from IP addresses in the VPC network and from the on-premises network, `192.168.1.0/24`. No egress deny firewall rules have been created, so the implied allow egress rule applies.
- Packets sent from clients in the on-premises network, including from `192.168.1.0/24`, to the IP address of the internal TCP/UDP load balancer, `10.1.2.99`, are delivered directly to a healthy backend VM, such as `vm-a2`, according to the configured session affinity.
- Replies sent from the backend VMs (such as `vm-a2`) are delivered through the VPN tunnel to the on-premises clients.
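The ingress allow rule in this example might look like the following command. The network name and address ranges come from the example above; the rule name is a hypothetical placeholder.

```shell
# Allow traffic to the backend VMs from the VPC network's subnet and
# from the on-premises network 192.168.1.0/24.
gcloud compute firewall-rules create fw-allow-lb-and-onprem \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp,udp,icmp \
    --source-ranges=10.1.2.0/24,192.168.1.0/24
```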
To troubleshoot Cloud VPN, see Cloud VPN troubleshooting.
Access through Cloud Interconnect
You can access an internal TCP/UDP load balancer from an on-premises peer network that is connected to the load balancer's VPC network when all the following conditions are met in the internal TCP/UDP load balancer's network:
- Both the interconnect attachment (VLAN) and its Cloud Router are located in the same region as the load balancer's components.
- On-premises routers share appropriate routes that provide return paths for responses from backend VMs to the on-premises clients. Interconnect attachments (VLANs) for both Dedicated Interconnect and Partner Interconnect use Cloud Routers to manage the routes they learn from on-premises routers.
- Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backend VMs to send responses.
- Routing and firewall rules in the on-premises peer network allow communication to and from the backend VMs in your VPC network.

On-premises clients that reach the load balancer through the interconnect attachment (VLAN) and Cloud Router don't need to be located in the same region as the load balancer. The same concepts for accessing the load balancer through Cloud VPN tunnels apply.
When you configure global access, the Cloud Router can be in a region different from the load balancer's region.
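The first condition above, a Cloud Router and attachment in the load balancer's region, might be sketched as follows. All resource names and the ASN are placeholder values, and `us-west1` stands in for the load balancer's region.

```shell
# Cloud Router in the same region as the load balancer's components
# (or a different region if global access is configured).
gcloud compute routers create onprem-router \
    --network=lb-network \
    --region=us-west1 \
    --asn=65001

# VLAN attachment for a Dedicated Interconnect connection, using that router.
gcloud compute interconnects attachments dedicated create onprem-attachment \
    --interconnect=my-interconnect \
    --router=onprem-router \
    --region=us-west1
```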
In the following diagram, the Cloud Router is in the `europe-west1` region, while the load balancer's frontend and backends are in a different region. The Cloud Router peers with the on-premises VPN router. The Border Gateway Protocol (BGP) peering session can be established through Cloud VPN or through Cloud Interconnect (Dedicated Interconnect or Partner Interconnect). The VPC network's dynamic routing mode is set to `global` to enable the Cloud Router in `europe-west1` to advertise the subnet routes for subnets in any region of the load balancer's VPC network.
Multiple egress paths
The previous sections showed a single tunnel and a single interconnect attachment (VLAN). In production environments, you'll probably be using multiple tunnels or attachments for redundancy. This section discusses the requirements and recommendations for connecting your VPC network to a peer network when using multiple Cloud VPN tunnels or interconnect attachments (VLANs) for Cloud Interconnect.
In the following diagram, two Cloud VPN tunnels connect your VPC network to an on-premises network. Although Cloud VPN tunnels are used here, the same principles apply to Cloud Interconnect.
If you configure each tunnel or each interconnect attachment (VLAN) in the same region as the internal TCP/UDP load balancer (as described on this page), you can provide multiple simultaneously accessible paths for traffic to and from your load balancer. Multiple paths can provide additional bandwidth or can serve as standby paths for redundancy.
Keep in mind the following points:
- Cloud VPN tunnels are always associated with a specific Cloud VPN gateway. You can't create a tunnel without a gateway. Interconnect attachments (VLANs) are also associated with Cloud Interconnect connections. For more information, see interconnect attachments (VLANs) and VPC networks.
- If the on-premises network has two routes with the same priorities, each with a destination of `10.1.2.0/24` and a next hop corresponding to a different VPN tunnel in the same region as the internal TCP/UDP load balancer, traffic can be sent from the on-premises network (`192.168.1.0/24`) to the load balancer by using equal-cost multipath (ECMP).
- After packets are delivered to the VPC network, the internal TCP/UDP load balancer distributes them to backend VMs according to the configured session affinity.
- For ingress traffic to the internal TCP/UDP load balancer, the VPN tunnels must be in the same region as the load balancer.
- If the `lb-network` has two routes, each with the destination `192.168.1.0/24` and a next hop corresponding to a different VPN tunnel, responses from backend VMs can be delivered over each tunnel according to the priority of the routes in the network. If different route priorities are used, one tunnel can serve as a backup for the other. If the same priorities are used, responses are delivered by using ECMP.
- Replies sent from the backend VMs (such as `vm-a2`) are delivered directly to the on-premises clients through the appropriate tunnel. From the perspective of the `lb-network`, if routes or VPN tunnels change, traffic might egress by using a different tunnel. This might result in TCP session resets if an in-progress connection is interrupted.
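The return routes in the `lb-network` described above might be sketched as the following pair of static routes; the route and tunnel names are hypothetical. Equal priority values yield ECMP across the tunnels, while giving one route a lower priority value would make its tunnel the primary path and the other a backup.

```shell
# Two static routes back to the on-premises network, one per VPN tunnel.
# Equal priorities mean responses are spread across both tunnels (ECMP).
gcloud compute routes create route-to-onprem-via-tunnel-1 \
    --network=lb-network \
    --destination-range=192.168.1.0/24 \
    --next-hop-vpn-tunnel=tunnel-1 \
    --next-hop-vpn-tunnel-region=us-west1 \
    --priority=1000

gcloud compute routes create route-to-onprem-via-tunnel-2 \
    --network=lb-network \
    --destination-range=192.168.1.0/24 \
    --next-hop-vpn-tunnel=tunnel-2 \
    --next-hop-vpn-tunnel-region=us-west1 \
    --priority=1000
```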