This page describes scenarios for accessing an internal TCP/UDP load balancer in your VPC network from a connected network. Before reviewing the information on this page, you should already be familiar with the Internal TCP/UDP Load Balancing Concepts.
Using VPC Network Peering
When you use VPC Network Peering to connect your VPC network to another network, Google Cloud shares subnet routes between the networks. The subnet routes allow traffic from the peer network to reach internal TCP/UDP load balancers in your network. Access is allowed if:
- Client VMs in the peer network are located in the same region as your internal TCP/UDP load balancer – unless you configure global access (Beta). With global access configured, client VM instances from any region of the peered VPC network can access your internal TCP/UDP load balancer.
- Ingress firewall rules allow traffic from client VMs in the peer network. Google Cloud firewall rules are not shared among networks when using VPC Network Peering.
You cannot selectively share only some of your internal TCP/UDP load balancers using VPC Network Peering. However, you can limit access to the load balancer's backend VM instances with firewall rules.
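Conceptually, the ingress firewall check that gates peer-network clients is a CIDR membership test on the packet's source address. The following sketch (plain Python with made-up rule ranges, not an actual Cloud API call) illustrates why clients in the peer network cannot reach the backends until a rule admits their range:

```python
import ipaddress

def ingress_allowed(source_ip, allowed_ranges):
    """Return True if source_ip falls inside any allowed CIDR range."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_ranges)

# Hypothetical ingress allow rules: the VPC's own range plus the peer network.
rules = ["10.0.0.0/8", "192.168.1.0/24"]
print(ingress_allowed("192.168.1.50", rules))  # True: peer client admitted
print(ingress_allowed("172.16.0.5", rules))    # False: no rule matches
```

Because firewall rules are evaluated per network, the peer network's clients are admitted or blocked by *your* network's rules, regardless of what the peer network allows.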
Using Cloud VPN and Cloud Interconnect
You can access an internal TCP/UDP load balancer from a peer network that is connected through a Cloud VPN tunnel or VLAN attachment for a Dedicated Interconnect or Partner Interconnect. The peer network can be an on-premises network, another Google Cloud VPC network, or a virtual network hosted by a different cloud provider.
Access through Cloud VPN tunnels
You can access an internal TCP/UDP load balancer through a Cloud VPN tunnel when all of the following conditions are met in the internal TCP/UDP load balancer's network:
Both the Cloud VPN gateway and tunnel are located in the same region as the internal TCP/UDP load balancer's components.
Appropriate routes provide paths for egress traffic back to the clients from which the load balancer traffic originated. Cloud VPN tunnels using dynamic routing rely on a Cloud Router to manage custom dynamic routes that serve this purpose. Cloud VPN tunnels using static routing require custom static routes, which are created automatically if you create tunnels using the Cloud Console.
Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backends to send responses back to the on-premises clients.
In the peer network, at least one Cloud VPN tunnel exists with appropriate routes whose destinations include the subnet where the internal TCP/UDP load balancer is defined and whose next hop is the VPN tunnel. Specifically:
If the peer network is another Google Cloud VPC network, its Cloud VPN tunnel and gateway can be located in any region.
Clients in the peer network can connect to a Cloud VPN gateway located in any region, provided that there is a route for the Cloud VPN tunnel on that gateway to the network in which the load balancer resides. For Cloud VPN tunnels using dynamic routing, make sure that your VPC network uses global dynamic routing mode so that the custom dynamic routes learned by the Cloud Router for the tunnel are available to VMs in all regions.
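The return-path requirement above is ordinary longest-prefix-match routing: egress responses follow the most specific route whose destination covers the client. A minimal sketch, using hypothetical route entries rather than the real VPC routing table format:

```python
import ipaddress

def pick_return_route(dest_ip, routes):
    """Select the route with the most specific (longest-prefix) matching
    destination, mirroring how a routing table chooses an egress path."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes if ip in ipaddress.ip_network(r["dest"])]
    if not matches:
        return None
    return max(matches, key=lambda r: ipaddress.ip_network(r["dest"]).prefixlen)

routes = [
    # Hypothetical dynamic route learned by the Cloud Router for the tunnel:
    {"dest": "192.168.1.0/24", "next_hop": "vpn-tunnel-1"},
    {"dest": "0.0.0.0/0", "next_hop": "default-internet-gateway"},
]
print(pick_return_route("192.168.1.77", routes)["next_hop"])  # vpn-tunnel-1
```

Without the `192.168.1.0/24` route, replies to on-premises clients would fall through to the default route and never reach them, which is why the tunnel's routes are a precondition for access.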
The following diagram highlights key concepts when accessing an internal TCP/UDP load balancer by way of a Cloud VPN gateway and its associated tunnel. Cloud VPN securely connects your on-premises network to your Google Cloud VPC network through an IPsec VPN connection.
Note the following configuration elements associated with this example:
- In the `lb-network`, a Cloud VPN tunnel using dynamic routing has been configured. The VPN tunnel, gateway, and Cloud Router are all located in `us-west1`, the same region where the internal TCP/UDP load balancer's components are located.
- Ingress allow firewall rules have been configured to apply to the backend VMs in both of the instance groups, including `ig-c`, so that they can receive traffic from IP addresses in the VPC network and from the on-premises network, `192.168.1.0/24`. No egress deny firewall rules have been created, so the implied allow egress rule applies.
- Packets sent from clients in the on-premises network, including from `192.168.1.0/24`, to the IP address of the internal TCP/UDP load balancer, `10.1.2.99`, are delivered directly to a healthy backend VM, such as `vm-a2`, according to the configured session affinity. See the Internal TCP/UDP Load Balancing concepts page for more information about session affinity.
- Replies sent from the backend VMs (such as `vm-a2`) are delivered through the VPN tunnel to the on-premises clients.
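The backend selection in this example can be pictured as a stable hash of client attributes. The sketch below only illustrates the idea of client-IP session affinity; it is not Google's actual hashing algorithm, and `vm-a1` is a hypothetical sibling of `vm-a2`:

```python
import hashlib

def pick_backend(client_ip, backends):
    """Client-IP session affinity sketch: a stable hash of the client IP
    keeps the same client pinned to the same healthy backend."""
    healthy = [b for b in backends if b["healthy"]]
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return healthy[digest % len(healthy)]["name"]

backends = [{"name": "vm-a1", "healthy": True},
            {"name": "vm-a2", "healthy": True}]
# The same on-premises client always lands on the same backend:
assert pick_backend("192.168.1.5", backends) == pick_backend("192.168.1.5", backends)
```

Note that if a backend becomes unhealthy, the healthy set changes and affinity can shift, which is why session affinity is best-effort rather than a guarantee.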
Access through Cloud Interconnect
You can access an internal TCP/UDP load balancer from an on-premises peer network connected to the load balancer's VPC network when all of the following conditions are met in the load balancer's network:
In the internal TCP/UDP load balancer's network, both the VLAN attachment and its Cloud Router are located in the same region as the load balancer's components.
On-premises routers share appropriate routes that provide return paths for responses from backend VMs to the on-premises clients. VLAN attachments for both Dedicated Interconnect and Partner Interconnect use Cloud Routers to manage the routes they learn from on-premises routers.
Ingress firewall rules allow traffic to the backend VMs, and egress firewall rules allow the backend VMs to send responses.
Routing and firewall rules in the on-premises peer network allow communication to and from the backend VMs in your VPC network. Clients in the peer network don't need to be located in the same region as the load balancer, even though the VLAN attachment and Cloud Router through which they reach it must be. The same concepts that apply to access through Cloud VPN tunnels apply here.
When you configure global access, the Cloud Router can be in a region different from the load balancer's region.
In the following diagram, the Cloud Router is in the `europe-west1` region, while the load balancer's frontend and backends are in a different region. The Cloud Router peers with the on-premises router; the BGP peering session can run over Cloud VPN or over a VLAN attachment for Dedicated Interconnect or Partner Interconnect. The VPC network's dynamic routing mode is set to `global` so that the Cloud Router in `europe-west1` can advertise the subnet routes for subnets in any region of the load balancer's VPC network.
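The effect of the dynamic routing mode can be modeled as a filter on which subnets a Cloud Router advertises. This is a simplified model, not the actual Cloud Router implementation, and the subnet CIDRs are invented for illustration:

```python
def advertised_subnets(subnets, router_region, routing_mode):
    """Regional routing mode: the Cloud Router advertises only subnets in
    its own region. Global mode: it advertises subnets in all regions."""
    if routing_mode == "global":
        return [s["cidr"] for s in subnets]
    return [s["cidr"] for s in subnets if s["region"] == router_region]

subnets = [{"cidr": "10.1.2.0/24", "region": "us-west1"},
           {"cidr": "10.3.4.0/24", "region": "europe-west1"}]
print(advertised_subnets(subnets, "europe-west1", "regional"))  # ['10.3.4.0/24']
print(advertised_subnets(subnets, "europe-west1", "global"))    # both subnets
```

Under this model, a load balancer whose subnet is in `us-west1` is only reachable through a Cloud Router in `europe-west1` when the routing mode is `global`, which matches the configuration in the diagram.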
Multiple egress paths
The previous sections showed a single tunnel and a single VLAN attachment. In production environments, you'll probably be using multiple tunnels or attachments for redundancy. This section discusses the requirements and recommendations when using multiple Cloud VPN tunnels or VLAN attachments for Cloud Interconnect to connect your VPC network to a peer network.
In the following diagram, two Cloud VPN tunnels connect the VPC network to an on-premises network. Though Cloud VPN tunnels are used here, the same principles apply to Cloud Interconnect.
If you configure each tunnel or each VLAN attachment in the same region as the internal TCP/UDP load balancer, as described in this document, you can provide multiple simultaneously accessible paths for traffic to and from your load balancer. Multiple paths can provide additional bandwidth or can serve as standby paths for redundancy.
Keep in mind the following points:
Cloud VPN tunnels are always associated with a specific Cloud VPN gateway; you can't create a tunnel without a gateway. Similarly, VLAN attachments are always associated with a Cloud Interconnect connection. See VLAN attachments and VPC networks.
If the on-premises network has two routes with the same priority, each with a destination of `10.1.2.0/24` and a next hop corresponding to a different VPN tunnel in the same region as the internal TCP/UDP load balancer, traffic can be sent from the on-premises network (`192.168.1.0/24`) to the load balancer using ECMP.
After packets are delivered to the VPC network, the internal TCP/UDP load balancer distributes them to backend VMs according to the configured session affinity.
For ingress traffic to the internal TCP/UDP load balancer, the VPN tunnels must be in the same region as the load balancer.
If `lb-network` has two routes, each with the destination `192.168.1.0/24` and a next hop corresponding to a different VPN tunnel, responses from backend VMs can be delivered over either tunnel according to the priority of the routes in the network. If different route priorities are used, one tunnel can serve as a backup for the other. If the same priority is used, responses are delivered using ECMP.
Replies sent from the backend VMs (such as `vm-a2`) are delivered directly to the on-premises clients through the appropriate tunnel. From the perspective of `lb-network`, if routes or VPN tunnels change, traffic might egress using a different tunnel. This might result in TCP session resets if an in-progress connection is interrupted.
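The route-priority and ECMP behavior described above can be modeled as a two-step selection: keep only the routes with the best (lowest) priority value, then break ties per flow with a stable hash. This is a simplified sketch with hypothetical tunnel names, not the actual VPC routing implementation:

```python
import hashlib

def choose_tunnel(routes, flow_id):
    """Among routes to the same destination, the lowest priority value wins;
    ties are broken per flow (ECMP) with a stable hash of the flow's tuple."""
    best = min(r["priority"] for r in routes)
    candidates = [r for r in routes if r["priority"] == best]
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return candidates[digest % len(candidates)]["next_hop"]

# Equal priorities: both tunnels carry traffic, split by flow (ECMP).
ecmp = [{"next_hop": "tunnel-1", "priority": 1000},
        {"next_hop": "tunnel-2", "priority": 1000}]
# Different priorities: tunnel-1 is primary, tunnel-2 is a standby.
failover = [{"next_hop": "tunnel-1", "priority": 100},
            {"next_hop": "tunnel-2", "priority": 200}]
print(choose_tunnel(failover, "192.168.1.5:51514->10.1.2.99:80"))  # tunnel-1
```

Because the per-flow hash is recomputed over whatever candidates remain, removing or re-prioritizing a tunnel can move existing flows to a different path, which is the session-reset caveat noted above.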
What's next
- See Internal TCP/UDP Load Balancing Concepts for important fundamentals.
- See Failover concepts for Internal TCP/UDP Load Balancing for important information about failover.
- See Internal Load Balancing and DNS Names for available DNS name options your load balancer can use.
- See Setting Up Internal TCP/UDP Load Balancing for an example internal TCP/UDP load balancer configuration.
- See Configuring failover for Internal TCP/UDP Load Balancing for configuration steps and an example internal TCP/UDP load balancer failover configuration.
- See Internal TCP/UDP Load Balancing Logging and Monitoring for information on configuring Stackdriver logging and monitoring for Internal TCP/UDP Load Balancing.
- See Troubleshooting Internal TCP/UDP Load Balancing for information on how to troubleshoot issues with your internal TCP/UDP load balancer.
- See Cloud VPN Overview for information about Cloud VPN.
- See Cloud VPN Troubleshooting for troubleshooting tips for Cloud VPN.
- See Cloud Interconnect Overview for information about Cloud Interconnect.
- See Internal TCP/UDP load balancer as a next hop for information on how to use an internal TCP/UDP load balancer as the next hop for a custom static route.