Overview of load balancing for GKE on VMware

This document explains the load balancing options supported by GKE on VMware.

There are several load balancing options available to you. Choose the option best suited to your environment and your needs. For example, you might choose an option that requires minimal configuration, or an option that aligns with the load balancers you already have in your network.

These are the available options:

  • MetalLB bundled

  • F5 BIG-IP integrated (Not supported for user clusters using Controlplane V2.)

  • F5 BIG-IP set up manually

  • Citrix set up manually

  • Any load balancer that you set up manually

MetalLB

The MetalLB load balancer is bundled with GKE on VMware and is especially easy to configure. The MetalLB components run on your cluster nodes, so you don't have to create separate VMs for your load balancer.

You can configure MetalLB to perform IP address management. This means that when a developer creates a Service of type LoadBalancer, they don't have to specify a VIP for the Service. Instead, MetalLB automatically chooses a VIP from an address pool that you provide ahead of time.
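For example, you provide an address pool in the user cluster configuration file. The following sketch shows what such a pool might look like; the pool name and address range are placeholders:

loadBalancer:
  kind: MetalLB
  metalLB:
    addressPools:
    - name: lb-pool-1
      addresses:
      - 192.168.100.240-192.168.100.249

MetalLB then assigns VIPs to Services of type LoadBalancer from the range you specify.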

For more information, see Bundled load balancing with MetalLB.

F5 BIG-IP

The F5 BIG-IP load balancer is not bundled with GKE on VMware, so you must get a license and set up the load balancer separately from installing GKE on VMware.

You can configure GKE on VMware to be integrated with F5 BIG-IP in the following sense: when a developer creates a Service of type LoadBalancer and specifies a VIP for the service, GKE on VMware will automatically configure the VIP on the load balancer. For more information, see Installing F5 BIG-IP ADC for GKE on VMware.

The F5 BIG-IP integration isn't supported for user clusters using Controlplane V2. Instead, you can use F5 BIG-IP as a manually configured load balancer. In this case, GKE on VMware does not automatically configure Service VIPs.

Citrix

We document how to set up the Citrix load balancer as an example of setting up a load balancer manually. With any load balancer that you set up manually, you must configure mappings between VIPs, node addresses, and nodePort values. For information on how to do this for the Citrix load balancer, see Manual load balancing with Citrix.

Manual load balancing in general

You can use any load balancer of your choice as long as you set it up manually. With any load balancer that you set up manually, you must configure mappings between VIPs, node addresses, and nodePort values. For general information on how to do this, see Manual load balancing.

Setting aside virtual IP addresses

Regardless of which load balancer you use, you must set aside several virtual IP addresses (VIPs) that you intend to use for load balancing.

For your admin cluster, you must set aside these VIPs:

  • VIP for the Kubernetes API server
  • VIP for add-ons

For each user cluster you intend to create, you must set aside these VIPs:

  • VIP for the Kubernetes API server
  • VIP for the ingress service

For example, suppose you intend to have two user clusters. Then you would need two VIPs for your admin cluster and two VIPs for each of your user clusters. So you would need to set aside six VIPs.

Node IP addresses

If you choose one of the following load-balancing options, then you can either use static IP addresses for your cluster nodes, or you can have your cluster nodes get their IP addresses from a DHCP server:

  • MetalLB
  • F5 BIG-IP integrated

If you choose a manual load-balancing option, then you must use static IP addresses for your cluster nodes.

If you choose to use static IP addresses, you must set aside enough addresses for the nodes in the admin cluster and the nodes in all the user clusters you intend to create. For details about how many node IP addresses to set aside, see Plan your IP addresses.
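For example, static node addresses are typically listed in an IP block file that the cluster configuration file points to. The following is a sketch of such a file; the netmask, gateway, addresses, and hostnames are placeholders for your own network values:

blocks:
- netmask: 255.255.255.0
  gateway: 172.16.20.1
  ips:
  - ip: 172.16.20.10
    hostname: node-1
  - ip: 172.16.20.11
    hostname: node-2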

Creating Services in your cluster

After your user cluster is running, application developers might want to create Kubernetes Services and expose them to external clients.

For Services of type LoadBalancer, VIPs must be configured on the load balancer. How those VIPs get configured depends on your choice of load balancer.

MetalLB

In the user cluster configuration file, you specify address pools that the MetalLB controller uses to assign VIPs to Services. When a developer creates a Service of type LoadBalancer, the MetalLB controller chooses an address from a pool and assigns the address to the Service. The developer does not have to specify a value for loadBalancerIP in the Service manifest.
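For example, a developer might create a Service like the following, omitting loadBalancerIP so that MetalLB chooses the VIP. The Service name, selector, and port values here are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

After MetalLB assigns an address, the VIP appears in the Service's status.loadBalancer field.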

F5 BIG-IP integrated

When a developer creates a Service of type LoadBalancer, they specify an external IP address. For example:

spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.100.2

GKE on VMware automatically configures the specified address on a network interface of the F5 BIG-IP load balancer.

Manually configured load balancer

If you have chosen a manual load balancing option, developers can follow these steps to expose a Service to external clients:

  • Create a Service of type NodePort.

  • Choose a VIP for the Service.

  • Manually configure the load balancer so that traffic sent to the VIP is forwarded to the Service.
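The steps above can be sketched as follows. The developer first creates a NodePort Service; the name, selector, and port values here are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

Then, on the load balancer, you configure a mapping so that traffic sent to the chosen VIP is forwarded to the nodePort value (30080 in this sketch) on the addresses of the cluster nodes.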