GKE On-Prem clusters can run with one of two load balancing modes: "integrated" or "manual." With integrated mode, GKE On-Prem clusters run with the F5 BIG-IP load balancer. With manual mode, you manually configure a different load balancer. For example, you could manually configure the Citrix load balancer or the Seesaw load balancer.
Manual load balancing mode requires more configuration than integrated mode. This page describes the steps you need to take for manual mode.
Using manual load balancing mode has the following limitations:
- You cannot use DHCP to assign IP addresses to cluster nodes. You must assign static node IP addresses.
- You cannot expose Services of type `LoadBalancer` to clients outside the cluster. However, you can create Services of type `NodePort` and manually configure your load balancer to use them as backends. You can also expose your Services by using Ingress.
- If you add or delete cluster nodes, you must manually reconfigure your load balancer accordingly.
About getting support for manual load balancing
Google does not provide support for load balancers configured using manual load balancing mode. If you encounter issues with the load balancer, reach out to the load balancer's vendor.
Reserve your IP addresses
Each cluster you create has three or more VMs, which are called nodes. Reserve an IP address for each node in the clusters you intend to create. For example, if you intend to create an admin cluster with four nodes and a user cluster with three nodes, reserve seven IP addresses for nodes. Configure your routers so that all node IP addresses are routable.
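For example, a plan for the four-node admin cluster and three-node user cluster mentioned above might look like the following worksheet. This is only an illustration with hypothetical addresses, not a file that GKE On-Prem consumes:

```yaml
# Hypothetical planning worksheet; not a GKE On-Prem file format.
# Seven routable static addresses: four admin nodes, three user nodes.
admin_cluster_nodes:
  - 192.168.0.2
  - 192.168.0.3
  - 192.168.0.4
  - 192.168.0.5
user_cluster_nodes:
  - 192.168.0.10
  - 192.168.0.11
  - 192.168.0.12
```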
You also need to reserve the following VIPs for load balancing:
- VIP for the admin control plane (port exposed: TCP 443)
- VIP for the admin cluster ingress controller (ports exposed: TCP 80, TCP 443)
- VIP for the add-on manager (port exposed: TCP 8443)
- VIP for the user control plane (port exposed: TCP 443)
- VIP for the user cluster ingress controller (ports exposed: TCP 80, TCP 443)
Reserve node ports
A Service has a `ports` field, which is an array of ServicePort objects. In a Service of type `NodePort`, each ServicePort object has a `port`, a `nodePort`, and a `targetPort`. For example, here is part of a manifest for a Service that has two ServicePort objects in its `ports` array:
```yaml
...
kind: Service
...
spec:
  ...
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 32676
    targetPort: 8080
  - protocol: TCP
    port: 443
    nodePort: 32677
    targetPort: 443
...
```
Suppose the preceding Service represents the ingress controller for your user cluster. Also suppose that you have made the following choices:
- `203.0.113.5` is the VIP for your user cluster ingress controller.
- The node addresses for your user cluster are `192.168.0.10`, `192.168.0.11`, and `192.168.0.12`.
After you have configured your load balancer, traffic is routed as follows:
1. A client sends a request to `203.0.113.5` on TCP port 80.
2. The load balancer chooses a user cluster node. For this example, assume the node address is `192.168.0.11`.
3. The load balancer forwards the request to `192.168.0.11` on TCP port 32676.
4. The iptables rules on the node forward the request to an appropriate Pod on TCP port 8080.
1. A client sends a request to `203.0.113.5` on TCP port 443.
2. The load balancer chooses a user cluster node. For this example, assume the node address is `192.168.0.10`.
3. The load balancer forwards the request to `192.168.0.10` on TCP port 32677.
4. The iptables rules on the node forward the request to an appropriate Pod on TCP port 443.
You do not have to create the Service objects for your VIPs. GKE On-Prem does that for you. But for each (VIP, TCP port) pair, you must choose and specify the following:
- A `nodePort` value
- A set of node IP addresses
You need to reserve seven `nodePort` values, each within the cluster's NodePort range (30000–32767 by default):
- A `nodePort` for the admin cluster control plane, TCP port 443
- A `nodePort` for the admin cluster ingress controller, TCP port 80
- A `nodePort` for the admin cluster ingress controller, TCP port 443
- A `nodePort` for the add-on manager, TCP port 8443
- A `nodePort` for the user control plane, TCP port 443
- A `nodePort` for the user cluster ingress controller, TCP port 80
- A `nodePort` for the user cluster ingress controller, TCP port 443
Modify the GKE On-Prem configuration file
When you install GKE On-Prem, you generate a configuration file. You need to modify the following sections in your configuration file:
- Set `admincluster.ipblockfilepath` to the path of the static IP YAML file for your admin cluster. This is documented in Configuring static IPs. DHCP is not an option for manual load balancing mode.
- Set `usercluster.ipblockfilepath` to the path of the static IP YAML file for your user cluster.
- Fill in the `admincluster.manuallbspec` section with the `nodePort` values you have chosen for your admin cluster.
- Fill in the `usercluster.manuallbspec` section with the `nodePort` values you have chosen for your user cluster.
The following example shows a portion of an updated configuration file:
```yaml
lbmode: Manual
admincluster:
  ipblockfilepath: "ipblock1.yaml"
  manuallbspec:
    ingresshttpnodeport: 32527
    ingresshttpsnodeport: 30139
    controlplanenodeport: 30968
    addonsnodeport: 31405
usercluster:
  ipblockfilepath: "env/default/ipblock2.yaml"
  manuallbspec:
    ingresshttpnodeport: 30243
    ingresshttpsnodeport: 30879
    controlplanenodeport: 30562
```
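Each `ipblockfilepath` points to a static IP YAML file of the kind described in Configuring static IPs. As a rough sketch (see that guide for the authoritative schema; the addresses below are hypothetical), such a file can look like this:

```yaml
# Hypothetical static IP block file (for example, ipblock1.yaml).
# See "Configuring static IPs" for the authoritative schema.
blocks:
  - netmask: 255.255.255.0
    gateway: 192.168.0.1
    ips:
      - ip: 192.168.0.2
        hostname: admin-node-1
      - ip: 192.168.0.3
        hostname: admin-node-2
```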
Configure your load balancer
Now that you've updated the configuration file, log in to your load balancer's management console and configure your VIPs.
First, verify that your admin cluster and user cluster use separate IP address pools.
As previously mentioned, you need to configure five VIPs and seven (VIP, TCP port) pairs. So you create seven virtual services on your load balancer:
- Admin cluster control plane, TCP port 443
- Admin cluster ingress controller, TCP port 80
- Admin cluster ingress controller, TCP port 443
- Add-on manager, TCP port 8443
- User control plane, TCP port 443
- User cluster ingress controller, TCP port 80
- User cluster ingress controller, TCP port 443
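The configuration steps vary by vendor, but conceptually each virtual service forwards traffic from one (VIP, port) pair to the chosen `nodePort` on every node of the corresponding cluster. The following vendor-neutral sketch shows that mapping for the two user cluster ingress virtual services, reusing the example VIP, node addresses, and `nodePort` values from earlier on this page. It is an illustration only, not a real load balancer configuration format:

```yaml
# Illustrative mapping only; not a real load balancer config format.
# VIP, node addresses, and nodePort values reuse this page's examples.
virtual_services:
  - name: user-ingress-http
    frontend: 203.0.113.5:80        # user cluster ingress VIP, HTTP
    backends:                       # every user cluster node
      - 192.168.0.10:30243          # usercluster ingresshttpnodeport
      - 192.168.0.11:30243
      - 192.168.0.12:30243
  - name: user-ingress-https
    frontend: 203.0.113.5:443       # user cluster ingress VIP, HTTPS
    backends:
      - 192.168.0.10:30879          # usercluster ingresshttpsnodeport
      - 192.168.0.11:30879
      - 192.168.0.12:30879
```

If you add or remove cluster nodes, update the backend list of each virtual service accordingly, as noted in the limitations above.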
For more information, refer to Troubleshooting.