You are viewing documentation for a previous version of Anthos GKE on-prem.

Enabling manual load balancing mode

GKE on-prem clusters can run with one of two load balancing modes: integrated or manual. With integrated mode, GKE on-prem clusters run with the F5 BIG-IP load balancer. With manual mode, you can use the F5 BIG-IP load balancer or any other load balancer of your choice. Manual load balancing mode requires more configuration than integrated mode. This page describes the steps you need to take if you choose manual load balancing mode.

The Citrix load balancer and the Seesaw load balancer are examples of load balancers that you could use with manual load balancing mode.

For instructions on how to use F5 BIG-IP with manual load balancing mode, see Installing F5 BIG-IP ADC for GKE on-prem using manual load balancing.

In this topic, you set aside IP addresses and nodePort values for later use. You choose the IP addresses and nodePort values that you want to use for load balancing and for your cluster nodes, but you don't do anything with them at this point. Later, when you are ready to install GKE on-prem, you will need these values to fill in your cluster configuration file and to manually configure your load balancer.

Setting aside virtual IP addresses

Regardless of whether you use integrated or manual load balancing mode, you must set aside several virtual IP addresses (VIPs) that you intend to use for load balancing. These VIPs allow external clients to reach your Kubernetes API servers, your ingress services, and your addon services. For detailed instructions on setting aside VIPs, see Setting aside virtual IP addresses.

Setting aside node IP addresses

With manual load balancing mode, you cannot use DHCP. You must specify static IP addresses for your cluster nodes. You need to set aside enough addresses for the nodes in the admin cluster and the nodes in all the user clusters you intend to create. For details about how many node IP addresses to set aside, see Configuring static IPs.

Setting aside nodePort values

In a GKE on-prem cluster, the Kubernetes API server, the ingress service, and the addon service are implemented as Kubernetes Services of type NodePort. With manual load balancing mode, you must choose your own nodePort values for these Services. Choose values in the 30000 - 32767 range. After you choose your nodePort values, set them aside for later when you modify your cluster configuration file.
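As a minimal sketch, you could sanity-check a set of candidate nodePort values before writing them into the configuration file. The range and uniqueness checks below reflect the rules stated above; the candidate values are the ones used in the example configuration file later on this page:

```python
# Sketch: validate candidate nodePort values before writing them into
# the cluster configuration file.

def validate_node_ports(ports):
    """Return True if all values are unique and in the NodePort range."""
    in_range = all(30000 <= p <= 32767 for p in ports)
    unique = len(set(ports)) == len(ports)
    return in_range and unique

# Candidate values matching the example configuration file on this page.
candidates = [30968, 31405, 30243, 30879, 30562]
print(validate_node_ports(candidates))  # True: all in range, no duplicates
```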

Choose and set aside the following nodePort values.

  • For each VIP that you have set aside for a Kubernetes API server, set aside one nodePort value.

  • For each VIP that you have set aside for a cluster ingress service, set aside two nodePort values: one for HTTP traffic and one for HTTPS traffic. This applies only to user clusters.

  • For each VIP that you have set aside for a cluster addon service, set aside one nodePort value. This applies only to the admin cluster.

For example, suppose you intend to have two user clusters. You would need to choose and set aside the following nodePort values:

  • A nodePort value for the Kubernetes API server in the admin cluster.

  • For each of two user clusters, a nodePort value for the Kubernetes API server.

  • For each of two user clusters, a nodePort value for HTTP traffic to the ingress service.

  • For each of two user clusters, a nodePort value for HTTPS traffic to the ingress service.

  • A nodePort value for the addon service in the admin cluster.

So in the preceding example, you would need to set aside eight nodePort values.
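The counting rules above can be sketched as a small calculation. This is only an illustration of the rules in this section: one control-plane nodePort per cluster, two ingress nodePorts per user cluster, and one addon-service nodePort for the admin cluster:

```python
# Sketch: count the nodePort values needed, per the rules in this section.
# Admin cluster: control plane + addon service. Each user cluster:
# control plane + ingress HTTP + ingress HTTPS.

def node_ports_needed(num_user_clusters):
    admin = 1 + 1         # control plane + addon service
    per_user = 1 + 1 + 1  # control plane + ingress HTTP + ingress HTTPS
    return admin + per_user * num_user_clusters

print(node_ports_needed(2))  # 8 for the two-user-cluster example
```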

Modifying the GKE on-prem configuration file

When you install GKE on-prem, you generate a configuration file. For manual load balancing mode, make the following modifications to your configuration file:

  • Set lbmode to Manual.

  • Set admincluster.ipblockfilepath to the path of the static IP YAML file for your admin cluster, as documented in Configuring static IPs. DHCP is not an option with manual load balancing mode.

  • Set usercluster.ipblockfilepath to the path of the static IP YAML file for your user cluster.

  • Update the admincluster.manuallbspec field with the nodePort values you have chosen for your admin cluster.

  • Update the usercluster.manuallbspec section with the nodePort values you have chosen for your user cluster.

The following example shows a portion of an updated configuration file:

...
lbmode: Manual
...
admincluster:
  ipblockfilepath: "ipblock1.yaml"
  manuallbspec:
    controlplanenodeport: 30968
    addonsnodeport: 31405
...
usercluster:
  ipblockfilepath: "env/default/ipblock2.yaml"
  manuallbspec:
    ingresshttpnodeport: 30243
    ingresshttpsnodeport: 30879
    controlplanenodeport: 30562

Configure your load balancer

Now that you've updated the configuration file, log in to your load balancer's management console and configure your VIPs.

As previously mentioned, you need to configure four VIPs and five nodePort values. So you create five virtual services on your load balancer:

  • Admin cluster control plane, TCP port 443
  • Add-on manager, TCP port 8443
  • User control plane, TCP port 443
  • User cluster ingress controller, TCP port 80
  • User cluster ingress controller, TCP port 443
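To make the mapping concrete, the following sketch expands each of the five virtual services into the backend list a load balancer needs: every cluster node, on the chosen nodePort. The VIPs and node addresses here are hypothetical; the nodePort values match the example configuration file above. Note that user cluster control planes run on admin cluster nodes in GKE on-prem, so the user control plane virtual service targets admin nodes:

```python
# Sketch: expand each virtual service (VIP, frontend port) into its
# backends. VIPs and node addresses are hypothetical placeholders;
# nodePorts match the example configuration file above.

admin_nodes = ["192.168.0.2", "192.168.0.3"]
user_nodes = ["192.168.0.10", "192.168.0.11", "192.168.0.12"]

# (VIP, frontend port) -> (backend nodes, nodePort)
virtual_services = {
    ("203.0.113.2", 443): (admin_nodes, 30968),   # admin control plane
    ("203.0.113.3", 8443): (admin_nodes, 31405),  # add-on manager
    ("203.0.113.4", 443): (admin_nodes, 30562),   # user control plane
    ("203.0.113.5", 80): (user_nodes, 30243),     # user ingress, HTTP
    ("203.0.113.5", 443): (user_nodes, 30879),    # user ingress, HTTPS
}

for (vip, port), (nodes, node_port) in virtual_services.items():
    backends = [f"{n}:{node_port}" for n in nodes]
    print(f"{vip}:{port} -> {backends}")
```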

Load balancing example

A Service has a ports field, which is an array of ServicePort objects. In a Service of type NodePort, each ServicePort object has a protocol, a port, a nodePort, and a targetPort. For example, here is part of a manifest for a Service that has two ServicePort objects in its ports array:

...
kind: Service
...
spec:
  ...
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    nodePort: 32676
    targetPort: 8080
  - protocol: TCP
    port: 443
    nodePort: 32677
    targetPort: 443
...

Suppose the preceding Service represents the ingress service for one of your user clusters. Also suppose that you have made the following choices:

  • 203.0.113.5 is the VIP for the ingress service of your user cluster.

  • The node addresses for your user cluster are 192.168.0.10, 192.168.0.11, and 192.168.0.12.

After you have configured your load balancer, traffic is routed as follows:

  • A client sends a request to 203.0.113.5 on TCP port 80. The load balancer chooses a user cluster node. For this example, assume the node address is 192.168.0.11. The load balancer forwards the request to 192.168.0.11 on TCP port 32676. The iptables rules on the node forward the request to an appropriate Pod on TCP port 8080.

  • A client sends a request to 203.0.113.5 on TCP port 443. The load balancer chooses a user cluster node. For this example, assume the node address is 192.168.0.10. The load balancer forwards the request to 192.168.0.10 on TCP port 32677. The iptables rules on the node forward the request to an appropriate Pod on TCP port 443.
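The two hops above (VIP port → nodePort → targetPort) amount to a lookup against the Service's ports array. The sketch below traces that lookup; node selection is shown as a simple random choice, standing in for whatever algorithm your load balancer actually uses:

```python
import random

# The Service's ports array from the manifest above.
ports = [
    {"protocol": "TCP", "port": 80, "nodePort": 32676, "targetPort": 8080},
    {"protocol": "TCP", "port": 443, "nodePort": 32677, "targetPort": 443},
]

# The user cluster node addresses from the example above.
nodes = ["192.168.0.10", "192.168.0.11", "192.168.0.12"]

def route(vip_port):
    """Trace a request sent to the VIP on vip_port through to a Pod port."""
    entry = next(p for p in ports if p["port"] == vip_port)
    node = random.choice(nodes)  # the load balancer picks a node
    return node, entry["nodePort"], entry["targetPort"]

node, node_port, target_port = route(80)
print(f"VIP:80 -> {node}:{node_port} -> Pod:{target_port}")
```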

Getting support for manual load balancing

Google does not provide support for load balancers configured using manual load balancing mode. If you encounter issues with the load balancer, reach out to the load balancer's vendor.

What's next

For more information, refer to Troubleshooting.