This document shows how to migrate the configuration settings for your F5 BIG-IP integrated load balancer to manual load balancing mode.
Support for the F5 BIG-IP load balancer
In the past, you could configure Google Distributed Cloud to be integrated with F5 BIG-IP in the following sense: when a developer creates a Service of type LoadBalancer and specifies a virtual IP address (VIP) for the Service, Google Distributed Cloud automatically configures the VIP on the load balancer.
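For reference, a Service that triggered this integrated behavior looked similar to the following minimal sketch. The Service name, selector, and IP address are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.5      # the VIP chosen by the developer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080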
To enable new and advanced features, such as Controlplane V2, we recommend that you update the configuration. You can continue to use your F5 BIG-IP load balancer, but you need to change the settings in your cluster configuration files to use manual load balancing.
After migration, your F5 workloads will remain, but Google Distributed Cloud will no longer manage them. In a future release, F5 BIG-IP will only be available for manual management. This means you'll be responsible for managing your F5 BIG-IP workloads directly.
Requirements
Following are the requirements for the migration:
The admin cluster and all user clusters must be version 1.29 or higher.
You must be using static IP addresses for your admin and user cluster nodes. The IP addressing type is set in the network.ipMode.type field, and it is immutable. If this field is set to DHCP, you can't migrate the clusters.
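One way to confirm the addressing type before you migrate is to inspect the configuration file directly. This is a minimal sketch, assuming USER_CLUSTER_CONFIG stands for the path of your cluster configuration file:

grep -A 1 "ipMode" USER_CLUSTER_CONFIG

The type shown in the output must be the static option, not DHCP.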
Update the user cluster configuration file
Make the following changes to the user cluster configuration file (a consolidated example of the resulting loadBalancer section follows these steps):
1. Change loadBalancer.kind to "ManualLB".

2. Keep the same values for the loadBalancer.vips.controlPlaneVIP and loadBalancer.vips.ingressVIP fields.

3. Configure the nodePort used for HTTP traffic sent to the ingress VIP:

   a. Get the current HTTP nodePort value:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
          get svc istio-ingress -n gke-system -oyaml | grep http2 -A 1

      Replace USER_CLUSTER_KUBECONFIG with the path of the user cluster kubeconfig file.

   b. Add the value from the previous command to the loadBalancer.manualLB.ingressHTTPNodePort field, for example:

      loadBalancer:
        manualLB:
          ingressHTTPNodePort: 30243
4. Configure the nodePort used for HTTPS traffic sent to the ingress VIP:

   a. Get the current HTTPS nodePort value:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
          get svc istio-ingress -n gke-system -oyaml | grep https -A 1

   b. Add the value from the previous command to the loadBalancer.manualLB.ingressHTTPSNodePort field, for example:

      loadBalancer:
        manualLB:
          ingressHTTPSNodePort: 30879
5. Configure the nodePort for the Kubernetes API server:

   a. Get the current nodePort value for the Kubernetes API server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n USER_CLUSTER_NAME -oyaml | grep kube-apiserver-port -A 1

      Replace the following:

      - ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
      - USER_CLUSTER_NAME: the name of the user cluster.

   b. Add the value from the previous command to the loadBalancer.manualLB.controlPlaneNodePort field, for example:

      loadBalancer:
        manualLB:
          controlPlaneNodePort: 30968
6. Configure the nodePort for the Konnectivity server:

   a. Get the current nodePort value for the Konnectivity server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n USER_CLUSTER_NAME -oyaml | grep konnectivity-server-port -A 1

   b. Add the value from the previous command to the loadBalancer.manualLB.konnectivityServerNodePort field, for example:

      loadBalancer:
        manualLB:
          konnectivityServerNodePort: 30563
7. Delete the entire loadBalancer.f5BigIP section.

8. Run gkectl diagnose cluster, and fix any issues that the command finds:

   gkectl diagnose cluster \
       --kubeconfig=ADMIN_CLUSTER_KUBECONFIG \
       --cluster-name=USER_CLUSTER_NAME
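With the example values from the preceding steps, the resulting loadBalancer section of the user cluster configuration file looks similar to the following sketch. The VIP addresses shown are hypothetical placeholders for the values that you kept:

loadBalancer:
  kind: "ManualLB"
  vips:
    controlPlaneVIP: "203.0.113.20"    # keep your existing VIP
    ingressVIP: "203.0.113.21"         # keep your existing VIP
  manualLB:
    ingressHTTPNodePort: 30243
    ingressHTTPSNodePort: 30879
    controlPlaneNodePort: 30968
    konnectivityServerNodePort: 30563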
9. Update the user cluster. Run the following command to migrate the cluster:

   gkectl update cluster \
       --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config USER_CLUSTER_CONFIG
   Replace the following:

   - ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
   - USER_CLUSTER_CONFIG: the path of the user cluster configuration file.
Update the admin cluster configuration file
Make the following changes to the admin cluster configuration file (a consolidated example of the resulting loadBalancer section follows these steps):
1. Change loadBalancer.kind to "ManualLB".

2. Keep the same value for the loadBalancer.vips.controlPlaneVIP field.

3. Check the value of the adminMaster.replicas field. If the value is 3, the admin cluster is highly available (HA). If the value is 1, the admin cluster is non-HA.

4. Do the following steps only for non-HA admin clusters:
   a. Get the value of the nodePort for the Kubernetes API server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n kube-system -oyaml | grep nodePort

      Replace ADMIN_CLUSTER_KUBECONFIG with the path of the admin cluster kubeconfig file.

   b. Add the value from the previous command to the loadBalancer.manualLB.controlPlaneNodePort field, for example:

      loadBalancer:
        manualLB:
          controlPlaneNodePort: 30968
5. Run the following command to see if there's an add-ons nodePort:

   kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       get deploy monitoring-operator -n kube-system -oyaml | grep admin-ingress-nodeport

   If the previous command outputs a value, add it to the loadBalancer.manualLB.addonsNodePort field, for example:

   loadBalancer:
     manualLB:
       addonsNodePort: 31405
6. Delete the entire loadBalancer.f5BigIP section.

7. Run gkectl diagnose cluster, and fix any issues that the command finds:

   gkectl diagnose cluster \
       --kubeconfig=ADMIN_CLUSTER_KUBECONFIG
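With the example values from the preceding steps, the resulting loadBalancer section of a non-HA admin cluster configuration file looks similar to the following sketch. The VIP address is a hypothetical placeholder:

loadBalancer:
  kind: "ManualLB"
  vips:
    controlPlaneVIP: "203.0.113.10"    # keep your existing VIP
  manualLB:
    controlPlaneNodePort: 30968    # non-HA admin clusters only
    addonsNodePort: 31405          # only if the add-ons nodePort check returned a value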
8. Update the admin cluster. Run the following command to update the cluster:

   gkectl update cluster \
       --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
       --config ADMIN_CLUSTER_CONFIG
   Replace the following:

   - ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
   - ADMIN_CLUSTER_CONFIG: the path of the admin cluster configuration file.
Verify that legacy F5 resources still exist
After updating your clusters to use manual load balancing, traffic to your clusters isn't interrupted because the existing F5 resources still exist, as you can see by running the following command:
kubectl --kubeconfig CLUSTER_KUBECONFIG \
    api-resources --verbs=list -o name | xargs -n 1 kubectl \
    --kubeconfig CLUSTER_KUBECONFIG get --show-kind --ignore-not-found \
    --selector=onprem.cluster.gke.io/legacy-f5-resource=true -A
Replace CLUSTER_KUBECONFIG with either the path of the admin cluster or user cluster kubeconfig file.
The expected output is similar to the following:
Admin cluster:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAMESPACE     NAME                        TYPE     DATA   AGE
kube-system   secret/bigip-login-xt697x   Opaque   4      13h
NAMESPACE     NAME                              SECRETS   AGE
kube-system   serviceaccount/bigip-ctlr         0         13h
kube-system   serviceaccount/load-balancer-f5   0         13h
NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/k8s-bigip-ctlr-deployment   1/1     1            1           13h
kube-system   deployment.apps/load-balancer-f5            1/1     1            1           13h
NAME                                                                                ROLE                                       AGE
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding         ClusterRole/bigip-ctlr-clusterrole         13h
clusterrolebinding.rbac.authorization.k8s.io/load-balancer-f5-clusterrole-binding   ClusterRole/load-balancer-f5-clusterrole   13h
NAME                                                                 CREATED AT
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole         2024-03-25T04:37:34Z
clusterrole.rbac.authorization.k8s.io/load-balancer-f5-clusterrole   2024-03-25T04:37:34Z
User cluster:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAMESPACE     NAME                        TYPE     DATA   AGE
kube-system   secret/bigip-login-sspwrd   Opaque   4      14h
NAMESPACE     NAME                              SECRETS   AGE
kube-system   serviceaccount/bigip-ctlr         0         14h
kube-system   serviceaccount/load-balancer-f5   0         14h
NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/k8s-bigip-ctlr-deployment   1/1     1            1           14h
kube-system   deployment.apps/load-balancer-f5            1/1     1            1           14h
NAME                                                                                ROLE                                       AGE
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding         ClusterRole/bigip-ctlr-clusterrole         14h
clusterrolebinding.rbac.authorization.k8s.io/load-balancer-f5-clusterrole-binding   ClusterRole/load-balancer-f5-clusterrole   14h
NAME                                                                 CREATED AT
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole         2024-03-25T05:16:40Z
clusterrole.rbac.authorization.k8s.io/load-balancer-f5-clusterrole   2024-03-25T05:16:41Z
Check your load balancer
After the migration, you shouldn't need to change any settings in your load balancer because you kept the same VIP and nodePort values. The following sections describe the mappings from VIPs to node IP addresses and nodePort values.
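The NODE_IP_ADDRESSES in the following mappings are the IP addresses of the cluster nodes. One way to list them, assuming CLUSTER_KUBECONFIG stands for the relevant kubeconfig path, is:

kubectl --kubeconfig CLUSTER_KUBECONFIG get nodes -o wide

The INTERNAL-IP column of the output shows the addresses to use as the load balancer backends.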
HA admin cluster
Traffic to control plane nodes
Google Distributed Cloud automatically handles load balancing of the control plane traffic for HA admin clusters. Although you don't need to configure a mapping in the load balancer, you must specify an IP address in the loadBalancer.vips.controlPlaneVIP field.
Traffic to services in the add-on nodes
If your admin cluster had a value for addonsNodePort, you should see a mapping to the IP addresses and nodePort value for traffic to services in add-on nodes:

(addonsVIP:8443) -> (NODE_IP_ADDRESSES:addonsNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane nodes and the add-on nodes.
Non-HA admin cluster
Control plane traffic
The following shows the mapping to the IP address and nodePort value for the control plane node:

(controlPlaneVIP:443) -> (NODE_IP_ADDRESSES:controlPlaneNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane node and the add-on nodes.
Traffic to services in the add-on nodes
If your admin cluster had a value for addonsNodePort, you should have the following mapping to the IP addresses and nodePort values for services running in add-on nodes:

(addonsVIP:8443) -> (NODE_IP_ADDRESSES:addonsNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane node and the add-on nodes.
User cluster
Control plane traffic
The following shows the mappings to the IP addresses and nodePort values for control plane traffic:

(controlPlaneVIP:443) -> (NODE_IP_ADDRESSES:controlPlaneNodePort)
(controlPlaneVIP:8132) -> (NODE_IP_ADDRESSES:konnectivityServerNodePort)
You should have these mappings for all nodes in the admin cluster: both the admin cluster control plane nodes and the user cluster control plane nodes.
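As a quick sanity check of the controlPlaneVIP:443 mapping, you can run any read-only command against the user cluster API server; the user cluster kubeconfig reaches the API server through the control plane VIP:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get nodes

If the command returns the node list, traffic is flowing through your manual mapping to the Kubernetes API server.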
Data plane traffic
The following shows the mappings to the IP addresses and nodePort values for data plane traffic:

(ingressVIP:80) -> (NODE_IP_ADDRESSES:ingressHTTPNodePort)
(ingressVIP:443) -> (NODE_IP_ADDRESSES:ingressHTTPSNodePort)
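To spot-check the data plane path, you can send requests to the ingress VIP. This sketch assumes that a workload is already exposed through the ingress and that INGRESS_VIP stands for your actual ingress VIP:

# HTTP path through ingressHTTPNodePort
curl http://INGRESS_VIP

# HTTPS path through ingressHTTPSNodePort (-k skips certificate verification)
curl -k https://INGRESS_VIP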