This document shows how to migrate the configuration settings for your F5 BIG-IP load balancer integration to manual load balancing mode for clusters at version 1.29. If your clusters are at version 1.30 or higher, we recommend that you follow the instructions in Plan cluster migration to recommended features.
Using F5 BIG-IP in manual load balancing mode gives you the flexibility to upgrade your F5 agents independently without affecting the functionality of your F5 load balancer or your Kubernetes services. If you migrate to the manual configuration, you can obtain updates directly from F5 to ensure optimal performance and security.
This migration is required in the following circumstances:
You want to enable new features such as Controlplane V2 while continuing to use F5 for load balancing.
You need capabilities provided by a version of the BIG-IP Container Ingress Services (CIS) Controller higher than v1.14.
If the preceding circumstances don't apply to you, you can continue to use the bundled configuration for F5 BIG-IP load balancing.
Either way, we continue to officially support F5 as a load balancer solution.
Support for the F5 BIG-IP load balancer
We support the use of F5 BIG-IP with load balancer agents, which consist of the following two controllers:
- F5 Controller (pod prefix: load-balancer-f5): reconciles LoadBalancer type Kubernetes Services into F5 Common Controller Core Library (CCCL) ConfigMap format.
- F5 BIG-IP CIS Controller v1.14 (pod prefix: k8s-bigip-ctlr-deployment): translates ConfigMaps into F5 load balancer configurations.
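If your cluster still uses the bundled configuration, you can confirm that both agents are present by listing their Deployments. This is a quick check, assuming the Deployment names shown in the verification output later in this document:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get deploy -n kube-system \
    load-balancer-f5 k8s-bigip-ctlr-deployment

Replace USER_CLUSTER_KUBECONFIG with the path of your user cluster kubeconfig file.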
These agents streamline the configuration of F5 load balancers within your Kubernetes cluster. When you create a Service of type LoadBalancer, the controllers automatically configure the F5 load balancer to direct traffic to your cluster nodes.
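For example, a minimal Service of type LoadBalancer similar to the following sketch is enough for the controllers to program the F5 load balancer. The Service name, selector, ports, and VIP shown here are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # hypothetical Service name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10     # hypothetical VIP served by the F5 load balancer
  selector:
    app: my-app                    # hypothetical Pod label
  ports:
  - port: 80                       # port exposed on the VIP
    targetPort: 8080               # hypothetical container port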
However, the bundled solution comes with limitations:
The expressiveness of the Service API is limited. You can't configure the BIG-IP controller as you like or use advanced F5 features. F5 itself provides richer, native support for the Service API.
The implementation uses the legacy CCCL ConfigMap API and 1.x CIS. However, F5 now provides the newer AS3 ConfigMap API and 2.x CIS.
The CIS controller in the Google Distributed Cloud bundle has remained at v1.14 due to compatibility issues with the F5 upgrade guidance for CIS v2.x. To give you the flexibility to address security vulnerabilities and access the latest features, we are therefore transitioning the F5 agents from bundled components to independently installed ones. If you migrate, you can continue using the existing agents without disruption, and your previously created services remain operational.
For newly created manual load balancing clusters with F5 as the load balancing solution, you need to install the controllers yourself. Similarly, if your cluster has been migrated from bundled F5 and you'd like to use a newer version of the CIS Controller, you need to install the controllers yourself.
Requirements
Following are the requirements for the migration:
The admin cluster and all user clusters must be version 1.29 or higher.
You must be using static IP addresses for your admin and user cluster nodes. The IP addressing type is set in the network.ipMode.type field, and it is immutable. If this field is set to DHCP, you can't migrate the clusters.
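For reference, a cluster that meets this requirement has the following setting in its configuration file, shown here as a minimal excerpt:

network:
  ipMode:
    type: "static"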
Update the user cluster configuration file
Make the following changes to the user cluster configuration file:
1. Change loadBalancer.kind to "ManualLB".

2. Keep the same values for the loadBalancer.vips.controlPlaneVIP and loadBalancer.vips.ingressVIP fields.

3. Configure the nodePort used for HTTP traffic sent to the ingress VIP:

   a. Get the current HTTP nodePort value:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
          get svc istio-ingress -n gke-system -oyaml | grep http2 -A 1

      Replace USER_CLUSTER_KUBECONFIG with the path of the user cluster kubeconfig file.

   b. Add the value from the previous command to the loadBalancer.manualLB.ingressHTTPNodePort field, for example:

      loadBalancer:
        manualLB:
          ingressHTTPNodePort: 30243
4. Configure the nodePort used for HTTPS traffic sent to the ingress VIP:

   a. Get the current HTTPS nodePort value:

      kubectl --kubeconfig USER_CLUSTER_KUBECONFIG \
          get svc istio-ingress -n gke-system -oyaml | grep https -A 1

   b. Add the value from the previous command to the loadBalancer.manualLB.ingressHTTPSNodePort field, for example:

      loadBalancer:
        manualLB:
          ingressHTTPSNodePort: 30879
5. Configure the nodePort for the Kubernetes API server:

   a. Get the current nodePort value for the Kubernetes API server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n USER_CLUSTER_NAME -oyaml | grep kube-apiserver-port -A 1

      Replace the following:

      - ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
      - USER_CLUSTER_NAME: the name of the user cluster.

   b. Add the value from the previous command to the loadBalancer.manualLB.controlPlaneNodePort field, for example:

      loadBalancer:
        manualLB:
          controlPlaneNodePort: 30968
6. Configure the nodePort for the Konnectivity server:

   a. Get the current nodePort value for the Konnectivity server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n USER_CLUSTER_NAME -oyaml | grep konnectivity-server-port -A 1

   b. Add the value from the previous command to the loadBalancer.manualLB.konnectivityServerNodePort field, for example:

      loadBalancer:
        manualLB:
          konnectivityServerNodePort: 30563
7. Delete the entire loadBalancer.f5BigIP section.
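After these changes, the loadBalancer section of the user cluster configuration file looks similar to the following sketch. The VIP addresses are placeholders, and the nodePort values repeat the examples from the preceding steps; use the values that you retrieved from your own cluster:

loadBalancer:
  kind: ManualLB
  vips:
    controlPlaneVIP: "203.0.113.3"   # placeholder; keep your existing value
    ingressVIP: "203.0.113.4"        # placeholder; keep your existing value
  manualLB:
    ingressHTTPNodePort: 30243
    ingressHTTPSNodePort: 30879
    controlPlaneNodePort: 30968
    konnectivityServerNodePort: 30563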
Update the user cluster
Run the following command to migrate the cluster:
gkectl update cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG
Replace the following:
- ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
- USER_CLUSTER_CONFIG: the path of the user cluster configuration file.
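For example, with hypothetical file paths:

gkectl update cluster \
    --kubeconfig /path/to/admin-kubeconfig \
    --config /path/to/user-cluster.yaml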
Update the admin cluster configuration file
Make the following changes to the admin cluster configuration file:
1. Change loadBalancer.kind to "ManualLB".

2. Keep the same value for the loadBalancer.vips.controlPlaneVIP field.

3. Check the value of the adminMaster.replicas field. If the value is 3, the admin cluster is highly available (HA). If the value is 1, the admin cluster is non-HA.

4. Do the following steps only for non-HA admin clusters:
   a. Get the value of the nodePort for the Kubernetes API server:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get svc kube-apiserver -n kube-system -oyaml | grep nodePort

      Replace ADMIN_CLUSTER_KUBECONFIG with the path of the admin cluster kubeconfig file.

   b. Add the value from the previous command to the loadBalancer.manualLB.controlPlaneNodePort field, for example:

      loadBalancer:
        manualLB:
          controlPlaneNodePort: 30968
   c. Run the following command to see whether there's an add-ons nodePort:

      kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
          get deploy monitoring-operator -n kube-system -oyaml | grep admin-ingress-nodeport

   d. If the previous command outputs a value, add it to the loadBalancer.manualLB.addonsNodePort field, for example:

      loadBalancer:
        manualLB:
          addonsNodePort: 31405
5. Delete the entire loadBalancer.f5BigIP section.
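For a non-HA admin cluster, the resulting loadBalancer section looks similar to the following sketch. The VIP address is a placeholder, and the nodePort values repeat the earlier examples; an HA admin cluster doesn't set these nodePort fields:

loadBalancer:
  kind: ManualLB
  vips:
    controlPlaneVIP: "203.0.113.2"   # placeholder; keep your existing value
  manualLB:
    controlPlaneNodePort: 30968
    addonsNodePort: 31405            # only if the add-ons check returned a value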
Update the admin cluster
Run the following command to update the cluster:
gkectl update admin \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config ADMIN_CLUSTER_CONFIG
Replace the following:
- ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.
- ADMIN_CLUSTER_CONFIG: the path of the admin cluster configuration file.
Verify that legacy F5 resources still exist
After updating your clusters to use manual load balancing, traffic to your clusters isn't interrupted because the existing F5 resources still exist, as you can see by running the following command:
kubectl --kubeconfig CLUSTER_KUBECONFIG api-resources --verbs=list -o name \
    | xargs -n 1 kubectl --kubeconfig CLUSTER_KUBECONFIG get --show-kind \
    --ignore-not-found --selector=onprem.cluster.gke.io/legacy-f5-resource=true -A
Replace CLUSTER_KUBECONFIG with either the path of the admin cluster or the user cluster kubeconfig file.
The expected output is similar to the following:
Admin cluster:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAMESPACE     NAME                        TYPE     DATA   AGE
kube-system   secret/bigip-login-xt697x   Opaque   4      13h

NAMESPACE     NAME                              SECRETS   AGE
kube-system   serviceaccount/bigip-ctlr         0         13h
kube-system   serviceaccount/load-balancer-f5   0         13h

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/k8s-bigip-ctlr-deployment   1/1     1            1           13h
kube-system   deployment.apps/load-balancer-f5            1/1     1            1           13h

NAME                                                                                ROLE                                       AGE
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding         ClusterRole/bigip-ctlr-clusterrole         13h
clusterrolebinding.rbac.authorization.k8s.io/load-balancer-f5-clusterrole-binding   ClusterRole/load-balancer-f5-clusterrole   13h

NAME                                                                 CREATED AT
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole         2024-03-25T04:37:34Z
clusterrole.rbac.authorization.k8s.io/load-balancer-f5-clusterrole   2024-03-25T04:37:34Z
User cluster:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAMESPACE     NAME                        TYPE     DATA   AGE
kube-system   secret/bigip-login-sspwrd   Opaque   4      14h

NAMESPACE     NAME                              SECRETS   AGE
kube-system   serviceaccount/bigip-ctlr         0         14h
kube-system   serviceaccount/load-balancer-f5   0         14h

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/k8s-bigip-ctlr-deployment   1/1     1            1           14h
kube-system   deployment.apps/load-balancer-f5            1/1     1            1           14h

NAME                                                                                ROLE                                       AGE
clusterrolebinding.rbac.authorization.k8s.io/bigip-ctlr-clusterrole-binding         ClusterRole/bigip-ctlr-clusterrole         14h
clusterrolebinding.rbac.authorization.k8s.io/load-balancer-f5-clusterrole-binding   ClusterRole/load-balancer-f5-clusterrole   14h

NAME                                                                 CREATED AT
clusterrole.rbac.authorization.k8s.io/bigip-ctlr-clusterrole         2024-03-25T05:16:40Z
clusterrole.rbac.authorization.k8s.io/load-balancer-f5-clusterrole   2024-03-25T05:16:41Z
Check your load balancer
After the migration, you shouldn't need to change any settings in your load balancer because you kept the same VIP and nodePort values. The following sections describe the mappings from VIPs to node IP addresses and nodePort values.
HA admin cluster
Traffic to control plane nodes
Google Distributed Cloud automatically handles load balancing of the control plane traffic for HA admin clusters. Although you don't need to configure a mapping in the load balancer, you must specify an IP address in the loadBalancer.vips.controlPlaneVIP field.
Traffic to services in the add-on nodes
If your admin cluster had a value for addonsNodePort, you should see a mapping to the IP addresses and nodePort value for traffic to services in add-on nodes:

- (addonsVIP:8443) -> (NODE_IP_ADDRESSES:addonsNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane nodes and the add-on nodes.
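As a concrete illustration with hypothetical addresses: if addonsVIP is 203.0.113.5, addonsNodePort is 31405, and the admin cluster nodes are 198.51.100.1 through 198.51.100.3, the load balancer needs this mapping:

- (203.0.113.5:8443) -> (198.51.100.1:31405, 198.51.100.2:31405, 198.51.100.3:31405)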
Non-HA admin cluster
Control plane traffic
The following shows the mapping to the IP address and nodePort value for the control plane node:

- (controlPlaneVIP:443) -> (NODE_IP_ADDRESSES:controlPlaneNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane node and the add-on nodes.
Traffic to services in the add-on nodes
If your admin cluster had a value for addonsNodePort, you should have the following mapping to the IP addresses and nodePort values for services running in add-on nodes:

- (addonsVIP:8443) -> (NODE_IP_ADDRESSES:addonsNodePort)
You should have this mapping for all nodes in the admin cluster, both the control plane node and the add-on nodes.
User cluster
Control plane traffic
The following shows the mappings to the IP addresses and nodePort values for control plane traffic:

- (controlPlaneVIP:443) -> (NODE_IP_ADDRESSES:controlPlaneNodePort)
- (controlPlaneVIP:8132) -> (NODE_IP_ADDRESSES:konnectivityServerNodePort)
You should have this mapping for all control plane nodes, both the admin cluster and the user cluster control plane nodes.
Data plane traffic
The following shows the mappings to the IP addresses and nodePort values for data plane traffic:

- (ingressVIP:80) -> (NODE_IP_ADDRESSES:ingressHTTPNodePort)
- (ingressVIP:443) -> (NODE_IP_ADDRESSES:ingressHTTPSNodePort)
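If you configure the BIG-IP device by hand, each mapping becomes a pool of node addresses plus a virtual server on the VIP. The following tmsh sketch covers only the HTTP data plane mapping, assuming hypothetical node IP addresses (192.0.2.1 and 192.0.2.2), a placeholder ingress VIP of 203.0.113.4, and the example nodePort value from earlier; it isn't an authoritative F5 procedure, so consult the F5 documentation for your BIG-IP version:

# Pool of user cluster nodes listening on the ingress HTTP nodePort (hypothetical IPs).
tmsh create ltm pool user-ingress-http-pool \
    members add { 192.0.2.1:30243 192.0.2.2:30243 } monitor tcp

# Virtual server that maps ingressVIP:80 to that pool.
tmsh create ltm virtual user-ingress-http-vs \
    destination 203.0.113.4:80 ip-protocol tcp \
    pool user-ingress-http-pool \
    source-address-translation { type automap }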