Migrate a user cluster to Controlplane V2

This document shows how to migrate a version 1.29 or higher user cluster that uses kubeception to Controlplane V2.

  • 1.29: Preview
  • 1.28: Not available
  • 1.16: Not available

About user cluster control planes

Prior to Google Distributed Cloud version 1.13, the control plane for a user cluster ran on one or more nodes in an admin cluster. This kind of control plane is referred to as kubeception. In version 1.13, Controlplane V2 was introduced for new user clusters. When Controlplane V2 is enabled, the control plane for the user cluster runs in the user cluster itself.

The benefits of Controlplane V2 include the following:

  • Failure isolation. An admin cluster failure does not affect user clusters.

  • Operational separation. An admin cluster upgrade does not cause downtime for user clusters.

  • Deployment separation. You can put the admin and user clusters in different failure domains or geographical sites. For example, a user cluster in an edge location could be in a different geographical site from the admin cluster.

Requirements

To migrate a user cluster to Controlplane V2, the user cluster must meet the following requirements:

  • The user cluster must be version 1.29 or higher. The admin cluster and node pools can be one or two minor versions lower than the user cluster. If needed, upgrade the cluster.

  • The user cluster must have Dataplane V2 enabled. This field is immutable, so if Dataplane V2 isn't enabled on the cluster, you can't migrate it to Controlplane V2. (A quick way to check this is shown after this list.)

  • The user cluster must be configured to use either MetalLB or a manual load balancer. If the user cluster uses the SeeSaw load balancer, you can migrate it to MetalLB.

  • Review the IP addresses planning document, and ensure that you have enough IP addresses available for the user cluster's control plane nodes. The control plane nodes require static IP addresses, and you will need an additional IP address for a new control plane virtual IP (VIP).
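
As a quick sanity check of the first three requirements, you can inspect the existing user cluster configuration file. The following is only a sketch: it assumes the file records these settings in fields named gkeOnPremVersion, enableDataplaneV2, and kind (under loadBalancer), and that USER_CLUSTER_CONFIG is the path of the file:

# Sketch: print the version, Dataplane V2, and load balancer settings from the
# user cluster configuration file (field names are assumptions; verify them
# against your own file).
grep -E 'gkeOnPremVersion|enableDataplaneV2|kind:' USER_CLUSTER_CONFIG

Before you proceed, you would expect to see enableDataplaneV2 set to true and a load balancer kind of MetalLB or ManualLB rather than Seesaw.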

Update the user cluster configuration file

Make the following changes to the existing user cluster configuration file (a sketch of the affected fields follows these steps):

  1. Set enableControlplaneV2 to true.

  2. Optionally, make the control plane for the Controlplane V2 user cluster highly available (HA). To change from a non-HA cluster to an HA cluster, change masterNode.replicas from 1 to 3.

  3. Add the static IP address (or addresses) for the user cluster control plane node(s) to the network.controlPlaneIPBlock.ips section. Remove the IP address (or addresses) for the kubeception user cluster control plane node(s) from the admin cluster IP block file.

  4. Fill in the netmask and gateway in the network.controlPlaneIPBlock section.

  5. If the network.hostConfig section is empty, fill it in.

  6. If the user cluster uses manual load balancing, configure your load balancer to include the control plane node IPs for data plane traffic:

    • (ingressVIP:80) -> (CP_NODE_IP_ADDRESSES:ingressHTTPNodePort)
    • (ingressVIP:443) -> (CP_NODE_IP_ADDRESSES:ingressHTTPSNodePort)
  7. Update the loadBalancer.vips.controlPlaneVIP field with the new IP address for the control plane VIP.

  8. All of the previous fields are immutable except when updating the cluster for the migration, so be sure to double-check all settings.

  9. Run gkectl diagnose cluster, and fix any issues that the command finds.

    gkectl diagnose cluster --kubeconfig=ADMIN_CLUSTER_KUBECONFIG \
          --cluster-name=USER_CLUSTER_NAME

    Replace the following:

    • ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.

    • USER_CLUSTER_NAME: the name of the user cluster.
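
With steps 1 through 7 applied, the relevant parts of the user cluster configuration file look roughly like the following sketch. All addresses and hostnames are documentation placeholders, and the sub-fields shown under hostConfig and ips are illustrative assumptions; follow the structure used by your existing file and the cluster configuration reference:

enableControlplaneV2: true          # step 1
masterNode:
  replicas: 3                       # step 2: optional HA control plane
network:
  hostConfig:                       # step 5: required host configuration
    dnsServers:
    - "203.0.113.10"
    ntpServers:
    - "203.0.113.20"
  controlPlaneIPBlock:              # steps 3 and 4: control plane node addresses
    netmask: "255.255.255.0"
    gateway: "198.51.100.1"
    ips:
    - ip: "198.51.100.50"
      hostname: "cp-node-1"
    - ip: "198.51.100.51"
      hostname: "cp-node-2"
    - ip: "198.51.100.52"
      hostname: "cp-node-3"
loadBalancer:
  vips:
    controlPlaneVIP: "198.51.100.60"   # step 7: new control plane VIP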

Adjust manual load balancer configuration

If your user cluster uses manual load balancing, complete the step in this section. Otherwise, skip this section.

As when you configure a load balancer for a Controlplane V2 user cluster, configure the following mappings in your load balancer for each new control plane node IP address that you specified in the network.controlPlaneIPBlock section:

  • (ingressVIP:80) -> (NEW_NODE_IP_ADDRESS:ingressHTTPNodePort)
  • (ingressVIP:443) -> (NEW_NODE_IP_ADDRESS:ingressHTTPSNodePort)

Update the cluster

Run the following command to migrate the cluster to Controlplane V2:

gkectl update cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG

Replace the following:

  • ADMIN_CLUSTER_KUBECONFIG: the path of the admin cluster kubeconfig file.

  • USER_CLUSTER_CONFIG: the path of the user cluster configuration file.
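
After the update finishes, a quick way to confirm that the control plane now runs inside the user cluster is to list the user cluster's nodes; the control plane nodes that you configured in network.controlPlaneIPBlock appear in the output. USER_CLUSTER_KUBECONFIG is a placeholder for the path of the user cluster kubeconfig file:

# With Controlplane V2, the user cluster's control plane nodes are listed
# alongside its worker nodes.
kubectl get nodes --kubeconfig USER_CLUSTER_KUBECONFIG

You can also rerun gkectl diagnose cluster, as in the earlier step, to check the health of the migrated cluster.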