Deleting a user cluster

This page describes how to delete a GKE on VMware user cluster.

Overview

GKE on VMware supports deleting user clusters with the gkectl command-line tool. If the cluster is unhealthy (for example, if its control plane is unreachable or the cluster failed to bootstrap), see Deleting an unhealthy user cluster.

Deleting a user cluster

To delete a user cluster, run the following command:

gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME]

where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
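For example, assuming an admin cluster kubeconfig file named kubeconfig-admin and a user cluster named my-user-cluster (both names are hypothetical):

gkectl delete cluster \
    --kubeconfig kubeconfig-admin \
    --cluster my-user-cluster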

If you are using the Seesaw bundled load balancer, delete the load balancer.
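If your gkectl version provides it, the bundled load balancer can be deleted with the gkectl delete loadbalancer command. The invocation below is a sketch; the exact flags can vary by release, so check the gkectl reference for your version. [USER_CLUSTER_CONFIG] is the user cluster configuration file:

gkectl delete loadbalancer \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --config [USER_CLUSTER_CONFIG]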

Known issue

In version 1.1.2, a known issue results in the following error if you are using a vSAN datastore:

Error deleting machine object xxx; Failed to delete machine xxx: failed
  to ensure disks detached: failed to convert disk path "" to UUID path:
  failed to convert full path "ds:///vmfs/volumes/vsan:52ed29ed1c0ccdf6-0be2c78e210559c7/":
  ServerFaultCode: A general system error occurred: Invalid fault

See the workaround in the release notes.

Deleting an unhealthy user cluster

You can pass the --force flag to delete a user cluster that is unhealthy. A user cluster might be unhealthy if its control plane is unreachable, if the cluster fails to bootstrap, or if gkectl delete cluster fails to delete the cluster.

To force-delete a cluster, run the following command:

gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME] \
    --force

where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
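To confirm that the deletion went through, you can check whether the admin cluster still holds Machine objects for the user cluster's nodes. This is a sketch that assumes those objects live in an admin cluster namespace named after the user cluster, which is typical for GKE on VMware installations; an empty result means the node objects are gone:

kubectl get machines \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --namespace [CLUSTER_NAME]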

Cleaning up external resources

After a forced deletion, some resources might be left over in F5 or vSphere. The following sections explain how to clean up these leftover resources.

Cleaning up a user cluster's VMs in vSphere

To verify that the user cluster's VMs are deleted, perform the following steps. If you prefer to work from the command line, see the govc sketch after this list.

  1. In the vSphere Web Client's left-hand Navigator pane, click Hosts and Clusters.

  2. Find the resource pool for your admin cluster. This is the value of vCenter.resourcePool in your admin cluster configuration file.

  3. Under the resource pool, locate VMs prefixed with the name of your user cluster. These are the control-plane nodes for your user cluster. There will be one or three of these depending on whether your user cluster has a high-availability control plane.

  4. Find the resource pool for your user cluster. This is the value of vCenter.resourcePool in your user cluster configuration file. If your user cluster configuration file does not specify a resource pool, it is inherited from the admin cluster.

  5. Under the resource pool, locate VMs prefixed with the name of a node pool in your user cluster. These are the worker nodes in your user cluster.

  6. For each control-plane node and each worker node:

    1. From the vSphere Web Client, right-click the VM and select Power > Power Off.

    2. After the VM is powered off, right-click the VM and select Delete from Disk.
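If you prefer to script this cleanup, the same power-off and delete-from-disk sequence can be run with the open source govc CLI. This is a minimal sketch, not part of the documented procedure; the GOVC_* connection values and the name-prefix match are assumptions you must adapt to your environment, and you should review the matched VM list before destroying anything:

# Point govc at your vCenter (placeholder values)
export GOVC_URL='[VCENTER_ADDRESS]'
export GOVC_USERNAME='[VCENTER_USERNAME]'
export GOVC_PASSWORD='[VCENTER_PASSWORD]'

# Find VMs whose names start with the user cluster name, then power
# each one off and delete it from disk. Repeat with each node pool
# name as the prefix to catch the worker VMs.
govc find /[DATACENTER]/vm -type m -name '[CLUSTER_NAME]*' |
while read -r vm; do
  govc vm.power -off -force "$vm"
  govc vm.destroy "$vm"
done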

Cleaning up a user cluster's F5 partition

If there are any entries remaining in the user cluster's partition, perform the following steps. If you prefer to work from the command line, see the tmsh sketch after this list.

  1. In the top-right corner of the F5 BIG-IP console, switch to the user cluster partition you want to clean up.
  2. Select Local Traffic > Virtual Servers > Virtual Server List.
  3. In the Virtual Servers menu, remove all the virtual IPs.
  4. Select Pools, then delete all the pools.
  5. Select Nodes, then delete all the nodes.
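If you have SSH access to the BIG-IP device, the same cleanup can be done with the tmsh shell. This sketch assumes the partition contains only objects that belonged to the deleted cluster; verify the commands against your BIG-IP version before running them:

# Change into the user cluster's partition, then delete all virtual
# servers, pools, and nodes in it (partition name is a placeholder)
tmsh -c "cd /[USER_CLUSTER_PARTITION]; \
    delete ltm virtual all; \
    delete ltm pool all; \
    delete ltm node all"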

After you have finished

After gkectl finishes deleting the user cluster, you can delete the user cluster's kubeconfig.
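For example, if the user cluster's credentials were written to a standalone kubeconfig file, deleting that file is enough; if they were merged into a shared kubeconfig, the kubectl config subcommands can remove the entries. The file, context, cluster, and user names below are placeholders; list yours with kubectl config get-contexts:

# Standalone kubeconfig file
rm [USER_CLUSTER_KUBECONFIG]

# Or remove merged entries from a shared kubeconfig
kubectl config delete-context [CONTEXT_NAME]
kubectl config delete-cluster [CLUSTER_ENTRY_NAME]
kubectl config unset users.[USER_ENTRY_NAME]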