Deleting a user cluster

This page describes how to delete a GKE On-Prem user cluster.

Overview

GKE On-Prem supports deleting user clusters with gkectl. If a cluster is unhealthy (for example, if its control plane is unreachable or the cluster failed to bootstrap), see Deleting an unhealthy user cluster.

Deleting a user cluster

To delete a user cluster, run the following command:

gkectl delete cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster [CLUSTER_NAME]

where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
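
For example, if your admin cluster's kubeconfig file is named kubeconfig and the user cluster is named my-user-cluster (both hypothetical values), the command looks like this:

gkectl delete cluster \
--kubeconfig kubeconfig \
--cluster my-user-cluster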

Deleting an unhealthy user cluster

If a user cluster is unhealthy, you can pass the --force flag to delete it. A user cluster might be unhealthy if its control plane is unreachable, if the cluster fails to bootstrap, or if gkectl delete cluster fails to delete the cluster.

To force delete a cluster:

gkectl delete cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster [CLUSTER_NAME] \
--force

where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.

Cleaning up external resources

After a forced deletion, some resources might be left over in F5 BIG-IP or vSphere. The following sections explain how to clean up these leftover resources.

Cleaning up a user cluster's VMs in vSphere

To verify that the user cluster's VMs are deleted, perform the following steps:

  1. From the vSphere Web Client's left-hand Navigator menu, click Hosts and Clusters.
  2. Find your Resource Pool.
  3. Verify whether there are VMs prefixed with your user cluster's name.

If there are user cluster VMs remaining, perform the following steps for each VM:

  1. From the vSphere Web Client, right-click the user cluster VM and select Power > Power Off.
  2. Once the VM is powered off, right-click the VM and select Delete from Disk.
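
If you prefer to work from the command line, the following sketch shows one way to find and remove leftover VMs by using the govc CLI. This assumes that you have govc installed and configured with your vCenter credentials (for example, through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables); the inventory path and placeholders are illustrative:

# List VMs whose names start with the user cluster's name.
govc find / -type m -name "[CLUSTER_NAME]*"

# Power off and delete a leftover VM.
govc vm.power -off "[VM_NAME]"
govc vm.destroy "[VM_NAME]"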

Cleaning up a user cluster's F5 partition

If there are any entries remaining in the user cluster's partition, perform the following steps:

  1. In the top-right corner of the F5 BIG-IP console, switch to the user cluster partition that you want to clean up.
  2. Select Local Traffic > Virtual Servers > Virtual Server List.
  3. In the Virtual Servers menu, remove all the virtual IPs.
  4. Select Pools, then delete all the pools.
  5. Select Nodes, then delete all the nodes.
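
If you have SSH access to the BIG-IP system, you might instead clean up the partition from tmsh. The following is only a sketch; it assumes a partition named [PARTITION_NAME], and you should confirm the exact syntax against your F5 documentation before running it:

tmsh -c "cd /[PARTITION_NAME]; delete ltm virtual all; delete ltm pool all; delete ltm node all"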

After you have finished

After gkectl finishes deleting the user cluster, you can delete the user cluster's kubeconfig.
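
For example, if the user cluster's kubeconfig was generated as a file named my-user-cluster-kubeconfig (a hypothetical name), you can remove it:

rm my-user-cluster-kubeconfig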

Troubleshooting

For more information, refer to Troubleshooting.

Diagnosing cluster issues using gkectl

Use gkectl diagnose commands to identify cluster issues and to share cluster information with Google. See Diagnosing cluster issues.
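
For example, a basic health check of a user cluster looks something like the following; the --cluster-name flag shown here is an assumption, so check gkectl diagnose cluster --help for the exact flags in your version:

gkectl diagnose cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster-name [CLUSTER_NAME]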

Default logging behavior

For gkectl and gkeadm, it is sufficient to use the default logging settings:

  • By default, log entries are saved as follows:

    • For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
    • For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
  • All log entries are saved in the log file, even if they are not printed in the terminal (when --alsologtostderr is false).
  • The -v5 verbosity level (default) covers all the log entries needed by the support team.
  • The log file also contains the command executed and the failure message.

We recommend that you send the log file to the support team when you need help.

Specifying a non-default location for the log file

To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.

To specify a non-default location for the gkeadm log file, use the --log_file flag.
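
For example, to write the log for a cluster deletion to a custom location (the path shown is illustrative):

gkectl delete cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster [CLUSTER_NAME] \
--log_file /tmp/gkectl-delete.log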

Locating Cluster API logs in the admin cluster

If a VM fails to start after the admin control plane has started, you can debug the issue by inspecting the logs of the Cluster API controllers in the admin cluster:

  1. Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
  2. Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
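
For example, to narrow the output to likely error entries, you can pipe the logs through grep (the pattern shown is just an illustration):

kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error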