
Deleting a user cluster

This page describes how to delete a GKE On-Prem user cluster.

Overview

GKE On-Prem supports deletion of healthy user clusters via gkectl. If the cluster is unhealthy (for example, if its control plane is unreachable or the cluster failed to bootstrap), refer instead to Manually deleting a user cluster.

Deleting a user cluster

Run the following command:

gkectl delete cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster [CLUSTER_NAME]

where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.

After you have finished

After gkectl finishes deleting the user cluster, delete the user cluster's kubeconfig file.
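For example, if the user cluster's kubeconfig was written to a file named [CLUSTER_NAME]-kubeconfig in your working directory (an assumed file name; substitute wherever your kubeconfig actually lives), you can remove it with:

# Remove the kubeconfig file of the deleted user cluster.
# [CLUSTER_NAME]-kubeconfig is an assumed file name; adjust to your actual path.
rm [CLUSTER_NAME]-kubeconfig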

Known issues

An additional user control plane VM might be created in vSphere after the cluster is deleted. Verify that all user cluster VMs are deleted by performing the following steps:

  1. In the vSphere Web Client's left-hand Navigator menu, click Hosts and Clusters.
  2. Find your Resource Pool.
  3. Verify that no VMs remain that are prefixed with your user cluster's name.
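If you prefer the command line, the same check can be done with the govc CLI, which is not part of GKE On-Prem. This is only a sketch; it assumes govc is installed and the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables point at your vCenter Server:

# List any VMs whose names start with the user cluster's name.
# [USER_CLUSTER_NAME] is a placeholder for your user cluster's name.
govc find / -type m -name '[USER_CLUSTER_NAME]*'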

If there are user cluster VMs remaining, perform the following steps from the vSphere Web Client:

  1. Right-click the user cluster VM and select Power > Power Off.
  2. Once the VM is powered off, right-click the VM and select Delete from Disk.
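Under the same assumptions as the govc sketch above, the power-off and delete steps look like this, where [VM_NAME] is the name of a leftover VM found in the previous check:

# Power off the leftover VM, then delete it from disk.
govc vm.power -off -force '[VM_NAME]'
govc vm.destroy '[VM_NAME]'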

Troubleshooting

For more information, refer to Troubleshooting.

Diagnosing cluster issues using gkectl

Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
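For example, a minimal sketch; the available subcommands and flags depend on your GKE On-Prem version, so treat --cluster-name as an assumption and confirm with gkectl diagnose --help:

# Diagnose the admin cluster.
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]

# Diagnose a user cluster (--cluster-name is assumed; verify against your version's help output).
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --cluster-name [CLUSTER_NAME]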

Running gkectl commands verbosely

Pass the -v5 flag to any gkectl command to increase logging verbosity:

-v5

Logging gkectl errors to stderr

Pass the --alsologtostderr flag to log gkectl errors to stderr as well as to the log file:

--alsologtostderr
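For example, to delete a user cluster with verbose logging that is also written to stderr:

gkectl delete cluster \
--kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
--cluster [CLUSTER_NAME] \
-v5 --alsologtostderr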

Locating gkectl logs in the admin workstation

Even if you don't pass gkectl its debugging flags, you can view its logs in the following directory on the admin workstation:

/home/ubuntu/.config/syllogi/logs
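For example, to find the most recent log files and search them for errors using standard shell tools:

# Show the most recently written gkectl log files.
ls -lt /home/ubuntu/.config/syllogi/logs | head

# Search all logs in the directory for error messages.
grep -ri error /home/ubuntu/.config/syllogi/logs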

Locating Cluster API logs in the admin cluster

If a VM fails to start after the admin control plane has started, you can debug the issue by inspecting the logs of the Cluster API controllers in the admin cluster:

  1. Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
  2. Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
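Steps 1 and 2 can also be combined into a single pass. This sketch assumes there is a single clusterapi-controllers Pod and simply filters its output for errors:

# Look up the clusterapi-controllers Pod and grep its vsphere-controller-manager logs for errors.
POD_NAME=$(kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods -o name | grep clusterapi-controllers | head -n 1)
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs "${POD_NAME#pod/}" vsphere-controller-manager | grep -i error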