This page describes how to delete a GKE On-Prem user cluster.
GKE On-Prem supports deletion of healthy user clusters via gkectl.
If the cluster is unhealthy (for example, if its control plane is unreachable or
the cluster failed to bootstrap), refer instead to
Manually deleting a user cluster.
Deleting a user cluster
Run the following command:
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME]
where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
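For example, with a hypothetical admin kubeconfig file named admin-kubeconfig and a user cluster named user-cluster-1, the command would look like this:

gkectl delete cluster \
    --kubeconfig admin-kubeconfig \
    --cluster user-cluster-1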
After you have finished
After gkectl finishes deleting the user cluster, delete the user cluster's kubeconfig file.
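For example, if the user cluster's kubeconfig was saved as a file named user-cluster-1-kubeconfig (a hypothetical name), you can remove it from the admin workstation like this:

rm user-cluster-1-kubeconfig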
An additional user control plane VM might be created in vSphere after the cluster is deleted. Verify that all user cluster VMs are deleted by performing the following steps:
- From the vSphere Web Client's left-hand Navigator menu, click the Hosts and Clusters menu.
- Find your Resource Pool.
- There should be no VMs that are prefixed with your user cluster's name.
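Alternatively, if you use the govc command-line tool and have it configured for your vCenter Server (an assumption; GKE On-Prem does not require govc), you can list any remaining VMs from a shell:

# List VMs whose names begin with the user cluster's name
# (replace user-cluster-1 with your cluster's name); no output means no leftover VMs
govc find / -type m -name 'user-cluster-1*'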
If there are user cluster VMs remaining, perform the following steps from the vSphere Web Client:
- Right-click the user cluster VM and select Power > Power Off.
- Once the VM is powered off, right-click the VM and select Delete from Disk.
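If you prefer to script this cleanup and have govc configured (again an assumption, with a hypothetical VM name shown), the equivalent commands are:

# Power off the leftover VM, then delete it from disk
govc vm.power -off user-cluster-1-leftover-vm
govc vm.destroy user-cluster-1-leftover-vm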
For more information, refer to Troubleshooting.
Diagnosing cluster issues using gkectl
Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
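As a minimal sketch of such a check (the exact flag names are an assumption; run gkectl diagnose --help to confirm them for your version):

# Diagnose the user cluster through the admin cluster
gkectl diagnose cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster-name [CLUSTER_NAME]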
The Troubleshooting guide also explains how to run gkectl commands verbosely and how to log gkectl errors to stderr.
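For example, a verbose invocation might look like the following sketch; the -v 5 and --alsologtostderr flags are assumptions based on gkectl's standard logging options:

# Increase verbosity and mirror errors to stderr (flags assumed; verify with gkectl --help)
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME] \
    -v 5 --alsologtostderr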
Locating gkectl logs in the admin workstation
Even if you don't pass in its debugging flags, gkectl writes logs to a directory on the admin workstation, where you can view them.
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:
Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
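If you want to combine both steps, a small shell sketch like the following works; it reuses the grep-based lookup above, and the awk field extraction assumes the default kubectl output format with the Pod name in the first column:

# Find the clusterapi-controllers Pod, then search its vsphere-controller-manager logs for errors
POD_NAME=$(kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers | awk '{print $1}')
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs $POD_NAME vsphere-controller-manager | grep -i error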