This page describes how to delete a Google Distributed Cloud user cluster. Deleting a user cluster unregisters the cluster from the fleet and deletes the workloads, node pools, control-plane nodes, and the corresponding resources, such as VMs, F5 partitions, and data disks.
Choose a tool to delete a cluster
How you delete a user cluster depends on whether the cluster is enrolled in the Anthos On-Prem API. A user cluster is enrolled in the Anthos On-Prem API if one of the following is true:
The cluster was created by using the Google Cloud console, the Google Cloud CLI (gcloud CLI), or Terraform, each of which automatically enrolls the cluster in the Anthos On-Prem API. Collectively, these tools are referred to as Anthos On-Prem API clients.
The cluster was created using gkectl, but it was subsequently enrolled in the Anthos On-Prem API.
If the cluster is enrolled in the Anthos On-Prem API, use an Anthos On-Prem API client to delete the cluster. If the cluster isn't enrolled in the Anthos On-Prem API, use gkectl on the admin workstation to delete the cluster.
To find all user clusters that are enrolled in the Anthos On-Prem API in a specific project, run the following command:

```shell
gcloud container vmware clusters list \
    --project=PROJECT_ID \
    --location=-
```
Setting --location=- lists all clusters in all regions. If you need to narrow the list, set --location to a specific region.
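If the project contains many clusters, you can also narrow the output with the standard gcloud --filter flag. A minimal sketch, in which the "prod-" name prefix is a hypothetical example:

```shell
# List enrolled clusters in all regions whose names start with "prod-".
# PROJECT_ID and the "prod-" prefix are placeholder values.
gcloud container vmware clusters list \
    --project=PROJECT_ID \
    --location=- \
    --filter="name~^prod-"
```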
If the cluster is listed, it is enrolled in the Anthos On-Prem API. If you aren't a project owner, you must at minimum be granted the Identity and Access Management (IAM) role roles/gkeonprem.admin on the project to delete enrolled clusters. For details on the permissions included in this role, see GKE on-prem roles in the IAM documentation.
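A project owner can grant that role with a standard IAM policy binding. A sketch, in which the project ID and the member's email address are placeholders:

```shell
# Grant roles/gkeonprem.admin to a user on the fleet host project.
# PROJECT_ID and the member address are placeholder values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:cluster-admin@example.com" \
    --role="roles/gkeonprem.admin"
```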
Delete a user cluster
gkectl
You can use gkectl to delete clusters that aren't enrolled in the Anthos On-Prem API. If your organization's proxy and firewall rules allow traffic to reach gkeonprem.googleapis.com and gkeonprem.mtls.googleapis.com (the service names for the Anthos On-Prem API), then gkectl can delete enrolled clusters as well.
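One quick way to check whether your proxy and firewall rules allow that traffic is to probe the endpoints from the admin workstation. This is an informal check, not part of the product's tooling:

```shell
# Reachability probe from the admin workstation. Any HTTP status code
# (even 404) confirms that the endpoint is reachable through the proxy
# and firewall; a timeout or connection error suggests blocked traffic.
curl -s -o /dev/null -w "%{http_code}\n" https://gkeonprem.googleapis.com/
curl -s -o /dev/null -w "%{http_code}\n" https://gkeonprem.mtls.googleapis.com/
```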
Run the following command on the admin workstation to delete the cluster:
```shell
gkectl delete cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster CLUSTER_NAME
```
Replace the following:

- ADMIN_CLUSTER_KUBECONFIG: the path to the admin cluster's kubeconfig file.
- CLUSTER_NAME: the name of the user cluster that you want to delete.
If deletion fails with a message similar to the following:

```
Exit with error: ... failed to unenroll user cluster CLUSTER_NAME failed to create GKE On-Prem API client
```

then the cluster is enrolled, but gkectl was unable to reach the Anthos On-Prem API. In this case, the easiest option is to use an Anthos On-Prem API client to delete the cluster.
If deleting the user cluster fails partway through, you can run gkectl with the --force flag to ignore the error and continue the deletion:

```shell
gkectl delete cluster \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --cluster CLUSTER_NAME \
    --force
```
If you are using the Seesaw bundled load balancer, delete the load balancer.
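One possible form of the load-balancer deletion is sketched below. It assumes your version of gkectl supports a delete loadbalancer subcommand with these flags; verify against `gkectl delete --help` on your admin workstation before running it:

```shell
# Delete the Seesaw load balancer for the user cluster.
# ADMIN_CLUSTER_KUBECONFIG and USER_CLUSTER_CONFIG are placeholders for the
# admin cluster's kubeconfig file and the user cluster's configuration file.
gkectl delete loadbalancer \
    --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
    --config USER_CLUSTER_CONFIG
```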
Google Cloud console
If the user cluster is managed by the Anthos On-Prem API, do the following steps to delete the cluster:
In the Google Cloud console, go to the GKE Enterprise clusters page.
Select the Google Cloud project that the user cluster is in.
In the list of clusters, click the cluster that you want to delete.
In the Details panel, if the Type is vm Anthos (VMware), do the following steps to delete the cluster using the Google Cloud console:
In the Details panel, click More details.
Near the top of the window, click Delete.

When prompted to confirm, click Delete again.
If the Type is external, this indicates that the cluster was created using gkectl. In this case, use gkectl to delete the cluster.
gcloud CLI
If the user cluster is managed by the Anthos On-Prem API, do the following on a computer that has the gcloud CLI installed:
Log in with your Google account:
```shell
gcloud auth login
```
Update components:
```shell
gcloud components update
```
Get a list of clusters to help ensure that you specify the correct cluster name in the delete command:
```shell
gcloud container vmware clusters list \
    --project=FLEET_HOST_PROJECT_ID \
    --location=LOCATION
```
Replace the following:

- FLEET_HOST_PROJECT_ID: the ID of the project that the cluster is registered to.
- LOCATION: the Google Cloud location associated with the user cluster.
The output is similar to the following:

```
NAME                     LOCATION  VERSION        ADMIN_CLUSTER            STATE
example-user-cluster-1a  us-west1  1.14.1-gke.39  example-admin-cluster-1  RUNNING
```
Use the following command to delete the cluster:
```shell
gcloud container vmware clusters delete USER_CLUSTER_NAME \
    --project=FLEET_HOST_PROJECT_ID \
    --location=LOCATION \
    --force \
    --allow-missing
```
Replace the following:

- USER_CLUSTER_NAME: the name of the user cluster to delete.
- FLEET_HOST_PROJECT_ID: the ID of the project that the cluster is registered to.
- LOCATION: the Google Cloud location associated with the user cluster.
The --force flag lets you delete a cluster that has node pools. Without the --force flag, you have to delete the node pools first and then delete the cluster.

The --allow-missing flag is a standard Google API flag. When you include this flag, the command returns success if the cluster isn't found.

If the command returns an error that contains the text failed connecting to the cluster's control plane, this indicates connectivity issues with the admin cluster, the Connect Agent, or the on-premises environment:

- If you think the connectivity issue is transient, for example because of network problems, wait and retry the command.
- If retrying the command continues to fail, see Collecting Connect Agent logs to troubleshoot issues with the Connect Agent.
- If you know that the admin cluster has been deleted, or if the VMs for the admin cluster or the user cluster have been shut down or are otherwise inaccessible, include the --ignore-errors flag and retry the command.
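In that case, the retried command is the delete command from above with the --ignore-errors flag added:

```shell
# Retry the deletion while ignoring errors caused by an unreachable
# admin cluster or inaccessible VMs. Placeholders are the same as above.
gcloud container vmware clusters delete USER_CLUSTER_NAME \
    --project=FLEET_HOST_PROJECT_ID \
    --location=LOCATION \
    --force \
    --allow-missing \
    --ignore-errors
```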
For information about other flags, see the gcloud CLI reference.
Clean up resources
If there were issues when you deleted the cluster, some F5 or vSphere resources might be left over. The following sections explain how to clean up these leftover resources.
Clean up a user cluster's VMs in vSphere
To verify that the user cluster's VMs are deleted, perform the following steps:
From the vSphere Web Client's left-hand Navigator menu, click the Hosts and Clusters menu.
Find the resource pool for your admin cluster. This is the value of vCenter.resourcePool in your admin cluster configuration file.

Under the resource pool, locate the VMs prefixed with the name of your user cluster. These are the control-plane nodes for your user cluster. There will be one or three of these, depending on whether your user cluster has a high-availability control plane.
Find the resource pool for your user cluster. This is the value of vCenter.resourcePool in your user cluster configuration file. If your user cluster configuration file doesn't specify a resource pool, the value is inherited from the admin cluster.

Under the resource pool, locate the VMs prefixed with the name of a node pool in your user cluster. These are the worker nodes in your user cluster.
For each control-plane node and each worker node:
From the vSphere Web Client, right-click the VM and select Power > Power Off.
After the VM is powered off, right-click the VM and select Delete from Disk.
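If you prefer the command line over the vSphere Web Client, the open source govc tool (from the govmomi project) can perform the same power-off and delete steps. This is an alternative to the UI, not part of the product's own tooling, and the VM name below is a placeholder:

```shell
# Power off and delete one leftover VM; repeat for each control-plane
# node and worker node. govc reads its vCenter connection settings from
# the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables.
govc vm.power -off=true USER_CLUSTER_NAME-node-1
govc vm.destroy USER_CLUSTER_NAME-node-1
```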
Clean up a user cluster's F5 partition
If there are any entries remaining in the user cluster's partition, perform the following steps:
- From the F5 BIG-IP console, in the top-right corner of the console, switch to the user cluster partition you want to clean up.
- Select Local Traffic > Virtual Servers > Virtual Server List.
- In the Virtual Servers menu, remove all the virtual IPs.
- Select Pools, then delete all the pools.
- Select Nodes, then delete all the nodes.
After you have finished
After the cluster is deleted, you can delete the user cluster's kubeconfig file.
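For example, assuming USER_CLUSTER_KUBECONFIG is a placeholder for the path to that file:

```shell
# Remove the kubeconfig file for the deleted user cluster.
# USER_CLUSTER_KUBECONFIG is a placeholder for the file's path.
rm USER_CLUSTER_KUBECONFIG
```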