[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-01 (世界標準時間)。"],[],[],null,["This page describes how to delete an admin cluster created with Google Distributed Cloud\n(software only) for VMware.\n\nBefore you begin\n\nBefore you delete an admin cluster, complete the following steps:\n\n- Delete its user clusters. See [Deleting a user cluster](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/deleting-a-user-cluster).\n- Delete any workloads that use [PodDisruptionBudgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) from the admin cluster.\n- Delete all external objects, such as PersistentVolumes, from the admin cluster.\n- Set a `KUBECONFIG` environment variable pointing to the kubeconfig of the\n admin cluster that you want to delete:\n\n ```\n export KUBECONFIG=ADMIN_CLUSTER_KUBECONFIG\n ```\n\n where \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e is the path of the admin cluster's\n kubeconfig file.\n- Note down the admin cluster name:\n\n ```\n kubectl get onpremadmincluster\n ```\n\nUnenrolling the admin cluster\n\nIf the admin cluster is enrolled in the GKE On-Prem API, you need to\nfirst unenroll it from the API. An admin cluster is enrolled in the API in\nthe following cases:\n\n- You [explicitly enroll](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/enroll-cluster) the cluster.\n- You [upgraded a user cluster using the Google Cloud CLI](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/upgrading#gcloud-cli), which automatically enrolls the admin cluster.\n- You created the admin cluster using Terraform.\n\n1. List all enrolled admin clusters in your project:\n\n ```\n gcloud container vmware admin-clusters list \\\n --project=PROJECT_ID \\\n --location=-\n ```\n\n Replace \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e with the ID of the fleet host project.\n\n The command outputs the name of each admin cluster that is enrolled in the\n GKE On-Prem API in the project, along with the Google Cloud region.\n\n When you set `--location=-`, that means to list all clusters in all\n regions. If you need to scope down the list, set `--location` to the\n region you specified when you enrolled the cluster.\n2. Unenroll the cluster from the GKE On-Prem API:\n\n ```\n gcloud container vmware admin-clusters unenroll ADMIN_CLUSTER_NAME \\\n --project=PROJECT_ID \\\n --location=REGION\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_NAME\u003c/var\u003e: The name of the admin cluster.\n - \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: The ID of the fleet host project.\n - \u003cvar translate=\"no\"\u003eREGION\u003c/var\u003e: The Google Cloud region.\n\n This command removes the GKE On-Prem API resources from Google Cloud.\n\nDeleting logging and monitoring\n\nSkip this section if your cluster is at version 1.30 or higher. Because the\nlogging and monitoring custom resources aren't deployed on clusters at\nversion 1.30 and higher, if you run the commands, they won't return.\n\nGoogle Distributed Cloud's logging and monitoring Pods, deployed from\nStatefulSets, use PDBs that can prevent nodes from draining properly. 
To delete the logging and monitoring Pods, run the following commands:

```
kubectl delete monitoring --all -n kube-system
kubectl delete stackdriver --all -n kube-system
```

Deleting monitoring cleans up the PersistentVolumes (PVs) associated with StatefulSets, but the PersistentVolume for Stackdriver must be deleted separately.

Deleting the Stackdriver PV is optional. If you choose not to delete the PV, record the name and location of the associated PV somewhere outside of the cluster.

Deleting the PersistentVolumeClaim (PVC) propagates the deletion to the PV.

To find the Stackdriver PVC, run the following command:

```
kubectl get pvc -n kube-system
```

To delete the PVC, run the following command:

```
kubectl delete pvc -n kube-system PVC_NAME
```

## Verifying that logging and monitoring are removed

To verify that logging and monitoring have been removed, run the following commands:

```
kubectl get pvc -n kube-system
kubectl get statefulsets -n kube-system
```

## Cleaning up an admin cluster's F5 partition

Deleting the `gke-system` namespace from the admin cluster ensures proper cleanup of the F5 partition, which lets you reuse the partition for another admin cluster.

To delete the `gke-system` namespace, run the following command:

```
kubectl delete ns gke-system
```

Then delete any remaining Services of type LoadBalancer. To list all Services, run the following command:

```
kubectl get services --all-namespaces
```

| **Note:** The kube-apiserver Service should be the last Service you delete. After you delete this Service, you can no longer use kubectl to reach your cluster.

For each Service of type LoadBalancer, delete it by running the following command:

```
kubectl delete service SERVICE_NAME -n SERVICE_NAMESPACE
```

Then, from the F5 BIG-IP console:

1. In the top-right corner of the console, switch to the partition to clean up.
2. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List**.
3. In the **Virtual Servers** menu, remove all the virtual IPs.
4. Select **Pools**, then delete all the pools.
5. Select **Nodes**, then delete all the nodes.

## Verifying that the F5 partition is clean

### CLI

Check that the VIP is down by running the following command:

```
ping -c 1 -W 1 F5_LOAD_BALANCER_IP; echo $?
```

This prints `1` if the VIP is down.

### F5 UI

To check that the partition has been cleaned up from the F5 user interface, perform the following steps:

1. From the upper-right corner, click the **Partition** drop-down menu and select your admin cluster's partition.
2. From the left-hand **Main** menu, select **Local Traffic** > **Network Map**. There should be nothing listed below the Local Traffic Network Map.
3. From **Local Traffic** > **Virtual Servers**, select **Nodes**, then select **Nodes List**. There should be nothing listed here either.

If any entries remain, delete them manually from the UI.

## Powering off admin node machines

Before you power off the machines, run this command to get their names:

```
kubectl get machines -o wide
```

The output lists the names of the machines, which you can then find in the vSphere UI.

To delete the admin control plane node machines, power off each of the remaining admin VMs in your vSphere resource pool.
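The following steps use the vSphere UI. If you already have the `govc` CLI configured against your vCenter (an assumption; this page doesn't cover setting it up), a minimal sketch of the same power-off, where VM_NAME is a machine name from the previous command, is:

```
# Assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are already set for your vCenter.
# Power off one admin VM; repeat for each machine name from `kubectl get machines`.
govc vm.power -off VM_NAME

# For the deletion step in the next section, `govc vm.destroy VM_NAME` deletes the VM from disk.
```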
### vSphere UI

Perform the following steps:

1. From the vSphere menu, select the VM from the vSphere resource pool.
2. At the top of the VM menu, click **Actions**.
3. Select **Power** > **Power Off**. It might take a few minutes for the VM to power off.

## Deleting admin node machines

After the VM has powered off, you can delete it.

### vSphere UI

Perform the following steps:

1. From the vSphere menu, select the VM from the vSphere resource pool.
2. At the top of the VM menu, click **Actions**.
3. Click **Delete from Disk**.

## Deleting the data disk

After you have deleted the VMs, you can delete the data disk. The steps differ slightly depending on whether you have a highly available (HA) or non-HA admin cluster.

Do the following steps in the vSphere UI:

### Non-HA

1. From the vSphere menu, select the data disk from the datastore, as specified in the `vCenter.dataDisk` field in the admin cluster configuration file.
2. From the middle of the datastore menu, click **Delete**.

### HA

The data disk paths for the three admin control plane machines are automatically generated under `/anthos/ADMIN_CLUSTER_NAME/default/`, for example:

```
/anthos/ADMIN_CLUSTER_NAME/default/MACHINE_NAME-0-data.vmdk
/anthos/ADMIN_CLUSTER_NAME/default/MACHINE_NAME-1-data.vmdk
/anthos/ADMIN_CLUSTER_NAME/default/MACHINE_NAME-2-data.vmdk
```

Do the following steps to delete each data disk:

1. From the vSphere menu, select the data disk from the datastore.
2. From the middle of the datastore menu, click **Delete**.

## Deleting the `checkpoint.yaml` file

If you are deleting an HA admin cluster, skip this step because HA admin clusters don't support the checkpoint file.

The `DATA_DISK_NAME-checkpoint.yaml` [file](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/create-admin-cluster#managing_the_checkpointyaml_file), where `DATA_DISK_NAME` is the name of the data disk, is located in the same folder as the data disk. Delete this file.

| **Note:** If `DATA_DISK_NAME` is too long, the file is instead named `DATA_DISK_NAME.yaml`.

## Unregistering the admin cluster

When you create an admin cluster, you [register](/kubernetes-engine/distributed-cloud/vmware/docs/how-to/admin-cluster-configuration-file-latest#gkeconnect-section) the cluster with a Google Cloud fleet. Run the following command to delete the fleet membership, which unregisters the cluster:

```
gcloud container fleet memberships delete ADMIN_CLUSTER_NAME \
    --project=PROJECT_ID \
    --location=global
```

This command removes the fleet membership resources from Google Cloud.

## After you have finished

After you have finished deleting the admin cluster, delete its kubeconfig file.
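For example, if you exported `KUBECONFIG` as described in the prerequisites, a minimal cleanup looks like the following sketch (adjust the path to wherever your kubeconfig file is stored):

```
# Delete the admin cluster's kubeconfig file and clear the environment variable.
rm ADMIN_CLUSTER_KUBECONFIG
unset KUBECONFIG
```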