[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[[["\u003cp\u003eDeleting a user cluster requires the \u003ccode\u003euser-cluster-admin\u003c/code\u003e role and a Kubernetes version of 1.26 or greater when using the GDC console or API, otherwise \u003ccode\u003ekubectl\u003c/code\u003e CLI is necessary.\u003c/p\u003e\n"],["\u003cp\u003eThe GDC console provides a straightforward method for deleting a cluster by selecting it from the cluster list and using the delete option, which requires a confirmation phrase.\u003c/p\u003e\n"],["\u003cp\u003eDeleting a user cluster via \u003ccode\u003ekubectl\u003c/code\u003e involves multiple steps, including pausing reconciliation, deleting various custom resources (like \u003ccode\u003eCluster\u003c/code\u003e, \u003ccode\u003eNodePoolClaim\u003c/code\u003e, and \u003ccode\u003enamespace\u003c/code\u003e), managing Istio secrets, and removing address pool claims.\u003c/p\u003e\n"],["\u003cp\u003eWhen using the \u003ccode\u003ekubectl\u003c/code\u003e method, you can expect errors from the command that attempts to delete all address pool claims across all namespaces, because some namespaces may not contain the claim.\u003c/p\u003e\n"],["\u003cp\u003eThe API method of deleting a user cluster involves removing the \u003ccode\u003eCluster\u003c/code\u003e custom resource from the GDC instance using a \u003ccode\u003ekubectl delete\u003c/code\u003e command, and this process can take up to 20 minutes, or can be ran in the background.\u003c/p\u003e\n"]]],[],null,["# Delete a user cluster\n\nTo delete a user cluster, you must have the User Cluster Admin role\n(`user-cluster-admin` role).\n\nComplete the following steps to delete a user cluster:\n**Important:** Your cluster's Kubernetes version must be 1.26 or greater to delete it using the GDC console or API. If your Kubernetes version is less than 1.26, you must use the `kubectl` CLI. \n\n### Console\n\n1. In the navigation menu, select **Clusters**.\n\n2. In the cluster list, click the cluster that you want to delete.\n\n3. Click delete **Delete\n Cluster**.\n\n4. When prompted, type the given confirmation phrase and click **Delete** to\n delete the cluster.\n\n### `kubectl`\n\n1. Pause the reconciliation for the GDCH `Cluster` custom resource of the user cluster:\n\n kubectl annotate clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n cluster.gdc.goog/paused=true --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n2. Trigger the deletion of the GDCH `Cluster` custom resource of the user cluster:\n\n kubectl delete clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e --wait=false\n\n3. Start the deletion of all `NodePoolClaim` custom resources in the user cluster:\n\n kubectl delete --all nodepoolclaims -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e --wait=false\n\n This command starts the background deletion of all node pool claims in the user cluster.\n4. Delete the `Cluster` custom resource of the user cluster:\n\n kubectl delete clusters \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e \\\n -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n This command might take several minutes based on the amount of node pools in\n the user cluster to delete.\n5. Delete the namespace custom resource:\n\n kubectl --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e delete namespace \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e\n\n6. Delete the Istio secret in the `istio-system` namespace:\n\n kubectl delete secrets istio-remote-secret-\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n istio-system \\\n --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n In some cases, your Istio secret might have a different name. To list your\n Istio secret and confirm the name, run the following command: \n\n kubectl get secrets -n istio-system \\\n --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n7. Remove the address pool claims with the same name as the target user\n cluster, but located in different namespaces:\n\n for j in $(kubectl get addresspoolclaims -A -o custom-columns=:.metadata.namespace --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e);\n do\n kubectl delete addresspoolclaims \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n $j --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e;\n done\n\n You can expect to see errors like the following after executing this\n command: \n\n Error from server (NotFound): addresspoolclaims.system.private.gdc.goog \"USER_CLUSTER_NAME\" not found\n\n Ignore these errors. The command attempts to find all address pool claims\n with the specified cluster name in all namespaces. Some namespaces do not\n contain address pool claims with the name, resulting in an error.\n8. Verify that you deleted the namespace of the user cluster:\n\n kubectl get namespaces \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n If the namespace is deleted, the output displays an error indicating the\n namespace is not found. For example: \n\n Error from server (NotFound): namespaces \u003cvar class=\"readonly\" translate=\"no\"\u003eNAMESPACE\u003c/var\u003e not found\n\n9. Unpause the reconciliation of the GDCH `Cluster` custom resource of the user cluster:\n\n kubectl annotate clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n cluster.gdc.goog/paused- --kubeconfig=\u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n### API\n\n- To delete a user cluster, remove the `Cluster` custom resource from the\n GDC instance:\n\n kubectl delete clusters.cluster.gdc.goog/\u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e -n platform \\\n --kubeconfig \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e\n\n | **Note:** The deletion can take up to 20 minutes. If you want the deletion process to run in the background, append the `--wait=false` parameter to the command.\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eUSER_CLUSTER_NAME\u003c/var\u003e: The name of the user cluster to delete.\n - \u003cvar translate=\"no\"\u003eADMIN_CLUSTER_KUBECONFIG\u003c/var\u003e: The admin cluster's kubeconfig file path."]]