This page describes how to delete a GKE On-Prem admin cluster.
Before you begin
Before you delete an admin cluster, complete the following steps:
- Delete its user clusters. See Deleting a user cluster.
- Delete any workloads that use PodDisruptionBudgets (PDBs) from the admin cluster.
- Delete all external objects, such as PersistentVolumes, from the admin cluster.
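For example, you can check the admin cluster for any remaining PDBs and PersistentVolumes before you continue. These are standard kubectl commands; the output depends on your environment:
kubectl get pdb --all-namespaces
kubectl get pv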
Set a KUBECONFIG environment variable pointing to the kubeconfig of the admin cluster that you want to delete:
export KUBECONFIG=[ADMIN_CLUSTER_KUBECONFIG]
where [ADMIN_CLUSTER_KUBECONFIG] is the path of the admin cluster's kubeconfig file.
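To confirm that KUBECONFIG points at the admin cluster you intend to delete, you can run a read-only command such as:
kubectl get nodes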
Deleting logging and monitoring
GKE On-Prem's logging and monitoring Pods, deployed from StatefulSets, use PDBs that can prevent nodes from draining properly. To properly delete an admin cluster, you need to delete these Pods.
To delete logging and monitoring Pods, run the following commands:
kubectl delete monitoring --all -n kube-system
kubectl delete stackdriver --all -n kube-system
Deleting monitoring cleans up the PersistentVolumes (PVs) associated with StatefulSets, but the PersistentVolume for Stackdriver needs to be deleted separately.
Deletion of the Stackdriver PV is optional. If you choose not to delete the PV, record the name and location of the associated PV somewhere outside of the cluster.
Deleting the PersistentVolumeClaim (PVC) propagates the deletion to the PV.
To find the Stackdriver PVC, run the following command:
kubectl get pvc -n kube-system
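If you are keeping the Stackdriver PV, you can record the name of the PV bound to the PVC before deleting it. In this sketch, [PVC_NAME] is the PVC returned by the previous command:
kubectl get pvc [PVC_NAME] -n kube-system -o jsonpath='{.spec.volumeName}'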
To delete the PVC, run the following command:
kubectl delete pvc -n kube-system [PVC_NAME]
Verifying logging & monitoring are removed
To verify that logging and monitoring have been removed, run the following commands:
kubectl get pvc -n kube-system
kubectl get statefulsets -n kube-system
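You can also confirm that no logging or monitoring PersistentVolumes remain, apart from a Stackdriver PV that you chose to keep:
kubectl get pv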
Cleaning up an admin cluster's F5 partition
Deleting the gke-system namespace from the admin cluster ensures proper cleanup of the F5 partition, allowing you to reuse the partition for another admin cluster.
To delete the gke-system namespace, run the following command:
kubectl delete ns gke-system
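Namespace deletion can take a few minutes. To confirm that the namespace is gone, you can run the following command, which returns a NotFound error once deletion is complete:
kubectl get ns gke-system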
Then delete any remaining Services of type LoadBalancer. To list all Services, run the following command:
kubectl get services --all-namespaces
For each Service of type LoadBalancer, delete it by running the following command:
kubectl delete service [SERVICE_NAME] -n [SERVICE_NAMESPACE]
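For example, to narrow the list to Services of type LoadBalancer, you can filter the default table output; this grep-based sketch is one option:
kubectl get services --all-namespaces | grep LoadBalancer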
Then, from the F5 BIG-IP console:
- In the top-right corner of the console, switch to the partition to clean up.
- Select Local Traffic > Virtual Servers > Virtual Server List.
- In the Virtual Servers menu, remove all the virtual IPs.
- Select Pools, then delete all the pools.
- Select Nodes, then delete all the nodes.
Verifying F5 partition is clean
CLI
Check that the VIP is down by running the following command:
ping -c 1 -W 1 [F5_LOAD_BALANCER_IP]; echo $?
This command returns 1 if the VIP is down.
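If you prefer a human-readable check, you can wrap the same ping in a conditional. This is a sketch; [F5_LOAD_BALANCER_IP] is still the admin cluster VIP:
# Report whether the VIP still responds to a single ping.
if ping -c 1 -W 1 [F5_LOAD_BALANCER_IP] > /dev/null; then
  echo "VIP still responds; the partition may not be clean"
else
  echo "VIP is down"
fi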
F5 UI
To check that the partition has been cleaned up from the F5 user interface, perform the following steps:
- From the upper-right corner, click the Partition drop-down menu. Select your admin cluster's partition.
- From the left-hand Main menu, select Local Traffic > Network Map. There should be nothing listed below the Local Traffic Network Map.
- From Local Traffic > Virtual Servers, select Nodes, then select Nodes List. There should be nothing listed here as well.
If there are any entries remaining, delete them manually from the UI.
Powering off admin node machines
To delete the admin control plane node machines, you need to power off each of the remaining admin VMs in your vSphere resource pool.
vSphere UI
Perform the following steps:
- From the vSphere menu, select the VM from the vSphere resource pool.
- From the top of the VM menu, click Actions.
- Select Power > Power Off. It may take a few minutes for the VM to power off.
Deleting admin node machines
After the VM has powered off, you can delete the VM.
vSphere UI
Perform the following steps:
- From the vSphere menu, select the VM from the vSphere resource pool.
- From the top of the VM menu, click Actions.
- Click Delete from Disk.
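If you manage vSphere from the command line, the open-source govc tool offers equivalents to the UI steps above. This is a sketch outside of the GKE On-Prem tooling, assuming govc is installed and configured to reach your vCenter; [VM_NAME] is a placeholder for each admin node VM:
# Power off the VM, then delete it from disk.
govc vm.power -off [VM_NAME]
govc vm.destroy [VM_NAME]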
After you have finished
After you have finished deleting the admin cluster, delete its kubeconfig.
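For example, assuming the same path that you exported at the beginning of this procedure:
unset KUBECONFIG
rm [ADMIN_CLUSTER_KUBECONFIG]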
Troubleshooting
For more information, refer to Troubleshooting.
Diagnosing cluster issues using gkectl
Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
Default logging behavior
For gkectl and gkeadm it is sufficient to use the default logging settings:
- By default, log entries are saved as follows:
  - For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
  - For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
- All log entries are saved in the log file, even if they are not printed in the terminal (when --alsologtostderr is false).
- The -v5 verbosity level (default) covers all the log entries needed by the support team.
- The log file also contains the command executed and the failure message.
We recommend that you send the log file to the support team when you need help.
Specifying a non-default location for the log file
To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.
To specify a non-default location for the gkeadm log file, use the --log_file flag.
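For example, here is a sketch of passing the flag to a gkectl command. Apart from --log_file, which this section describes, the exact command, arguments, and the example log path are assumptions that depend on what you are running:
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --log_file /tmp/gkectl-diagnose.log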
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:
Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
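For example, to surface only lines that mention errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error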