This topic provides an overview of commands commonly used when working with GKE on-prem. It is provided for convenience, and to supplement the GKE on-prem documentation.
Flags inside square brackets are optional. Replace placeholder variables in square brackets with values from your environment.
kubectl commands
See also the kubectl cheatsheet.
Set default kubeconfig file
export KUBECONFIG=[KUBECONFIG_PATH]
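For example, to make a user cluster's kubeconfig the default for the current shell session (the path below is a hypothetical illustration):
export KUBECONFIG=$HOME/gke-on-prem/my-user-cluster-kubeconfig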
List clusters from default kubeconfig
kubectl get clusters
Pass in --kubeconfig [KUBECONFIG_PATH] to view clusters in a non-default kubeconfig.
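For example, assuming a kubeconfig stored at a hypothetical path:
kubectl get clusters --kubeconfig $HOME/gke-on-prem/my-user-cluster-kubeconfig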
List nodes in cluster from default kubeconfig
kubectl get nodes
Pass in --kubeconfig [KUBECONFIG_PATH] to view nodes in a non-default kubeconfig.
List all containers in all namespaces
kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
gkectl commands
See also the gkectl reference.
Diagnosing cluster issues using gkectl
Use gkectl diagnose commands to identify cluster issues and to share cluster information with Google. See Diagnosing cluster issues.
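For example, a minimal health check of an admin cluster might look like the following; see the diagnose documentation for the full set of subcommands and flags:
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]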
Running gkectl commands verbosely
-v5
Logging gkectl errors to stderr
--alsologtostderr
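These flags can be combined with other gkectl commands. For example, an illustrative verbose config check that also logs errors to stderr:
gkectl check-config --config [PATH] -v5 --alsologtostderr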
Generate a GKE on-prem configuration file
gkectl create-config [--config [PATH]]
Validate a configuration file
gkectl check-config --config [PATH]
Push GKE on-prem images to your Docker registry, and initialize node OS image
gkectl prepare --config [CONFIG_FILE] [--validate-attestations]
Create clusters
gkectl create cluster --config [CONFIG_FILE]
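Taken together, a typical provisioning pass from the admin workstation might look like the following sketch, assuming a configuration file named config.yaml:
gkectl check-config --config config.yaml   # validate the configuration
gkectl prepare --config config.yaml        # push images and initialize the node OS image
gkectl create cluster --config config.yaml # create the clusters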
Google Cloud service accounts
Create a service account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME] --project [PROJECT_ID]
Grant an IAM role to a service account
gcloud projects add-iam-policy-binding [PROJECT_ID] \
--member="serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com" \
--role="[ROLE_NAME]"
Create a private key for a service account
gcloud iam service-accounts keys create [KEY_FILE_NAME] \
--iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
--project [PROJECT_ID]
Activate a service account and execute gcloud commands as that account
gcloud auth activate-service-account --key-file=[SERVICE_ACCOUNT_KEY_FILE]
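As an end-to-end illustration, the following sketch creates a service account, grants it a role, creates a key, and activates the account. The account name, project ID, and role shown are hypothetical:
gcloud iam service-accounts create my-gke-sa --project my-project
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:my-gke-sa@my-project.iam.gserviceaccount.com" \
--role="roles/viewer"
gcloud iam service-accounts keys create my-gke-sa-key.json \
--iam-account my-gke-sa@my-project.iam.gserviceaccount.com \
--project my-project
gcloud auth activate-service-account --key-file=my-gke-sa-key.json
gcloud auth list   # verify which account is now active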
Admin workstation
SSH in to admin workstation
From the directory containing your Terraform configuration files:
ssh -i ~/.ssh/vsphere_workstation ubuntu@$(terraform output ip_address)
Or, if you want to use its IP address directly:
ssh -i ~/.ssh/vsphere_workstation ubuntu@[IP_ADDRESS]
Copy files to an admin workstation
scp -i ~/.ssh/vsphere_workstation [SOURCE_PATH] ubuntu@$(terraform output ip_address):[DESTINATION_PATH]
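For example, to copy a local configuration file into the workstation's home directory (the file name is illustrative):
scp -i ~/.ssh/vsphere_workstation config.yaml ubuntu@$(terraform output ip_address):~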
Locating gkectl logs in the admin workstation
Even if you don't pass in its debugging flags, you can view gkectl logs in the following admin workstation directory:
/home/ubuntu/.config/gke-on-prem/logs
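For example, to follow the most recently written log file (a minimal sketch):
tail -f "$(ls -t /home/ubuntu/.config/gke-on-prem/logs/* | head -1)"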
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:
Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
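The two steps can also be combined in a single shell session. This sketch captures the Pod name and then searches its logs for errors (the grep pattern is illustrative):
POD_NAME=$(kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers | awk '{print $1}')
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs $POD_NAME vsphere-controller-manager | grep -i error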
Clusters
Get IP addresses of an admin cluster's nodes
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get nodes --output wide
Get IP addresses of a user cluster's nodes
kubectl --kubeconfig [USER_CLUSTER_KUBECONFIG] get nodes --output wide
SSH in to cluster nodes
See Using SSH to connect to a cluster node.