GKE on-prem cheatsheet

This topic provides an overview of commands commonly used with GKE on-prem. It is provided for convenience, and to supplement the GKE on-prem documentation.

Flags inside square brackets are optional. Replace placeholder variables, such as [KUBECONFIG_PATH], with values appropriate to your environment.

kubectl commands

See also kubectl cheatsheet.

Set default kubeconfig file

export KUBECONFIG=[KUBECONFIG_PATH]
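
For example, assuming a kubeconfig at a hypothetical path, you can set the default and confirm the active context:

export KUBECONFIG=$HOME/clusters/user-cluster-kubeconfig
kubectl config current-context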

List clusters from default kubeconfig

kubectl get clusters

Pass in --kubeconfig [KUBECONFIG_PATH] to view clusters in a non-default kubeconfig.
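
For example, with a user cluster kubeconfig at a hypothetical path:

kubectl get clusters --kubeconfig /home/ubuntu/user-cluster-kubeconfig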

List nodes in cluster from default kubeconfig

kubectl get nodes

Pass in --kubeconfig [KUBECONFIG_PATH] to view nodes in a non-default kubeconfig.

List all container images in all namespaces

kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
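
The jsonpath expression extracts every container image reference from every Pod; tr splits the output into one image per line, and sort | uniq -c prints each unique image with the number of times it appears.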

gkectl commands

See also gkectl reference.

Diagnosing cluster issues using gkectl

Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.

Generate a GKE on-prem configuration file

gkectl create-config [--config [PATH]]

Validate a configuration file

gkectl check-config --config [PATH]

Push GKE on-prem images to your Docker registry, and initialize the node OS image

gkectl prepare --config [CONFIG_FILE] [--validate-attestations]

Create clusters

gkectl create cluster --config [CONFIG_FILE]

Google Cloud service accounts

Create a service account

gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME] --project [PROJECT_ID]
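
For example, assuming a hypothetical project my-gke-onprem-project and a hypothetical service account named connect-agent-sa:

gcloud iam service-accounts create connect-agent-sa --project my-gke-onprem-project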

Grant an IAM role to a service account

gcloud projects add-iam-policy-binding \
    [PROJECT_ID] \
    --member="serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com" \
    --role="[ROLE_NAME]"
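
For example, granting the hypothetical service account above the roles/gkehub.connect role, which the Connect agent's service account needs:

gcloud projects add-iam-policy-binding \
    my-gke-onprem-project \
    --member="serviceAccount:connect-agent-sa@my-gke-onprem-project.iam.gserviceaccount.com" \
    --role="roles/gkehub.connect"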

Create a private key for a service account

gcloud iam service-accounts keys create [KEY_FILE_NAME] \
    --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
    --project [PROJECT_ID]
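
For example, writing a JSON key for the hypothetical service account above to the current directory:

gcloud iam service-accounts keys create connect-agent-key.json \
    --iam-account connect-agent-sa@my-gke-onprem-project.iam.gserviceaccount.com \
    --project my-gke-onprem-project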

Activate a service account and execute gcloud commands as that account

gcloud auth activate-service-account --key-file=[SERVICE_ACCOUNT_KEY_FILE]
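
After activating the account, you can confirm which account gcloud is using:

gcloud auth list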

Admin workstation

SSH in to admin workstation

From the directory containing your Terraform configuration files:

ssh -i ~/.ssh/vsphere_workstation ubuntu@$(terraform output ip_address)

or, if you want to use the workstation's IP address directly:

ssh -i ~/.ssh/vsphere_workstation ubuntu@[IP_ADDRESS]

Copy files to an admin workstation

scp -i ~/.ssh/vsphere_workstation [SOURCE_PATH] ubuntu@$(terraform output ip_address):[DESTINATION_PATH]
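
For example, assuming a configuration file in your current directory, copy it to the workstation's home directory:

scp -i ~/.ssh/vsphere_workstation config.yaml ubuntu@$(terraform output ip_address):~/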

Default logging behavior

For gkectl and gkeadm it is sufficient to use the default logging settings:

  • By default, log entries are saved as follows:

    • For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
    • For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
  • All log entries are saved in the log file, even if they are not printed in the terminal (when --alsologtostderr is false).
  • The -v5 verbosity level (default) covers all the log entries needed by the support team.
  • The log file also contains the command executed and the failure message.

We recommend that you send the log file to the support team when you need help.

Specifying a non-default location for the log file

To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.

To specify a non-default location for the gkeadm log file, use the --log_file flag.
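
For example, writing the gkectl log to a hypothetical path under /tmp while validating a configuration file:

gkectl check-config --config config.yaml --log_file /tmp/gkectl-check.log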

Locating Cluster API logs in the admin cluster

If a VM fails to start after the admin control plane has started, you can debug the issue by inspecting the Cluster API controllers' logs in the admin cluster:

  1. Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
  2. Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors, as shown in the example after these steps:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
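
For example, to scan the logs for errors, where [POD_NAME] is the Pod name found in the previous step:

kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error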

Clusters

Get IP addresses of an admin cluster's nodes

kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get nodes --output wide

Get IP addresses of a user cluster's nodes

kubectl --kubeconfig [USER_CLUSTER_KUBECONFIG] get nodes --output wide

SSH in to cluster nodes

See Using SSH to connect to a cluster node.
