Install kubectl and configure cluster access

This page explains how to install and configure the kubectl command-line tool to interact with your Google Kubernetes Engine (GKE) clusters.

Overview

kubectl is a command-line tool that you can use to interact with your GKE clusters. To use kubectl with GKE, you must install the tool and configure it to communicate with your clusters. Further kubectl configuration is required if you run multiple clusters in Google Cloud.

This page shows you how to do the following:

  • Install kubectl and the required authentication plugin.
  • Store cluster credentials for kubectl.
  • Set a default cluster and run commands against a specific cluster.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI.

Install kubectl

You can install kubectl using the Google Cloud CLI or an external package manager such as apt or yum.

gcloud

  1. Install the kubectl component:

    gcloud components install kubectl
    
  2. Verify that kubectl is installed:

    kubectl version
    

apt

  1. Verify that you have the cloud-sdk repository:

    grep -rhE ^deb /etc/apt/sources.list* | grep "cloud-sdk"
    

    The output is similar to the following:

    deb  [signed-by=/usr/share/keyrings/cloud.google.gpg]  https://packages.cloud.google.com/apt cloud-sdk main
    
  2. Install the kubectl component:

    apt-get update
    apt-get install -y kubectl
    
  3. Verify that kubectl is installed:

    kubectl version --client
    
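If the grep command returns no output, the cloud-sdk repository is missing. The following sketch adds it, assuming a Debian or Ubuntu system with curl and sudo available:

```shell
# Add the Cloud SDK repository as a package source (sketch; the keyring
# path matches the signed-by entry shown above).
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" \
  | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list

# Import the Google Cloud public package-signing key.
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
```

After adding the repository, rerun apt-get update before installing kubectl.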

yum

  1. Verify that you have the cloud-sdk repository:

    yum repolist | grep "google-cloud-sdk"
    

    The output is similar to the following:

    google-cloud-sdk    Google Cloud SDK    2,205
    
  2. Install the kubectl component:

    yum install -y kubectl
    
  3. Verify that kubectl is installed:

    kubectl version --client
    
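If the repolist command returns no output, you can add the repository first. The following is a sketch; the el8 component of the baseurl is an assumption, so adjust it for your OS version:

```shell
# Create a repo definition for Google Cloud SDK packages (sketch).
sudo tee /etc/yum.repos.d/google-cloud-sdk.repo <<EOF
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```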

Install required plugins

kubectl and other Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters.

Starting with Kubernetes version 1.26, kubectl and other Kubernetes clients require the gke-gcloud-auth-plugin binary. If the plugin is not installed, existing installations of kubectl and other custom Kubernetes clients stop working.

You must install this plugin to use kubectl and other clients to interact with GKE. Existing clients display an error message if the plugin is not installed.

Before you begin, check whether the plugin is already installed:

gke-gcloud-auth-plugin --version

If the output displays version information, skip this section.
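The check and the installation can be combined into a single guard; a minimal sketch, assuming the gcloud CLI is already on your PATH:

```shell
# Install the plugin only if it is not already available (sketch).
if ! command -v gke-gcloud-auth-plugin >/dev/null 2>&1; then
  gcloud components install gke-gcloud-auth-plugin
fi
```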

You can install the authentication plugin using the gcloud CLI or an external package manager such as apt or yum.

gcloud

Install the gke-gcloud-auth-plugin binary:

  gcloud components install gke-gcloud-auth-plugin

apt

Install the gke-gcloud-auth-plugin binary:

  apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

yum

Install the gke-gcloud-auth-plugin binary:

  yum install google-cloud-sdk-gke-gcloud-auth-plugin

Verify the gke-gcloud-auth-plugin binary installation:

  1. Check the gke-gcloud-auth-plugin binary version:

    gke-gcloud-auth-plugin --version
    
  2. Update the kubectl configuration to use the plugin:

    gcloud container clusters get-credentials CLUSTER_NAME
    

    Replace CLUSTER_NAME with the name of your cluster.

  3. Verify the configuration:

    kubectl get namespaces
    

    The output is similar to the following:

    NAME              STATUS   AGE
    default           Active   51d
    kube-node-lease   Active   51d
    kube-public       Active   51d
    kube-system       Active   51d
    

For more information about why this plugin is required, see the Kubernetes KEP.

Interact with kubectl

Kubernetes uses a YAML file called kubeconfig to store cluster authentication information for kubectl. By default, the file is saved at $HOME/.kube/config.

kubeconfig contains a group of access parameters called contexts. Each context contains a Kubernetes cluster, a user, and an optional default namespace. kubectl refers to contexts when running commands.
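The relationship between clusters, users, and contexts is visible in a trimmed-down kubeconfig. Every name and address below is illustrative, not a value GKE generates:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster           # illustrative name
  cluster:
    server: https://203.0.113.10  # cluster endpoint (example address)
users:
- name: example-user
  user:
    exec:                         # GKE entries invoke the auth plugin here
      command: gke-gcloud-auth-plugin
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
    namespace: default            # optional default namespace
current-context: example-context
```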

The following are tasks you can complete to configure kubectl:

  • Choose which cluster kubectl talks to.
  • Set a default cluster for kubectl by setting the current context in the kubeconfig file.
  • Run kubectl commands against a specific cluster using the --cluster flag.

View kubeconfig

To view your environment's kubeconfig, run the following command:

kubectl config view

The command returns a list of all clusters for which kubeconfig entries have been generated. If a GKE cluster is listed, you can run kubectl commands against it in your current environment. Otherwise, generate an entry as described in Store cluster information for kubectl.

View the current context for kubectl

The current context is the cluster that is currently the default for kubectl. All kubectl commands run against that cluster.

When you create a cluster using gcloud container clusters create, an entry is automatically added to the kubeconfig file in your environment, and the current context changes to that cluster. For example:

gcloud container clusters create my-cluster
Creating my-cluster...done
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-cluster

To view the current context for kubectl, run the following command:

kubectl config current-context
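To list every context rather than only the current one, kubectl provides a companion subcommand:

```shell
# Lists all contexts in the kubeconfig; the current context is marked
# with an asterisk in the CURRENT column.
kubectl config get-contexts
```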

Store cluster information for kubectl

When you create a cluster using the Google Cloud console or using the gcloud CLI from a different computer, your environment's kubeconfig file is not updated. Additionally, if a project team member uses the gcloud CLI to create a cluster from their computer, their kubeconfig is updated but yours is not.

To generate a kubeconfig context in your environment, ensure that you have the container.clusters.get permission. The least-privileged IAM role that provides this permission is container.clusterViewer.

To generate a kubeconfig context for a specific cluster, run the following command:

gcloud container clusters get-credentials CLUSTER_NAME

Replace CLUSTER_NAME with the name of your cluster.

If the target cluster is not running in the default zone or region, or if you did not set a default zone or region, you need to supply the region (--region=REGION) or zone (--zone=ZONE) in the command.
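For example, the command for a regional cluster would look like the following; my-cluster and us-central1 are illustrative values:

```shell
# --region (or --zone for zonal clusters) overrides the gcloud default.
gcloud container clusters get-credentials my-cluster --region=us-central1
```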

Generate a kubeconfig entry using a private cluster's internal IP address

All clusters have a canonical endpoint. The endpoint exposes the Kubernetes API server that kubectl and other services use to communicate with your cluster control plane.

Private clusters have two separate endpoint IP addresses: privateEndpoint, which is an internal IP address, and publicEndpoint, which is an external IP address. The endpoint field refers to the external IP address, unless public access to the endpoint is disabled, in which case it refers to the internal IP address.

For private clusters, if you prefer to use the internal IP as the endpoint, run the following command:

gcloud container clusters get-credentials CLUSTER_NAME --internal-ip

Replace CLUSTER_NAME with the name of your cluster.

By default, get-credentials uses the IP address specified in the endpoint field.

Set a default cluster for kubectl commands

If you have previously generated a kubeconfig entry for clusters, you can switch the current context for kubectl to that cluster by running the following command:

gcloud container clusters get-credentials CLUSTER_NAME

Replace CLUSTER_NAME with the name of your cluster.

For example, consider a project with two clusters, my-cluster and my-new-cluster. The current context is my-new-cluster, but you want to run all kubectl commands against my-cluster. To switch the current context from my-new-cluster to my-cluster, run the following command:

gcloud container clusters get-credentials my-cluster

Run individual kubectl commands against a specific cluster

You can run individual kubectl commands against a specific cluster by using --cluster=CLUSTER_NAME.

For example, consider an environment with two clusters, my-cluster and my-new-cluster, in which the current context is my-cluster. You want to deploy an application to my-new-cluster without changing the current context. To do so, run the following command:

kubectl run my-app --image us-docker.pkg.dev/my-project/my-repo/my-app:1.0 --cluster my-new-cluster
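Because GKE-generated kubeconfig entries follow the naming pattern gke_PROJECT_LOCATION_CLUSTER, the value for --cluster can be derived instead of copied from kubectl config view; a sketch with example values:

```shell
# Example values; substitute your own project, location, and cluster.
project="my-project"
location="us-central1"
cluster="my-new-cluster"

# GKE-generated entries are named gke_PROJECT_LOCATION_CLUSTER.
context="gke_${project}_${location}_${cluster}"
echo "$context"   # prints gke_my-project_us-central1_my-new-cluster

# Pass the derived name to --cluster, or switch to it outright:
# kubectl config use-context "$context"
```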

Troubleshooting

Insufficient authentication scopes

When you run gcloud container clusters get-credentials, you might receive the following error:

ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.

This error occurs because you are attempting to access the Kubernetes Engine API from a Compute Engine VM that does not have the cloud-platform scope. For instructions on changing the scopes on your Compute Engine VM instance, see Creating and enabling service accounts for instances.
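To confirm which scopes the VM actually has, you can inspect the instance; in the following sketch, INSTANCE_NAME and ZONE are placeholders, and the --format projection is one possible way to print the scopes:

```shell
# Lists the access scopes attached to the VM's service account.
gcloud compute instances describe INSTANCE_NAME --zone=ZONE \
  --format="value(serviceAccounts[].scopes)"
# The fix requires https://www.googleapis.com/auth/cloud-platform
# to appear in the output.
```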

ERROR: executable gke-gcloud-auth-plugin not found

If you receive one of the following errors when running kubectl or a custom client that interacts with GKE, install the gke-gcloud-auth-plugin as described in Install required plugins. The errors are similar to the following:

  • Error sample
Unable to connect to the server: getting credentials: exec: executable gke-gcloud-auth-plugin not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:
      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

Visit cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin to install gke-gcloud-auth-plugin.
  • Error sample
Unable to connect to the server: getting credentials: exec: fork/exec /usr/lib/google-cloud-sdk/bin/gke-gcloud-auth-plugin: no such file or directory

ERROR: panic: no Auth Provider found for name gcp

You receive the error no Auth Provider found for name "gcp" if kubectl or a custom Kubernetes client was built with Kubernetes client-go version 1.26 or later, which removed the built-in gcp auth provider, as described in How it works. To resolve this error, complete the following steps:

  1. Install gke-gcloud-auth-plugin as described in Installation instructions.

  2. Update to the latest version of the gcloud CLI using gcloud components update.

  3. Update the kubeconfig file.

    gcloud container clusters get-credentials CLUSTER_NAME
    

Replace CLUSTER_NAME with the name of your cluster.

What's next

Try it for yourself

If you're new to Google Cloud, create an account to evaluate how GKE performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

Try GKE free