Logging in to a cluster from the Cloud Console

This page explains how to log in to registered Kubernetes clusters outside Google Cloud from the Google Cloud Console.

After you log in to a registered cluster, you can inspect the cluster and get details about its nodes in the Cloud Console, just like a Google Kubernetes Engine (GKE) cluster on Google Cloud. You can choose from several authentication methods to log in to your clusters depending on the needs of your project and organization.

To connect to registered clusters from the command line, see Connecting to registered clusters with the Connect gateway and Accessing clusters with Anthos Identity Service.

About registered clusters

Registering Kubernetes clusters to your project fleet (formerly known as a project environ) provides a unified way to view and manage multiple clusters and their workloads, including viewing them together in the Cloud Console. Viewing clusters outside Google Cloud requires you to enable Anthos.

Managed Anthos clusters outside Google Cloud (such as on-premises or on AWS) are registered automatically to your project fleet when you create them. Attached clusters must be registered manually. You might also want to manually register a cluster if you've unregistered it by mistake, or you want to register it to a different project. If you need to register a cluster, follow the instructions in Registering a cluster.
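Registration itself is done with the gcloud CLI. A typical invocation for a cluster outside Google Cloud looks roughly like the following (a sketch only: CLUSTER_NAME, CLUSTER_CONTEXT, and the file paths are placeholders, and the exact flags depend on your platform and gcloud version; see Registering a cluster for the authoritative steps):

```shell
# Register an external cluster to the project fleet, authenticating the
# Connect Agent with a Google Cloud service account key file.
gcloud container hub memberships register CLUSTER_NAME \
  --context=CLUSTER_CONTEXT \
  --kubeconfig=KUBECONFIG_PATH \
  --service-account-key-file=SA_KEY_FILE
```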

After you register a cluster, it appears on the GKE and Anthos clusters pages in the Cloud Console. However, to see more details such as nodes and workloads for any cluster outside Google Cloud, you need to log in and authenticate to the cluster, as described in the rest of this guide. In the Google Kubernetes Engine clusters list, clusters that require login show an orange warning triangle and prompt you to log in.

Screenshot of Google Kubernetes Engine clusters list

You can find out more about fleets in Introducing fleets. You can find out more about how cluster registration and the Connect Agent work in the Connect documentation.
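As a quick check, you can list the clusters that are currently registered to your project fleet from the command line:

```shell
# List the memberships (registered clusters) in the current project's fleet.
# "container hub" was the command group at the time of writing; newer gcloud
# releases expose the same data under "gcloud container fleet memberships list".
gcloud container hub memberships list
```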

Logging in using your Google Cloud identity

This recommended option lets you log in to registered clusters by using the same Google Cloud identity that you use for your projects and GKE clusters, using the Connect service to forward your requests to the cluster's API server. To find out more about how the Connect service works, see Connecting to registered clusters with the Connect gateway.

Before you begin, ensure that your platform admin has performed the necessary setup to let you use your Google Cloud identity to log in, including granting you all the necessary roles and role-based access control (RBAC) permissions to view and authenticate to registered clusters.

Console

To use your Google Cloud identity to log in to a cluster, follow these steps:

  1. In the Cloud Console, go to the GKE clusters page.

    Go to GKE clusters

  2. In the list of clusters, click Actions next to the registered cluster, and then click Login.

  3. Select Use your Google identity to log in.

  4. Click Login.

Logging in using a bearer token

You can log in to registered clusters by using a bearer token. Many kinds of bearer tokens, as specified in Kubernetes Authentication, are supported. The easiest method is to create a Kubernetes service account (KSA) in the cluster, and use its bearer token to log in.

All accounts that log in to a cluster need to hold at least the view and cloud-console-reader Kubernetes RBAC ClusterRoles in the cluster. The following sections describe how to create and bind these roles.

Before you begin

You or your platform administrator need to complete the following step once per registered cluster.

Create and apply the cloud-console-reader RBAC role

Users who want to view your cluster's resources in the Cloud Console need to have the relevant permissions to do so. You define this set of permissions by creating a ClusterRole RBAC resource, cloud-console-reader, in the cluster.

cloud-console-reader grants its users the get, list, and watch permissions on the cluster's nodes, persistent volumes, and storage classes, which allow them to see details about these resources. You can then bind this ClusterRole to the user's service account, as described in the next section.

kubectl

To create the cloud-console-reader ClusterRole and apply it, run the following commands:

cat <<EOF > cloud-console-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloud-console-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
EOF
kubectl apply -f cloud-console-reader.yaml
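To confirm that the role was created and to inspect the permissions it grants, you can describe it:

```shell
# Show the rules granted by the new ClusterRole
kubectl describe clusterrole cloud-console-reader
```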

Setting up a service account

We recommend that you or a cluster administrator create a service account for each user logging in to the cluster. Using a bearer token is like using a password, so each user should have their own account. Logging in with the KSA's bearer token causes all operations to be executed as the KSA, restricted by the RBAC roles held by the KSA.

The KSA needs to hold at least the view and cloud-console-reader RBAC ClusterRoles in the cluster.

Create and authorize a KSA

kubectl

To create a KSA and bind permissions to it, follow these steps:

  1. Create the KSA and ClusterRoleBinding resources to bind the view and cloud-console-reader Kubernetes RBAC ClusterRoles to the KSA:

    KSA_NAME=KSA_NAME
    kubectl create serviceaccount ${KSA_NAME}
    kubectl create clusterrolebinding VIEW_BINDING_NAME \
    --clusterrole view --serviceaccount default:${KSA_NAME}
    kubectl create clusterrolebinding CLOUD_CONSOLE_READER_BINDING_NAME \
    --clusterrole cloud-console-reader --serviceaccount default:${KSA_NAME}
    

    Replace the following:

    • KSA_NAME: the name that you choose for the KSA
    • VIEW_BINDING_NAME: the name that you choose for the view ClusterRoleBinding resource; you can name it anything you want, but you might find it easiest to name it after the KSA
    • CLOUD_CONSOLE_READER_BINDING_NAME: the name that you choose for the cloud-console-reader ClusterRoleBinding resource; you can also name this anything you want
  2. Depending on what access the service account should have, bind additional roles to the KSA. For options, see the Kubernetes default roles.

    For example, if you want to deploy a Kubernetes application from Cloud Marketplace, bind the cluster-admin role to the KSA:

    kubectl create clusterrolebinding BINDING_NAME \
    --clusterrole cluster-admin --serviceaccount default:KSA_NAME
    

    Replace BINDING_NAME with a name that you choose for the ClusterRoleBinding resource.
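Before handing out the KSA's token, you can optionally sanity-check the bindings by impersonating the KSA with kubectl's built-in authorization checker (this assumes the default namespace used in the commands above; both commands print "yes" once the bindings are in place):

```shell
# Verify that the KSA can list nodes (granted by cloud-console-reader)
kubectl auth can-i list nodes \
  --as=system:serviceaccount:default:KSA_NAME

# Verify that the KSA can list pods in all namespaces (granted by view)
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:default:KSA_NAME
```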

Authorize other accounts

kubectl

For every other user or service account that needs access to the cluster, create ClusterRoleBinding resources to bind the view and cloud-console-reader roles to their account:

  1. Bind the view and cloud-console-reader ClusterRoles:

    ACCOUNT_NAME=ACCOUNT_NAME
    kubectl create clusterrolebinding VIEW_BINDING_NAME \
    --clusterrole view --serviceaccount default:${ACCOUNT_NAME}
    kubectl create clusterrolebinding CLOUD_CONSOLE_READER_BINDING_NAME \
    --clusterrole cloud-console-reader --serviceaccount default:${ACCOUNT_NAME}
    

    Replace the following:

    • ACCOUNT_NAME: the name of the Kubernetes service account
    • VIEW_BINDING_NAME: the name that you choose for the view ClusterRoleBinding resource; you can name it anything you want, but you might find it easiest to name it after the user or service account
    • CLOUD_CONSOLE_READER_BINDING_NAME: the name that you choose for the cloud-console-reader ClusterRoleBinding resource; you can also name this anything you want
  2. Bind additional roles, depending on what access the account should have. For options, see the Kubernetes default roles.

    For example, to bind the cluster-admin role, run the following command:

    kubectl create clusterrolebinding BINDING_NAME \
    --clusterrole cluster-admin --serviceaccount default:ACCOUNT_NAME
    

    Replace BINDING_NAME with a name that you choose for the ClusterRoleBinding resource.

Get the KSA's bearer token

kubectl

To acquire the KSA's bearer token, run the following commands:

SECRET_NAME=$(kubectl get serviceaccount KSA_NAME -o jsonpath='{$.secrets[0].name}')
kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode

Replace KSA_NAME with the name that you chose for the KSA.

From this command's output, copy the token and save it for use in the next section.
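Note that on clusters running Kubernetes 1.24 or later, long-lived secrets are no longer created automatically for service accounts, so the commands above may return an empty result. In that case, you can request a token for the KSA directly through the TokenRequest API (the --duration value below is an example; tokens expire after the requested period, subject to limits enforced by your API server):

```shell
# Kubernetes 1.24+: request a bearer token for the KSA directly.
kubectl create token KSA_NAME --duration=24h
```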

Logging in to a cluster

Console

To use the token to log in to a cluster, follow these steps:

  1. In the Cloud Console, go to the GKE clusters page.

    Go to GKE clusters

  2. In the list of clusters, click Actions next to the registered cluster, and then click Login.

  3. Select Token, and then fill in the Token field with the KSA's bearer token.

  4. Click Login.

Logging in using basic authentication

Console

To use basic authentication to log in to a cluster, follow these steps:

  1. In the Cloud Console, go to the GKE clusters page.

    Go to GKE clusters

  2. In the list of clusters, click Actions next to the registered cluster, and then click Login.

  3. Select Basic authentication, and then fill in the Username and Password fields.

  4. Click Login.

Logging in using OpenID Connect (OIDC)

If your cluster is configured to use an OIDC identity provider, you can use it to authenticate to the cluster from the Cloud Console. To find out how to set up OIDC for Anthos clusters, see the OIDC setup guide for your cluster platform.

Console

To use OIDC to log in to a configured cluster, follow these steps:

  1. In the Cloud Console, go to the GKE clusters page.

    Go to GKE clusters

  2. In the list of clusters, click Actions next to the registered cluster, and then click Login.

  3. Select Authenticate with identity provider configured for the cluster. You are redirected to your identity provider, where you might need to log in or consent to the Cloud Console accessing your account.

  4. Click Login.

What's next