Work with clusters from the Google Cloud console

After they have been added to your fleet, all fleet clusters appear in the Google Cloud console. The Google Cloud console offers a central user interface for managing all your Kubernetes clusters and their resources no matter where they are running. All your resources are shown in a single dashboard, and it's easy to get visibility into your workloads across multiple Kubernetes clusters.

For GKE clusters on Google Cloud, you don't need to do anything else to see cluster details such as nodes and workloads, provided you have been granted the relevant permissions. You can find out more about working with Google Cloud clusters in the Google Cloud console in the GKE documentation. However, if your project has enabled the entire Anthos platform and your fleet includes clusters outside Google Cloud, you need to follow some extra steps to log in to these clusters and view their details in the Google Cloud console. You can choose from several authentication methods to log in to your clusters depending on the needs of your project and organization.

The rest of this page explains how to log in to fleet clusters outside Google Cloud from the Google Cloud console. To connect to fleet clusters from the command line, see Connecting to registered clusters with the Connect gateway and Accessing clusters with Anthos Identity Service.

If you are a platform admin, you can also find an overview of setting up authentication to fleet clusters in Secure your fleet.

Required roles

If you registered the clusters yourself or are a project owner, you already have IAM roles that let you view registered clusters in the Google Cloud console.

For other users, the following roles provide the minimum permissions to view clusters and their resources in the Google Cloud console:

  • roles/gkehub.viewer
  • roles/container.viewer

Before you try to log in to a cluster by following this guide, ensure that your platform admin has granted you the required roles.

To grant a user the required roles, run the following commands:

gcloud projects add-iam-policy-binding FLEET_HOST_PROJECT_ID \
   --member user:USER_EMAIL_ADDRESS \
   --role='roles/gkehub.viewer'
gcloud projects add-iam-policy-binding FLEET_HOST_PROJECT_ID \
   --member user:USER_EMAIL_ADDRESS \
   --role='roles/container.viewer'

where:

  • FLEET_HOST_PROJECT_ID is the Google Cloud project ID in which the clusters are registered. Learn how to find this value.
  • USER_EMAIL_ADDRESS is the email address of an authenticated user.
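
To confirm that the grants took effect, you can list the roles bound to the user. This is a general gcloud pattern for inspecting IAM policy, not a step specific to this guide:

gcloud projects get-iam-policy FLEET_HOST_PROJECT_ID \
   --flatten="bindings[].members" \
   --filter="bindings.members:USER_EMAIL_ADDRESS" \
   --format="table(bindings.role)"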

View registered clusters

After you register a cluster to your project fleet, it appears on the GKE and Anthos Clusters pages in the Google Cloud console. However, to see more details such as nodes and workloads for any cluster outside Google Cloud, you need to log in and authenticate to the cluster, as described in the rest of this guide. In the clusters list on either page, clusters that require login show an orange warning triangle and prompt you to log in. The following example shows the GKE Clusters page with two clusters outside Google Cloud that require login.

Screenshot of Google Kubernetes Engine clusters list

After you log in to a fleet cluster, you can select the cluster and view cluster details, just like a Google Kubernetes Engine cluster on Google Cloud. Note that from the Anthos page you must select the cluster, then select More details in GKE to view the cluster's nodes and workloads.

Log in using your Google Cloud identity

This option lets you log in to registered clusters by using the same Google Cloud identity that you use for your projects and GKE clusters. The Connect service forwards your requests to the cluster's API server.
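
If you also want command-line access through the same mechanism, the Connect gateway provides the kubectl analogue of this login flow. A minimal sketch, assuming a fleet membership named MEMBERSHIP_NAME (on older gcloud versions this command may live under gcloud beta container hub memberships instead):

# Write a kubeconfig entry that routes kubectl through the Connect gateway.
gcloud container fleet memberships get-credentials MEMBERSHIP_NAME

# Any subsequent kubectl call uses your Google Cloud identity.
kubectl get namespaces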

Before you begin

  • Ensure that your platform admin has applied the necessary role-based access control (RBAC) policies on the cluster to let you use your Google Cloud identity to log in. See the instructions in Configure RBAC policies.
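
For reference, the policies described in Configure RBAC policies typically let the Connect agent impersonate your Google identity in the cluster. The following is a hedged sketch of the general shape such a policy takes; treat the manifest in Configure RBAC policies as authoritative, and note that USER_EMAIL_ADDRESS, as well as the agent's service account name and namespace, are assumptions in this sketch:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
# Allow impersonating this specific user (placeholder email).
- apiGroups: [""]
  resourceNames: ["USER_EMAIL_ADDRESS"]
  resources: ["users"]
  verbs: ["impersonate"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
# Assumed Connect agent identity; verify against your cluster.
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
EOF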

Log in to a cluster

Console

To use your Google Cloud identity to log in to a cluster, follow these steps:

  1. In the Google Cloud console, go to the GKE clusters page.

    Go to GKE clusters

  2. In the list of clusters, click Actions next to the registered cluster, and then click Login.

  3. Select Use your Google identity to log in.

  4. Click Login.

Log in using a bearer token

You can log in to registered clusters by using a bearer token. Many kinds of bearer tokens, as specified in Kubernetes Authentication, are supported. The easiest method is to create a Kubernetes service account (KSA) in the cluster and use its bearer token to log in.

All accounts that log in to a cluster need to hold at least the following Kubernetes RBAC roles in the cluster:

  • view
  • cloud-console-reader

Before you begin

You or your platform administrator needs to complete the following step once per registered cluster.

Create and apply the cloud-console-reader RBAC role

Users who want to view your cluster's resources in the Google Cloud console need to have the relevant permissions to do so. You define this set of permissions by creating a ClusterRole RBAC resource, cloud-console-reader, in the cluster.

cloud-console-reader grants its holders the get, list, and watch permissions on the cluster's nodes, persistent volumes, pods, and storage classes, which let them see details about those resources. You can then bind this ClusterRole to the user's service account, as described in the next section.

kubectl

To create the cloud-console-reader ClusterRole and apply it, run the following command:

cat <<EOF > cloud-console-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloud-console-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
EOF
kubectl apply -f cloud-console-reader.yaml
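
To confirm that the ClusterRole was created and to review the permissions it grants, you can describe it. This is an optional, standard kubectl check rather than a required step:

kubectl describe clusterrole cloud-console-reader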

Set up a service account

We recommend that you or a cluster administrator create a service account for each user logging in to the cluster. Using a bearer token is like using a password, so each user should have their own account. Logging in with the KSA's bearer token causes all operations to be executed as the KSA, restricted by the RBAC roles held by the KSA.

The KSA needs to hold at least the following RBAC roles in the cluster:

  • view
  • cloud-console-reader

Create and authorize a KSA

kubectl

To create a KSA and bind permissions to it, follow these steps:

  1. Create the KSA and ClusterRoleBinding resources to bind the view and cloud-console-reader Kubernetes RBAC ClusterRoles to the KSA (a concrete example with placeholder names follows this list):

    KSA_NAME=KSA_NAME
    kubectl create serviceaccount ${KSA_NAME}
    kubectl create clusterrolebinding VIEW_BINDING_NAME \
    --clusterrole view --serviceaccount default:${KSA_NAME}
    kubectl create clusterrolebinding CLOUD_CONSOLE_READER_BINDING_NAME \
    --clusterrole cloud-console-reader --serviceaccount default:${KSA_NAME}
    

    Replace the following:

    • KSA_NAME: the name that you choose for the KSA
    • VIEW_BINDING_NAME: the name that you choose for the view ClusterRoleBinding resource; you can name it anything you want, but you might find it easiest to name it after the KSA
    • CLOUD_CONSOLE_READER_BINDING_NAME: the name that you choose for the cloud-console-reader ClusterRoleBinding resource; you can also name this anything you want
  2. Depending on what access the service account should have, bind additional roles to the KSA. For options, see the Kubernetes default roles.

    For example, if you want to deploy a Kubernetes application from Cloud Marketplace, bind the cluster-admin role to the KSA:

    kubectl create clusterrolebinding BINDING_NAME \
    --clusterrole cluster-admin --serviceaccount default:KSA_NAME
    

    Replace BINDING_NAME with the name of the cluster role binding for the service account.
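
For example, a hypothetical run of step 1 for a KSA named dana-ksa, with the bindings named after the KSA (all names here are placeholders, not values from this guide):

KSA_NAME=dana-ksa
kubectl create serviceaccount ${KSA_NAME}
kubectl create clusterrolebinding dana-ksa-view \
--clusterrole view --serviceaccount default:${KSA_NAME}
kubectl create clusterrolebinding dana-ksa-console-reader \
--clusterrole cloud-console-reader --serviceaccount default:${KSA_NAME}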

Authorize other accounts

kubectl

For every other user or service account that needs access to the cluster, create ClusterRoleBinding resources to bind the view and cloud-console-reader ClusterRoles to their account (for user identities, see the note after this list):

  1. Bind the view and cloud-console-reader ClusterRoles:

    ACCOUNT_NAME=ACCOUNT_NAME
    kubectl create clusterrolebinding VIEW_BINDING_NAME \
    --clusterrole view --serviceaccount default:${ACCOUNT_NAME}
    kubectl create clusterrolebinding CLOUD_CONSOLE_READER_BINDING_NAME \
    --clusterrole cloud-console-reader --serviceaccount default:${ACCOUNT_NAME}
    

    Replace the following:

    • ACCOUNT_NAME: the name of the Kubernetes service account
    • VIEW_BINDING_NAME: the name that you choose for the view ClusterRoleBinding resource; you can name it anything you want, but you might find it easiest to name it after the user or service account
    • CLOUD_CONSOLE_READER_BINDING_NAME: the name that you choose for the cloud-console-reader ClusterRoleBinding resource; you can also name this anything you want
  2. Bind additional roles, depending on what access the account should have. For options, see the Kubernetes default roles.

    For example, to bind the cluster-admin role, run the following command:

    kubectl create clusterrolebinding BINDING_NAME \
    --clusterrole cluster-admin --serviceaccount default:ACCOUNT_NAME
    

    Replace BINDING_NAME with the name of the cluster role binding for the service account.
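
Note that the --serviceaccount flag applies only to KSAs. If the account is a user identity rather than a service account, the analogous kubectl commands use the --user flag instead. A minimal sketch, with USER_NAME as a placeholder for the username your cluster sees:

kubectl create clusterrolebinding USER_VIEW_BINDING_NAME \
--clusterrole view --user USER_NAME
kubectl create clusterrolebinding USER_CONSOLE_READER_BINDING_NAME \
--clusterrole cloud-console-reader --user USER_NAME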

Get the KSA's bearer token

kubectl

To acquire the KSA's bearer token, run the following command:

# This assumes the KSA_NAME variable is still set from when you created the KSA.
SECRET_NAME=${KSA_NAME}-token

# Create a service account token Secret for the KSA.
kubectl apply -f - << __EOF__
apiVersion: v1
kind: Secret
metadata:
  name: "${SECRET_NAME}"
  annotations:
    kubernetes.io/service-account.name: "${KSA_NAME}"
type: kubernetes.io/service-account-token
__EOF__

# Wait until the token controller populates the Secret.
until [[ $(kubectl get -o=jsonpath="{.data.token}" "secret/${SECRET_NAME}") ]]; do
  echo "waiting for token..." >&2
  sleep 1
done

# Print the decoded bearer token.
kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode

These commands assume that the KSA_NAME shell variable is still set to the name of the KSA that you created earlier; if it isn't, set it before running them.

From this command's output, copy the token and save it for use in the next section.
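
Optionally, before using the token, you can sanity-check the KSA's permissions by impersonating it. This is a standard kubectl technique; it assumes the KSA lives in the default namespace and that your own credentials allow impersonation:

# Should print "yes" if the cloud-console-reader binding took effect.
kubectl auth can-i list nodes \
--as system:serviceaccount:default:${KSA_NAME}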

Log in to a cluster

Console

To use the token to log in to a cluster, follow these steps:

  1. In the Google Cloud console, either:

    • In the GKE Clusters page, click Actions next to the registered cluster, then click Login.

      Go to GKE Clusters

    or:

    • In the Anthos Clusters page, in the list of clusters, select the cluster that you want to log in to, and then click Login in the information panel that appears.

      Go to Anthos Clusters

  2. Select Token, and then fill in the Token field with the KSA's bearer token.

  3. Click Login.

Log in using basic authentication

Console

To use basic authentication to log in to a cluster, follow these steps:

  1. In the Google Cloud console, either:

    • In the GKE Clusters page, click Actions next to the registered cluster, then click Login.

      Go to GKE Clusters

    or:

    • In the Anthos Clusters page, in the list of clusters, select the cluster that you want to log in to, and then click Login in the information panel that appears.

      Go to Anthos Clusters

  2. Select Basic authentication, and then fill in the Username and Password fields.

  3. Click Login.

Log in using OpenID Connect (OIDC)

If your cluster is configured to use an OIDC identity provider, you can use that provider to authenticate to the cluster from the Google Cloud console. To find out how to set up OIDC for Anthos clusters, see Accessing clusters with Anthos Identity Service.
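
For orientation only, OIDC settings for these clusters are typically expressed through a ClientConfig resource managed by Anthos Identity Service. The following is a loose, hypothetical sketch of the resource's shape; every field value is a placeholder, and the API version and field names should be taken from your platform's setup guide rather than from this example:

apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
metadata:
  name: default
  namespace: kube-public
spec:
  authentication:
  - name: oidc
    oidc:
      clientID: CLIENT_ID                # OAuth client ID from your provider (placeholder)
      issuerURI: ISSUER_URI              # provider's issuer/discovery URL (placeholder)
      kubectlRedirectURI: REDIRECT_URI   # redirect URL registered with the provider (placeholder)
      userClaim: email                   # claim to use as the username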

Console

To use OIDC to log in to a configured cluster, follow these steps:

  1. In the Google Cloud console, either:

    • In the GKE Clusters page, click Actions next to the registered cluster, then click Login.

      Go to GKE Clusters

    or:

    • In the Anthos Clusters page, in the list of clusters, select the cluster that you want to log in to, and then click Login in the information panel that appears.

      Go to Anthos Clusters

  2. Select Authenticate with identity provider configured for the cluster. You are redirected to your identity provider, where you might need to log in or consent to the Google Cloud console accessing your account.

  3. Click Login.

What's next