Setting up user access for Anthos Identity Service

This document is for cluster administrators who have already configured clusters for Anthos Identity Service following the instructions in Configuring clusters for fleet-level Anthos Identity Service or Configuring clusters for Anthos Identity Service with OIDC. It tells you how to set up (and restrict) access to your configured cluster for your organization's developers and other cluster users.

Set up user login access

After you have configured a cluster, you need to generate a login configuration file and distribute it to cluster users. This file lets users log in to the cluster from the command line with your chosen provider, as described in Accessing clusters with Anthos Identity Service.

Users can also log in to the cluster from the Google Cloud Console without a login file, as described in Logging in to a cluster from the Cloud Console.

Generate the login config

Console

(Fleet-level setup only)

If you configured the cluster at the fleet level from the Google Cloud Console, copy the gcloud command displayed there and run it to generate the file.

gcloud

If you configured the cluster using the CLI, or if you need to regenerate the file, run the following create-login-config command:

gcloud anthos create-login-config --kubeconfig=KUBECONFIG

where KUBECONFIG is the path to the kubeconfig file for the cluster. If the kubeconfig contains multiple contexts, the command uses the current context, so you might need to switch the current context to the correct cluster before running the command.
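
For example, a typical sequence when the kubeconfig contains several contexts might look like the following (the context name and kubeconfig path are placeholders):

# List the available contexts and see which one is current.
kubectl config get-contexts --kubeconfig=KUBECONFIG

# Switch to the context for the configured cluster, if necessary.
kubectl config use-context CLUSTER_CONTEXT --kubeconfig=KUBECONFIG

# Generate the login configuration file.
gcloud anthos create-login-config --kubeconfig=KUBECONFIG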

You can see complete reference details for this command, including additional optional parameters, in the Cloud SDK reference guide.

The default name for the login config file is kubectl-anthos-config.yaml, which is the name the gcloud command-line tool expects when using the file to log in. If you want to change this to a non-default name, see the relevant section in Distribute the login config below.
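
If you prefer to write the file to a different name or location as you generate it, the command accepts an --output flag (see the reference guide linked above for details); for example:

gcloud anthos create-login-config --kubeconfig=KUBECONFIG --output=/path/to/anthos-login-config.yaml

Users then need to point the gcloud command-line tool at this non-default file, as described in the next section.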

Distribute the login config

The following are some possible approaches to distributing the config file:

  • Host the file at an accessible URL. Users can specify this location with the --login-config flag when running gcloud anthos auth login, allowing the gcloud command-line tool to get the file (see the example after this list).

    Consider hosting the file on a secure host. See the --login-config-cert flag of the gcloud tool for more information about using PEM certificates for secure HTTPS access.

  • Manually provide the file to each user, with information on where to save it on their local machine—the gcloud command-line tool expects to find the file in an OS-specific default location. If the file has a non-default name or location, your users must use the --login-config flag to specify the config file location when running commands against the cluster. Instructions for users to save the file are in Accessing clusters with Anthos Identity Service.

  • Use your internal tools to push the authentication configuration file onto each user's machine. The gcloud command-line tool expects to find the file in the following locations, depending on the user OS:

    Linux

    $HOME/.config/google/anthos/kubectl-anthos-config.yaml

    macOS

    $HOME/Library/Preferences/google/anthos/kubectl-anthos-config.yaml

    Windows

    %APPDATA%\google\anthos\kubectl-anthos-config.yaml
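
For example, if you host the login config at an internal HTTPS URL (the URL, cluster name, and certificate path below are placeholders), a user can log in with a command like the following:

gcloud anthos auth login \
  --cluster CLUSTER_NAME \
  --login-config https://config-host.example.com/kubectl-anthos-config.yaml \
  --login-config-cert /path/to/web-server-ca.pem \
  --kubeconfig KUBECONFIG

If the file is saved under its default name in the default location for the user's OS, the --login-config and --login-config-cert flags can be omitted.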

(Optional) Set up role-based access control (RBAC)

Authentication is often combined with Kubernetes role-based access control (RBAC) to provide more fine-grained access control to clusters for authenticated users and service accounts. Whenever possible, we recommend creating RBAC policies that use group names rather than individual user identifiers. By linking your RBAC policies explicitly to groups, you can manage user access privileges entirely in your identity provider, so the cluster doesn't need to be updated every time user privileges change. To configure access control based on security group membership with OIDC, make sure that Anthos Identity Service is set up to get group membership information from your identity provider.

For example, if you want certain authenticated users to have read access to the cluster's Pods, create a ClusterRole that grants that access, as in the following example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  # The resource type for which access is granted
  resources: ["pods"]
  # The permissions granted by the ClusterRole
  verbs: ["get", "watch", "list"]

You then create a corresponding ClusterRoleBinding to grant the permissions in the ClusterRole to the relevant users—in this case, members of the us-east1-cluster-admins security group and the user with ID u98523-4509823:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-admins
subjects:
  # Grants anyone in the "us-east1-cluster-admins" group
  # read access to Pods in any namespace within this cluster.
- kind: Group
  name: gid-us-east1-cluster-admins # Name is case-sensitive
  apiGroup: rbac.authorization.k8s.io
  # Grants this specific user read access to Pods in any
  # namespace within this cluster
- kind: User
  name: uid-u98523-4509823
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
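
To apply and verify this configuration, you can save the manifests to files and use kubectl, for example (the file names are placeholders):

# Apply the ClusterRole and ClusterRoleBinding.
kubectl apply -f pod-reader-clusterrole.yaml
kubectl apply -f read-pods-admins-clusterrolebinding.yaml

# Check the resulting permissions by impersonating the bound user and group
# (requires an account with impersonation privileges, such as a cluster admin).
kubectl auth can-i list pods --as=uid-u98523-4509823
kubectl auth can-i get pods --as=some-user --as-group=gid-us-east1-cluster-admins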

You can find out much more about using RBAC in Configuring role-based access control and Using RBAC Authorization.