Set up user access for GKE Identity Service
This document is for cluster administrators who have already configured clusters for GKE Identity Service following the instructions in Configure clusters for fleet-level GKE Identity Service or Configure clusters for GKE Identity Service with OIDC. It tells you how to set up (and restrict) access to your configured cluster for your organization's developers and other cluster users.
Authenticate using file access
After you have configured a cluster, you need to generate a login configuration file and distribute it to cluster users. This file lets users log in to the cluster from the command line with your chosen provider, as described in Access clusters with GKE Identity Service.
With OIDC providers only, users can also log in to the cluster from the Google Cloud console without a login file, as described in Work with clusters from the Google Cloud console.
Generate the login config
Console
(Fleet-level setup only)
Copy the displayed gcloud command and run it to generate the file.
gcloud
If you configured the cluster using the gcloud CLI, or if you need to generate the file again, run the following command to generate the file:
gcloud anthos create-login-config --kubeconfig=KUBECONFIG
where KUBECONFIG is the path to the kubeconfig file for the cluster. If there are multiple contexts in the kubeconfig, the current context is used. You may need to reset the current context to the correct cluster before running the command.
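For example, you can check and switch the current kubeconfig context before generating the file. This is a sketch; the context name my-cluster and the kubeconfig path are placeholders:

```shell
# Show the available contexts and which one is current.
kubectl config get-contexts

# Make the target cluster the current context
# ("my-cluster" is a placeholder context name).
kubectl config use-context my-cluster

# Generate the login config from that kubeconfig.
gcloud anthos create-login-config --kubeconfig="$HOME/.kube/config"
```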
You can see complete reference details for this command, including additional optional parameters, in the Google Cloud CLI reference guide.
The default name for the login config file is kubectl-anthos-config.yaml, which is the name the Google Cloud CLI expects when using the file to log in. If you want to change this to a non-default name, see the relevant section in Distribute the login config.
For troubleshooting information related to user access, see Troubleshoot user access issues.
Distribute the login config
The following are some possible approaches to distributing the config file:
- Host the file at an accessible URL. Users can specify this location with the --login-config flag when running gcloud anthos auth login, allowing the Google Cloud CLI to get the file. Consider hosting the file on a secure host; see the --login-config-cert flag of the gcloud CLI for more information about using PEM certificates for secure HTTPS access.
- Manually provide the file to each user, with information on where to save it on their local machine. The Google Cloud CLI expects to find the file in an OS-specific default location; if the file has a non-default name or location, your users must use the --login-config flag to specify the config file location when running commands against the cluster. Instructions for users to save the file are in Access clusters with GKE Identity Service.
- Use your internal tools to push the authentication configuration file onto each user's machine. The Google Cloud CLI expects to find the file in the following locations, depending on the user's OS:
Linux
$HOME/.config/google/anthos/kubectl-anthos-config.yaml
macOS
$HOME/Library/Preferences/google/anthos/kubectl-anthos-config.yaml
Windows
%APPDATA%\google\anthos\kubectl-anthos-config.yaml
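If you host the file at an HTTPS URL, a user's login might look like the following sketch. The cluster name, URL, and certificate path are placeholders:

```shell
# Fetch the login config from a hosted location at login time.
# The cluster name, URL, and certificate path are placeholders.
gcloud anthos auth login \
  --cluster CLUSTER_NAME \
  --login-config https://config.example.com/kubectl-anthos-config.yaml \
  --login-config-cert /path/to/server-cert.pem
```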
Authenticate using FQDN access (recommended)
Instead of distributing the configuration file to all users, you can set up user login access by using FQDN access. Users authenticate directly with the GKE Identity Service server using a fully qualified domain name (FQDN). For SAML providers, user login access is supported only through this authentication process.
Share FQDN with users
Instead of a configuration file, cluster administrators can share the FQDN of their GKE Identity Service server with users. Users can use this FQDN to log in to the cluster. The URL format for login is APISERVER-URL, where the URL contains the FQDN of the cluster server. An example APISERVER-URL is https://cluster.company.com.
User login access using trusted SNI certificates
SNI certificates simplify cluster access by leveraging trusted certificates already present on corporate devices. Administrators provide this certificate at cluster creation time. As a user, you use the FQDN provided by your administrator to log in to the cluster. Alternatively, you can use a secure kubeconfig file where the token is stored after successful authentication.
Before you run the following command, ensure that the certificate used by the GKE Identity Service server is trusted by the device from which the user logs in.
gcloud anthos auth login --server APISERVER-URL --kubeconfig OUTPUT_FILE
Replace the following:
- APISERVER-URL: FQDN of the GKE Identity Service server.
- OUTPUT_FILE: Use this flag if your kubeconfig file resides in a location other than the default. If this flag is omitted, authentication tokens are added to the kubeconfig file in the default location. For example: --kubeconfig /path/to/custom.kubeconfig.
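After a successful login, the token is stored in the kubeconfig file, so you can verify access by running an ordinary kubectl command against that file. The path below is a placeholder:

```shell
# Use the kubeconfig that the login command wrote the token to.
kubectl get pods --kubeconfig /path/to/custom.kubeconfig
```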
User login access using cluster CA-issued certificates
If you don't use a trusted SNI certificate at the cluster level, the certificate used by the identity service is issued by the cluster's certificate authority (CA). Administrators distribute this CA certificate to users. Run the following command with the cluster's CA certificate to log in to your cluster:
gcloud anthos auth login --server APISERVER-URL --kubeconfig OUTPUT_FILE --login-config-cert CLUSTER_CA_CERTIFICATE
Configure Identity Service options
With this authentication approach, you can configure the token lifetime. The ClientConfig CR introduces an IdentityServiceOptions section with a sessionDuration parameter, which sets the token lifetime in minutes. The sessionDuration parameter has a minimum of 15 minutes and a maximum of 1440 minutes (24 hours).
Here's an example of what it looks like in the ClientConfig CR:
spec:
  IdentityServiceOptions:
    sessionDuration: INT
where INT is the session duration in minutes.
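In context, this section sits in the ClientConfig spec. The following sketch assumes the common ClientConfig location (an object named default in the kube-public namespace) and a typical apiVersion; verify both against your cluster:

```yaml
apiVersion: authentication.gke.io/v2alpha1  # verify on your cluster
kind: ClientConfig
metadata:
  name: default           # typical ClientConfig object name
  namespace: kube-public  # typical ClientConfig namespace
spec:
  IdentityServiceOptions:
    # Token lifetime in minutes: minimum 15, maximum 1440 (24 hours).
    sessionDuration: 60
```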
Set up role-based access control (RBAC)
Authentication is often combined with Kubernetes role-based access control (RBAC) to provide finer-grained access control to clusters for authenticated users and service accounts. Whenever possible, create RBAC policies that use group names rather than individual user identifiers. By linking your RBAC policies explicitly to groups, you can manage user access privileges entirely with your identity provider, so the cluster doesn't need to be updated every time user privileges change. Note that to configure access control based on membership of security groups with OIDC, you must ensure that GKE Identity Service is set up to support getting group membership information from your identity provider.
For example, if you want certain authenticated users to have access to the cluster's Pods, create a ClusterRole that grants access to those resources, as in the following example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  # The resource type for which access is granted
  resources: ["pods"]
  # The permissions granted by the ClusterRole
  verbs: ["get", "watch", "list"]
You then create a corresponding ClusterRoleBinding to grant the permissions in the ClusterRole to the relevant users—in this case, members of the us-east1-cluster-admins security group and the user with ID u98523-4509823:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-admins
subjects:
# Grants anyone in the "us-east1-cluster-admins" group
# read access to Pods in any namespace within this cluster.
- kind: Group
  name: gid-us-east1-cluster-admins # Name is case-sensitive
  apiGroup: rbac.authorization.k8s.io
# Grants this specific user read access to Pods in any
# namespace within this cluster
- kind: User
  name: uid-u98523-4509823
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
In the following example, this ClusterRoleBinding grants the permissions in the ClusterRole to the group with ID 12345678-BBBb-cCCCC-0000-123456789012. Note that this setting is relevant only for Azure AD providers and is available for Google Distributed Cloud Virtual for Bare Metal clusters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-binding
subjects:
# Retrieves group information for the group ID mentioned
- kind: Group
  name: 12345678-BBBb-cCCCC-0000-123456789012
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
For more information about using RBAC, see Configure role-based access control and Using RBAC Authorization.
Create an RBAC role for Google Cloud console access
Users authenticated using OIDC providers can log in to clusters from the Google Cloud console as well as the command line.
Authenticated users who want to access a cluster's resources in the Google Cloud console need the relevant Kubernetes permissions to do so. If you don't want to grant those users more extensive permissions, such as those of a cluster admin, you can create a custom RBAC role that includes the minimum permissions to view the cluster's nodes, persistent volumes, pods, and storage classes. You can define this set of permissions by creating a ClusterRole RBAC resource, cloud-console-reader, in the cluster.
cloud-console-reader grants its users the get, list, and watch permissions on the cluster's nodes, persistent volumes, pods, and storage classes, which allow them to see details about these resources.
kubectl
To create the cloud-console-reader ClusterRole and apply it to the cluster, run the following command:
cat <<EOF > cloud-console-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloud-console-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
EOF
kubectl apply -f cloud-console-reader.yaml
You can then grant this ClusterRole to users when setting up your permission policies, as described in the previous section. Note that users also need IAM permissions to view clusters in the Google Cloud console.
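For example, a ClusterRoleBinding granting cloud-console-reader to a group might look like the following sketch; the group name gid-console-viewers is a placeholder for a group from your identity provider:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cloud-console-reader-binding
subjects:
# Placeholder group name; substitute a group from your provider.
- kind: Group
  name: gid-console-viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cloud-console-reader
  apiGroup: rbac.authorization.k8s.io
```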