Learn how to configure GKE on AWS to use OpenID Connect (OIDC) for authentication to user clusters. This topic describes how to configure GKE on AWS with any OpenID provider.
To upgrade a cluster that uses OIDC authentication to Kubernetes 1.21, see Upgrade to 1.21.
For an overview of the GKE on AWS authentication flow, see Authentication.
Overview
GKE on AWS supports OIDC as one of the authentication mechanisms for interacting with a user cluster's Kubernetes API server. With OIDC, you can manage access to Kubernetes clusters by using the standard procedures in your organization for creating, enabling, and disabling user accounts.
Before you begin
This topic assumes you are familiar with authentication and OpenID concepts. Note the following requirements:

- The Google Cloud CLI must be installed on each developer's local machine.
- Headless systems are unsupported. To authenticate, you must open a browser on the local machine running the gcloud CLI. The browser then prompts you to authorize your user account.
- To authenticate through the Google Cloud console, each cluster that you want to configure for OIDC authentication must be registered with Google Cloud.
Personas
This topic refers to three personas:
Organization administrator: This person chooses an OpenID provider and registers client applications with the provider.
Cluster administrator: This person creates one or more user clusters and creates authentication configuration files for developers who use the clusters.
Developer: This person runs workloads on one or more clusters and uses OIDC to authenticate.
Choose an OpenID provider
This section is for organization administrators.
You can use any OpenID provider of your choice. For a list of certified providers, see OpenID Certification.
Create redirect URLs
This section is for organization administrators.
The OpenID provider uses redirect URLs to return ID tokens. You must create redirect URLs for both the gcloud CLI and the Google Cloud console.
Set the gcloud CLI redirect URL
When you configure your OpenID provider, specify your CLI redirect URL as:

http://localhost:PORT/callback

Replace PORT with a port number greater than 1024.
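For example, if you choose port 10000 (a placeholder value, not a requirement), the redirect URL you register would be:

```
http://localhost:10000/callback
```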
Set the Google Cloud console redirect URL
The redirect URL for the Google Cloud console is:

https://console.cloud.google.com/kubernetes/oidc

When you configure your OIDC provider, specify this URL as one of your redirect URLs.
Register your client applications with the OpenID provider
This section is for organization administrators.
Before your developers can use the Google Cloud CLI or the Google Cloud console with your OpenID provider, you need to register those two clients with the OpenID provider. Registration includes these steps:
- Learn your provider's issuer URI. The gcloud CLI or the Google Cloud console sends authentication requests to this URI.
- Configure your provider with the redirect URL, including your port number, for the gcloud CLI.
- Configure your provider with the redirect URL for the Google Cloud console, https://console.cloud.google.com/kubernetes/oidc.
- Create a single client ID that your provider uses to identify both the Google Cloud CLI and the Google Cloud console.
- Create a single client secret that the gcloud CLI and the Google Cloud console use to authenticate to the OpenID provider.
- Create a custom scope that the gcloud CLI or the Google Cloud console can use to request the user's security groups.
- Create a custom claim name that the provider uses to return the user's security groups.
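The exact registration steps depend on your provider, but the outcome is a handful of values that the cluster administrator needs later. As a hypothetical sketch (none of these names or values come from a specific provider), the organization administrator might record something like the following:

```yaml
# Hypothetical values recorded after registering the gcloud CLI and console clients.
issuerURI: https://oidc.example.com                 # the provider's issuer URI
clientID: gke-oidc-client                           # single client ID for both clients
clientSecret: CLIENT_SECRET                         # shared secret; store securely
cliRedirectURL: http://localhost:10000/callback     # gcloud CLI redirect URL
consoleRedirectURL: https://console.cloud.google.com/kubernetes/oidc
customScope: groups                                 # scope used to request security groups
groupsClaim: groupList                              # claim that returns the user's groups
```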
Configure your cluster
This section is for cluster administrators.
To set up OIDC authentication, you add authentication details to your user cluster's AWSCluster resource. Details from the AWSCluster are used to configure OIDC for both the Google Cloud console and the Authentication Plugin for GKE Enterprise. The configuration includes the following OIDC information:
```yaml
authentication:
  awsIAM:
    adminIdentityARNs:
    - AWS_IAM_ARN
  oidc:
  - certificateAuthorityData: CERTIFICATE_STRING
    clientID: CLIENT_ID
    clientSecret: CLIENT_SECRET
    extraParams: EXTRA_PARAMS
    groupsClaim: GROUPS_CLAIM
    groupPrefix: GROUP_PREFIX
    issuerURI: ISSUER_URI
    kubectlRedirectURI: KUBECTL_REDIRECT_URI
    scopes: SCOPES
    userClaim: USER_CLAIM
    userPrefix: USER_PREFIX
```
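As an illustration only, a populated block might look like the following. Every value here is a hypothetical placeholder; substitute the ARN, client details, and URIs from your own AWS account and OpenID provider.

```yaml
# Hypothetical example; replace all values with your own.
authentication:
  awsIAM:
    adminIdentityARNs:
    - arn:aws:iam::123456789012:group/Developers
  oidc:
  - clientID: gke-oidc-client
    clientSecret: CLIENT_SECRET
    extraParams: resource=token-groups-claim
    groupsClaim: groupList
    groupPrefix: gid-
    issuerURI: https://oidc.example.com
    kubectlRedirectURI: http://localhost:10000/callback
    scopes: offline_access
    userClaim: sub
    userPrefix: uid-
```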
Authentication fields
The following table describes the fields of the `authentication.awsIAM.adminIdentityARNs` object.

Field | Required | Description | Format |
---|---|---|---|
adminIdentityARNs | Yes, if configuring OIDC. | Amazon Resource Name (ARN) of the AWS IAM identities (users or roles) granted cluster administrator access by GKE on AWS. Example: `arn:aws:iam::123456789012:group/Developers` | String |
The following table describes the fields of the `authentication.oidc` object.

Field | Required | Description | Format |
---|---|---|---|
certificateAuthorityData | No | A base64-encoded PEM-encoded certificate for the OIDC provider. To create the string, encode the certificate, including headers, into base64. Include the resulting string in `certificateAuthorityData` as a single line (see the encoding example after this table). Example: `certificateAuthorityData: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tC...k1JSUN2RENDQWFT==` | String |
clientID | Yes | ID for the client application that makes authentication requests to the OpenID provider. | String |
clientSecret | No | Shared secret between the OIDC client application and the OIDC provider. | String |
extraParams | No | Additional key-value parameters to send to the OpenID provider. If you are authorizing a group, pass in `resource=token-groups-claim`. If your authorization server prompts for consent, for authentication with Microsoft Azure and Okta, set `prompt=consent`. | Comma-delimited list |
groupsClaim | No | JWT claim that the provider uses to return your security groups. | String |
groupPrefix | No | Prefix prepended to group claims to prevent clashes with existing names. For example, if you have a group named `foobar` and add the prefix `gid-`, the resulting group is `gid-foobar`. | String |
issuerURI | Yes | URL where authorization requests are sent to your OpenID provider, such as `https://example.com/adfs`. The Kubernetes API server uses this URL to discover public keys for verifying tokens. The URI must use HTTPS. | URL String |
kubectlRedirectURI | Yes | The redirect URL that `kubectl` uses for authorization. | URL String |
scopes | Yes | Additional scopes to send to the OpenID provider. Microsoft Azure and Okta require the `offline_access` scope. | Comma-delimited list |
userClaim | No | JWT claim to use as the username. You can choose other claims, such as `email` or `name`, depending on the OpenID provider. However, claims other than `email` are prefixed with the issuer URL to prevent naming clashes. | String |
userPrefix | No | Prefix prepended to username claims to prevent clashes with existing names. | String |
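As a quick sketch of producing the `certificateAuthorityData` value, assuming your provider's CA certificate is in a hypothetical file named `oidc-provider-ca.pem`, you can base64-encode the PEM file as a single line and paste the output into the field:

```bash
# Encode the certificate, including the BEGIN/END headers, as one unwrapped line.
# -w0 disables line wrapping (GNU coreutils); on macOS, base64 does not wrap by default.
base64 -w0 oidc-provider-ca.pem
```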
Example: Authorizing users and groups
Many providers encode user-identifying properties, such as email and user IDs, in a token. However, these properties have implicit risks for authentication policies:
- User IDs can make policies difficult to read and audit.
- Using email addresses can create both an availability risk (if a user changes their primary email) and a security risk (if an email can be re-assigned).
Instead of policies based on individual user IDs, we recommend group-based policies, which are both more persistent and easier to audit.
Suppose your provider creates identity tokens that include the following fields:
```
{
  "iss": "https://server.example.com",
  "sub": "u98523-4509823",
  "groupList": ["developers@example.corp", "us-east1-cluster-admins@example.corp"],
  ...
}
```
Given this token format, you'd populate your configuration file's `oidc` specification like so:

```yaml
issuerURI: 'https://server.example.com'
userClaim: 'sub'
userPrefix: 'uid-'
groupsClaim: 'groupList'
groupPrefix: 'gid-'
extraParams: 'resource=token-groups-claim'
...
```
After you've created your user cluster, you use Kubernetes role-based access control (RBAC) to grant privileged access to the authenticated users. In the following example, you create a ClusterRole that grants its users read-only access to the cluster's Secrets, and create a ClusterRoleBinding resource to bind the role to the authenticated group.
1. Define a `ClusterRole`. Copy the following YAML into a file named `secret-reader-role.yaml`.

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: secret-reader
    rules:
    - apiGroups: [""]
      # The resource type for which access is granted
      resources: ["secrets"]
      # The permissions granted by the ClusterRole
      verbs: ["get", "watch", "list"]
    ```
2. Define a `ClusterRoleBinding`. Copy the following YAML into a file named `secret-reader-admins.yaml`.

    ```yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: read-secrets-admins
    subjects:
    # Allows anyone in the "us-east1-cluster-admins" group to
    # read Secrets in any namespace within this cluster.
    - kind: Group
      name: gid-us-east1-cluster-admins # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    # Allows this specific user to read Secrets in any
    # namespace within this cluster
    - kind: User
      name: uid-u98523-4509823
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io
    ```
3. Apply `secret-reader-role.yaml` and `secret-reader-admins.yaml` to your cluster with `kubectl`.

    ```bash
    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f secret-reader-role.yaml && \
      kubectl apply -f secret-reader-admins.yaml
    ```
Users granted access in `read-secrets-admins` now have access to read Secrets in your cluster.
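To spot-check the binding, assuming your own credentials allow impersonating users and groups in the cluster, you can ask the API server whether the bound group can read Secrets. This is a sketch, not a required step:

```bash
# Prints "yes" if the ClusterRoleBinding grants the group read access to Secrets.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl auth can-i get secrets \
  --as=uid-u98523-4509823 \
  --as-group=gid-us-east1-cluster-admins
```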
Create a login config
This section is for cluster administrators.
After you create a user cluster, you need to generate a configuration file for the cluster using `gcloud anthos create-login-config`.
1. From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster.

    ```bash
    cd anthos-aws
    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME
    ```

    Replace CLUSTER_NAME with your user cluster name.

2. Create the configuration with `gcloud anthos`.

    ```bash
    gcloud anthos create-login-config --kubeconfig usercluster-kubeconfig
    ```

    Replace usercluster-kubeconfig with the path to your user cluster's `kubeconfig` file. On Linux and macOS, by default this file is at `~/.kube/config`.
This command generates a file (`kubectl-anthos-config.yaml`) containing the configuration information your developers use to authenticate to the cluster with the gcloud CLI. You should not modify this file.

To understand more about the contents of `kubectl-anthos-config.yaml`, see the appendix.
Distribute the login config
Distribute the config file to users that need to authenticate to your user clusters. You can distribute the config by:
- Placing the file in the default directory.
- Securely distributing the file.
- Hosting the file on an HTTPS server.
Login config default directories
The default locations for storing the configuration file for each OS are as follows:

- Linux: `$HOME/.config/google/anthos/kubectl-anthos-config.yaml`, where `$HOME` is the user's home directory.
- macOS: `$HOME/Library/Preferences/google/anthos/kubectl-anthos-config.yaml`, where `$HOME` is the user's home directory.
- Windows: `%APPDATA%/google/anthos/kubectl-anthos-config.yaml`, where `%APPDATA%` is the user's application data directory.
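For example, a minimal sketch of placing the generated file in the default Linux location, assuming `kubectl-anthos-config.yaml` is in your current directory, might look like this:

```bash
# Create the default config directory and copy the login config into it.
mkdir -p "$HOME/.config/google/anthos"
cp kubectl-anthos-config.yaml "$HOME/.config/google/anthos/kubectl-anthos-config.yaml"
```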
After the login config has been distributed, your developers are ready to configure the gcloud CLI to access the cluster.
Modify your cluster after upgrading to Kubernetes 1.21
After you upgrade your cluster to Kubernetes 1.21, you need to configure GKE Identity Service and remove your OIDC information from your cluster's configuration. To update the configuration, perform the following steps:
1. Follow the steps in Upgrade your cluster.

2. From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster.

    ```bash
    cd anthos-aws
    env HTTPS_PROXY=http://localhost:8118 \
      anthos-gke aws clusters get-credentials CLUSTER_NAME
    ```

    Replace CLUSTER_NAME with your user cluster name.

3. Open the manifest that contains the AWSCluster in a text editor. Keep the file open and use the values of the `oidc` object to follow the steps in Configuring clusters for GKE Identity Service.

4. From your `anthos-aws` directory, use `anthos-gke` to switch context to your management service.

    ```bash
    cd anthos-aws
    anthos-gke aws management get-credentials
    ```

5. Open the YAML file that created your AWSCluster in a text editor. If you do not have your initial YAML file, you can use `kubectl edit`.

    Edit YAML

    If you followed the instructions in Creating a user cluster, your YAML file is named `cluster-0.yaml`. Open this file in a text editor.

    kubectl edit

    To use `kubectl edit` to edit your AWSCluster, run the following command:

    ```bash
    env HTTPS_PROXY=http://localhost:8118 \
      kubectl edit awscluster cluster-name
    ```

    Replace cluster-name with your AWSCluster. For example, to edit the default cluster, `cluster-0`, run the following command:

    ```bash
    env HTTPS_PROXY=http://localhost:8118 \
      kubectl edit awscluster cluster-0
    ```

6. Delete the `oidc` object from your cluster's manifest.

7. Save the file. If you are using `kubectl edit`, `kubectl` applies the changes automatically. If you are editing the YAML file, apply it to your management service with the following command:

    ```bash
    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f cluster-0.yaml
    ```
The management service then updates your AWSCluster.
Configuring gcloud to access your cluster
This section is for developers or cluster administrators.
Prerequisites
To complete this section, you need the following:

- A login config.
- An updated version of the gcloud CLI with the `anthos-auth` components:

    ```bash
    gcloud components update
    gcloud components install anthos-auth
    ```
Verify that the gcloud CLI was installed successfully by running the following command, which should respond with details about the required arguments and available options.

```bash
gcloud anthos auth
```
Authenticate to your cluster
You can authenticate to your cluster in the following ways:

- With the gcloud CLI on your local machine.
- With the gcloud CLI on a remote machine using an SSH tunnel.
- With Connect on the Google Cloud console.
gcloud local
Use `gcloud anthos auth login` to authenticate to your cluster with your login config.

If you placed the login config in the default location and have the cluster name configured, you can use `gcloud anthos auth login` with no options. You can also configure the cluster, user, and other authentication details with optional parameters.
Default

```bash
gcloud anthos auth login --cluster CLUSTER_NAME
```

Replace CLUSTER_NAME with a fully qualified cluster name. For example, `projects/my-gcp-project/locations/global/memberships/cluster-0-0123456a`.
Optional parameters

`gcloud anthos auth login` supports the following optional parameters:

```bash
gcloud anthos auth login --cluster CLUSTER_NAME \
    --user USERNAME --login-config ANTHOS_CONFIG_YAML \
    --login-config-cert LOGIN_CONFIG_CERT_PEM \
    --kubeconfig=KUBECONFIG --dry-run
```
The parameters are described in the following table.
Parameter | Description |
---|---|
cluster | The name of the cluster to authenticate to. Defaults to the cluster in `kubectl-anthos-config.yaml`. |
user | Username for credentials in the `kubeconfig` file. Defaults to `{cluster-name}-anthos-default-user`. |
login-config | Either the path to the configuration file generated by the cluster admin for the developer or a URL hosting the file. Defaults to `kubectl-anthos-config.yaml`. |
login-config-cert | If using a URL for `login-config`, the path to the CA certificate file for making HTTPS connections. |
kubeconfig | Path to the `kubeconfig` file that contains tokens. Defaults to `$HOME/.kube/config`. |
dry-run | Test your command-line options without changing your configuration or cluster. |
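For instance, a dry run with hypothetical values (the membership name and file paths below are placeholders) might look like this:

```bash
# Validate the login options without changing any configuration or cluster state.
gcloud anthos auth login \
  --cluster projects/my-gcp-project/locations/global/memberships/cluster-0-0123456a \
  --login-config ~/kubectl-anthos-config.yaml \
  --kubeconfig ~/.kube/config \
  --dry-run
```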
The `gcloud anthos auth login` command launches a browser that asks the user to log in with their enterprise credentials, performs the OIDC credential exchange, and acquires the relevant tokens. The gcloud CLI then writes the tokens to a `kubeconfig` file. `kubectl` uses this file to authenticate to the user cluster.
To verify that the authentication was successful, run any `kubectl` command with your `kubeconfig` file:

```bash
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get nodes --kubeconfig my.kubeconfig
```
gcloud tunnel
If you want to authenticate to a user cluster from a remote machine, you can perform the authentication using an SSH tunnel. To use a tunnel, your authentication configuration file must be on the remote machine, and you must be able to reach your OpenID provider from your local machine.
On your local machine, run the following command:
```bash
ssh USERNAME@REMOTE_MACHINE -L LOCAL_PORT:localhost:REMOTE_PORT
```

Replace the following:

- USERNAME with a user that has SSH access to the remote machine.
- REMOTE_MACHINE with the remote machine's hostname or IP address.
- LOCAL_PORT with an available port on your local machine that `ssh` uses to tunnel to the remote machine.
- REMOTE_PORT with the port that you configured for your OIDC redirect URL. The port number is part of the `kubectlRedirectURI` field of your authentication configuration file.
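As a sketch with hypothetical values (a remote host named build-host.example.com, user dev, and port 10000 on both ends), the tunnel command might look like this:

```bash
# Forward local port 10000 to port 10000 on the remote machine, where the
# login config's OIDC redirect URL points at localhost:10000.
ssh dev@build-host.example.com -L 10000:localhost:10000
```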
In your SSH shell, run the following command to initiate authentication:
gcloud anthos auth login --login-config AUTH_CONFIG_FILE
Replace AUTH_CONFIG_FILE with the path of your authentication configuration file on the remote machine. The gcloud CLI runs a web server on the remote machine.
On your local machine, in a browser, go to http://localhost:LOCAL_PORT/login and follow the OIDC login flow.
The `kubeconfig` file on your remote machine now has the token to access the user cluster.
In your SSH shell, verify that you have access to the user cluster:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get nodes
Console
To authenticate with the Google Cloud console, initiate the authentication flow from the Kubernetes clusters page in the Google Cloud console:

1. Open the Google Cloud console.
2. Locate your GKE on AWS cluster in the list and then click Login.
3. Select Authenticate with the Identity Provider configured for the cluster, and then click LOGIN.
You are redirected to your identity provider, where you might need to log in or consent to the Google Cloud console accessing your account. You are then redirected back to the Kubernetes clusters page on the Google Cloud console.
Updating OIDC configuration
To update the OIDC configuration on your cluster, use the `kubectl edit` command.

```bash
env HTTPS_PROXY=http://localhost:8118 \
  kubectl edit clientconfigs -n kube-public default
```
The `kubectl` tool loads the ClientConfig resource in your default editor. To update the configuration, save the file. The `kubectl` tool then updates the ClientConfig resource on your cluster.
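If you only want to inspect the current configuration without opening an editor, one option (a sketch using standard kubectl output flags) is to print the resource as YAML:

```bash
# Print the ClientConfig resource without modifying it.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get clientconfigs -n kube-public default -o yaml
```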
For information on the contents of the ClientConfig resource, see the following section.
Appendix: Example login config
An example `kubectl-anthos-config.yaml` follows. This example is included to help you understand the file's contents. You should always generate the file with `gcloud anthos create-login-config`.
```yaml
apiVersion: authentication.gke.io/v2alpha1
kind: ClientConfig
metadata:
  name: default
  namespace: kube-public
spec:
  authentication:
  - name: oidc
    oidc:
      clientID: CLIENT_ID
      clientSecret: CLIENT_SECRET
      extraParams: resource=k8s-group-claim,domain_hint=consumers
      certificateAuthorityData: CERTIFICATE_STRING
      issuerURI: https://adfs.contoso.com/adfs
      kubectlRedirectURI: http://redirect.kubectl.com/
      scopes: allatclaim,group
      userClaim: "sub"
      groupsClaim: "groups"
    proxy: PROXY_URL # Optional
  certificateAuthorityData: CERTIFICATE_AUTHORITY_DATA
  name: projects/my-project/locations/global/membership/cluster-0
  server: https://192.168.0.1:PORT
  preferredAuthentication: oidc
```
For explanations of the field contents, see Authentication fields.
What's next
Deploy your first workload to GKE on AWS.