This topic describes how to enable workload identity for your GKE on AWS workloads to control their access to AWS resources.
For information about using workload identity with Google Cloud Identity and Access Management (IAM) accounts to control access to GCP resources, see Using workload identity with Google Cloud.
Overview
Workload identity uses AWS IAM permissions to control access to cloud resources. With workload identity, you can assign different IAM roles to each workload. This fine-grained control of permissions lets you follow the principle of least privilege. Without workload identity, you must assign AWS IAM roles to your GKE on AWS nodes, giving all workloads on the node the same permissions as the node itself.
To enable workload identity for your cluster, complete the following steps, which are grouped by the administrative roles that perform them.
Cluster administrator
- Create a Cloud Storage bucket to store OIDC discovery data.
- Create an Identity and Access Management role to read from that bucket.
- Create a user cluster with workload identity enabled.
- Create a webhook on your cluster that applies workload identity credentials to Pods on creation. If you don't want to use the webhook, you can manually set environment variables in your pods.
- Configure the AWS OIDC provider.
- Create AWS IAM roles and policies.
- Create Kubernetes service accounts, and bind AWS policies to them.
Prerequisites
To complete the steps in this document, you must have the following setup:
- A GKE on AWS management service.
- User clusters running a Kubernetes version later than 1.17.9.
- The following permissions and tools.
Permissions
To create a cluster with workload identity enabled, you need the following permissions:
Google Cloud
- Create a publicly readable Cloud Storage bucket with uniform bucket-level access enabled.
- Grant management-sa@PROJECT_NAME.iam.gserviceaccount.com read/write permissions to the bucket.
AWS
- Create an AWS OIDC provider
- Create AWS IAM roles
Tools
On your local machine, we recommend having the jq tool installed.
Creating the OIDC discovery bucket
This section is for cluster administrators.
Your user cluster needs to store the OIDC discovery data in a publicly accessible Cloud Storage bucket. The bucket includes OIDC discovery configuration and public keys. AWS uses the contents to authenticate requests from your user clusters.
Your bucket must have the following attributes:
- Be publicly readable.
- Have uniform bucket-level access enabled.
If you don't have a bucket with these attributes, create one by using the following gcloud storage commands:
BUCKET=BUCKET_NAME
gcloud storage buckets create gs://${BUCKET} --uniform-bucket-level-access
gcloud storage buckets add-iam-policy-binding gs://${BUCKET} \
--member=allUsers --role=roles/storage.objectViewer
Replace BUCKET_NAME with the name of your new bucket.
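For reference, the discovery document that GKE on AWS later publishes to this bucket follows the standard OpenID Connect discovery layout. A minimal example, with illustrative bucket and issuer names, looks roughly like this:

```json
{
  "issuer": "https://storage.googleapis.com/my-oidc-bucket/gke-issuer-abc123",
  "jwks_uri": "https://storage.googleapis.com/my-oidc-bucket/gke-issuer-abc123/openid/v1/jwks",
  "response_types_supported": ["id_token"],
  "subject_types_supported": ["public"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
```

AWS reads this document from the .well-known/openid-configuration path under the issuer URL, then fetches the signing keys from jwks_uri to validate service account tokens, which is why the bucket must be publicly readable.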
Grant the management service account permissions
The Identity and Access Management service account for the GKE on AWS management service needs permissions to read and write objects into this bucket.
Grant your management service account permissions by using the following commands.

MANAGEMENT_SA=management-sa@PROJECT_NAME.iam.gserviceaccount.com

gcloud storage buckets add-iam-policy-binding gs://${BUCKET} \
  --member=serviceAccount:${MANAGEMENT_SA} \
  --role=roles/storage.admin
Replace PROJECT_NAME with your Google Cloud project.

Create a new IAM role with permissions to manage this bucket. To create the role, first save the role definition to a file, then create the role, and bind the role to your management service account.
To complete these steps, run the following commands:
cat << EOF > anthos-oidc-role.yaml
title: anthosAwsOidcStorageAdmin
description: permissions to manage the OIDC buckets
stage: GA
includedPermissions:
- storage.buckets.get
EOF

gcloud iam roles create anthosAwsOidcStorageAdmin --project=PROJECT_NAME \
  --file=anthos-oidc-role.yaml

gcloud projects add-iam-policy-binding PROJECT_NAME \
  --member=serviceAccount:${MANAGEMENT_SA} \
  --role=projects/PROJECT_NAME/roles/anthosAwsOidcStorageAdmin
Replace PROJECT_NAME with your Google Cloud project.

The Google Cloud CLI confirms that the policy binding is created.
Creating a user cluster
This section is for cluster administrators.
Create a user cluster with workload identity enabled
Create a user cluster that contains details about your OIDC discovery bucket. You set this information in your AWSCluster's spec.controlPlane.workloadIdentity.oidcDiscoveryGCSBucket field.
In this example, you create a cluster manually from the AWSCluster and AWSNodePool CRDs.
Change to the directory with your GKE on AWS configuration. You created this directory when Installing the management service.
cd anthos-aws
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
Open a text editor and copy the following AWSCluster definition into a file named custom-cluster.yaml.

apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: CLUSTER_NAME
spec:
  region: AWS_REGION
  networking:
    vpcID: VPC_ID
    podAddressCIDRBlocks: POD_ADDRESS_CIDR_BLOCKS
    serviceAddressCIDRBlocks: SERVICE_ADDRESS_CIDR_BLOCKS
    serviceLoadBalancerSubnetIDs: SERVICE_LOAD_BALANCER_SUBNETS
  controlPlane:
    version: CLUSTER_VERSION # Latest version is 1.25.5-gke.2100
    instanceType: AWS_INSTANCE_TYPE
    keyName: SSH_KEY_NAME
    subnetIDs:
    - CONTROL_PLANE_SUBNET_IDS
    securityGroupIDs:
    - CONTROL_PLANE_SECURITY_GROUPS
    iamInstanceProfile: CONTROL_PLANE_IAM_ROLE
    rootVolume:
      sizeGiB: ROOT_VOLUME_SIZE
      volumeType: ROOT_VOLUME_TYPE # Optional
      iops: ROOT_VOLUME_IOPS # Optional
      kmsKeyARN: ROOT_VOLUME_KEY # Optional
    etcd:
      mainVolume:
        sizeGiB: ETCD_VOLUME_SIZE
        volumeType: ETCD_VOLUME_TYPE # Optional
        iops: ETCD_VOLUME_IOPS # Optional
        kmsKeyARN: ETCD_VOLUME_KEY # Optional
    databaseEncryption:
      kmsKeyARN: ARN_OF_KMS_KEY
    hub: # Optional
      membershipName: ANTHOS_CONNECT_NAME
    cloudOperations: # Optional
      projectID: YOUR_PROJECT
      location: GCP_REGION
      enableLogging: ENABLE_LOGGING
      enableMonitoring: ENABLE_MONITORING
    workloadIdentity: # Optional
      oidcDiscoveryGCSBucket: WORKLOAD_IDENTITY_BUCKET
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- AWS_REGION: the AWS region where your cluster runs.
- VPC_ID: the ID of the VPC where your cluster runs.
- POD_ADDRESS_CIDR_BLOCKS: the range of IPv4 addresses used by the cluster's Pods. Currently only a single range is supported. The range must not overlap with any subnets reachable from your network. It is safe to use the same range across multiple different AWSCluster objects. For example, 10.2.0.0/16.
- SERVICE_ADDRESS_CIDR_BLOCKS: the range of IPv4 addresses used by the cluster's Services. Currently only a single range is supported. The range must not overlap with any subnets reachable from your network. It is safe to use the same range across multiple different AWSCluster objects. For example, 10.1.0.0/16.
- SERVICE_LOAD_BALANCER_SUBNETS: the subnet IDs where GKE on AWS can create public or private load balancers.
- CLUSTER_VERSION: a Kubernetes version supported by GKE on AWS. The most recent version is 1.25.5-gke.2100.
- AWS_INSTANCE_TYPE: a supported EC2 instance type.
- SSH_KEY_NAME: an AWS EC2 key pair.
- CONTROL_PLANE_SUBNET_IDS: the subnet IDs in the AZs where your control plane instances run.
- CONTROL_PLANE_SECURITY_GROUPS: a securityGroupID created during the management service installation. You can customize this by adding any securityGroupIDs required to connect to the control plane.
- CONTROL_PLANE_IAM_ROLE: the name of the AWS EC2 instance profile assigned to control plane replicas.
- ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of your control plane root volumes.
- ROOT_VOLUME_TYPE: the EBS volume type. For example, gp3.
- ROOT_VOLUME_IOPS: the number of provisioned I/O operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).
- ROOT_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane instance root volumes.
- ETCD_VOLUME_SIZE: the size of volumes used by etcd.
- ETCD_VOLUME_TYPE: the EBS volume type. For example, gp3.
- ETCD_VOLUME_IOPS: the number of provisioned I/O operations per second (IOPS) for the volume. Only valid when volumeType is gp3. For more information, see General Purpose SSD volumes (gp3).
- ETCD_VOLUME_KEY: the Amazon Resource Name of the AWS KMS key that encrypts your control plane etcd data volumes.
- ARN_OF_KMS_KEY: the AWS KMS key used to encrypt cluster Secrets.
- ANTHOS_CONNECT_NAME: the Connect membership name used to register your cluster. The membership name must be unique. For example, projects/YOUR_PROJECT/locations/global/memberships/CLUSTER_NAME, where YOUR_PROJECT is your Google Cloud project and CLUSTER_NAME is a unique name in your project. This field is optional.
- YOUR_PROJECT: your project ID.
- GCP_REGION: the Google Cloud region where you want to store logs. Choose a region near the AWS region. For more information, see Global Locations - Regions & Zones. For example, us-central1.
- ENABLE_LOGGING: true or false, whether Cloud Logging is enabled on control plane nodes.
- ENABLE_MONITORING: true or false, whether Cloud Monitoring is enabled on control plane nodes.
- WORKLOAD_IDENTITY_BUCKET: the Cloud Storage bucket name containing your workload identity discovery information. This field is optional.
Create one or more AWSNodePools for your cluster. Open a text editor and copy the following AWSNodePool definition into a file named custom-nodepools.yaml.

apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: NODE_POOL_NAME
spec:
  clusterName: AWSCLUSTER_NAME
  version: CLUSTER_VERSION # latest version is 1.25.5-gke.2100
  region: AWS_REGION
  subnetID: AWS_SUBNET_ID
  minNodeCount: MINIMUM_NODE_COUNT
  maxNodeCount: MAXIMUM_NODE_COUNT
  maxPodsPerNode: MAXIMUM_PODS_PER_NODE_COUNT
  instanceType: AWS_NODE_TYPE
  keyName: KMS_KEY_PAIR_NAME
  iamInstanceProfile: NODE_IAM_PROFILE
  proxySecretName: PROXY_SECRET_NAME
  rootVolume:
    sizeGiB: ROOT_VOLUME_SIZE
    volumeType: VOLUME_TYPE # Optional
    iops: IOPS # Optional
    kmsKeyARN: NODE_VOLUME_KEY # Optional
Replace the following:
- NODE_POOL_NAME: a unique name for your AWSNodePool.
- AWSCLUSTER_NAME: your AWSCluster's name. For example, staging-cluster.
- CLUSTER_VERSION: a supported GKE on AWS Kubernetes version.
- AWS_REGION: the same AWS region as your AWSCluster.
- AWS_SUBNET_ID: an AWS subnet in the same region as your AWSCluster.
- MINIMUM_NODE_COUNT: the minimum number of nodes in the node pool. See Scaling user clusters for more information.
- MAXIMUM_NODE_COUNT: the maximum number of nodes in the node pool.
- MAXIMUM_PODS_PER_NODE_COUNT: the maximum number of pods that GKE on AWS can allocate to a node.
- AWS_NODE_TYPE: an AWS EC2 instance type.
- KMS_KEY_PAIR_NAME: the AWS EC2 key pair assigned to each node pool worker.
- NODE_IAM_PROFILE: the name of the AWS EC2 instance profile assigned to nodes in the pool.
- PROXY_SECRET_NAME: if your cluster uses a proxy, the name of the Secret that holds your proxy configuration. This field is optional.
- ROOT_VOLUME_SIZE: the size, in gibibytes (GiB), of your nodes' root volumes.
- VOLUME_TYPE: the node's AWS EBS volume type. For example, gp3.
- IOPS: the number of provisioned I/O operations per second (IOPS) for volumes. Only valid when volumeType is gp3.
- NODE_VOLUME_KEY: the ARN of the AWS KMS key used to encrypt the volume. For more information, see Using a customer managed CMK to encrypt volumes.
Apply the manifests to your management service.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f custom-cluster.yaml

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f custom-nodepools.yaml
Create a kubeconfig
While your user cluster starts, you can create a kubeconfig
context for your
new user cluster. You use the context to authenticate to a user or management
cluster.
Use anthos-gke aws clusters get-credentials to generate a kubeconfig for your user cluster in ~/.kube/config.

env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials CLUSTER_NAME
Replace CLUSTER_NAME with your cluster's name. For example, cluster-0.

Use kubectl to authenticate to your new user cluster.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl cluster-info
If your cluster is ready, the output includes the URLs for Kubernetes components within your cluster.
Viewing your cluster's status
The management service provisions AWS resources when you apply an AWSCluster or AWSNodePool.
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
To list your clusters, use kubectl get AWSClusters.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl get AWSClusters
The output includes each cluster's name, state, age, version, and endpoint.
For example, the following output includes only one AWSCluster named cluster-0:

NAME        STATE          AGE     VERSION           ENDPOINT
cluster-0   Provisioning   2m41s   1.25.5-gke.2100   gke-xyz.elb.us-east-1.amazonaws.com
View your cluster's events
To see recent Kubernetes Events from your user cluster, use kubectl get events.
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
Run kubectl get events.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl get events
The output includes information, warnings, and errors from your management service.
Creating the workload identity webhook
This section is for cluster administrators.
To provide workload identity credentials to your workloads with no additional configuration, you can optionally create a webhook on your user clusters. This webhook intercepts Pod creation requests and then makes the following AWS IAM information available as environment variables to the Pod:
- AWS_ROLE_ARN: the Amazon Resource Name (ARN) of the IAM role
- aws-iam-token: the token exchanged for AWS IAM credentials
- AWS_WEB_IDENTITY_TOKEN_FILE: the path where the token is stored
With these variables, workloads that call the AWS command-line tool or an AWS SDK can access the resources granted to the AWS role.
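The following sketch simulates the environment the webhook injects, using illustrative values, and shows the check a workload (or the AWS SDK's credential provider) effectively performs before exchanging the token:

```shell
# Simulate the variables the webhook injects (both values are illustrative).
export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/my-workload-role"
export AWS_WEB_IDENTITY_TOKEN_FILE="/tmp/demo-aws-iam-token"
echo "demo-token" > "${AWS_WEB_IDENTITY_TOKEN_FILE}"

# The AWS CLI and SDKs read these two variables to perform
# AssumeRoleWithWebIdentity automatically; a workload can sanity-check them:
if [ -n "${AWS_ROLE_ARN}" ] && [ -s "${AWS_WEB_IDENTITY_TOKEN_FILE}" ]; then
  echo "workload identity environment is configured"
fi
```

Inside a real Pod, the token file path points at the projected service account token rather than a file you create yourself.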
Creating the webhook is optional. If you decide not to create the webhook, you need to set the environment variables listed previously in the Pod. For information about not using a webhook, see Applying credentials without the webhook.
Create YAML files for the webhook
To deploy the webhook, perform the following steps:
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
Get the user cluster name with kubectl:

env HTTPS_PROXY=http://localhost:8118 \
  kubectl get awscluster
kubectl lists all your user clusters. Choose the user cluster you created with workload identity enabled.

Set the cluster's name in an environment variable.

CLUSTER_NAME=CLUSTER_NAME
Replace CLUSTER_NAME with the name of your cluster. For example, cluster-0.

Set environment variables for the workload identity Pod image and namespace.

IDENTITY_IMAGE=amazon/amazon-eks-pod-identity-webhook:ed8c41f
WEBHOOK_NAMESPACE=workload-identity-webhook
Generate the webhook YAML manifest in a file named aws-webhook.yaml by running the following commands:

env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials ${CLUSTER_NAME}

CLUSTER_CA=$(env HTTPS_PROXY=http://localhost:8118 \
  kubectl config view --raw -o json \
  | jq -r '.clusters[] | select(.name == "'$(kubectl config current-context)'") | .cluster."certificate-authority-data"')

cat << EOF > aws-webhook.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ${WEBHOOK_NAMESPACE}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['create']
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get', 'update', 'patch']
  resourceNames:
  - pod-identity-webhook
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-identity-webhook
subjects:
- kind: ServiceAccount
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-identity-webhook
rules:
- apiGroups: ['']
  resources: ['serviceaccounts']
  verbs: ['get', 'watch', 'list']
- apiGroups: ['certificates.k8s.io']
  resources: ['certificatesigningrequests']
  verbs: ['create', 'get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-identity-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-identity-webhook
subjects:
- kind: ServiceAccount
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod-identity-webhook
  template:
    metadata:
      labels:
        app: pod-identity-webhook
    spec:
      serviceAccountName: pod-identity-webhook
      containers:
      - name: pod-identity-webhook
        image: ${IDENTITY_IMAGE}
        imagePullPolicy: Always
        command:
        - /webhook
        - --in-cluster
        - --namespace=${WEBHOOK_NAMESPACE}
        - --service-name=pod-identity-webhook
        - --tls-secret=pod-identity-webhook
        - --annotation-prefix=eks.amazonaws.com
        - --token-audience=sts.amazonaws.com
        - --logtostderr
        volumeMounts:
        - name: webhook-certs
          mountPath: /var/run/app/certs
          readOnly: false
      volumes:
      - name: webhook-certs
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scheme: https
    prometheus.io/scrape: 'true'
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: pod-identity-webhook
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-identity-webhook
  namespace: ${WEBHOOK_NAMESPACE}
webhooks:
- name: pod-identity-webhook.amazonaws.com
  failurePolicy: Ignore
  sideEffects: 'None'
  admissionReviewVersions: ['v1beta1']
  clientConfig:
    service:
      name: pod-identity-webhook
      namespace: ${WEBHOOK_NAMESPACE}
      path: /mutate
    caBundle: ${CLUSTER_CA}
  rules:
  - operations: ['CREATE']
    apiGroups: ['']
    apiVersions: ['v1']
    resources: ['pods']
EOF
The contents of aws-webhook.yaml are ready to apply to your cluster.
Apply the webhook to your user cluster
To apply the webhook to your user cluster, perform the following steps.
Apply the aws-webhook.yaml file to your user cluster.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f aws-webhook.yaml
When you apply the manifest, the webhook Pod generates Kubernetes certificate signing requests (CSRs). Approve all requests from system:serviceaccount:${WEBHOOK_NAMESPACE}:pod-identity-webhook with kubectl certificate approve.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl certificate approve $(env HTTPS_PROXY=http://localhost:8118 \
  kubectl get csr -o \
  jsonpath="{.items[?(@.spec.username==\"system:serviceaccount:${WEBHOOK_NAMESPACE}:pod-identity-webhook\")].metadata.name}")
Verify that there are no remaining unapproved CSRs.
Use kubectl get csr to check that all CSRs from the requestor system:serviceaccount:${WEBHOOK_NAMESPACE}:pod-identity-webhook are approved:

env HTTPS_PROXY=http://localhost:8118 \
  kubectl get csr
Response:
NAME        AGE   REQUESTOR                                            CONDITION
csr-mxrt8   10s   system:serviceaccount:default:pod-identity-webhook   Approved,Issued
Configuring the AWS OIDC provider
This section is for cluster administrators.
To create an OIDC provider, AWS requires an intermediate Certificate Authority (CA) or server certificate thumbprint. Your OIDC discovery credentials are stored on storage.googleapis.com, with a certificate signed by an intermediate CA named GTS CA 1C3. The SHA-1 thumbprint of the intermediate CA GTS CA 1C3 is 08745487E891C19E3078C1F2A07E452950EF36F6.
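If you want to verify or recompute such a thumbprint yourself, the value AWS expects is the certificate's SHA-1 fingerprint with the colons removed. A sketch using a throwaway self-signed certificate (standing in for the real certificate chain you would fetch from storage.googleapis.com):

```shell
# Create a throwaway self-signed certificate; in practice you would use
# the intermediate CA certificate presented by storage.googleapis.com.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout /tmp/thumb-key.pem -out /tmp/thumb-cert.pem -days 1 2>/dev/null

# SHA-1 fingerprint with colons stripped: the format AWS IAM expects.
THUMBPRINT=$(openssl x509 -in /tmp/thumb-cert.pem -noout -fingerprint -sha1 \
  | cut -d'=' -f2 | tr -d ':')
echo "${THUMBPRINT}"
```

The result is a 40-character uppercase hex string, the same shape as the GTS CA 1C3 thumbprint quoted above.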
To register your OIDC discovery bucket as an OIDC provider with AWS, perform the following steps:
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
Save the OIDC issuer URL, issuer host path, and Cloud Storage thumbprint in environment variables.

ISSUER_URL=$(env HTTPS_PROXY=http://localhost:8118 \
  kubectl get awscluster ${CLUSTER_NAME} -o jsonpath='{.status.workloadIdentityInfo.issuerURL}')
ISSUER_HOSTPATH=${ISSUER_URL#"https://"}
CA_THUMBPRINT=08745487E891C19E3078C1F2A07E452950EF36F6
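The `#` parameter expansion above simply strips the scheme from the issuer URL, because AWS IAM condition keys use the bare host path. With a hypothetical issuer URL:

```shell
# Hypothetical issuer URL of the form stored in the cluster's status.
ISSUER_URL="https://storage.googleapis.com/my-oidc-bucket/gke-issuer-abc123"

# Remove the leading "https://" scheme.
ISSUER_HOSTPATH=${ISSUER_URL#"https://"}
echo "${ISSUER_HOSTPATH}"
# storage.googleapis.com/my-oidc-bucket/gke-issuer-abc123
```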
Use the aws command-line tool to create an OIDC provider on AWS.

aws iam create-open-id-connect-provider \
  --url ${ISSUER_URL} \
  --thumbprint-list ${CA_THUMBPRINT} \
  --client-id-list sts.amazonaws.com
Update the thumbprint
If Google rotates the CA for storage.googleapis.com, perform the following steps:

- Copy the updated certificate thumbprint, 08745487E891C19E3078C1F2A07E452950EF36F6.
- Follow the instructions for the aws iam update-open-id-connect-provider-thumbprint command. Use storage.googleapis.com as the target hostname and 08745487E891C19E3078C1F2A07E452950EF36F6 as the thumbprint.
Creating AWS IAM roles and policies
This section is for cluster administrators.
Create an AWS IAM role to bind to a Kubernetes service account. The IAM role has permissions for sts:AssumeRoleWithWebIdentity.
To create the role, perform the following steps:
Find or create an AWS IAM policy that grants the necessary permissions for your workloads. You need the Amazon Resource Name (ARN) of the AWS IAM policy. For example, arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess.

Set environment variables with your authentication information.

KSA_NAME=KUBERNETES_SERVICE_ACCOUNT
WORKLOAD_NAMESPACE=WORKLOAD_IDENTITY_NAMESPACE
AWS_ROLE_NAME=AWS_ROLE_NAME
AWS_POLICY=EXISTING_AWS_POLICY
Replace the following:
- KUBERNETES_SERVICE_ACCOUNT: the name of the new Kubernetes service account.
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run.
- AWS_ROLE_NAME: the name for a new AWS role for your workloads.
- EXISTING_AWS_POLICY: the Amazon Resource Name (ARN) of an existing AWS IAM policy. For example, arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess.
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
Create an AWS IAM policy that allows your user cluster to assume temporary security credentials with the AWS Security Token Service:
CLUSTER_ID=$(env HTTPS_PROXY=http://localhost:8118 \
  kubectl get awscluster ${CLUSTER_NAME} -o jsonpath='{.status.clusterID}')

# Get the ID Provider ARN
PROVIDER_ARN=$(aws iam list-open-id-connect-providers \
  | jq '.OpenIDConnectProviderList' \
  | jq ".[] | select(.Arn | contains(\"${CLUSTER_ID}\"))" \
  | jq '.Arn' | tr -d '"')

# Create AWS role and policy
cat > irp-trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${PROVIDER_ARN}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${ISSUER_HOSTPATH}:sub": "system:serviceaccount:${WORKLOAD_NAMESPACE}:${KSA_NAME}"
        }
      }
    }
  ]
}
EOF
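The StringEquals condition in the trust policy matches the `sub` claim of the projected service account token, which always has the form `system:serviceaccount:<namespace>:<name>`. A small sketch with illustrative values:

```shell
# Illustrative namespace and service account name.
WORKLOAD_NAMESPACE="default"
KSA_NAME="my-s3-reader-ksa"

# The subject string the trust policy must match exactly.
SUB="system:serviceaccount:${WORKLOAD_NAMESPACE}:${KSA_NAME}"
echo "${SUB}"
# system:serviceaccount:default:my-s3-reader-ksa
```

If the namespace or service account name in the trust policy doesn't match the Pod's actual service account, the AssumeRoleWithWebIdentity call is rejected.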
To create an AWS IAM role with this policy and attach your existing policy to the role, run the following commands:

aws iam create-role \
  --role-name ${AWS_ROLE_NAME} \
  --assume-role-policy-document file://irp-trust-policy.json
aws iam update-assume-role-policy \
  --role-name ${AWS_ROLE_NAME} \
  --policy-document file://irp-trust-policy.json
aws iam attach-role-policy \
  --role-name ${AWS_ROLE_NAME} \
  --policy-arn ${AWS_POLICY}
The aws command-line tool confirms that the policy is attached to your role.
Creating Kubernetes service accounts for workloads
This section is for developers or cluster administrators.
To create Kubernetes service accounts bound to the AWS IAM role that was specified previously, perform the following steps:
From your anthos-aws directory, use anthos-gke to switch context to your user cluster. Replace CLUSTER_NAME with your user cluster name.

cd anthos-aws
env HTTPS_PROXY=http://localhost:8118 \
  anthos-gke aws clusters get-credentials CLUSTER_NAME
Create the Kubernetes service account by running the following commands:
S3_ROLE_ARN=$(aws iam get-role \
  --role-name AWS_ROLE_NAME \
  --query Role.Arn --output text)

cat << EOF > k8s-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${KSA_NAME}
  namespace: WORKLOAD_IDENTITY_NAMESPACE
EOF

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f k8s-service-account.yaml

env HTTPS_PROXY=http://localhost:8118 \
  kubectl annotate sa --namespace ${WORKLOAD_NAMESPACE} ${KSA_NAME} \
  eks.amazonaws.com/role-arn=${S3_ROLE_ARN}
Replace the following:
- AWS_ROLE_NAME: the name of the AWS IAM role to apply to your workloads
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run
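After the annotate command runs, the resulting ServiceAccount should look similar to the following (the names and role ARN are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-s3-reader-ksa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-workload-role
```

The webhook reads this eks.amazonaws.com/role-arn annotation to decide which role ARN to inject into Pods that use the service account.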
Applying credentials to your Pods
This section is for developers.
This section assumes that you have deployed the workload identity webhook. If you haven't deployed the webhook, skip to Applying credentials without the webhook.
Apply credentials with the webhook
This section describes how to configure your Pods to read credentials made available by the webhook.
Add the service account to the Pod
To use workload identity with a workload, add the Kubernetes service account to the following fields:

- For a Deployment: spec.template.spec.serviceAccountName
- For a Pod: spec.serviceAccount
The following Pod manifest launches a container from the amazon/aws-cli image and contains the spec.serviceAccount field.
apiVersion: v1
kind: Pod
metadata:
name: sample-centos-pod
namespace: WORKLOAD_IDENTITY_NAMESPACE
spec:
containers:
- command:
- /bin/bash
- -ec
- while :; do echo '.'; sleep 500 ; done
image: amazon/aws-cli
name: centos
serviceAccount: KUBERNETES_SERVICE_ACCOUNT
Replace the following:
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run
- KUBERNETES_SERVICE_ACCOUNT: the name of the Kubernetes service account you created previously
Check if Pods have the environment variables set
To check if Pods have the environment variables set, run the following command to get the Pod's information:
kubectl get pod --namespace WORKLOAD_IDENTITY_NAMESPACE POD_NAME -o yaml
Replace the following:
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run
- POD_NAME: the name of the Pod to check
The output contains the environment variable values in spec.containers[].env and the mount point for the AWS IAM token. An example Pod manifest follows.
apiVersion: v1
kind: Pod
metadata:
...
spec:
containers:
- command:
- /bin/bash
- -ec
- while :; do echo '.'; sleep 500 ; done
env:
- name: AWS_ROLE_ARN
value: arn:aws:iam::1234567890:role/my-example-workload-role-1
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: amazon/aws-cli
imagePullPolicy: IfNotPresent
name: centos
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: my-k8s-serviceaccount-token-d4nz4
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
serviceAccount: my-k8s-serviceaccount
serviceAccountName: my-k8s-serviceaccount
volumes:
- name: aws-iam-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: sts.amazonaws.com
expirationSeconds: 86400
path: token
- name: my-k8s-serviceaccount-token-d4nz4
secret:
defaultMode: 420
secretName: my-k8s-serviceaccount-token-d4nz4
...
status:
...
Apply credentials without the webhook
If you do not deploy the workload identity webhook, you need to do the following:
Check if the following environment variables are set:

- AWS_ROLE_ARN: the Amazon Resource Name (ARN) of the IAM role
- AWS_WEB_IDENTITY_TOKEN_FILE: the path where the token is stored

Create a mount point for the IAM token (aws-iam-token) and the service account associated with the AWS IAM role.
Create a Pod with credentials for workload identity
To create a Pod that includes the necessary credentials for workload identity, perform the following steps:
Copy the following Pod manifest into a file named sample-pod-no-webhook.yaml. The configuration launches a base CentOS image with the necessary credentials.

apiVersion: v1
kind: Pod
metadata:
  name: sample-centos-pod-no-webhook
  namespace: WORKLOAD_IDENTITY_NAMESPACE
spec:
  containers:
  - command:
    - /bin/bash
    - -ec
    - while :; do echo '.'; sleep 500 ; done
    image: centos:7
    name: centos
    env:
    - name: AWS_ROLE_ARN
      value: IAM_ROLE_ARN
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    volumeMounts:
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
  volumes:
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token
  serviceAccount: KUBERNETES_SERVICE_ACCOUNT
Replace the following:
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run.
- IAM_ROLE_ARN: the ARN of the IAM role granted to the Pod. For example, arn:aws:iam::AWS_ACCOUNT_ID:role/AWS_ROLE_NAME.
- KUBERNETES_SERVICE_ACCOUNT: the name of the Kubernetes service account you created previously.
Apply the Pod manifest to your cluster by using kubectl:

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f sample-pod-no-webhook.yaml
Check if Pods can access AWS resources
The following procedure describes how to check whether the Pod has received the credentials necessary for workload identity to function.
To complete the steps, you need to have the following:
- bash shell access to the container. Most production images don't have a shell available. The following example shows how to use the Pod specified in the preceding section to access AWS S3.
- Outbound internet access from your Pod, to download the AWS command-line interface.
To check if the Pod can access an S3 bucket, perform the following steps:
Use kubectl exec to launch an interactive bash shell on the Pod sample-centos-pod-no-webhook:

env HTTPS_PROXY=http://localhost:8118 \
  kubectl exec -it --namespace ${WORKLOAD_NAMESPACE} sample-centos-pod-no-webhook -- bash
Your terminal opens the bash shell on the Pod.
Check the AWS IAM permissions and credentials by using the aws tool:

aws sts assume-role-with-web-identity \
  --role-arn ${AWS_ROLE_ARN} \
  --role-session-name mh9test \
  --web-identity-token file:///var/run/secrets/eks.amazonaws.com/serviceaccount/token \
  --duration-seconds 1000
The aws tool prints credentials information similar to the following:

{
  "AssumedRoleUser": {
    "AssumedRoleId": "AROAR2ZZZLEXVSDCDJ37N:mh9test",
    "Arn": "arn:aws:sts::126285863215:assumed-role/my-example-workload-role-1/mh9test"
  },
  "Audience": "sts.amazonaws.com",
  "Provider": "arn:aws:iam::126285863215:oidc-provider/storage.googleapis.com/gke-issuer-cec6c353",
  "SubjectFromWebIdentityToken": "system:serviceaccount:default:my-s3-reader-ksa",
  "Credentials": {
    "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "SessionToken": "MY_TOKEN",
    "Expiration": "2020-08-14T22:46:36Z",
    "AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
  }
}
If you see the following message, check that the bucket is publicly accessible:
An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
Upgrading the webhook
If you created a cluster running Kubernetes 1.18 or lower with workload identity enabled and workload identity webhook version release-0.2.2-gke.0, you must upgrade the webhook before upgrading to Kubernetes 1.19.
To upgrade the webhook, perform the following steps:
Confirm the webhook is installed by running the following commands:
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get MutatingWebhookConfiguration
```
If your cluster has the webhook deployed, the output includes the following:
```
NAME                   WEBHOOKS   AGE
pod-identity-webhook   1          11m
```
If the webhook is not deployed on your cluster, you can skip the following steps.
If you saved the aws-webhook.yaml file, you can delete the webhook with that manifest. If you don't have the file available, delete the webhook's components manually. Choose either the File or the Components option below.

File
If you still have the aws-webhook.yaml file, run the following command to delete the webhook:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete -f aws-webhook.yaml
```
Components
To delete the webhook's components manually, run the following commands:
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete namespace WEBHOOK_NAMESPACE

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete clusterrole pod-identity-webhook

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete clusterrolebinding pod-identity-webhook

env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete mutatingwebhookconfiguration pod-identity-webhook
```
Replace WEBHOOK_NAMESPACE with the namespace where you installed the workload identity webhook, for example, workload-identity-webhook.

Check if you have any remaining certificate signing requests (CSRs) by running the following command:
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get csr | grep pod-identity-webhook
```
If the output is blank, skip to the next step. If there are any remaining CSRs, the kubectl command lists them. To remove the CSRs, run the following command:

```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete csr $(kubectl get csr -o \
  jsonpath="{.items[?(@.spec.username==\"system:serviceaccount:WEBHOOK_NAMESPACE:pod-identity-webhook\")].metadata.name}")
```
Replace WEBHOOK_NAMESPACE with the namespace where you installed the workload identity webhook, for example, workload-identity-webhook.

Follow the steps in Create the webhook to deploy the new webhook version.
After you deploy the new webhook version, restart the Pods that use the webhook. You can restart your Pods by upgrading your user cluster.
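If a full cluster upgrade is more than you need just to recreate Pods, one possible alternative (a sketch, not part of the official procedure) is to restart the workloads directly so that replacement Pods pass through the upgraded webhook. This assumes your workloads are managed by Deployments in ${WORKLOAD_NAMESPACE}, as in the earlier sections:

```shell
# Sketch: restart every Deployment in the workload namespace so that new
# Pods are admitted through the upgraded webhook. Assumes the tunnel on
# localhost:8118 and ${WORKLOAD_NAMESPACE} are set up as in the
# preceding sections.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl rollout restart deployment --namespace ${WORKLOAD_NAMESPACE}
```

Pods created by other controllers (StatefulSets, DaemonSets, bare Pods) would need to be restarted separately.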
Cleaning up
This section shows you how to remove resources that you created earlier in this document.
Clean up the service account and its associated IAM role
To delete the service account and its associated IAM role, perform the following steps:
Clean up the service account:
```
env HTTPS_PROXY=http://localhost:8118 \
  kubectl delete sa KUBERNETES_SERVICE_ACCOUNT --namespace WORKLOAD_IDENTITY_NAMESPACE
```
Replace the following:
- KUBERNETES_SERVICE_ACCOUNT: the name of the new Kubernetes service account
- WORKLOAD_IDENTITY_NAMESPACE: the name of the namespace where workloads run
Clean up the AWS IAM role. Choose from one of the following:
Delete the AWS IAM role with the AWS console.
Delete the role with the AWS command-line tool using the following commands:
```
aws iam detach-role-policy \
  --role-name=${AWS_ROLE_NAME} \
  --policy-arn=${AWS_POLICY}

aws iam delete-role --role-name=${AWS_ROLE_NAME}
```
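IAM refuses to delete a role that still has managed policies attached. If you're not certain that ${AWS_POLICY} is the only attachment, a sketch like the following detaches everything before deleting the role (it assumes the AWS CLI is configured and ${AWS_ROLE_NAME} is set, as in the earlier steps):

```shell
# Sketch: detach every managed policy from the role, then delete it.
# Assumes ${AWS_ROLE_NAME} is set and the AWS CLI is configured.
for policy_arn in $(aws iam list-attached-role-policies \
    --role-name=${AWS_ROLE_NAME} \
    --query 'AttachedPolicies[].PolicyArn' --output text); do
  aws iam detach-role-policy \
    --role-name=${AWS_ROLE_NAME} \
    --policy-arn=${policy_arn}
done

aws iam delete-role --role-name=${AWS_ROLE_NAME}
```

Inline policies, if any, would additionally need aws iam delete-role-policy before the role can be deleted.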
Delete your user cluster
To delete your user cluster, perform the steps in Uninstalling GKE on AWS.
Clean up the AWS OIDC provider
After the user cluster is deleted, unregister and delete the OIDC provider on AWS by using either the following bash shell commands or the AWS console.
From your anthos-aws directory, use anthos-gke to switch context to your management service:

```
cd anthos-aws
anthos-gke aws management get-credentials
```
Delete the OIDC provider with the AWS command-line tool by running the following commands:
```
CLUSTER_ID=$(env HTTPS_PROXY=http://localhost:8118 \
  kubectl get awscluster ${CLUSTER_NAME} -o jsonpath='{.status.clusterID}')

PROVIDER_ARN=$(aws iam list-open-id-connect-providers \
  | jq '.OpenIDConnectProviderList' \
  | jq ".[] | select(.Arn | contains(\"${CLUSTER_ID}\"))" \
  | jq '.Arn' | tr -d '"')

aws iam delete-open-id-connect-provider \
  --open-id-connect-provider-arn=${PROVIDER_ARN}
```
You receive confirmation that the AWS OIDC provider is deleted.
What's next
- Learn about AWS IAM Roles for Service Accounts (IRSA), which GKE on AWS uses for workload identity.
- Learn about Using workload identity with Google Cloud.