# Using workload identity with Google Cloud

This guide describes how to configure workload identity on GKE on AWS to control workload access to Google Cloud resources. It includes an example of how to access Google Cloud resources from your cluster using this identity.

For information about using workload identities with AWS IAM accounts to control access to AWS resources, see [Using workload identity with AWS](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/workload-identity-aws).

Overview
--------

Workload identity uses Google Cloud IAM permissions to control access to Google Cloud resources. With workload identity, you can assign a different IAM role to each workload. This fine-grained control of permissions lets you follow the principle of [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege). Without workload identity, you must assign Google Cloud IAM roles to your GKE on AWS nodes, which gives all workloads on those nodes the same permissions as the node itself.
Prerequisites
-------------

- [Create a user cluster](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/creating-user-cluster) with Kubernetes version v1.20 or later.

- If your AWS VPC uses a proxy or firewall, allowlist the following URLs:

  - `securetoken.googleapis.com`
  - `iamcredentials.googleapis.com`
  - `sts.googleapis.com`

- From your `anthos-aws` directory, use `anthos-gke` to switch context to your user cluster.

  ```sh
  cd anthos-aws
  env HTTPS_PROXY=http://localhost:8118 \
    anthos-gke aws clusters get-credentials CLUSTER_NAME
  ```

  Replace `CLUSTER_NAME` with your user cluster name.

- Enable the four services required for this feature with the following commands:

  ```sh
  gcloud services enable securetoken.googleapis.com
  gcloud services enable iam.googleapis.com
  gcloud services enable iamcredentials.googleapis.com
  gcloud services enable sts.googleapis.com
  ```

> **Note:** The user cluster field `spec.controlPlane.workloadIdentity.oidcDiscoveryGCSBucket` is not required for Google Cloud workload identity federation, because the OIDC JSON Web Key Set (JWKS) information is stored in Google Cloud, which for security reasons does not support OIDC discovery over the internet.

Compose the WI pool and provider names
--------------------------------------

Each Google Cloud project automatically creates a managed workload identity pool with a name in the form `PROJECT_ID.svc.id.goog`. Similarly, Google Cloud creates an identity provider whose name follows the pattern `https://gkehub.googleapis.com/projects/PROJECT_ID/locations/global/memberships/MEMBERSHIP_ID`. For more information on workload identity pools, see [Fleet-enabled components](/anthos/multicluster-management/fleets#fleet-enabled-components). Compose these names from your project ID and membership ID as shown here:

```sh
export PROJECT_ID=USER_PROJECT_NAME
export CLUSTER_MEMBERSHIP_ID=PROJECT_MEMBERSHIP_NAME
export IDP="https://gkehub.googleapis.com/projects/${PROJECT_ID}/locations/global/memberships/${CLUSTER_MEMBERSHIP_ID}"
export WI_POOL="${PROJECT_ID}.svc.id.goog"
```

Replace the following:

- `USER_PROJECT_NAME` with your Google Cloud project ID
- `PROJECT_MEMBERSHIP_NAME` with your cluster's fleet membership name
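If you don't know your cluster's membership name, you can look it up by listing the fleet memberships in your project. This lookup isn't part of the original steps; the sketch below assumes a recent Google Cloud CLI (on older versions the command group is `gcloud container hub memberships`):

```sh
# List the fleet memberships registered to the project; the NAME column
# is the value to use for PROJECT_MEMBERSHIP_NAME above.
gcloud container fleet memberships list --project=${PROJECT_ID}
```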
Create an IAM policy binding
----------------------------

Create a policy binding that allows a Kubernetes service account (KSA) to impersonate a Google Cloud service account (GSA).

```sh
export K8S_NAMESPACE=KUBERNETES_NAMESPACE
export KSA_NAME=KUBERNETES_SA_NAME
export GCP_SA_EMAIL="WORKLOAD_IDENTITY_TEST@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$WI_POOL[$K8S_NAMESPACE/$KSA_NAME]" $GCP_SA_EMAIL
```

Replace the following:

- `KUBERNETES_NAMESPACE` with the Kubernetes namespace where the Kubernetes service account is defined
- `KUBERNETES_SA_NAME` with the name of the Kubernetes service account attached to the application
- `WORKLOAD_IDENTITY_TEST` with a workload name of your choice

> **Note:** All KSAs with the same name and namespace in this project map to the same GSA. This lets workloads be portable between clusters in the same project.
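The binding command above assumes the GSA already exists and holds whatever Google Cloud roles your workload needs; the guide itself doesn't show that setup. A minimal sketch, assuming you are creating a new GSA and granting it the Cloud Storage privileges used by the Python sample at the end of this guide (create the GSA before running the binding above if it doesn't exist yet):

```sh
# Create the GSA. The name must match the WORKLOAD_IDENTITY_TEST
# placeholder used in GCP_SA_EMAIL above.
gcloud iam service-accounts create WORKLOAD_IDENTITY_TEST \
  --project=${PROJECT_ID}

# Grant the GSA the role the workload needs. roles/storage.admin matches
# the "Cloud Storage Admin" privileges assumed by the bucket-listing
# sample; a narrower Storage role also works if you only list buckets.
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${GCP_SA_EMAIL}" \
  --role=roles/storage.admin
```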
Create an SDK config map
------------------------

Execute the shell script below to store the workload identity details in a ConfigMap. When a Pod mounts the ConfigMap, the Google Cloud CLI can read the workload identity details.

```sh
cat << EOF > cfmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: ${K8S_NAMESPACE}
  name: my-cloudsdk-config
data:
  config: |
    {
      "type": "external_account",
      "audience": "identitynamespace:${WI_POOL}:${IDP}",
      "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${GCP_SA_EMAIL}:generateAccessToken",
      "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
      "token_url": "https://sts.googleapis.com/v1/token",
      "credential_source": {
        "file": "/var/run/secrets/tokens/gcp-ksa/token"
      }
    }
EOF

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f cfmap.yaml
```

Create a Kubernetes service account
-----------------------------------

Create a KSA on your user cluster with the same name and namespace that you used in the IAM binding.

```sh
cat << EOF > k8s-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${KSA_NAME}
  namespace: ${K8S_NAMESPACE}
EOF

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f k8s-service-account.yaml
```

Create a Pod
------------

Next, create a Pod with the service account token projection and the ConfigMap created above.

1. Create the sample Pod YAML file.

   ```sh
   cat << EOF > sample-pod.yaml
   apiVersion: v1
   kind: Pod
   metadata:
     name: sample-pod
     namespace: ${K8S_NAMESPACE}
   spec:
     serviceAccountName: ${KSA_NAME}
     containers:
     - command:
       - /bin/bash
       - -c
       - while :; do echo '.'; sleep 500 ; done
       image: google/cloud-sdk
       name: cloud-sdk
       env:
       - name: GOOGLE_APPLICATION_CREDENTIALS
         value: /var/run/secrets/tokens/gcp-ksa/google-application-credentials.json
       volumeMounts:
       - name: gcp-ksa
         mountPath: /var/run/secrets/tokens/gcp-ksa
         readOnly: true
     volumes:
     - name: gcp-ksa
       projected:
         defaultMode: 420
         sources:
         - serviceAccountToken:
             path: token
             audience: ${WI_POOL}
             expirationSeconds: 172800
         - configMap:
             name: my-cloudsdk-config
             optional: false
             items:
             - key: "config"
               path: "google-application-credentials.json"
   EOF
   ```

2. Apply the Pod's YAML to your cluster.

   ```sh
   env HTTPS_PROXY=http://localhost:8118 \
     kubectl apply -f sample-pod.yaml
   ```
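Before continuing, you can optionally verify that the Pod started and that both the projected token and the credential configuration are mounted. This check isn't part of the original guide; it only uses standard kubectl commands:

```sh
# Confirm the Pod reaches the Running state.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl get pod sample-pod -n ${K8S_NAMESPACE}

# The projected volume should contain both `token` and
# `google-application-credentials.json`.
env HTTPS_PROXY=http://localhost:8118 \
  kubectl exec sample-pod -n ${K8S_NAMESPACE} -- ls /var/run/secrets/tokens/gcp-ksa
```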
Using Google Cloud workload identity
------------------------------------

### Supported SDK versions

To use the Google Cloud workload identity feature, you must build your code with an SDK that supports it. For a list of SDK versions that support Google Cloud workload identity, see [Fleet Workload Identity](/anthos/multicluster-management/fleets/workload-identity#authenticate_from_your_code).

### Sample code using workload identity

This section includes sample Python code that uses Google Cloud workload identity. The service account in this example uses an identity with ["Cloud Storage Admin"](/storage/docs/access-control/iam-roles) privileges to list all of the Google Cloud project's Cloud Storage buckets.

1. Run a shell within the Pod.

   ```sh
   env HTTPS_PROXY=http://localhost:8118 \
     kubectl exec -it sample-pod -n ${K8S_NAMESPACE} -- bash
   ```

2. Run a script to list the project's storage buckets.

   ```sh
   # Execute these commands inside the Pod.
   pip install --upgrade google-cloud-storage

   cat << EOF > sample-list-bucket.py
   from google.cloud import storage
   storage_client = storage.Client()
   buckets = storage_client.list_buckets()

   for bucket in buckets:
       print(bucket.name)
   EOF

   env GOOGLE_CLOUD_PROJECT=USER_PROJECT_NAME \
     python3 sample-list-bucket.py
   ```

   Replace `USER_PROJECT_NAME` with your Google Cloud project ID.

For further information
-----------------------

- [Fleet Workload Identity](https://cloud.google.com/anthos/multicluster-management/fleets/workload-identity)
- [Workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation)
- [Access resources from an OIDC identity provider](https://cloud.google.com/iam/docs/access-resources-oidc) (Kubernetes clusters are OIDC identity providers)
- [Using workload identity with AWS](/kubernetes-engine/multi-cloud/docs/aws/previous-generation/how-to/workload-identity-aws)