[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-03。"],[],[],null,["Overview\n\nTo attach a cluster means to connect it to Google Cloud by registering it with\nGoogle Cloud [Fleet management](/anthos/fleet-management/docs) and\ninstalling the GKE attached clusters software on it.\n\nYou can attach a cluster using the gcloud CLI or Terraform. To learn\nhow to create and attach an EKS cluster using Terraform, check the\n[GitHub repository of samples for GKE attached clusters](https://github.com/GoogleCloudPlatform/anthos-samples/tree/main/anthos-attached-clusters).\n\nThis page is for IT administrators and Operators who want to set up,\nmonitor, and manage cloud infrastructure. To learn more about common roles and\nexample tasks that we reference in Google Cloud content, see\n[Common GKE user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).\n\nTo attach an EKS cluster using gcloud, perform the\nfollowing steps.\n\nPrerequisites\n\nEnsure that your cluster meets the [cluster requirements](/kubernetes-engine/multi-cloud/docs/attached/eks/reference/cluster-prerequisites).\n\nWhen attaching your cluster, you must specify:\n\n- a supported Google Cloud [administrative region](/kubernetes-engine/multi-cloud/docs/attached/eks/reference/supported-regions) and\n- a platform version.\n\nThe administrative region is a Google Cloud region\nto administer your attached cluster from. You can choose any supported\nregion, but best practice is to choose the region geographically closest to\nyour cluster. No user data is stored in the administrative region.\n\nThe platform version is the version of GKE attached clusters to be installed on your\ncluster. You can list all supported versions by running the following command: \n\n gcloud container attached get-server-config \\\n --location=\u003cvar translate=\"no\"\u003eGOOGLE_CLOUD_REGION\u003c/var\u003e\n\nReplace \u003cvar translate=\"no\"\u003eGOOGLE_CLOUD_REGION\u003c/var\u003e with the name of the\nGoogle Cloud location to administer your cluster from.\n\nPlatform version numbering\n\nThese documents refer to the GKE attached clusters version as the platform version,\nto distinguish it from the Kubernetes version. GKE attached clusters uses the same\nversion numbering convention as GKE - for example, 1.21.5-gke.1. When attaching\nor updating your cluster, you must choose a platform version whose minor version\nis the same as or one level below the Kubernetes version of your cluster. For\nexample, you can attach a cluster running Kubernetes v1.22.\\* with\nGKE attached clusters platform version 1.21.\\* or 1.22.\\*.\n\nThis lets you upgrade your cluster to the next minor version before upgrading\nGKE attached clusters.\n\nAttach an EKS cluster **Note:** The default number of clusters that you can attach per project is 50. To increase this quota, contact [Google Cloud support](/support).\n\nTo attach your EKS cluster to Google Cloud\n[Fleet management](/anthos/fleet-management/docs), perform the following steps:\n\n1. 
Attach an EKS cluster

**Note:** The default number of clusters that you can attach per project is 50.
To increase this quota, contact [Google Cloud support](/support).

To attach your EKS cluster to Google Cloud
[Fleet management](/anthos/fleet-management/docs), perform the following steps:

1.  Ensure that your kubeconfig file has an entry for the cluster you'd like to
    attach:

        aws eks update-kubeconfig --region AWS_REGION \
            --name EKS_CLUSTER_NAME

2.  Retrieve the OIDC issuer URL with the following command:

        aws eks describe-cluster \
            --region AWS_REGION \
            --name EKS_CLUSTER_NAME \
            --query "cluster.identity.oidc.issuer" \
            --output text

    The output of this command is the URL of your OIDC issuer. Save this value
    for later use.

3.  Run this command to extract your cluster's kubeconfig context and store it
    in the `KUBECONFIG_CONTEXT` environment variable:

        KUBECONFIG_CONTEXT=$(kubectl config current-context)

4.  Use the
    [`gcloud container attached clusters register` command](/sdk/gcloud/reference/container/attached/clusters/register)
    to register the cluster:

        gcloud container attached clusters register CLUSTER_NAME \
            --location=GOOGLE_CLOUD_REGION \
            --fleet-project=PROJECT_NUMBER \
            --platform-version=PLATFORM_VERSION \
            --distribution=eks \
            --issuer-url=ISSUER_URL \
            --context=KUBECONFIG_CONTEXT \
            --kubeconfig=KUBECONFIG_PATH

    Replace the following:

    - `AWS_REGION`: the AWS region where your EKS cluster is located
    - `CLUSTER_NAME`: the name of your cluster. This can be the same
      `EKS_CLUSTER_NAME` you used in the preceding steps. The `CLUSTER_NAME`
      must comply with the
      [RFC 1123 Label Names standard](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
    - `GOOGLE_CLOUD_REGION`: the Google Cloud region to administer your cluster
      from
    - `PLATFORM_VERSION`: the GKE attached clusters version to use for the
      cluster
    - `PROJECT_NUMBER`: the number of the fleet host project where the cluster
      will be registered
    - `ISSUER_URL`: the issuer URL retrieved earlier
    - `KUBECONFIG_CONTEXT`: the kubeconfig context for accessing the EKS
      cluster, as extracted earlier
    - `KUBECONFIG_PATH`: the path to your kubeconfig file

    **Note:** If attaching your cluster fails, the system automatically rolls
    back any changes made to Google Cloud resources related to the cluster,
    such as the workload identity pool. This means the connection between your
    cluster and GKE attached clusters isn't established, but your actual EKS
    cluster remains unaffected. You can try again to attach the cluster after
    fixing the issue that caused the failure.
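After the register command completes, you can optionally verify the attachment
from the administrative region. This is a minimal sketch that reuses the same
placeholders as the preceding steps:

    # List the clusters attached in the administrative region.
    gcloud container attached clusters list \
        --location=GOOGLE_CLOUD_REGION

    # Inspect a single attached cluster, including its state and
    # platform version.
    gcloud container attached clusters describe CLUSTER_NAME \
        --location=GOOGLE_CLOUD_REGION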
Authorize Cloud Logging / Cloud Monitoring

**Note:** Starting with Kubernetes version 1.28, manually binding a policy to
authorize the `gke-system/gke-telemetry-agent` service account for log and
metric collection is no longer necessary. The required permissions are granted
to this service account automatically, so you can skip this section.

GKE attached clusters must be authorized before it can create and upload system
logs and metrics to Google Cloud.

To authorize the Kubernetes workload identity `gke-system/gke-telemetry-agent`
to write logs to Cloud Logging and metrics to Cloud Monitoring, run this
command:

    gcloud projects add-iam-policy-binding GOOGLE_PROJECT_ID \
        --member="serviceAccount:GOOGLE_PROJECT_ID.svc.id.goog[gke-system/gke-telemetry-agent]" \
        --role=roles/gkemulticloud.telemetryWriter

Replace `GOOGLE_PROJECT_ID` with the cluster's Google Cloud project ID.

This IAM binding grants all clusters in the Google Cloud project access to
upload logs and metrics. You only need to run the command once, after creating
the first cluster in the project.

Adding this IAM binding will fail unless at least one cluster has been created
in your Google Cloud project, because the workload identity pool it refers to
(`GOOGLE_PROJECT_ID.svc.id.goog`) isn't provisioned until cluster creation.
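To confirm that the binding is in place, for example after attaching the first
cluster in a project, you can filter the project's IAM policy for the telemetry
writer role. This sketch uses standard gcloud filtering flags and the same
`GOOGLE_PROJECT_ID` placeholder:

    # Show which members hold the telemetry writer role on the project.
    gcloud projects get-iam-policy GOOGLE_PROJECT_ID \
        --flatten="bindings[].members" \
        --filter="bindings.role:roles/gkemulticloud.telemetryWriter" \
        --format="value(bindings.members)"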