# Deploy Autopilot workloads on Arm architecture

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview)

*Last updated: 2025-09-01 (UTC).*

***

This page shows you how to configure your Google Kubernetes Engine (GKE)
Autopilot deployments to request nodes that are backed by Arm
architecture.

About Arm architecture in Autopilot
-----------------------------------

Autopilot clusters offer
[*compute classes*](/kubernetes-engine/docs/concepts/autopilot-compute-classes)
for workloads that have specific hardware requirements. Some of these compute
classes support multiple CPU architectures, such as `amd64` and `arm64`.

Use cases for Arm nodes
-----------------------

Nodes with Arm architecture offer more cost-efficient performance than similar
x86 nodes. You should select Arm for your Autopilot workloads in
situations such as the following:

- Your environment relies on Arm architecture for building and testing.
- You're developing applications for Android devices that run on Arm CPUs.
- You use multi-arch images and want to optimize costs while running your
  workloads.

Before you begin
----------------

Before you start, make sure that you have performed the following tasks:

- Enable the Google Kubernetes Engine API.
  [Enable Google Kubernetes Engine API](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com)
- If you want to use the Google Cloud CLI for this task,
  [install](/sdk/docs/install) and then
  [initialize](/sdk/docs/initializing) the gcloud CLI.
  If you previously installed the gcloud CLI, get the latest version by
  running `gcloud components update`.

  **Note:** For existing gcloud CLI installations, make sure to set the
  `compute/region` [property](/sdk/docs/properties#setting_properties). If you
  primarily use zonal clusters, set the `compute/zone` instead. By setting a
  default location, you can avoid errors in the gcloud CLI like the following:
  `One of [--zone, --region] must be supplied: Please specify location`. You
  might need to specify the location in certain commands if the location of
  your cluster differs from the default that you set.

- Review the [requirements and limitations for Arm
  nodes](/kubernetes-engine/docs/concepts/arm-on-gke#arm-requirements-limitations).
- Ensure that you have quota for the
  [C4A](/compute/docs/general-purpose-machines#c4a_series) or
  [Tau T2A](/compute/docs/general-purpose-machines#t2a_machines) Compute Engine
  machine types.
- Ensure that you have a Pod with a container image that's built for Arm
  architecture.

How to request Arm nodes in Autopilot
-------------------------------------

To tell Autopilot to run your Pods on Arm nodes, specify one of the
following labels in a
[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
or [node
affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
rule:

- `kubernetes.io/arch: arm64`. GKE places Pods on `C4A` machine types by
  default for clusters running version 1.31.3-gke.1056000 and later. If the
  cluster is running an earlier version, GKE places Pods on `T2A`
  machine types.
- `cloud.google.com/machine-family: `<var translate="no">ARM_MACHINE_SERIES</var>.
  Replace <var translate="no">ARM_MACHINE_SERIES</var> with an Arm machine
  series like `C4A` or `T2A`.
  GKE places Pods on the specified series.

By default, using either of the labels lets GKE place other Pods
on the same node if there's available capacity on that node. To request a
dedicated node for each Pod, add the `cloud.google.com/compute-class:
Performance` label to your manifest. For details, see [Optimize
Autopilot Pod performance by choosing a machine
series](/kubernetes-engine/docs/how-to/performance-pods).

Or, you can use the `Scale-Out` label with the `arm64` label to request `T2A`.
You can also request Arm architecture for [Spot Pods](/kubernetes-engine/docs/how-to/autopilot-spot-pods).

When you deploy your workload, Autopilot does the following:

1. Automatically provisions Arm nodes to run your Pods.
2. Automatically taints the new nodes to prevent non-Arm Pods from being
   scheduled on those nodes.
3. Automatically adds a toleration to your Arm Pods to allow scheduling on
   the new nodes.

Example request for Arm architecture
------------------------------------

The following example specifications show you how to use a node selector or a
node affinity rule to request Arm architecture in Autopilot.

### nodeSelector

The following example manifest shows you how to request Arm nodes in a
nodeSelector:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-arm
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-arm
      template:
        metadata:
          labels:
            app: nginx-arm
        spec:
          nodeSelector:
            cloud.google.com/compute-class: Performance
            kubernetes.io/arch: arm64
          containers:
          - name: nginx-arm
            image: nginx
            resources:
              requests:
                cpu: 2000m
                memory: 2Gi

### nodeAffinity

You can use
[node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
to request Arm nodes.
You can also specify the type of node affinity to use:

- `requiredDuringSchedulingIgnoredDuringExecution`: Must use the specified
  compute class and architecture.
- `preferredDuringSchedulingIgnoredDuringExecution`: Use the specified compute
  class and architecture on a best-effort basis. For example, if an existing
  x86 node is allocatable, GKE places your Pod on the x86 node instead of
  provisioning a new Arm node. Unless you're using a multi-arch image
  manifest, a Pod with an Arm-only image will crash when it runs on an x86
  node. We strongly recommend that you explicitly request the specific
  architecture that you want.

The following example manifest *requires* the `Performance` class and Arm
nodes:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-arm
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-arm
      template:
        metadata:
          labels:
            app: nginx-arm
        spec:
          terminationGracePeriodSeconds: 25
          containers:
          - name: nginx-arm
            image: nginx
            resources:
              requests:
                cpu: 2000m
                memory: 2Gi
                ephemeral-storage: 1Gi
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: cloud.google.com/compute-class
                    operator: In
                    values:
                    - Performance
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                    - arm64

Recommendations
---------------

- [Build and use multi-arch images](/kubernetes-engine/docs/how-to/build-multi-arch-for-arm)
  as part of your pipeline. Multi-arch images ensure that your Pods run even
  if they're placed on x86 nodes.
- Explicitly request architecture and compute classes in your workload
  manifests. If you don't, Autopilot uses the default architecture of the
  selected compute class, which might not be Arm.

Availability
------------

You can deploy Autopilot workloads on Arm architecture in
Google Cloud locations that support Arm architecture.
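As one way to check availability before you deploy, you can list the zones
that offer a given Arm machine series with the gcloud CLI. This is a minimal
sketch, assuming an installed and authenticated gcloud CLI; the `^t2a` name
pattern is an illustrative filter for the Tau T2A series, so adjust it (for
example, to `^c4a`) for the series you plan to use:

```shell
# List the zones that offer Tau T2A machine types in your project.
# The "^t2a" pattern is an example; swap it for the series you need.
gcloud compute machine-types list \
    --filter="name~'^t2a'" \
    --format="value(zone)" | sort -u
```

If the output doesn't include your cluster's location, Autopilot can't
provision nodes from that series there.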
For details, see
[Available regions and zones](/compute/docs/regions-zones#available).

Troubleshooting
---------------

For common errors and troubleshooting information, refer to
[Troubleshooting Arm workloads](/kubernetes-engine/docs/troubleshooting/troubleshooting-arm-workloads).

What's next
-----------

- [Learn more about Autopilot cluster architecture](/kubernetes-engine/docs/concepts/autopilot-architecture).
- [Learn about the lifecycle of Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/).
- [Learn about the available Autopilot compute classes](/kubernetes-engine/docs/concepts/autopilot-compute-classes).
- [Read about the default, minimum, and maximum resource requests for each
  platform](/kubernetes-engine/docs/concepts/autopilot-resource-requests).