Deploy Autopilot workloads on Arm architecture


This page shows you how to configure your Google Kubernetes Engine (GKE) Autopilot deployments to request nodes that are backed by Arm architecture.

About Arm architecture in Autopilot

Autopilot clusters offer compute classes for workloads that have specific hardware requirements. Some of these compute classes support multiple CPU architectures, such as amd64 and arm64.

Use cases for Arm nodes

Nodes with Arm architecture offer more cost-efficient performance than similar x86 nodes. You should select Arm for your Autopilot workloads in situations such as the following:

  • Your environment relies on Arm architecture for building and testing.
  • You're developing applications for Android devices that run on Arm CPUs.
  • You use multi-arch images and want to optimize costs while running your workloads.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

How to request Arm nodes in Autopilot

To tell Autopilot to run your Pods on Arm nodes, specify one of the following labels in a nodeSelector or node affinity rule:

  • kubernetes.io/arch: arm64. GKE places Pods on T2A machine types by default. If T2A machines are unavailable, GKE places Pods on C4A machine types.
  • cloud.google.com/machine-family: ARM_MACHINE_SERIES: Replace ARM_MACHINE_SERIES with an Arm machine series like C4A or T2A. GKE places Pods on the specified series.

By default, using either of these labels lets GKE place other Pods on the same node if that node has available capacity. To request a dedicated node for each Pod, add the cloud.google.com/compute-class: Performance label to your manifest. For details, see Optimize Autopilot Pod performance by choosing a machine series.
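As an illustration of the machine-family option, the following Pod spec fragment is a sketch that pins Pods to a specific Arm machine series; the series value shown (c4a) is one example, and you can substitute any supported Arm series:

```yaml
# Pod spec fragment (sketch): request nodes from a specific Arm
# machine series. The value "c4a" is illustrative.
spec:
  nodeSelector:
    cloud.google.com/machine-family: c4a
```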

Alternatively, you can use the Scale-Out compute class label with the arm64 label to request T2A machine types. You can also request Arm architecture for Spot Pods.
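Both of these options can be expressed as nodeSelector labels. The following fragment is a sketch that combines the Scale-Out compute class with the arm64 architecture label; the commented-out line shows how a Spot Pod request would be added:

```yaml
# Pod spec fragment (sketch): Scale-Out compute class on Arm nodes.
spec:
  nodeSelector:
    cloud.google.com/compute-class: Scale-Out
    kubernetes.io/arch: arm64
    # To run these Pods as Spot Pods, also add:
    # cloud.google.com/gke-spot: "true"
```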

When you deploy your workload, Autopilot does the following:

  1. Automatically provisions Arm nodes to run your Pods.
  2. Automatically taints the new nodes to prevent non-Arm Pods from being scheduled on those nodes.
  3. Automatically adds a toleration to your Arm Pods to allow scheduling on the new nodes.
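Steps 2 and 3 above use the standard Kubernetes architecture label as the taint key. The exact taint and toleration below are a sketch of what this typically looks like, not guaranteed output from GKE:

```yaml
# Taint added to the Arm node (sketch):
taints:
- key: kubernetes.io/arch
  value: arm64
  effect: NoSchedule
# Matching toleration injected into the Arm Pods (sketch):
tolerations:
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule
```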

Example request for Arm architecture

The following example specifications show you how to use a node selector or a node affinity rule to request Arm architecture in Autopilot.

nodeSelector

The following example manifest shows you how to request Arm nodes in a nodeSelector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-arm
  template:
    metadata:
      labels:
        app: nginx-arm
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Performance
        kubernetes.io/arch: arm64
      containers:
      - name: nginx-arm
        image: nginx
        resources:
          requests:
            cpu: 2000m
            memory: 2Gi

nodeAffinity

You can use node affinity to request Arm nodes, and you can choose which type of node affinity rule to apply:

  • requiredDuringSchedulingIgnoredDuringExecution: Must use the specified compute class and architecture.
  • preferredDuringSchedulingIgnoredDuringExecution: Use the specified compute class and architecture on a best-effort basis. For example, if an existing x86 node has allocatable capacity, GKE places your Pod on that x86 node instead of provisioning a new Arm node. Unless you use a multi-arch image manifest, your Pod will crash on the mismatched architecture. We strongly recommend that you explicitly require the specific architecture that you want.

The following example manifest requires the Performance class and Arm nodes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-arm
  template:
    metadata:
      labels:
        app: nginx-arm
    spec:
      terminationGracePeriodSeconds: 25
      containers:
      - name: nginx-arm
        image: nginx
        resources:
          requests:
            cpu: 2000m
            memory: 2Gi
            ephemeral-storage: 1Gi
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/compute-class
                operator: In
                values:
                - Performance
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
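For comparison, a best-effort version of the same rule would use preferredDuringSchedulingIgnoredDuringExecution. The following fragment is a sketch of how the affinity rule above could be relaxed; the weight value is illustrative:

```yaml
# Sketch: best-effort Arm placement. GKE might still schedule the
# Pod on x86 nodes, so use a multi-arch image with this approach.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - arm64
```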

Recommendations

  • Build and use multi-arch images as part of your pipeline. Multi-arch images ensure that your Pods run even if they're placed on x86 nodes.
  • Explicitly request architecture and compute classes in your workload manifests. If you don't, Autopilot uses the default architecture of the selected compute class, which might not be Arm.

Availability

You can deploy Autopilot workloads on Arm architecture in Google Cloud locations that support Arm architecture. For details, see Available regions and zones.

Troubleshooting

For common errors and troubleshooting information, refer to Troubleshooting Arm workloads.

What's next