
Deploying high-throughput workloads on GKE Autopilot with the Scale-Out compute class

July 19, 2022
William Denniss

Product Manager, Google Kubernetes Engine

Gari Singh

Product Manager

GKE Autopilot is a full-featured, fully managed Kubernetes platform that combines the full power of the Kubernetes API with a hands-off approach to cluster management and operations. Since launching Autopilot last year, we’ve continued to innovate, adding capabilities to meet the demands of your workloads. We’re excited to introduce the concept of compute classes in Autopilot, together with the Scale-Out compute class, which offers high-performance x86 and Arm compute, now available in Preview.

Autopilot compute classes are a curated set of hardware configurations on which you can deploy your workloads. In this initial release, we are introducing the Scale-Out compute class, which is designed for workloads that run a single thread per core and scale horizontally. The Scale-Out compute class currently supports two hardware architectures — x86 and Arm — allowing you to choose whichever one offers the best price-performance for your specific workload. The Scale-Out compute class joins our original, general-purpose compute option. It is designed for workloads that benefit from the fastest CPU platforms available on Google Cloud, and offers greater cost-efficiency for applications with high CPU utilization.

We also heard from you that some workloads would benefit from higher-performance compute. To serve this need, x86 workloads running on the Scale-Out compute class are currently served by 3rd Gen AMD EPYC™ processors, with Simultaneous Multithreading (SMT) disabled, achieving the highest per-core benchmark among x86 platforms in Google Cloud.

And for the first time, Autopilot supports Arm workloads. Currently utilizing the new Tau T2A VMs running on Ampere® Altra® Arm-based processors, the Scale-Out compute class gives your Arm workloads price-performance benefits combined with a thriving, open, end-to-end platform-independent ecosystem. Autopilot Arm Pods are currently available in us-central1, europe-west4, and asia-southeast1.

Deploying Arm workloads using the Scale-Out compute class

To deploy your Pods on a specific compute class and CPU, simply add a Kubernetes nodeSelector or node affinity rule with the following labels in your deployment specification:
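For example, a nodeSelector targeting the Scale-Out compute class and the Arm architecture uses the `cloud.google.com/compute-class` and `kubernetes.io/arch` labels:

```yaml
nodeSelector:
  cloud.google.com/compute-class: "Scale-Out"
  kubernetes.io/arch: arm64
```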


To run an Arm workload on Autopilot, you need a cluster running version 1.24.1-gke.1400 or later and in one of the supported regions. You can create a new cluster at this version, or upgrade an existing one. To create a new Arm-supported cluster on the CLI, use the following:
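As a sketch (the cluster name is a placeholder; substitute any of the supported regions listed above):

```
gcloud container clusters create-auto CLUSTER_NAME \
    --region us-central1 \
    --cluster-version 1.24.1-gke.1400
```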


For example, the following Deployment specification will deploy the official Nginx image on the Arm architecture:
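A minimal sketch of such a Deployment — the name, replica count, and resource requests are illustrative, while the nodeSelector labels are the ones described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-arm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-arm
  template:
    metadata:
      labels:
        app: nginx-arm
    spec:
      # Schedule these Pods on Scale-Out Arm nodes
      nodeSelector:
        cloud.google.com/compute-class: "Scale-Out"
        kubernetes.io/arch: arm64
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
```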


Deploying x86 workloads on the Scale-Out compute class

The Scale-Out compute class also supports the x86 architecture. To use it, simply add a selector for the `Scale-Out` compute class. You can either explicitly set the architecture with `kubernetes.io/arch: amd64` or omit that label from the selector, as x86 is the default.
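For example, a nodeSelector for an x86 Scale-Out workload needs only the compute-class label:

```yaml
nodeSelector:
  cloud.google.com/compute-class: "Scale-Out"
  # kubernetes.io/arch: amd64 may be added explicitly, but x86 is the default
```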

To run an x86 Scale-Out workload on Autopilot, you need a cluster running version 1.24.1-gke.1400 or later and in one of the supported regions. The same CLI command from the example above will get you an x86 Scale-Out-capable GKE Autopilot cluster.


Deploying Spot Pods using the Scale-Out compute class

You can also combine compute classes with Spot Pods by adding the label `cloud.google.com/gke-spot: "true"` to the nodeSelector:
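For example, combining the Scale-Out compute class with Spot Pods:

```yaml
nodeSelector:
  cloud.google.com/compute-class: "Scale-Out"
  cloud.google.com/gke-spot: "true"
```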


Spot Pods are supported for both the x86 and Arm architectures when using the Scale-Out compute class.

Try the Scale-Out compute class on GKE Autopilot today!

To help you get started, check out our guides on creating an Autopilot cluster, getting started with compute classes, building images for Arm workloads, and deploying Arm workloads on GKE Autopilot.
