About built-in ComputeClasses in GKE


This page describes the ComputeClasses that Google Kubernetes Engine (GKE) installs in your clusters. You learn about the name, availability, and node configuration of each built-in ComputeClass. This page is for platform engineers and app operators who want to understand which ComputeClasses are available and choose the optimal class for specific workloads.

You should already be familiar with ComputeClasses.

Overview of built-in ComputeClasses

Many GKE workloads, such as web servers or small-scale batch jobs, are general-purpose workloads that don't require specialized hardware. For these workloads, the priority is often to reduce the overhead associated with manually managing the node infrastructure and autoscaling configuration.

GKE has various built-in ComputeClasses for use cases such as running Autopilot workloads in Standard clusters or placing fault-tolerant general-purpose workloads on Spot VMs. Use a built-in ComputeClass for workloads that don't require specific hardware (such as GPUs) or specific node settings (such as Linux sysctl flags). If your workloads need more specialized hardware, use a custom ComputeClass.

Available built-in ComputeClasses in GKE

The following built-in ComputeClasses are available in GKE:

autopilot

Creates on-demand nodes that use the Autopilot container-optimized compute platform. This ComputeClass is the default for Autopilot clusters in any GKE version, but is available for explicit selection only in specific GKE versions.

This is an Autopilot ComputeClass, which means that GKE manages the nodes for you even in Standard clusters. You can use this ComputeClass to run Autopilot mode workloads in Standard clusters.

Available in Autopilot clusters and Standard clusters that are enrolled in the Rapid release channel and run GKE version 1.33.1-gke.1107000 or later.

autopilot-spot

Creates Spot VMs that use the Autopilot container-optimized compute platform. This ComputeClass is applied by default to any Pods in Autopilot clusters that explicitly select Spot VMs in the Pod specification.

This is an Autopilot ComputeClass, which means that GKE manages the nodes for you even in Standard clusters. You can use this ComputeClass to run Autopilot mode workloads in Standard clusters.

Available in Autopilot clusters and Standard clusters that are enrolled in the Rapid release channel and run GKE version 1.33.1-gke.1107000 or later.
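
For example, the following Pod manifest explicitly selects Spot VMs by using the cloud.google.com/gke-spot node label. In an Autopilot cluster, GKE applies the autopilot-spot ComputeClass to this Pod by default. This is a minimal sketch; the Pod name, image, and resource requests are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: spot-example
spec:
  nodeSelector:
    # Request Spot VMs. In Autopilot clusters, GKE applies the
    # autopilot-spot ComputeClass to this Pod by default.
    cloud.google.com/gke-spot: "true"
  containers:
  - name: hello-app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
    resources:
      requests:
        cpu: "250m"
        memory: "512Mi"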

Pricing

Autopilot bills you differently depending on the ComputeClass that your Pods request. For more information, see Google Kubernetes Engine pricing.

Built-in ComputeClass selection in workloads

To select a built-in or custom ComputeClass when you deploy a GKE workload, add a nodeSelector for the cloud.google.com/compute-class node label in your workload manifest, as in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      nodeSelector:
        # Replace with the name of a compute class
        cloud.google.com/compute-class: COMPUTE_CLASS 
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "250m"
            memory: "4Gi"

In this example, replace COMPUTE_CLASS with the name of a ComputeClass, such as autopilot. You can't select more than one ComputeClass in a specific workload.

When you deploy a workload that selects a ComputeClass, GKE uses the properties of that ComputeClass to create new nodes to run the Pods. For example, if you select the autopilot built-in ComputeClass in a workload, GKE runs those Pods in Autopilot mode.
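
For example, the following Pod manifest selects the autopilot built-in ComputeClass, so GKE runs the Pod on the Autopilot container-optimized compute platform, even in a Standard cluster. This is a minimal sketch; the Pod name and resource requests are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: autopilot-example
spec:
  nodeSelector:
    # Select the autopilot built-in ComputeClass
    cloud.google.com/compute-class: autopilot
  containers:
  - name: hello-app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
    resources:
      requests:
        cpu: "500m"
        memory: "2Gi"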

Default application of built-in ComputeClasses

You can set any ComputeClass in a cluster as the default ComputeClass for a specific namespace. GKE applies that default class to any Pods that don't explicitly select a ComputeClass.

For example, consider a Standard cluster that runs many general-purpose web server Pods in a serving namespace. If you set the autopilot built-in ComputeClass as the default for the namespace, your web server Pods run on the Autopilot container-optimized compute platform by default, with no changes needed to the workload specifications. Any workloads in that namespace that need different hardware can add a selector for a different ComputeClass.
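
The following Namespace manifest is a minimal sketch of that setup. It assumes that you set the namespace default by adding the cloud.google.com/default-compute-class label to the namespace; see the page linked below for the exact configuration steps.

apiVersion: v1
kind: Namespace
metadata:
  name: serving
  labels:
    # Assumed label for setting the namespace default ComputeClass;
    # see the linked page for the exact mechanism.
    cloud.google.com/default-compute-class: autopilot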

For more information about setting a ComputeClass as the default in a namespace, see Configure a default ComputeClass for a namespace.

What's next