This documentation is for the most recent version of Anthos clusters on AWS, released on November 3rd. See the Release notes for more information. For documentation on the previous generation of Anthos clusters on AWS, see Previous generation.

Run Arm workloads in Anthos clusters on AWS


Anthos clusters on AWS lets you run Arm workloads backed by Arm-based AWS Graviton processors. Graviton processors are designed to offer the best price-performance for a wide variety of workloads, along with strong security for your applications.

This page explains how to create an Arm node pool, provides an overview of why multi-arch images are the recommended way to deploy Arm workloads, and explains how to schedule Arm workloads.

Create an Arm node pool

Before you begin

Before you create node pools for your Arm workloads, you need the following resources:

  • The name of an existing AWS cluster to create the node pool in. The cluster must run Kubernetes version 1.24 or later.
  • An IAM instance profile for the node pool VMs.
  • A subnet where the node pool VMs will run.
  • At least one Linux x86 node. Because all nodes in a node pool must have the same configuration, including the same operating system, a cluster with an Arm node pool must have a second node pool with at least one Linux x86 node. This node is used to run cluster infrastructure components such as the Connect Agent. For details on how to create a node pool in Anthos clusters on AWS, see Create a node pool.

Create an Arm node pool

Anthos clusters on AWS supports node pools built on the Canonical Ubuntu arm64 minimal node image and containerd runtime.

To create an Arm node pool and add it to an existing cluster, run the following command:

gcloud container aws node-pools create NODE_POOL_NAME \
    --cluster CLUSTER_NAME \
    --instance-type INSTANCE_TYPE \
    --root-volume-size ROOT_VOLUME_SIZE \
    --iam-instance-profile NODEPOOL_PROFILE \
    --node-version NODE_VERSION \
    --min-nodes MIN_NODES \
    --max-nodes MAX_NODES \
    --max-pods-per-node MAX_PODS_PER_NODE \
    --location GOOGLE_CLOUD_LOCATION \
    --subnet-id NODEPOOL_SUBNET \
    --ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
    --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN

Replace the following:

  • NODE_POOL_NAME: the name you choose for your node pool
  • CLUSTER_NAME: the name of the cluster to attach the node pool to
  • INSTANCE_TYPE: one of the following instance types:

    • m6g
    • m6gd
    • t4g
    • r6g
    • r6gd

      These instance types are powered by Arm-based AWS Graviton processors. You also need to specify the instance size that you want. For example, m6g.medium. For a complete list, see Supported AWS instance types.

  • ROOT_VOLUME_SIZE: the desired size for each node's root volume, in GiB

  • NODEPOOL_PROFILE: the IAM instance profile for node pool VMs

  • NODE_VERSION: the Kubernetes version to install on each node in the node pool, which must be version 1.24 or later. For example, 1.24.3-gke.200.

  • MIN_NODES: the minimum number of nodes the node pool can contain

  • MAX_NODES: the maximum number of nodes the node pool can contain

  • MAX_PODS_PER_NODE: the maximum number of Pods that can be created on any single node in the pool

  • GOOGLE_CLOUD_LOCATION: the name of the Google Cloud location from which this node pool will be managed

  • NODEPOOL_SUBNET: the ID of the subnet the node pool will run on. If this subnet is outside of the VPC's primary CIDR block, you need to take additional steps. For more information, see security groups.

  • SSH_KEY_PAIR_NAME: the name of the AWS SSH key pair created for SSH access (optional)

  • CONFIG_KMS_KEY_ARN: the Amazon Resource Name (ARN) of the AWS KMS key that encrypts user data
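As a concrete sketch, a completed command might look like the following. Every value below (the pool and cluster names, instance type, version, location, subnet ID, key pair name, and KMS key ARN) is a hypothetical placeholder; substitute your own resources:

```shell
gcloud container aws node-pools create arm-pool \
    --cluster my-cluster \
    --instance-type m6g.large \
    --root-volume-size 50 \
    --iam-instance-profile my-nodepool-profile \
    --node-version 1.24.3-gke.200 \
    --min-nodes 1 \
    --max-nodes 3 \
    --max-pods-per-node 110 \
    --location us-west1 \
    --subnet-id subnet-0123456789abcdef0 \
    --ssh-ec2-key-pair my-key-pair \
    --config-encryption-kms-key-arn arn:aws:kms:us-east-1:123456789012:key/KEY_ID
```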

Understand multi-architecture images

Container images must be compatible with the architecture of the node where you intend to run the Arm workloads. To ensure that your container image is Arm-compatible, we recommend that you use multi-architecture ("multi-arch") images.

A multi-arch image is an image that can support multiple architectures. It looks like a single image with a single tag, but contains a set of images to run on different machine architectures. Multi-arch images are compatible with the Docker Image Manifest V2, Schema 2 format or the OCI Image Index specification.

When you deploy a multi-arch image to a cluster, the container runtime automatically chooses the image that is compatible with the architecture of the node you're deploying to. Once you have a multi-arch image for a workload, you can deploy this workload across multiple architectures.

To learn more about how to use multi-arch images with Arm workloads, see Build multi-architecture images for Arm workloads in the Google Kubernetes Engine (GKE) documentation.
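One common way to produce a multi-arch image is Docker Buildx, which can build for several target platforms in one invocation. The following is a minimal sketch, assuming a local Docker installation with Buildx and QEMU emulation available; REGISTRY/IMAGE:TAG is a placeholder for your own registry path:

```shell
# Create and select a builder that supports multi-platform builds.
docker buildx create --use

# Build for both x86 and Arm nodes and push the resulting
# multi-arch image (a single tag backed by an image index).
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag REGISTRY/IMAGE:TAG \
  --push .
```

After the push, `docker manifest inspect REGISTRY/IMAGE:TAG` shows one entry per architecture under a single tag.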

Schedule Arm workloads

By default, Anthos Multi-Cloud only schedules workloads to x86 Linux nodes. To prevent x86-compatible workloads from being inadvertently scheduled to Arm nodes, in the preview release of this feature, Anthos Multi-Cloud automatically adds the following taint to all Arm nodes:

kubernetes.io/arch=arm64:NoSchedule

To steer each workload to the type of node that you want, use node selectors and tolerations.

Deploy workload only to Arm nodes

To deploy a workload only to an Arm node, add the following node selector and toleration to the workload specification:

nodeSelector:
  kubernetes.io/arch: arm64
tolerations:
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule

The node selector specifies that this workload should only be scheduled to nodes that carry the kubernetes.io/arch: arm64 label. All Arm nodes on Anthos clusters on AWS have this label. The toleration matches the taint to permit the workload to be scheduled on Arm nodes.
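For context, here is a minimal sketch of a Deployment that places the node selector and toleration in the Pod template; the Deployment name and container image are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-only-app        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arm-only-app
  template:
    metadata:
      labels:
        app: arm-only-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule only to Arm nodes
      tolerations:
      - key: kubernetes.io/arch     # tolerate the Arm node taint
        operator: Equal
        value: arm64
        effect: NoSchedule
      containers:
      - name: app
        image: IMAGE                # must be an Arm-compatible image
```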

Deploy workload across multiple architectures

To deploy a multi-arch workload to either an Arm or an x86 node, add the matching toleration to the specification of your workload:

tolerations:
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule
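In a full manifest, the toleration goes in the Pod template exactly as in the Arm-only example, but with the nodeSelector omitted so the scheduler is free to place Pods on either architecture. A minimal sketch, with a hypothetical name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-arch-app      # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: multi-arch-app
  template:
    metadata:
      labels:
        app: multi-arch-app
    spec:
      # No nodeSelector: Pods may land on x86 or Arm nodes.
      tolerations:
      - key: kubernetes.io/arch     # tolerate the Arm node taint
        operator: Equal
        value: arm64
        effect: NoSchedule
      containers:
      - name: app
        image: IMAGE                # must be a multi-arch image
```

Because no node selector is set, the image referenced here must be a multi-arch image so the container runtime can pull the right variant for whichever node the Pod lands on.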

What's next