Anthos clusters on AWS lets you run Arm workloads built for Arm-based AWS Graviton processors.
Arm node pools in clusters running Kubernetes versions earlier than 1.24.8-gke.1300 automatically add a taint during node pool creation to prevent non-Arm workloads from being scheduled on Arm nodes. Arm node pools in clusters at version 1.24.8-gke.1300 or later no longer add this taint. If you're upgrading from a cluster earlier than 1.24.8-gke.1300, you must create this taint yourself or otherwise account for its absence when upgrading.
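If you need to recreate the taint manually after an upgrade, one option is to apply it with kubectl. The taint key and effect below (`kubernetes.io/arch=arm64:NoSchedule`) follow the convention used for Arm nodes elsewhere in GKE; the node name is a placeholder, and you should confirm the exact taint your cluster version expects before relying on it.

```shell
# Sketch: taint an Arm node so that Pods without a matching toleration
# are not scheduled onto it. NODE_NAME is a placeholder for one of your
# Arm nodes; repeat for each node in the Arm node pool.
kubectl taint nodes NODE_NAME kubernetes.io/arch=arm64:NoSchedule
```

Arm workloads then need a matching toleration in their Pod spec to be scheduled on the tainted nodes.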
Arm node pools on Anthos clusters on AWS don't support Anthos Service Mesh or Anthos Config Management. You must run these products on an x86 node pool.
Clusters running Kubernetes version 1.24 need an x86 node pool to run the Connect Agent. If your cluster runs Kubernetes version 1.25 or later, you don't need an x86 node pool.
This page explains how to create an Arm node pool, why multi-architecture images are the recommended way to deploy Arm workloads, and how to schedule Arm workloads.
Before you begin
Before you create node pools for your Arm workloads, you need the following resources:
- An existing AWS cluster to create the node pool in. This cluster must run Kubernetes version 1.24 or later.
- An IAM instance profile for the node pool VMs.
- A subnet where the node pool VMs will run.
- If your cluster is running Kubernetes version 1.24, an x86 node pool to run the Connect Agent.
For details on how to create a node pool in Anthos clusters on AWS, see Create a node pool.
Create an Arm node pool
Anthos clusters on AWS supports node pools built on the Canonical Ubuntu arm64 minimal node image.
To create an Arm node pool and add it to an existing cluster, run the following command:
gcloud container aws node-pools create NODE_POOL_NAME \
  --cluster CLUSTER_NAME \
  --instance-type INSTANCE_TYPE \
  --root-volume-size ROOT_VOLUME_SIZE \
  --iam-instance-profile NODEPOOL_PROFILE \
  --node-version NODE_VERSION \
  --min-nodes MIN_NODES \
  --max-nodes MAX_NODES \
  --max-pods-per-node MAX_PODS_PER_NODE \
  --location GOOGLE_CLOUD_LOCATION \
  --subnet-id NODEPOOL_SUBNET \
  --ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
  --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN
Replace the following:
NODE_POOL_NAME: the name you choose for your node pool
CLUSTER_NAME: the name of the cluster to attach the node pool to
INSTANCE_TYPE: an instance type powered by Arm-based AWS Graviton processors. You must also specify the instance size that you want, for example m6g.medium. For a complete list, see Supported AWS instance types.
ROOT_VOLUME_SIZE: the desired size for each node's root volume, in GB
NODEPOOL_PROFILE: the IAM instance profile for node pool VMs
NODE_VERSION: the Kubernetes version to install on each node in the node pool, which must be version 1.24 or later
MIN_NODES: the minimum number of nodes the node pool can contain
MAX_NODES: the maximum number of nodes the node pool can contain
MAX_PODS_PER_NODE: the maximum number of Pods that can be created on any single node in the pool
GOOGLE_CLOUD_LOCATION: the name of the Google Cloud location from which this node pool will be managed
NODEPOOL_SUBNET: the ID of the subnet the node pool will run on. If this subnet is outside of the VPC's primary CIDR block, you need to take additional steps. For more information, see security groups.
SSH_KEY_PAIR_NAME: the name of the AWS SSH key pair created for SSH access (optional)
CONFIG_KMS_KEY_ARN: the Amazon Resource Name (ARN) of the AWS KMS key that encrypts user data
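As an illustration, a complete invocation might look like the following. Every value here (names, version, ARNs, and subnet ID) is a hypothetical placeholder, not a real resource:

```shell
# Sketch: create a two-to-five node Arm node pool on m6g.medium instances.
# All identifiers below are illustrative; substitute your own resources.
gcloud container aws node-pools create arm-pool \
  --cluster my-aws-cluster \
  --instance-type m6g.medium \
  --root-volume-size 50 \
  --iam-instance-profile nodepool-profile \
  --node-version 1.25.5-gke.1500 \
  --min-nodes 2 \
  --max-nodes 5 \
  --max-pods-per-node 110 \
  --location us-west1 \
  --subnet-id subnet-0123456789abcdef0 \
  --config-encryption-kms-key-arn arn:aws:kms:us-west1:123456789012:key/example
```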
Understand multi-architecture images
Container images must be compatible with the architecture of the node where you intend to run the Arm workloads. To ensure that your container image is Arm-compatible, we recommend that you use multi-architecture ("multi-arch") images.
A multi-arch image is an image that can support multiple architectures. It looks like a single image with a single tag, but contains a set of images to run on different machine architectures. Multi-arch images are compatible with the Docker Image Manifest V2, Schema 2 format or the OCI Image Index Specification.
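As a quick check, you can list the architectures an image's manifest covers with `docker manifest inspect`. The image name below is illustrative; any public multi-arch image works:

```shell
# Sketch: print the architecture entries in a multi-arch image's
# manifest list. A multi-arch image shows several entries, typically
# including amd64 and arm64.
docker manifest inspect ubuntu:22.04 | grep '"architecture"'
```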
When you deploy a multi-arch image to a cluster, the container runtime automatically chooses the image that matches the architecture of the node you're deploying to. Once you have a multi-arch image for a workload, you can deploy that workload across multiple architectures. Scheduling a single-architecture image onto a node with an incompatible architecture causes an error when the container starts.
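If you want to pin a workload to Arm nodes explicitly, one approach is a nodeSelector on the well-known `kubernetes.io/arch` node label. The Deployment name and image in this sketch are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-workload   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arm-workload
  template:
    metadata:
      labels:
        app: arm-workload
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule only on Arm nodes
      containers:
      - name: app
        image: IMAGE   # replace with a multi-arch or arm64-compatible image
```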
To learn more about how to use multi-arch images with Arm workloads, see Build multi-architecture images for Arm workloads in the Google Kubernetes Engine (GKE) documentation.