To scale your workloads on GKE on AWS, you can configure your AWSNodePools to automatically scale out, or manually create and delete AWSNodePools to scale up or down.
Cluster Autoscaler
GKE on AWS implements the Kubernetes Cluster Autoscaler. When demand on your nodes is high, Cluster Autoscaler adds nodes to the node pool. When demand is low, Cluster Autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it while controlling costs.
Overview
Cluster Autoscaler automatically adjusts the number of nodes in a given node pool based on the demands of your workloads. You don't need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the cluster scales automatically.
If resources are deleted or moved when autoscaling your cluster, your workloads might experience transient disruption. For example, if your workload consists of a single replica, that replica's Pod might be rescheduled onto a different node if its current node is deleted. Before enabling Cluster Autoscaler in your AWSNodePool, design your workloads to tolerate potential disruption or ensure that critical Pods are not interrupted.
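One common way to protect critical workloads from node drains is a PodDisruptionBudget, which limits how many replicas can be evicted at once. The following is a minimal sketch; the my-app name and label are placeholders for your own workload.

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: my-app-pdb   # placeholder name
    spec:
      # Keep at least one replica of the selected Pods running
      # while Cluster Autoscaler drains and removes nodes.
      minAvailable: 1
      selector:
        matchLabels:
          app: my-app    # placeholder label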
How Cluster Autoscaler Works
Cluster Autoscaler works on a per-node pool basis. When you create a node pool, you specify a minimum and maximum size for the node pool in the AWSNodePool Kubernetes resource.
You can disable Cluster Autoscaler by setting spec.minNodeCount to equal spec.maxNodeCount on your AWSNodePools.
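For example, the following fragment of an AWSNodePool spec (the names are illustrative) pins the pool at a fixed size of three nodes, which effectively disables autoscaling for that pool:

    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSNodePool
    metadata:
      name: cluster-0-pool-0
    spec:
      # Equal values pin the pool at three nodes.
      minNodeCount: 3
      maxNodeCount: 3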
Cluster Autoscaler increases or decreases the size of the node pool automatically, based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:
If Pods are unschedulable because there are not enough nodes in the node pool, Cluster Autoscaler adds nodes, up to the maximum size of the node pool.
If nodes are under-utilized, and all Pods could be scheduled even with fewer nodes in the node pool, Cluster Autoscaler removes nodes, down to the minimum size of the node pool. If a node cannot be drained gracefully within the timeout period (10 minutes), the node is forcibly terminated. The grace period is not configurable.
If your Pods have requested too few resources (for example, if the defaults are insufficient), Cluster Autoscaler does not correct the situation. You can help ensure Cluster Autoscaler works as accurately as possible by creating adequate resource requests for all of your workloads.
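Because scaling decisions are driven by requests, declaring them explicitly on every container gives Cluster Autoscaler accurate input. A minimal sketch of a Deployment with resource requests follows; the name, image, and values are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app             # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: my-app:1.0  # placeholder image
            resources:
              requests:
                cpu: 250m      # Cluster Autoscaler sums these requests,
                memory: 512Mi  # not actual utilization, to size the pool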
Operating Criteria
Cluster Autoscaler makes the following assumptions when resizing a node pool:
All replicated Pods can be restarted on some other node, possibly causing a brief disruption. If your services do not tolerate disruption, we don't recommend using Cluster Autoscaler.
All nodes in a single node pool have the same set of labels.
If you have AWSNodePools with different instance types, Cluster Autoscaler considers the relative cost of launching new nodes, and attempts to expand the least expensive node pool.
Labels manually added after initial cluster or node pool creation are not tracked. Nodes created by Cluster Autoscaler are assigned labels specified with --node-labels at the time of node pool creation.
Resize a Node Pool
An AWSNodePool includes minNodeCount and maxNodeCount fields. These fields declare the minimum and maximum number of worker nodes in the pool. You can edit these values before or after the AWSNodePool is created.
Before you begin
Provision a cluster using the instructions in Creating a user cluster. Have the YAML file (for example, cluster-0.yaml) that creates the cluster available.
Enabling automatic node pool scaling
From your anthos-aws directory, use anthos-gke to switch context to your management service.

    cd anthos-aws
    anthos-gke aws management get-credentials
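Optionally, you can verify that kubectl now points at the management service context. This reads your local kubeconfig, so no proxy is needed:

    kubectl config current-context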
To enable the Cluster Autoscaler, you edit the manifest for your AWSNodePool. Edit the cluster-0.yaml file and find the AWSNodePool section. Change the values for spec.minNodeCount and spec.maxNodeCount.

The example below keeps the minimum size of this node pool at 3 nodes, but enables the Cluster Autoscaler to increase its size to 10 nodes.

    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSNodePool
    metadata:
      name: cluster-0-pool-0
    spec:
      clusterName: cluster-0
      version: 1.25.5-gke.2100
      minNodeCount: 3
      maxNodeCount: 10
      ...
Next, apply the YAML to resize the node pool.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f cluster-0.yaml
The AWSNodePool resource will transition into a Resizing state. When the AWSNodePool completes scaling, it moves to the Provisioned state. Check the current state of your node pools with kubectl get:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get AWSNodePools
The output shows that the AWSNodePool is Resizing. Resize operations take several minutes to complete.

    NAME               CLUSTER     STATE      AGE   VERSION
    cluster-0-pool-0   cluster-0   Resizing   3h    1.25.5-gke.2100
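Rather than re-running the command, you can follow the state change as it happens with the standard watch flag on kubectl get:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get AWSNodePools --watch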
Manually creating a new AWSNodePool
You can also scale your cluster by creating new AWSNodePools. When you create a new AWSNodePool, you can also choose a larger or smaller instance type to scale the nodes themselves up or down.
You can retrieve the configuration manifest of an existing AWSNodePool with kubectl get.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get awsnodepool cluster-0-pool-0 -o yaml > new_nodepool.yaml
Edit new_nodepool.yaml and remove the sections that are not present in the following example. Save the file.

    apiVersion: multicloud.cluster.gke.io/v1
    kind: AWSNodePool
    metadata:
      name: NODE_POOL_NAME
    spec:
      clusterName: AWSCLUSTER_NAME
      version: CLUSTER_VERSION # latest version is 1.25.5-gke.2100
      region: AWS_REGION
      subnetID: AWS_SUBNET_ID
      minNodeCount: MINIMUM_NODE_COUNT
      maxNodeCount: MAXIMUM_NODE_COUNT
      maxPodsPerNode: MAXIMUM_PODS_PER_NODE_COUNT
      instanceType: AWS_NODE_TYPE
      keyName: KMS_KEY_PAIR_NAME
      iamInstanceProfile: NODE_IAM_PROFILE
      proxySecretName: PROXY_SECRET_NAME
      rootVolume:
        sizeGiB: ROOT_VOLUME_SIZE
        volumeType: VOLUME_TYPE # Optional
        iops: IOPS # Optional
        kmsKeyARN: NODE_VOLUME_KEY # Optional
To create a new AWSNodePool, apply the manifest to your management cluster.

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl apply -f new_nodepool.yaml
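As with resizing, you can list your AWSNodePools to confirm that the new pool was created; it moves to the Provisioned state when its nodes are ready:

    env HTTPS_PROXY=http://localhost:8118 \
      kubectl get AWSNodePools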
What's next
Deploy a service backed by Ingress.
To see additional options, read the reference documentation on AWSNodePool.