To scale your workloads on GKE on AWS, you can automatically scale out your AWSNodePools, or manually create and delete AWSNodePools to scale up or down.
GKE on AWS implements the Kubernetes Cluster Autoscaler. When demand on your nodes is high, Cluster Autoscaler adds nodes to the node pool. When demand is low, Cluster Autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs.
Cluster Autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads. You don't need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic.
If resources are deleted or moved when autoscaling your cluster, your workloads might experience transient disruption. For example, if your workload consists of a controller with a single replica, that replica's Pod might be rescheduled onto a different node if its current node is deleted. Before enabling Cluster Autoscaler in your AWSNodePool, design your workloads to tolerate potential disruption or ensure that critical Pods are not interrupted.
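For example, one common way to protect a critical workload from voluntary disruption during scale-down is a PodDisruptionBudget. The following is a minimal sketch, not a prescribed configuration: the name and the app: critical-app label are illustrative and assume your workload's Pods carry that label, and the policy/v1beta1 API version matches the 1.17-era clusters shown on this page.

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: critical-app-pdb
spec:
  minAvailable: 1          # keep at least one replica running during node drains
  selector:
    matchLabels:
      app: critical-app    # illustrative label; match your workload's Pod labels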
How Cluster Autoscaler Works
Cluster Autoscaler works on a per-node pool basis. When you create a node pool, you specify a minimum and maximum size for the node pool in the AWSNodePool Kubernetes resource.
You can disable Cluster Autoscaler by setting
spec.minNodeCount to equal spec.maxNodeCount on your AWSNodePools.
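For example, a spec fragment where both fields are set to the same value pins the pool at a fixed size of three nodes (a minimal sketch; the full field list appears in the examples later on this page):

spec:
  minNodeCount: 3
  maxNodeCount: 3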
Cluster Autoscaler increases or decreases the size of the node pool automatically, based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:
- If Pods are unschedulable because there are not enough nodes in the node pool, Cluster Autoscaler adds nodes, up to the maximum size of the node pool.
- If nodes are under-utilized, and all Pods could be scheduled even with fewer nodes in the node pool, Cluster Autoscaler removes nodes, down to the minimum size of the node pool.
- If a node cannot be drained gracefully after a timeout period (10 minutes), the node is forcibly terminated. The grace period is not configurable for GKE on AWS clusters.
If your Pods have requested too few resources (for example, if the defaults are insufficient), Cluster Autoscaler does not correct the situation. You can help ensure Cluster Autoscaler works as accurately as possible by creating adequate resource requests for all of your workloads.
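For example, a Deployment that declares explicit resource requests gives Cluster Autoscaler accurate numbers to scale on. This is a minimal sketch; the image and the request values are illustrative, not recommendations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19   # illustrative image
        resources:
          requests:
            cpu: 250m       # Cluster Autoscaler scales on these requests,
            memory: 256Mi   # not on observed utilization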
Cluster Autoscaler makes the following assumptions when resizing a node pool:
- All replicated Pods can be restarted on some other node, possibly causing a brief disruption. If your services do not tolerate disruption, we don't recommend using Cluster Autoscaler.
- All nodes in a single node pool have the same set of labels.
- If you have AWSNodePools with different instance types, Cluster Autoscaler considers the relative cost of launching new nodes, and attempts to expand the least expensive node pool.
- Labels manually added after initial cluster or node pool creation are not tracked. Nodes created by Cluster Autoscaler are assigned labels specified with --node-labels at the time of node pool creation.
Resize a Node Pool
An AWSNodePool includes minNodeCount and maxNodeCount fields. These fields declare a minimum and maximum number of worker nodes in the pool. You may edit these values before or after the AWSNodePool is created.
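For example, instead of re-applying a file, you could edit the live resource directly with kubectl edit from your management service context (a sketch, assuming the cluster-0-pool-0 node pool name used elsewhere on this page):

env HTTP_PROXY=http://localhost:8118 \
  kubectl edit awsnodepool cluster-0-pool-0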
Before you begin
Provision a cluster using the instructions in Creating a user cluster. Have the YAML file (for example, cluster-0.yaml) that creates the cluster available.
Enabling automatic node pool scaling
From your anthos-aws directory, use anthos-gke to switch context to your management service.

cd anthos-aws
anthos-gke aws management get-credentials
To enable the Cluster Autoscaler, edit the manifest for your AWSNodePool. Edit the
cluster-0.yaml file and find the AWSNodePool section. Change the values for
spec.minNodeCount and spec.maxNodeCount.
The example below keeps the minimum size of this node pool at
3 nodes, but enables the Cluster Autoscaler to increase its size to 10 nodes.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: cluster-0-pool-0
spec:
  clusterName: cluster-0
  version: 1.17.9-gke.2800
  minNodeCount: 3
  maxNodeCount: 10
  ...
Next, apply the YAML to resize the node pool.
env HTTP_PROXY=http://localhost:8118 \
  kubectl apply -f cluster-0.yaml
The AWSNodePool resource will transition into a
Resizing state. When the AWSNodePool completes scaling, it moves to the Provisioned state. To check the node pool's current state, run the following command:
env HTTP_PROXY=http://localhost:8118 \
  kubectl get AWSNodePools
The output shows that the AWSNodePool is Resizing. Resize operations take several minutes to complete.
NAME               CLUSTER     STATE      AGE   VERSION
cluster-0-pool-0   cluster-0   Resizing   3h    1.17.9-gke.2800
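If you want to follow the transition until the pool leaves the Resizing state, one option is kubectl's --watch flag, using the same proxy setup as the commands above:

env HTTP_PROXY=http://localhost:8118 \
  kubectl get AWSNodePools --watch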
Manually creating a new AWSNodePool
You can also scale your cluster by creating new AWSNodePools. When you create a new AWSNodePool, you can also choose a larger or smaller instance type.
You can retrieve the configuration manifest of an existing AWSNodePool with kubectl get:
env HTTP_PROXY=http://localhost:8118 \
  kubectl get awsnodepool cluster-0-pool-0 -o yaml > new_nodepool.yaml
Edit new_nodepool.yaml, removing the sections that are not present in the following example. Save the file.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: NODE_POOL_NAME
spec:
  clusterName: AWSCLUSTER_NAME
  version: GKE_VERSION # latest version is 1.17.9-gke.2800
  region: AWS_REGION
  subnetID: AWS_SUBNET_ID
  minNodeCount: MINIMUM_NODE_COUNT
  maxNodeCount: MAXIMUM_NODE_COUNT
  maxPodsPerNode: MAXIMUM_PODS_PER_NODE_COUNT
  instanceType: AWS_NODE_TYPE
  keyName: KMS_KEY_PAIR_NAME
  iamInstanceProfile: NODE_IAM_PROFILE
  rootVolume:
    sizeGiB: ROOT_VOLUME_SIZE
To create a new AWSNodePool, apply the manifest to your management cluster.
env HTTP_PROXY=http://localhost:8118 \
  kubectl apply -f new_nodepool.yaml
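To confirm that the new node pool was created, you can list your AWSNodePools again, as shown earlier on this page. Wait for the new pool's STATE to reach Provisioned before scheduling workloads on it.

env HTTP_PROXY=http://localhost:8118 \
  kubectl get AWSNodePools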
What's next

Deploy a service backed by Ingress.
Read the reference documentation on AWSNodePool to see the options you can change.