This page explains how you can configure the maximum number of Pods that can run on a node for Standard clusters. This value determines the size of the IP address ranges that are assigned to nodes on Google Kubernetes Engine (GKE). The Pods that run on a node are allocated IP addresses from the node's assigned CIDR range.
The steps on this page do not apply to Autopilot clusters because the maximum number of Pods per node is pre-configured and immutable.
By default, GKE allows up to 110 Pods per node on Standard clusters. Autopilot clusters have a maximum of 32 Pods per node. Kubernetes assigns each node a range of IP addresses, called a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.
CIDR ranges for Standard clusters
With the default maximum of 110 Pods per node for Standard clusters, Kubernetes assigns a /24 CIDR block (256 addresses) to each of the nodes. By having more than twice as many available IP addresses as the maximum number of Pods that can be created on a node, Kubernetes can reduce IP address reuse as Pods are added to and removed from a node.
Although 110 Pods per node is a hard upper limit, you can reduce the maximum number of Pods on a node. If you reduce the number from the default value, Kubernetes assigns the node a correspondingly smaller CIDR block. The block always contains at least twice as many addresses as the maximum number of Pods per node.
The following table lists the size of the CIDR block and the corresponding number of available IP addresses that Kubernetes assigns to nodes based on the maximum Pods per node:
| Maximum Pods per node | CIDR range per node | Number of IP addresses |
|---|---|---|
| 9–16 | /27 | 32 |
| 17–32 | /26 | 64 |
| 33–64 | /25 | 128 |
| 65–110 | /24 | 256 |
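The table above follows directly from the doubling rule: each node's block is the smallest power of two that holds at least twice the Pod maximum. A minimal Python sketch of that calculation (the function name is illustrative, not a GKE API):

```python
import math

def node_cidr_prefix(max_pods_per_node: int) -> int:
    # Kubernetes gives each node at least twice as many IPs as its Pod limit,
    # rounded up to a power of two, so the prefix is 32 - ceil(log2(2 * max_pods)).
    addresses = 2 * max_pods_per_node
    bits = math.ceil(math.log2(addresses))
    return 32 - bits

# Reproduces the table above:
for pods in (16, 32, 64, 110):
    prefix = node_cidr_prefix(pods)
    print(pods, f"/{prefix}", 2 ** (32 - prefix))
```

For example, the default maximum of 110 Pods yields 220 required addresses, which rounds up to 256 and therefore a /24 per node.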
CIDR settings for Autopilot clusters
The default settings for Autopilot cluster CIDR sizes are as follows:
- Subnetwork range: /23
- Cluster IPv4 CIDR (range for Pods): /17
- Services IPv4 CIDR (range for Services): /22
Autopilot has a maximum Pods per node of 32.
As with GKE Standard, this results in a /26 range being provisioned per node, that is, 64 IP addresses. A Pod address range of /17 results in a cluster that can support at most 511 nodes (32,766 usable IPs / 64 IP addresses per node).

Ensure that the Pod address CIDR range that you specify is large enough to support your anticipated maximum cluster size. A range of /16 is recommended to support maximum growth of the cluster.
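The node-count arithmetic above can be sketched in a few lines of Python (a hypothetical helper, assuming 2 of the range's addresses are unusable, which matches the 32,766 figure above):

```python
def autopilot_max_nodes(pod_range_prefix: int, ips_per_node: int = 64) -> int:
    # A /N range holds 2**(32-N) addresses; 2 are treated as unusable
    # (32,768 - 2 = 32,766 for a /17 range, as stated above).
    usable = 2 ** (32 - pod_range_prefix) - 2
    # Each node consumes a fixed block of Pod IPs (64, from Autopilot's 32-Pod maximum).
    return usable // ips_per_node

print(autopilot_max_nodes(17))  # 511, as stated above
print(autopilot_max_nodes(16))  # a /16 range roughly doubles the node capacity
```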
Reducing the maximum number of Pods
Reducing the maximum number of Pods per node allows the cluster to have more nodes, since each node requires a smaller part of the total IP address space. Alternatively, you could support the same number of nodes in the cluster by specifying a smaller IP address space for Pods at cluster creation time.
Reducing the maximum number of Pods per node also lets you create smaller clusters that require fewer IP addresses. For example, with eight Pods per node, each node is granted a /28 CIDR. These IP address ranges plus the subnet and secondary ranges that you define determine the number of IP addresses required to create a cluster successfully.
You can configure the maximum number of Pods per node at cluster creation time and at node pool creation time.
- You can only configure the maximum Pods per node in VPC-native clusters.
- Node creation is limited by the number of available addresses in the Pod address range. Check the IP address range planning table for the default, minimum, and maximum Pod address range sizes. You can also add additional Pod IP addresses using discontiguous multi-Pod CIDR.
Each cluster needs to create system Pods, such as kube-proxy, in the kube-system namespace. Remember to account for both your workload Pods and System Pods when you reduce the maximum number of Pods per node. To list System Pods in your cluster, run the following command:
kubectl get pods --namespace kube-system
Configuring maximum Pods per node
You can configure the maximum number of Pods per node when creating a cluster or when creating a node pool. You cannot change this setting after the cluster or node pool is created.
However, if you run out of Pod IP addresses, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR (Preview).
You can set the size of the Pod address range when creating a cluster by using the gcloud tool or the Google Cloud Console.

To set the default maximum Pods per node using the gcloud tool, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --cluster-ipv4-cidr 10.0.0.0/21 \
    --services-ipv4-cidr 10.4.0.0/19 \
    --create-subnetwork name='SUBNET_NAME',range=10.4.32.0/27 \
    --default-max-pods-per-node MAXIMUM_PODS \
    --zone COMPUTE_ZONE
Replace the following:
CLUSTER_NAME: the name of your new cluster.
SUBNET_NAME: the name of the new subnetwork for your cluster.
MAXIMUM_PODS: the default maximum number of Pods per node for your cluster. If omitted, Kubernetes assigns the default value of 110.
COMPUTE_ZONE: the compute zone for your new cluster.
Go to the Google Kubernetes Engine page in the Cloud Console.
Click add_box Create.
Configure your new cluster.
From the navigation pane, under Cluster, click Networking.
Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.
From the navigation pane, under Node pools, click Nodes.
Set the Maximum pods per node field to a value of up to 110. GKE uses this value to tune the size of the IP address range assigned to nodes.
When you configure the maximum number of Pods per node for the cluster, Kubernetes uses this value to allocate a CIDR range for the nodes. You can calculate the maximum number of nodes on the cluster based on the cluster's IPv4 CIDR range for Pods and the allocated CIDR range for the node.
For example, if you set the default maximum number of Pods to 110 and the cluster's IPv4 CIDR range for Pods to /21, Kubernetes assigns a /24 CIDR range to nodes on the cluster. This allows a maximum of 2^(24-21) = 2^3 = 8 nodes on the cluster.

Similarly, if you set the default maximum Pods to 8 and the cluster's IPv4 CIDR range for Pods to /21, Kubernetes assigns a /28 CIDR range to nodes. This allows a maximum of 2^(28-21) = 2^7 = 128 nodes on the cluster.
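Both examples apply the same formula, sketched here in Python (the function name is illustrative, not a GKE API):

```python
def max_nodes(pod_range_prefix: int, node_cidr_prefix: int) -> int:
    # Each node takes one /node_cidr_prefix block out of the /pod_range_prefix
    # Pod range, so the node count is 2 ** (node_cidr_prefix - pod_range_prefix).
    return 2 ** (node_cidr_prefix - pod_range_prefix)

print(max_nodes(21, 24))  # 8 nodes with the default maximum of 110 Pods per node
print(max_nodes(21, 28))  # 128 nodes with a maximum of 8 Pods per node
```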
Setting the maximum number of Pods in a new node pool for an existing cluster
You can also specify the maximum number of Pods per node when creating a node pool in an existing cluster. Creating a new node pool lets you optimize IP address allocation, even in existing clusters where there is no configured default maximum number of Pods per node at the cluster level.
Setting the maximum number of Pods at the node pool level overrides the cluster-level default maximum. If you do not configure a maximum number of Pods per node when you create the node pool, the cluster-level maximum applies.
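The precedence rule described above can be summarized in a small sketch (a hypothetical helper, not a GKE API):

```python
from typing import Optional

def effective_max_pods(cluster_default: int, node_pool_max: Optional[int]) -> int:
    # A node pool's own setting wins; otherwise the cluster-level default applies.
    return node_pool_max if node_pool_max is not None else cluster_default

print(effective_max_pods(110, 32))    # node pool override: 32
print(effective_max_pods(110, None))  # no override: cluster default of 110
```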
gcloud container node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --max-pods-per-node MAXIMUM_PODS
Replace the following:
POOL_NAME: the name of your new node pool.
CLUSTER_NAME: the name of the cluster in which you want to create the node pool.
MAXIMUM_PODS: the maximum number of Pods in the node pool.
Go to the Google Kubernetes Engine page in the Cloud Console.
In the cluster list, click the name of the cluster you want to modify.
Click add_box Add Node Pool.
From the navigation pane, click Nodes.
Under Networking, enter a value for the Maximum Pods per node field. GKE uses this value to tune the size of the IP address range assigned to nodes.