This page explains how you can configure the maximum number of Pods that can run on a node for Standard clusters. This value determines the size of the IP address ranges that are assigned to nodes on Google Kubernetes Engine (GKE). The Pods that run on a node are allocated IP addresses from the node's assigned CIDR range.
When scheduling, GKE uses the maximum number of Pods per node to determine whether there is sufficient capacity to schedule a Pod. Only Pods that have been assigned to a node and have not yet terminated (reached the Failed or Succeeded phase) are counted against this capacity.
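To check the Pod capacity that an existing node reports, you can read it from the node object. The following command is a minimal example; NODE_NAME is a placeholder for one of your node names:
kubectl get node NODE_NAME -o jsonpath='{.status.capacity.pods}'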
The steps on this page do not apply to Autopilot clusters because the maximum number of Pods per node is pre-configured by GKE and cannot be changed.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Restrictions
- You can only configure the maximum Pods per node in VPC-native clusters.
- Node creation is limited by the number of available addresses in the Pod address range. Check the IP address range planning table for the default, minimum, and maximum Pod address range sizes. You can also add additional Pod IP addresses using discontiguous multi-Pod CIDR.
Each cluster needs to run system Pods, such as kube-proxy, in the kube-system namespace. Remember to account for both your workload Pods and system Pods when you reduce the maximum number of Pods per node. To list the system Pods in your cluster, run the following command:
kubectl get pods --namespace kube-system
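Optionally, to see which system Pods run on a specific node, you can filter by node name. NODE_NAME is a placeholder for one of your node names:
kubectl get pods --namespace kube-system --field-selector spec.nodeName=NODE_NAME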
Configure the maximum Pods per node
You can configure the maximum number of Pods per node when creating a cluster or when creating a node pool. You cannot change this setting after the cluster or node pool is created.
However, if you run out of Pod IP addresses, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
You can set the size of the Pod address range when creating a cluster by using the gcloud CLI or the Google Cloud console.
gcloud
To set the default maximum Pods per node using the gcloud CLI, run the following command:
gcloud container clusters create CLUSTER_NAME \
--enable-ip-alias \
--cluster-ipv4-cidr=10.0.0.0/21 \
--services-ipv4-cidr=10.4.0.0/19 \
--create-subnetwork=name='SUBNET_NAME',range=10.5.32.0/27 \
--default-max-pods-per-node=MAXIMUM_PODS \
--location=COMPUTE_LOCATION
Replace the following:
- CLUSTER_NAME: the name of your new cluster.
- SUBNET_NAME: the name of the new subnetwork for your cluster.
- MAXIMUM_PODS: the default maximum number of Pods per node for your cluster, which can be configured up to 256. If omitted, Kubernetes assigns the default value of 110.
- COMPUTE_LOCATION: the Compute Engine location for the new cluster.
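To confirm the value on an existing cluster, you can describe the cluster. The format expression below assumes that the setting is exposed as defaultMaxPodsConstraint.maxPodsPerNode in the cluster resource; adjust it if your output differs:
gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="value(defaultMaxPodsConstraint.maxPodsPerNode)"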
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure your new cluster.
From the navigation pane, under Cluster, click Networking.
Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.
From the navigation pane, under Node pools, click Nodes.
Set the Maximum pods per node field to the value that you want, up to 256. GKE uses this value to tune the size of the IP address range assigned to nodes.
Click Create.
When you configure the maximum number of Pods per node for the cluster, Kubernetes uses this value to allocate a CIDR range for the nodes. You can calculate the maximum number of nodes on the cluster based on the cluster's secondary IP address range for Pods and the allocated CIDR range for the node.
For example, if you set the default maximum number of Pods per node to 110 and the secondary IP address range for Pods to /21, Kubernetes assigns a /24 CIDR range to nodes on the cluster. This allows a maximum of 2^(24-21) = 2^3 = 8 nodes on the cluster.
Similarly, if you set the default maximum Pods per node to 8 and the cluster's secondary IP address range for Pods to /21, Kubernetes assigns a /28 CIDR range to nodes. This allows a maximum of 2^(28-21) = 2^7 = 128 nodes on the cluster.
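The following shell sketch reproduces this calculation for other values. It assumes that the per-node block is the smallest CIDR that holds at least twice the maximum number of Pods, which matches the table later on this page; the variable values are only illustrative:
# Illustrative inputs: 110 Pods per node, /21 secondary Pod range.
MAX_PODS=110
POD_RANGE_PREFIX=21

# Find the smallest per-node block with at least 2 * MAX_PODS addresses.
NODE_PREFIX=32
while (( (1 << (32 - NODE_PREFIX)) < 2 * MAX_PODS )); do
  NODE_PREFIX=$((NODE_PREFIX - 1))
done

# Number of per-node blocks that fit in the Pod range.
MAX_NODES=$(( 1 << (NODE_PREFIX - POD_RANGE_PREFIX) ))
echo "Per-node CIDR: /${NODE_PREFIX}, maximum nodes: ${MAX_NODES}"  # Prints: /24, 8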
Configure the maximum number of Pods in a new node pool for an existing cluster
You can also specify the maximum number of Pods per node when creating a node pool in an existing cluster. Creating a new node pool lets you optimize IP address allocation, even in existing clusters where there is no configured default maximum number of Pods per node at the cluster level.
Setting the maximum number of Pods at the node pool level overrides the cluster-level default maximum. If you do not configure a maximum number of Pods per node when you create the node pool, the cluster-level maximum applies.
gcloud
gcloud container node-pools create POOL_NAME \
--cluster=CLUSTER_NAME \
--max-pods-per-node=MAXIMUM_PODS
Replace the following:
- POOL_NAME: the name of your new node pool.
- CLUSTER_NAME: the name of the cluster in which you want to create the node pool.
- MAXIMUM_PODS: the maximum number of Pods per node in the node pool.
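To confirm the value for an existing node pool, you can describe the node pool. The format expression below assumes that the setting is exposed as maxPodsConstraint.maxPodsPerNode in the node pool resource; adjust it if your output differs:
gcloud container node-pools describe POOL_NAME \
    --cluster=CLUSTER_NAME \
    --format="value(maxPodsConstraint.maxPodsPerNode)"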
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Click Add Node Pool.
From the navigation pane, click Nodes.
Under Networking, enter a value for the Maximum Pods per node field. GKE uses this value to tune the size of the IP address range assigned to nodes.
About default maximum Pods per node
By default, GKE allows up to 110 Pods per node on Standard clusters; however, Standard clusters can be configured to allow up to 256 Pods per node. Autopilot clusters choose the maximum Pods per node, from a range between 8 and 256, based on the expected workload Pod density. Kubernetes assigns each node a range of IP addresses, a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.
Pod CIDR ranges in Standard clusters
With the default maximum of 110 Pods per node for Standard clusters, Kubernetes assigns a /24 CIDR block (256 addresses) to each of the nodes. By having more than twice as many available IP addresses as the maximum number of Pods that can be created on a node, Kubernetes can reduce IP address reuse as Pods are added to and removed from a node.
While 256 Pods per node is a hard upper limit, you can configure a lower maximum number of Pods on a node. The size of the CIDR block assigned to a node depends on the maximum Pods per node value, and the block always contains at least twice as many addresses as the maximum number of Pods per node.
The following table lists the size of the CIDR block and the corresponding number of available IP addresses that Kubernetes assigns to nodes based on the maximum Pods per node:
| Maximum Pods per node | CIDR range per node | Number of IP addresses |
|---|---|---|
| 8 | /28 | 16 |
| 9 – 16 | /27 | 32 |
| 17 – 32 | /26 | 64 |
| 33 – 64 | /25 | 128 |
| 65 – 128 | /24 | 256 |
| 129 – 256 | /23 | 512 |
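To see the Pod CIDR range that was actually assigned to each node in your cluster, you can read the spec.podCIDR field of the Node objects, for example:
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR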
Pod CIDR ranges in Autopilot clusters
In GKE Autopilot clusters, the maximum number of Pods per node and the associated CIDR block allocation are dynamic. This means they can vary based on the GKE version and the workload density.
- GKE Autopilot versions earlier than 1.28: the maximum Pods per node is fixed at 32. This results in a /26 CIDR block (64 IP addresses) being assigned to each node.
- GKE Autopilot versions 1.28 and later: the maximum Pods per node is dynamic and can range from 8 to 256. The CIDR block size is adjusted accordingly to ensure each Pod has a unique IP address.
The dynamic nature of the maximum Pods per node in Autopilot clusters allows for efficient resource utilization. The cluster automatically adapts to workload requirements, allocating the appropriate number of Pods and IP addresses per node.
To accommodate your initial cluster size and the maximum Pods per node configuration, select an appropriate secondary IP address range for Pods. We recommend that you plan your IP addressing carefully, because if you exhaust the available IP addresses as the cluster scales, further scaling is blocked until more IP addresses are added. You can add additional secondary ranges later if required. For more information about adding IP address ranges after you have created the cluster, see Adding secondary ranges to a VPC network.
A range of /16 (for example, cluster-ipv4-cidr=240.0.0.0/16) is generally recommended to provide flexibility for growth and changes in Pod density within the cluster.
Consider the following points when planning your Autopilot cluster network configuration:
- Pod density: Consider the maximum number of Pods per node that your workloads might require.
- CIDR range: Choose a secondary IP address range for Pods that can accommodate both the cluster size and Pod density requirements.
- Flexibility: A larger CIDR range like /16 provides more flexibility for future growth and changes in Pod density.
By carefully planning your CIDR range, you can help ensure that your Autopilot cluster can initially scale to meet your needs. However, if you do encounter IP address limitations as your cluster grows, you can add additional secondary ranges to support further scaling.
Reduce the maximum number of Pods
Reducing the maximum number of Pods per node allows the cluster to have more nodes, since each node requires a smaller part of the total IP address space. Alternatively, you could support the same number of nodes in the cluster by specifying a smaller IP address space for Pods at cluster creation time.
Reducing the maximum number of Pods per node also lets you create smaller clusters that require fewer IP addresses. For example, with eight Pods per node, each node is granted a /28 CIDR. These IP address ranges plus the subnet and secondary ranges that you define determine the number of IP addresses required to create a cluster successfully.
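As an illustration, the following command sketches a small cluster with eight Pods per node; the cluster name, ranges, and location are hypothetical placeholders, not recommendations. With a /28 per node and a /24 Pod range, such a cluster can have at most 2^(28-24) = 16 nodes:
gcloud container clusters create small-cluster \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.10.0.0/24 \
    --default-max-pods-per-node=8 \
    --location=us-central1-a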
You can configure the maximum number of Pods per node at cluster creation time and at node pool creation time.
What's next
- Learn how to create VPC-native clusters.
- Learn how to add additional Pod IP addresses to clusters.
- Learn about IP address management strategies when migrating to GKE.
- Learn about GKE IP address utilization insights.