Optimizing IP address allocation

This page explains how you can configure the maximum number of Pods that can run on a node for Standard clusters. This value determines the size of the IP address ranges that are assigned to nodes on Google Kubernetes Engine. The Pods that run on a node are allocated IP addresses from the node's Pod CIDR range.

The steps on this page do not apply to Autopilot clusters because the maximum number of Pods per node is pre-configured and immutable.

Overview

By default, GKE configures nodes on Standard clusters to run no more than 110 Pods. Autopilot clusters have a maximum of 32 Pods per node. Kubernetes assigns each node a range of IP addresses, a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.

CIDR ranges for Standard clusters

With the default maximum of 110 Pods per node for Standard clusters, Kubernetes assigns a /24 CIDR block (256 addresses) to each node. By having approximately twice as many available IP addresses as the number of Pods that can be created on a node, Kubernetes mitigates IP address reuse as Pods are added to and removed from a node.

Although 110 Pods per node is a hard upper limit, you can reduce the maximum number of Pods per node. If you reduce the number from the default value, Kubernetes assigns the node a correspondingly smaller CIDR block. The block always contains at least twice as many addresses as the maximum number of Pods per node. The following table lists the size of the CIDR block that Kubernetes assigns to each node based on the maximum Pods per node:

Maximum Pods per Node    CIDR Range per Node
8                        /28
9 – 16                   /27
17 – 32                  /26
33 – 64                  /25
65 – 110                 /24

When you configure the maximum number of Pods per node, you indirectly configure how much IP address space each cluster node requires. For example, if you set the maximum Pods per node to 30, then, per the table above, a /26 CIDR range is used and each node is assigned 64 IP addresses. If you do not configure the maximum number of Pods per node, a /24 CIDR range is used and each node is assigned 256 IP addresses.
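The mapping in the table can be sketched as a small calculation. This is illustrative arithmetic only, not a GKE API; the function name pods_to_cidr is an assumption for the example:

```shell
# Sketch (illustrative, not a GKE API): derive the per-node Pod CIDR prefix
# for a given maximum Pods per node. The block must hold at least twice the
# maximum Pods, rounded up to a power of two; /28 (16 addresses) is the smallest.
pods_to_cidr() {
  local max_pods=$1 size=16 prefix=28
  while [ "$size" -lt $((max_pods * 2)) ]; do
    size=$((size * 2))       # double the block size...
    prefix=$((prefix - 1))   # ...which shortens the prefix by one bit
  done
  echo "/$prefix"
}

pods_to_cidr 8     # /28
pods_to_cidr 30    # /26
pods_to_cidr 110   # /24
```

This reproduces each row of the table above: a maximum of 30 Pods needs at least 60 addresses, so the block is rounded up to 64 addresses, a /26.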

CIDR settings for Autopilot clusters

The default settings for Autopilot cluster CIDR sizes are:

  • Subnetwork range: /23
  • Cluster IPv4 CIDR (range for Pods): /17
  • Services IPv4 CIDR (range for Services): /22

Reducing the maximum number of Pods

Reducing the maximum number of Pods per node allows the cluster to have more nodes, since each node requires a smaller part of the total IP address space. Alternatively, you could support the same number of nodes in the cluster by specifying a smaller IP address space for Pods at cluster creation time.

Reducing the maximum number of Pods per node also lets you create smaller clusters that require fewer IP addresses. For example, with eight Pods per node, each node is granted a /28 CIDR range. These Pod IP address ranges, plus the user-defined subnet and secondary ranges, determine the number of IP addresses required to create a cluster successfully.
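The node-count arithmetic behind this can be sketched as follows; the prefix values are taken from the eight-Pod example above:

```shell
# Sketch: the number of nodes a cluster Pod range can support is
# 2^(node_prefix - cluster_prefix), i.e. how many node-sized blocks
# fit into the cluster-wide range.
cluster_prefix=21   # e.g. --cluster-ipv4-cidr=10.0.0.0/21
node_prefix=28      # per-node CIDR when the maximum is 8 Pods per node
echo $((1 << (node_prefix - cluster_prefix)))   # 128
```

With the default /24 per node, the same /21 range supports only 2^3 = 8 nodes, which is why a lower Pods-per-node maximum allows more nodes in the same address space.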

You can configure the maximum number of Pods per node at cluster creation time and at node pool creation time.

Restrictions

  • You can configure the maximum Pods per node in VPC-native clusters only.
  • Node creation is limited by the number of available addresses in the Pod address range. Refer to this table for the default, minimum, and maximum Pod address range sizes. You can also add Pod IP address ranges by using discontiguous multi-Pod CIDR.
  • Each cluster needs to create System Pods, such as kube-proxy, in the kube-system namespace. Account for both your workload Pods and these System Pods when you reduce the maximum number of Pods per node. You can list the System Pods in your cluster by using the following command:

    kubectl get pods --namespace kube-system
    

Configuring maximum Pods per node

You can configure the maximum number of Pods per node when creating a cluster or when creating a node pool. You cannot change this setting after the cluster or node pool is created.

However, if you run out of Pod IP addresses, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
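As an illustration of that escape hatch, a node pool can draw its Pod IPs from an additional secondary range on the cluster's subnet. This is a hedged sketch: pool-name-2 and pod-range-2 are placeholder names, and it assumes the secondary range already exists on the subnet:

```shell
# Hedged sketch: create a node pool whose Pods are allocated from an existing,
# additional secondary range (discontiguous multi-Pod CIDR).
# pool-name-2, cluster-name, and pod-range-2 are placeholders.
gcloud container node-pools create pool-name-2 \
  --cluster=cluster-name \
  --pod-ipv4-range=pod-range-2 \
  --max-pods-per-node=30
```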

You can set the size of the Pod address range when creating a cluster by using the gcloud tool or the Google Cloud Console.

Creating a cluster with a maximum of 110 Pods per node

gcloud

gcloud container clusters create cluster-name \
  --enable-ip-alias --cluster-ipv4-cidr=10.0.0.0/21 \
  --create-subnetwork=name='cluster-name-subnet',range=10.4.32.0/27 \
  --services-ipv4-cidr=10.4.0.0/19 --default-max-pods-per-node=110 \
  --zone=compute-zone

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click Create.

  3. Configure your cluster as desired.

  4. From the navigation pane, under Cluster, click Networking.

  5. Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.

  6. From the navigation pane, under Node pools, click Nodes.

  7. Set the Maximum pods per node field to 110. GKE uses this value to tune the size of the IP address range assigned to nodes.

  8. Click Create.

This creates a cluster that can contain up to eight nodes. Based on the maximum Pods per node, Kubernetes grants each node a /24 CIDR range for use by the node's Pods. Because this cluster assigns Pod IP addresses from a /21 CIDR range (cluster-ipv4-cidr), there can be up to eight nodes (24 - 21 = 3, and 2^3 = 8). The default-max-pods-per-node option could be omitted because 110 is the default value.
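The node-count arithmetic in this example can be checked directly:

```shell
# A /21 cluster Pod range split into /24 per-node blocks yields
# 2^(24 - 21) = 2^3 node-sized blocks.
echo $((1 << (24 - 21)))   # 8
```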

Creating a cluster with a maximum of 8 Pods per node

gcloud

gcloud container clusters create cluster-name \
  --enable-ip-alias --cluster-ipv4-cidr=10.0.0.0/21 \
  --create-subnetwork=name='cluster-name-subnet',range=10.4.32.0/21 \
  --services-ipv4-cidr=10.4.0.0/19 --default-max-pods-per-node=8 \
  --zone=compute-zone

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. Click Create.

  3. Configure your cluster as desired.

  4. From the navigation pane, under Cluster, click Networking.

  5. Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.

  6. From the navigation pane, under Node pools, click Nodes.

  7. Set the Maximum pods per node field to 8. GKE uses this value to tune the size of the IP address range assigned to nodes.

  8. Click Create.

This creates a cluster that can contain up to 128 nodes. Based on the maximum Pods per node, Kubernetes grants each node a /28 CIDR range for use by the node's Pods. Because the range available for all Pods (cluster-ipv4-cidr) is a /21 range, there can be up to 128 nodes (28 - 21 = 7, and 2^7 = 128).
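After either cluster is created, you can confirm the Pod CIDR that Kubernetes actually assigned to each node by reading it back from the node objects (this assumes kubectl is configured to talk to the cluster):

```shell
# List each node together with the Pod CIDR assigned to it.
kubectl get nodes \
  -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
```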

Setting the maximum number of Pods in a new node pool for an existing cluster

You can also specify the maximum number of Pods per node when creating a node pool in an existing cluster. Creating a new node pool lets you optimize IP address allocation, even in existing clusters where default-max-pods-per-node wasn't configured at the cluster level.

gcloud

gcloud container node-pools create pool-name \
  --cluster=cluster-name \
  --max-pods-per-node=30

Console

  1. Visit the Google Kubernetes Engine menu in Cloud Console.

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Click Add Node Pool.

  4. From the navigation pane, click Nodes.

  5. Under Networking, enter a value for the Maximum Pods per node field. GKE uses this value to tune the size of the IP address range assigned to nodes.

  6. Click Create.

This value overrides the default-max-pods-per-node option, which is applied at the cluster level. If you omit the max-pods-per-node option when creating a node pool, the cluster-level default is used.
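One way to check the effective value on an existing node pool is to read it back from the API (pool-name and cluster-name are the placeholders used above):

```shell
# Read back the effective maximum Pods per node for a node pool.
gcloud container node-pools describe pool-name \
  --cluster=cluster-name \
  --format="value(maxPodsConstraint.maxPodsPerNode)"
```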

What's next