Adding Pod IP address ranges


This page shows you how to enable discontiguous multi-Pod CIDR in VPC-native Google Kubernetes Engine (GKE) clusters.

Overview

With discontiguous multi-Pod CIDR, you can add new or existing secondary Pod IP address ranges to GKE clusters.

When you create a new node pool, by default the node pool uses the cluster's default Pod IP address range, also known as the cluster CIDR. With this feature, you can specify a Pod IP address range during node pool creation and the node pool uses that range instead of the cluster's default Pod IP address range.

The following diagram shows a cluster with a /24 CIDR block as its secondary Pod IP address range (256 IP addresses) and two nodes that each use a /25 CIDR block for Pod IP addresses (128 IP addresses each). The secondary Pod IP address range is exhausted, so you cannot add another node to the cluster. Instead of deleting and re-creating the cluster, you can use discontiguous multi-Pod CIDR to create a node pool with its own Pod IP address range. The cluster then expands to include a third node that uses a /28 CIDR block for Pod IP addresses.

Diagram: adding a node pool to a cluster with an exhausted secondary Pod IP address range by using discontiguous multi-Pod CIDR
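
The arithmetic behind this scenario can be sketched with Python's ipaddress module. The CIDR values below are illustrative, not taken from a real cluster:

```python
import ipaddress

# The cluster's secondary Pod IP address range: a /24 with 256 addresses.
pod_range = ipaddress.ip_network("192.168.0.0/24")

# Two existing nodes, each holding a /25 block (128 Pod IPs each).
node_cidrs = [
    ipaddress.ip_network("192.168.0.0/25"),
    ipaddress.ip_network("192.168.0.128/25"),
]

used = sum(n.num_addresses for n in node_cidrs)
print(pod_range.num_addresses - used)  # 0 -- the range is exhausted

# With discontiguous multi-Pod CIDR, a third node can draw its Pod IPs
# from a separate /28 range (16 addresses) instead of the exhausted /24.
extra_range = ipaddress.ip_network("10.10.0.0/28")
print(extra_range.num_addresses)  # 16
```

The new /28 does not need to be adjacent to the original /24, which is what makes the additional range "discontiguous".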

Benefits

  • You don't have to plan for future growth before creating clusters, which results in more efficient IP allocation.
  • You can fit clusters into fragmented IP address spaces.
  • You can reallocate IP addresses in response to changing business needs.

Limitations

  • Discontiguous multi-Pod CIDR is available in VPC-native clusters only.
  • All node pools in the cluster must run GKE versions 1.19.8-gke.1000 through 1.20, or 1.20.4-gke.500 and later.
  • Discontiguous multi-Pod CIDR requires Cloud SDK version 330 or later.
  • You cannot change a node pool's secondary Pod IP address range after the node pool is created. However, you can create a node pool with a new range and migrate workloads to the new node pool.
  • Discontiguous multi-Pod CIDR cannot be used with multi-cluster Services.

Caveats

  • If you use ip-masq-agent configured with the nonMasqueradeCIDRs parameter, you must update the nonMasqueradeCIDRs to include all Pod CIDR ranges.
  • If you use NetworkPolicy configured with ipBlock to specify traffic, you must update the cidr value to include all Pod CIDR ranges.
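
For example, the complete list of Pod CIDRs that a nonMasqueradeCIDRs setting (or a NetworkPolicy ipBlock cidr) must cover can be assembled with Python's ipaddress module. The ranges below are illustrative, not read from a real cluster:

```python
import ipaddress

# The cluster's default Pod range plus ranges added through
# discontiguous multi-Pod CIDR (illustrative values).
cluster_pod_cidr = ipaddress.ip_network("10.4.0.0/14")
node_pool_pod_cidrs = [ipaddress.ip_network("10.12.0.0/20")]

# Merge adjacent/overlapping ranges and sort; every entry in this list
# must appear in nonMasqueradeCIDRs (or be covered by an ipBlock cidr).
all_pod_cidrs = sorted(
    ipaddress.collapse_addresses([cluster_pod_cidr, *node_pool_pod_cidrs])
)
print([str(c) for c in all_pod_cidrs])  # ['10.4.0.0/14', '10.12.0.0/20']
```

If you forget to add a new node pool's Pod range to these lists, traffic from that pool's Pods is masqueraded or filtered differently from the rest of the cluster.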

Modified firewall rule

When GKE creates a cluster, it also creates a firewall rule named gke-[cluster-name]-[cluster-hash]-all that enables Pod-to-Pod communication.

When you create or delete a node pool with discontiguous multi-Pod CIDR enabled, GKE updates the source ranges of this firewall rule to include all CIDR ranges that the cluster uses for Pod IP addresses.

Before you begin

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with Autopilot or regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update

Creating a node pool with a new secondary Pod IP range

In this section, you create a node pool with a secondary Pod IP address range.

You can use the gcloud command-line tool or the GKE API.

gcloud

gcloud beta container node-pools create POOL_NAME \
  --cluster CLUSTER_NAME \
  --create-pod-ipv4-range name=RANGE_NAME,range=RANGE

Replace the following:

  • POOL_NAME: the name of the new node pool.
  • CLUSTER_NAME: the name of the cluster.
  • RANGE_NAME: an optional name of the new secondary Pod IP address range.
  • RANGE: an optional Pod IP address range provided as either a netmask (/20) or CIDR range (10.12.0.0/20). If you provide a netmask, GKE allocates a range of that size from the available space in the cluster's network. If you don't provide a value, GKE automatically allocates a /14 netmask, the default size for the subnet's secondary IP range for Pods.
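
Before requesting a specific RANGE, it can help to confirm that the candidate CIDR does not overlap ranges already in use in the VPC network. A minimal sketch with Python's ipaddress module, using illustrative values (GKE performs its own validation; this is only a pre-flight check):

```python
import ipaddress

# Ranges already allocated in the VPC network (illustrative values).
existing = [
    ipaddress.ip_network("10.0.0.0/16"),  # subnet primary range
    ipaddress.ip_network("10.4.0.0/14"),  # cluster's default Pod range
]

# Candidate value for RANGE in --create-pod-ipv4-range.
proposed = ipaddress.ip_network("10.12.0.0/20")

conflicts = [str(e) for e in existing if proposed.overlaps(e)]
print(conflicts)  # [] -- no overlap, so the range is safe to request
```

If the list is non-empty, pick a different CIDR block, or pass only a netmask and let GKE choose a free range for you.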

API

"nodePool": {
  "name": "POOL_NAME",
  ...
  "networkConfig": {
    "createPodRange": true,
    "podRange": "RANGE_NAME",
    "podIpv4CidrBlock": "RANGE"
  }
}

Replace the following:

  • POOL_NAME: the name of the new node pool.
  • RANGE_NAME: an optional name of the new secondary Pod IP address range.
  • RANGE: an optional Pod IP address range provided as either a netmask (/20) or CIDR range (10.12.0.0/20). If you provide a netmask, GKE allocates a range of that size from the available space in the cluster's network. If you don't provide a value, GKE automatically allocates a /14 netmask, the default size for the subnet's secondary IP range for Pods.

Creating a node pool using an existing secondary Pod IP range

In this section, you create a node pool with an existing secondary Pod IP address range.

You can use the gcloud command-line tool or the GKE API.

gcloud

gcloud beta container node-pools create POOL_NAME \
  --cluster CLUSTER_NAME \
  --pod-ipv4-range RANGE_NAME

Replace the following:

  • POOL_NAME: the name of the new node pool.
  • CLUSTER_NAME: the name of the cluster.
  • RANGE_NAME: the name of an existing secondary Pod IP address range in the cluster's subnetwork.

API

"nodePool": {
  "name": "POOL_NAME",
  ...
  "networkConfig": {
    "podRange": "RANGE_NAME"
  }
}

Replace the following:

  • POOL_NAME: the name of the new node pool.
  • RANGE_NAME: the name of an existing secondary Pod IP address range in the cluster's subnetwork.

Verifying the Pod CIDR block for a node pool

To determine which Pod CIDR block is used for Pods in a given node pool, use the following command:

gcloud beta container node-pools describe POOL_NAME \
  --cluster CLUSTER_NAME

The output is similar to the following:

...
networkConfig:
  podIpv4CidrBlock: 192.168.0.0/18
  podRange: podrange
...

If the node pool is using discontiguous multi-Pod CIDR, podRange and podIpv4CidrBlock display the configured values for this node pool.

If the node pool is not using discontiguous multi-Pod CIDR, podRange and podIpv4CidrBlock display the cluster's default values, clusterSecondaryRangeName and clusterIpv4CidrBlock from IPAllocationPolicy.
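
In other words, you can tell the two cases apart by comparing the node pool's podRange to the cluster's clusterSecondaryRangeName. A minimal sketch with hypothetical values copied from describe output:

```python
# Hypothetical values taken from `gcloud container clusters describe`
# (IPAllocationPolicy.clusterSecondaryRangeName) and
# `gcloud container node-pools describe` (networkConfig.podRange).
cluster_default_range = "gke-cluster-pods"
node_pool_range = "podrange"

# A pool that names a different range than the cluster default is using
# discontiguous multi-Pod CIDR.
uses_discontiguous_cidr = node_pool_range != cluster_default_range
print(uses_discontiguous_cidr)  # True
```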

Troubleshooting

You can enable VPC Flow Logs to determine whether packets are being sent to the correct nodes.

What's next