Creating a VPC-native cluster

This page explains how to configure VPC-native clusters in Google Kubernetes Engine.

Overview

In GKE, clusters can be distinguished according to the way they route traffic from one Pod to another Pod. A cluster that uses Alias IPs is called a VPC-native cluster. A cluster that uses Google Cloud Platform Routes is called a routes-based cluster.

Benefits of Alias IPs

Using Alias IP addresses has several benefits:

  • Pod IP addresses are natively routable within the GCP network (including via VPC Network Peering), and no longer use up route quota.
  • Pod IPs are reserved in the network before cluster creation, which prevents conflict with other compute resources.
  • Firewall controls for Pods can be applied separately from their nodes.
  • The networking layer can perform anti-spoofing checks to ensure that egress traffic is not sent with arbitrary source IP addresses.
  • Aliased IP addresses can be announced through BGP by Cloud Router.
  • Alias IP addresses allow Pods to directly access hosted services without using a NAT gateway.

Restrictions

  • You cannot migrate a VPC-native cluster to a routes-based cluster.

  • You cannot migrate a routes-based cluster to a VPC-native cluster.

  • You cannot use legacy networks with VPC-native clusters. To create a cluster in a legacy network, create a routes-based cluster.

  • Cluster IPs for internal Services are available only from within the cluster. If you want to access a Kubernetes Service from within the VPC, but from outside of the cluster (for example, from a Compute Engine instance), use an internal load balancer.

Before you begin

To prepare for this task, perform the following steps:

  • Ensure that you have enabled the Google Kubernetes Engine API.
  • Ensure that you have installed the Cloud SDK.
  • Set your default project ID:
    gcloud config set project [PROJECT_ID]
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone [COMPUTE_ZONE]
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region [COMPUTE_REGION]
  • Update gcloud to the latest version:
    gcloud components update

Secondary Ranges

VPC-native clusters use two secondary IP address ranges:

  • An address range for Pods
  • An address range for Services

The allocation scheme in a VPC-native cluster differs from the scheme used in a routes-based cluster. In a routes-based cluster, one CIDR range is allocated for both Pods and Services. In a VPC-native cluster, one CIDR range is allocated for Pods, and a separate CIDR range is allocated for Services.

Secondary Range Management

There are two ways to associate secondary ranges with your cluster:

Managed by GKE

You can have GKE manage the secondary ranges. This is the default mode used when you create a cluster.

When you create a cluster, you can specify the CIDR ranges for your Pods and Services or you can define sizes for your Pod and Service ranges. For example, you could specify a range of 10.0.0.0/16 or you could specify a size of /16.

User-managed

You can manually create the secondary ranges, and then create a cluster that uses those ranges. If you manually create secondary ranges, you must manage them yourself.

With Shared VPC, the owner of the host project creates the secondary ranges and passes them to tenants for use with their clusters in service projects. Users who create GKE clusters in service projects cannot create secondary ranges with their tenant credentials.

Considerations for Cluster Sizing

The maximum number of Pods and Services for a given GKE cluster is determined by the size of the cluster's secondary ranges. You cannot change secondary range sizes after you create a cluster. When you create a cluster, ensure that you choose secondary ranges large enough to accommodate the cluster's anticipated growth.

The following table outlines the three ranges you must consider: nodes, Pods, and Services:
Nodes

Node IP addresses are taken from the primary range of the subnetwork associated with the cluster. Your cluster subnetwork must be large enough to fit the total number of nodes in your cluster.

For example, if you plan to create a 900-node cluster, the subnet used with the cluster must be at least a /22 in size. A /22 subnet contains 2^(32-22) = 2^10 = 1024 addresses; 4 of these are reserved, leaving 1020 IP addresses, which is sufficient for the 900 node IP addresses needed for the cluster.

Pods

Each node currently allocates a /24 block (2^(32-24) = 2^8 = 256 addresses) of Pod IP addresses. These Pod IP addresses are taken from the associated secondary range for Pods. The Pod range, as determined by the --cluster-ipv4-cidr or --cluster-secondary-range-name flag, must be large enough to fit (total number of nodes × 256) IP addresses.

For example, for a 900-node cluster, you need 900 × 256 = 230,400 IP addresses. The IP addresses must come in /24-sized blocks, as that is the granularity assigned to each node. You need a secondary range of size /14 or larger: a /14 range provides 2^(32-14) = 2^18 = 262,144 IP addresses.

Services

Every cluster must reserve a range of IP addresses for Kubernetes Service cluster IP addresses. Service IP addresses are assigned from the associated secondary range for Services. Ensure that this block of IP addresses is sufficient for the total number of Services you anticipate running in the cluster. You define these ranges using the --services-ipv4-cidr or --services-secondary-range-name flag.

For example, for a cluster that runs at most 3,000 Services, you need 3,000 cluster IP addresses. You need a secondary range of size /20 or larger: a /20 range provides 2^(32-20) = 2^12 = 4,096 IP addresses.

Node creation is limited by the number of available addresses in the Pod address range. When the Pod address range does not have at least 256 free IP addresses for one additional node, node creation fails with the error Instance [instance name] creation failed: IP space of [cluster subnet] is exhausted.
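The sizing arithmetic above can be checked with Python's ipaddress module. The following sketch works through the 900-node example; the specific network addresses are illustrative, not defaults:

```python
import ipaddress

# Node subnet: a /22 yields 2^(32-22) = 1024 addresses, minus 4 reserved.
node_subnet = ipaddress.ip_network("10.128.0.0/22")  # example range
usable_nodes = node_subnet.num_addresses - 4
print(usable_nodes)  # 1020, enough for 900 nodes

# Pod range: each node consumes a /24 (256 addresses), so 900 nodes
# need 900 * 256 = 230,400 Pod IP addresses; a /14 provides 262,144.
pods_needed = 900 * 256
pod_range = ipaddress.ip_network("10.0.0.0/14")  # example range
print(pod_range.num_addresses >= pods_needed)  # True

# Service range: a /20 provides 2^12 = 4,096 cluster IP addresses.
service_range = ipaddress.ip_network("10.4.0.0/20")  # example range
print(service_range.num_addresses)  # 4096
```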

If you are using user-managed secondary ranges, refer to the following table to determine the maximum number of nodes supported by a given cluster subnet size and the Pod address range required to support them.

Subnet size for nodes | Maximum nodes | Maximum Pod IP addresses needed | Recommended Pod address range
/29 | 4 | 1,024 | /21
/28 | 12 | 3,072 | /20
/27 | 28 | 7,168 | /19
/26 | 60 | 15,360 | /18
/25 | 124 | 31,744 | /17
/24 | 252 | 64,512 | /16
/23 | 508 | 130,048 | /15
/22 | 1,020 | 261,120 | /14
/21 | 2,044 | 523,264 | /13
/20 | 4,092 | 1,047,552 | /12
/19 | 8,188 | 2,096,128 | /11 (maximum Pod address range)
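The rows in this table follow from two rules: a node subnet with prefix /N supplies 2^(32-N) - 4 usable node IP addresses, and each node consumes a /24 of Pod space, so the recommended Pod range prefix is N - 8. A minimal sketch in Python:

```python
def node_capacity(subnet_prefix: int) -> int:
    """Usable node IPs in a subnet: total addresses minus 4 reserved."""
    return 2 ** (32 - subnet_prefix) - 4

def recommended_pod_prefix(subnet_prefix: int) -> int:
    """Each node is allocated a /24 of Pod space, so the Pod range
    needs 8 more bits of address space than the node subnet."""
    return subnet_prefix - 8

for prefix in (29, 24, 22):
    nodes = node_capacity(prefix)
    print(f"/{prefix}: {nodes} nodes, {nodes * 256} Pod IPs, "
          f"Pod range /{recommended_pod_prefix(prefix)}")
```

For example, a /22 node subnet gives 1,020 nodes, which need 1,020 × 256 = 261,120 Pod IP addresses, hence the recommended /14 Pod range.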

Defaults and Limits for Range Sizes

Range | Default size | Minimum size | Maximum size

Nodes (cluster subnet)

  • Default: /20 = 2^12 = 4,096 addresses - 4 reserved = 4,092 IP addresses for nodes
  • Minimum: /29 = 2^3 = 8 addresses - 4 reserved = 4 IP addresses for nodes
  • Maximum: /7 = 2^25 = approximately 33 million IP addresses for nodes

Pods (secondary range)

  • Default: /14 = 2^18 = 262,144 IP addresses allocated for Pods
  • Minimum: /24 = 2^8 = 256 IP addresses allocated for Pods (/21 = 2^11 = 2,048 IP addresses for GKE-managed ranges)
  • Maximum: /11 = 2^21 = approximately 2 million IP addresses allocated for Pods

Services (secondary range)

  • Default: /20 = 2^12 = 4,096 IP addresses allocated for Services
  • Minimum: /27 = 2^5 = 32 IP addresses allocated for Services
  • Maximum: /16 = 2^16 = 65,536 IP addresses allocated for Services

Creating a VPC-native cluster

When you create a cluster, you get a VPC-native cluster by default. You can use the Google Cloud Platform Console, the gcloud command-line tool, or the GKE API.

gcloud

To create a VPC-native cluster, run the following command, where [CLUSTER_NAME] is the name you choose for the cluster:

gcloud container clusters create [CLUSTER_NAME]

Console

To create a VPC-native cluster, perform the following steps:

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. Click Create cluster.

  3. Configure your cluster as desired. Then, click Advanced options.

  4. In the VPC-native section, leave Enable VPC-native (using alias IP) selected.

  5. Click Create.

API

To create a VPC-native cluster, define an IPAllocationPolicy object in your cluster resource:

{
  "name": [CLUSTER_NAME],
  "description": [DESCRIPTION],
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "clusterIpv4CidrBlock": string,
    "servicesIpv4CidrBlock": string,
    "clusterSecondaryRangeName": string,
    "servicesSecondaryRangeName": string
  },
  ...
}

where:

  • "clusterIpv4CidrBlock" is the size and location of the CIDR range for Pods. It can be IP/size in CIDR notation (such as 10.0.0.0/14) or just /size (such as /14), in which case an unused range of the given size is chosen from the available space in your VPC network. If left blank, a valid range is found and created with a default size.
  • "servicesIpv4CidrBlock" is the size/location of the CIDR range for Services. See description of "clusterIpv4CidrBlock".
  • "clusterSecondaryRangeName" is the name of the secondary range for Pods. The secondary range must already exist and belong to the subnetwork associated with the cluster (such as the subnetwork specified with the --subnetwork flag).
  • "servicesSecondaryRangeName" is the name of the secondary range for Services. The secondary range must already exist and belong to the subnetwork associated with the cluster (such as the subnetwork specified by the --subnetwork flag).
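As an illustration, the request body could be assembled and serialized like this in Python; the cluster name and range values here are examples, not defaults:

```python
import json

# Illustrative cluster resource for a VPC-native cluster.
# Specify either explicit CIDR blocks (as here) or the names of
# pre-created secondary ranges -- not both for the same range.
cluster = {
    "name": "my-cluster",
    "ipAllocationPolicy": {
        "useIpAliases": True,
        "clusterIpv4CidrBlock": "10.0.0.0/14",   # Pods
        "servicesIpv4CidrBlock": "10.4.0.0/20",  # Services
        # Alternative, for user-managed ranges:
        # "clusterSecondaryRangeName": "my-pods",
        # "servicesSecondaryRangeName": "my-services",
    },
}
print(json.dumps(cluster, indent=2))
```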

For more cluster examples, see Examples.

Verifying the Cluster's Secondary Ranges

After you create a VPC-native cluster, you should verify the cluster's ranges.

gcloud

To verify the cluster, run the following command:

gcloud container clusters describe [CLUSTER_NAME]

In the command output, look under the ipAllocationPolicy field:

  • clusterIpv4Cidr is the secondary range for Pods
  • servicesIpv4Cidr is the secondary range for Services

Console

To verify the cluster, perform the following steps:

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. Select the desired cluster.

The secondary ranges are displayed in the Cluster section under the Details tab:

  • Container address range is the secondary range for Pods
  • Service address range is the secondary range for Services

Examples

The following sections provide examples for using VPC-native clusters.

Creating a cluster with specific IP ranges

gcloud

This command creates my-cluster with the given Pod and Service ranges. Secondary ranges are automatically created, attached to the default subnetwork, and managed by GKE:

gcloud container clusters create my-cluster \
  --enable-ip-alias --cluster-ipv4-cidr=10.0.0.0/14 \
  --services-ipv4-cidr=10.4.0.0/19

Console

This procedure creates my-cluster with the given Pod and Service ranges. Secondary ranges are automatically created, attached to the default subnetwork, and managed by GKE:

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. Click Create cluster.

  3. Configure your cluster as desired. Then, click Advanced options.

  4. In the VPC-native section, leave Enable VPC-native (using alias IP) selected.

  5. Fill in the Pod address range field with the pod range (example: 10.0.0.0/14)

  6. Fill in the Service address range field with the service range (example: 10.4.0.0/19)

Creating a cluster with specific sizes on a different subnetwork

This command creates my-cluster with the given Pod and Service range sizes. CIDR range location is determined from the free space in the VPC. Secondary ranges are created and attached to the subnetwork my-subnet. The secondary ranges are managed by GKE:

gcloud container clusters create my-cluster \
  --enable-ip-alias \
  --subnetwork my-subnet \
  --cluster-ipv4-cidr=/16 \
  --services-ipv4-cidr=/22

Creating a cluster with existing secondary ranges

For details on creating secondary ranges, refer to Alias IP Ranges Overview.

You need to create two secondary ranges on the subnet, one called my-pods for Pods and another called my-services for Services:

gcloud compute networks subnets update my-subnet \
    --add-secondary-ranges my-pods=10.0.0.0/16,my-services=10.1.0.0/16 \
    --region us-central1
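Before running the update, you can verify that the two secondary ranges do not overlap each other or the subnet's primary range with Python's ipaddress module; the primary range below is an assumed example:

```python
import ipaddress

primary = ipaddress.ip_network("10.2.0.0/20")   # assumed primary range of my-subnet
pods = ipaddress.ip_network("10.0.0.0/16")      # my-pods
services = ipaddress.ip_network("10.1.0.0/16")  # my-services

# All three ranges must be mutually disjoint for the update to succeed.
pairs = [(primary, pods), (primary, services), (pods, services)]
for a, b in pairs:
    print(a, b, "overlap!" if a.overlaps(b) else "disjoint")
```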

Then, to create your cluster:

gcloud

Run the following command:

gcloud container clusters create my-cluster \
  --enable-ip-alias \
  --cluster-secondary-range-name=my-pods \
  --services-secondary-range-name=my-services

Console

Perform the following steps:

  1. Visit the Google Kubernetes Engine menu in GCP Console.


  2. Click Create cluster.

  3. Configure your cluster as desired. Then, click Advanced options.

  4. From the VPC-native section, leave Enable VPC-native (using alias IP) selected.

  5. Disable Automatically create secondary ranges.

  6. Select the Network and Node subnet you want to use with your cluster.

  7. Select the Pod secondary range and Services secondary range to use with your cluster.

Creating a subnetwork automatically

The following procedures create a new VPC-native cluster with an automatically-generated subnetwork.

gcloud

To create a new cluster that uses Alias IP addresses, run the following command:

gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias \
  --create-subnetwork name=my-cluster-subnet

In this command, the new cluster is automatically configured with IP address ranges and a subnetwork. You can either provide a name for the subnetwork (in this example, name=my-cluster-subnet) or provide an empty string ("") to have a name automatically generated.

To configure the cluster yourself, run the following command:

gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias \
--create-subnetwork="" --cluster-ipv4-cidr [RANGE] --services-ipv4-cidr [RANGE]

In this command:

  • [CLUSTER_NAME] is the name you choose for the cluster.
  • The --create-subnetwork flag causes a subnetwork for the cluster to be automatically created.
  • The --cluster-ipv4-cidr flag indicates the size and location of the cluster's CIDR range. [RANGE] can be in the form [IP address]/[SIZE], such as 10.0.0.0/18, or simply /[SIZE], which causes the IP address to be automatically assigned. If this flag is omitted, a CIDR range is automatically assigned with a default size.

  • The --services-ipv4-cidr flag indicates the size and location of the Services CIDR range. The [RANGE] specification is identical to --cluster-ipv4-cidr. The range cannot overlap with the range given by --cluster-ipv4-cidr. If this flag is omitted, a CIDR range is automatically assigned with a default size.

API

To create a VPC-native cluster, define an IPAllocationPolicy object in your cluster resource:

{
  "name": [CLUSTER_NAME],
  "description": [DESCRIPTION],
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true,
    "subnetworkName": [SUBNET_NAME]
  },
  ...
}

createSubnetwork automatically creates and provisions a subnetwork for the cluster. subnetworkName is optional; if left empty, a name is automatically chosen for the subnetwork.

Troubleshooting

This section provides guidance for resolving issues related to VPC-native clusters.

IP aliases cannot be used with a legacy network

VPC-native is the default for new clusters created with Google Cloud Platform Console. However, you cannot create a VPC-native cluster in a legacy network.

To create a cluster in a legacy network, create a routes-based cluster.

The resource 'projects/[PROJECT_NAME]/regions/XXX/subnetworks/default' is not ready

Potential causes
There are parallel operations on the same subnet. For example, another VPC-native cluster is being created, or a secondary range is being added or deleted on the subnet.
Resolution
Retry the command.

Invalid value for field 'resource.secondaryIpRanges[1].ipCidrRange': 'XXX'. Invalid IPCidrRange: XXX conflicts with existing subnetwork 'default' in region 'XXX'.

Potential causes

  • Another VPC-native cluster is being created at the same time and is attempting to allocate the same ranges.

  • The same secondary range is being added to the subnetwork.

Resolution

If this error is returned on cluster creation when no secondary ranges were specified, retry the cluster creation command.

Not enough free IP space for Pods

Symptoms

  • Cluster is stuck in a provisioning state for an extended period of time.

  • Cluster creation returns a Managed Instance Group (MIG) error.

  • New nodes cannot be added to an existing cluster.

Potential causes

Unallocated space in the Pod secondary range is not large enough for the nodes requested in the cluster. For example, if a user specifies a /23 secondary range for Pods, and the user requests more than two nodes, then the cluster can become stuck in the provisioning state. Refer to Considerations for cluster sizing for guidance in properly sizing IP address ranges.
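As a quick check of the /23 example, the number of nodes a Pod secondary range can support follows from the per-node /24 allocation; a sketch using Python's ipaddress module:

```python
import ipaddress

def max_nodes_for_pod_range(cidr: str) -> int:
    """Each node consumes a /24 (256 addresses) from the Pod range."""
    return ipaddress.ip_network(cidr).num_addresses // 256

print(max_nodes_for_pod_range("10.0.0.0/23"))  # 2: a /23 supports only two nodes
print(max_nodes_for_pod_range("10.0.0.0/14"))  # 1024
```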

If this issue occurs during cluster creation, delete the cluster stuck in the provisioning state and create another with a secondary range that has sufficient space. For example, if a user specifies a secondary CIDR range smaller than /24 and assigns it to a cluster, the cluster can become stuck in the provisioning state.

Note that a subnet can have a maximum of 30 secondary ranges, and each VPC-native cluster requires at least two secondary ranges: one for Pods and one for Services.

If this issue occurs during node pool resize, you must make room for the new nodes by removing existing nodes.

What's next

Kubernetes Engine Documentation