This page explains how to configure VPC-native clusters in Google Kubernetes Engine (GKE).
To learn more about the benefits and requirements of VPC-native clusters, see the overview for VPC-native clusters.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI.
Procedures
Use the following procedures to create a VPC-native cluster and verify its configured Pod and Service IP address ranges.
Create a cluster in an existing subnet
The following instructions demonstrate how to create a VPC-native GKE cluster in an existing subnet with your choice of secondary range assignment method.
gcloud
To use a secondary range assignment method of managed by GKE:
gcloud container clusters create CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --enable-ip-alias \
    --subnetwork=SUBNET_NAME \
    --cluster-ipv4-cidr=POD_IP_RANGE \
    --services-ipv4-cidr=SERVICES_IP_RANGE
To use a secondary range assignment method of user-managed:
gcloud container clusters create CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --enable-ip-alias \
    --subnetwork=SUBNET_NAME \
    --cluster-secondary-range-name=SECONDARY_RANGE_PODS \
    --services-secondary-range-name=SECONDARY_RANGE_SERVICES
Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- COMPUTE_REGION: the compute region for the cluster. To create a zonal cluster, replace this flag with --zone=COMPUTE_ZONE, where COMPUTE_ZONE is a compute zone.
- SUBNET_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster. If omitted, GKE attempts to use a subnet in the default VPC network in the cluster's region.
- If the secondary range assignment method is managed by GKE:
  - POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If you omit the --cluster-ipv4-cidr option, GKE chooses a /14 range (2^18 addresses) automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and does not include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
  - SERVICES_IP_RANGE: an IP address range in CIDR notation (for example, 10.4.0.0/19) or the size of a CIDR block's subnet mask (for example, /19). This is used to create the subnet's secondary IP address range for Services.
- If the secondary range assignment method is user-managed:
  - SECONDARY_RANGE_PODS: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
  - SECONDARY_RANGE_SERVICES: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
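If you use the user-managed method, the subnet must already contain the two named secondary ranges. A minimal sketch of creating such a subnet with the gcloud CLI, assuming hypothetical primary and secondary ranges (adjust the ranges to your own IP plan):

gcloud compute networks subnets create SUBNET_NAME \
    --network=NETWORK_NAME \
    --region=COMPUTE_REGION \
    --range=10.128.0.0/20 \
    --secondary-range=SECONDARY_RANGE_PODS=10.0.0.0/14,SECONDARY_RANGE_SERVICES=10.4.0.0/19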
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
From the navigation pane, under Cluster, click Networking.
In the Network drop-down list, select a VPC.
In the Node subnet drop-down list, select a subnet for the cluster.
Ensure the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.
Select the Automatically create secondary ranges checkbox if you want the secondary range assignment method to be managed by GKE. Clear this checkbox if you have already created secondary ranges for the chosen subnet and would like the secondary range assignment method to be user-managed.
In the Pod address range field, enter a Pod range, such as 10.0.0.0/14.
In the Service address range field, enter a Service range, such as 10.4.0.0/19.
Configure your cluster.
Click Create.
Terraform
You can create a VPC-native cluster by using a Terraform module.
For example, you can add the following block to your Terraform configuration:
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google"
version = "~> 12.0"
project_id = "PROJECT_ID"
name = "CLUSTER_NAME"
region = "COMPUTE_REGION"
network = "NETWORK_NAME"
subnetwork = "SUBNET_NAME"
ip_range_pods = "SECONDARY_RANGE_PODS"
ip_range_services = "SECONDARY_RANGE_SERVICES"
}
Replace the following:
- PROJECT_ID: your project ID.
- CLUSTER_NAME: the name of the GKE cluster.
- COMPUTE_REGION: the compute region for the cluster.
- NETWORK_NAME: the name of an existing network.
- SUBNET_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster.
- SECONDARY_RANGE_PODS: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
- SECONDARY_RANGE_SERVICES: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Services.
API
When you create a VPC-native cluster, you define an IPAllocationPolicy object. You can reference existing subnet secondary IP address ranges or you can specify CIDR blocks. Reference existing subnet secondary IP address ranges to create a cluster whose secondary range assignment method is user-managed. Provide CIDR blocks if you want the range assignment method to be managed by GKE.
{
"name": CLUSTER_NAME,
"description": DESCRIPTION,
...
"ipAllocationPolicy": {
"useIpAliases": true,
"clusterIpv4CidrBlock" : string,
"servicesIpv4CidrBlock" : string,
"clusterSecondaryRangeName" : string,
"servicesSecondaryRangeName": string,
},
...
}
These fields have the following meanings:
"clusterIpv4CidrBlock"
: the size/location of the CIDR range for Pods. Determines the size of the secondary range for Pods, and can be IP/size in CIDR notation, such as10.0.0.0/14
, or /size, such as/14
. An empty space with the given size is chosen from the available space in your VPC. If left blank, a valid range is found and created with a default size."servicesIpv4CidrBlock"
: the size/location of the CIDR range for Services. See description of"clusterIpv4CidrBlock"
."clusterSecondaryRangeName"
: the name of the secondary range for Pods. The secondary range must already exist and belong to the subnetwork associated with the cluster, such as the subnetwork specified with the--subnetwork
flag."serviceSecondaryRangeName"
: the name of the secondary range for Services. The secondary range must already exist and belong to the subnetwork associated with the cluster, such as the subnetwork specified with by--subnetwork
flag.
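For reference, a minimal sketch of calling the clusters.create REST method with this object by using curl and an access token from the gcloud CLI. The node pool settings and placeholder values are illustrative, and your request might need additional fields:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/COMPUTE_REGION/clusters" \
  -d '{
    "cluster": {
      "name": "CLUSTER_NAME",
      "nodePools": [{"name": "default-pool", "initialNodeCount": 1}],
      "ipAllocationPolicy": {
        "useIpAliases": true,
        "clusterSecondaryRangeName": "SECONDARY_RANGE_PODS",
        "servicesSecondaryRangeName": "SECONDARY_RANGE_SERVICES"
      }
    }
  }'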
Create a cluster and select the control plane IP range
By default, public clusters with Private Service Connect use the primary subnet range to provision the private control plane endpoint. You can override this default setting by selecting a different subnet range. This can be done at cluster creation time only.
gcloud
Create a public cluster with Private Service Connect:
gcloud container clusters create CLUSTER_NAME \
--private-endpoint-subnetwork=SUBNET_NAME \
--region=COMPUTE_REGION
Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- SUBNET_NAME: the name of an existing subnet.
- COMPUTE_REGION: the compute region for the cluster. To create a zonal cluster, replace this flag with --zone=COMPUTE_ZONE, where COMPUTE_ZONE is a compute zone.
GKE creates a public cluster with Private Service Connect.
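If the subnet that you pass to --private-endpoint-subnetwork doesn't exist yet, a minimal sketch of creating it first (the primary range shown is illustrative):

gcloud compute networks subnets create SUBNET_NAME \
    --network=NETWORK_NAME \
    --region=COMPUTE_REGION \
    --range=10.3.0.0/24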
Console
Prerequisite
To assign a subnet to the control plane of a new cluster, you must add a subnet first.
Create a cluster that uses your subnet to provision the control plane
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
For the Name, enter your cluster name.
For Standard clusters, from the navigation pane, under Cluster, click Networking.
Select the Public cluster radio button.
In the Advanced networking options section, select the Override control plane's default private endpoint subnet checkbox.
In the Private endpoint subnet list, select your created subnet.
Click Done. Add additional authorized networks as needed.
Create a cluster and subnet simultaneously
The following directions demonstrate how to create a VPC-native GKE cluster and subnet at the same time. The secondary range assignment method is managed by GKE when you perform these two steps with one command.
gcloud
To create a VPC-native cluster and subnet simultaneously:
gcloud container clusters create CLUSTER_NAME \
--region=COMPUTE_REGION \
--enable-ip-alias \
--create-subnetwork name=SUBNET_NAME,range=NODE_IP_RANGE \
--cluster-ipv4-cidr=POD_IP_RANGE \
--services-ipv4-cidr=SERVICES_IP_RANGE
Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- COMPUTE_REGION: the compute region for the cluster. To create a zonal cluster, replace this flag with --zone=COMPUTE_ZONE, where COMPUTE_ZONE is a compute zone.
- SUBNET_NAME: the name of the subnet to create. The subnet's region is the same region as the cluster (or the region containing the zonal cluster). Use an empty string (name="") if you want GKE to generate a name for you.
- NODE_IP_RANGE: an IP address range in CIDR notation, such as 10.5.0.0/20, or the size of a CIDR block's subnet mask, such as /20. This is used to create the subnet's primary IP address range for nodes. If omitted, GKE chooses an available IP range in the VPC with a size of /20.
- POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If omitted, GKE uses a randomly chosen /14 range containing 2^18 addresses. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and does not include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
- SERVICES_IP_RANGE: an IP address range in CIDR notation, such as 10.4.0.0/19, or the size of a CIDR block's subnet mask, such as /19. This is used to create the subnet's secondary IP address range for Services. If omitted, GKE uses /20, the default Services IP address range size.
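For example, the following sketch uses illustrative names and ranges to create a regional cluster and a new subnet in one command:

gcloud container clusters create example-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --create-subnetwork name=example-subnet,range=10.5.0.0/20 \
    --cluster-ipv4-cidr=10.0.0.0/14 \
    --services-ipv4-cidr=10.4.0.0/19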
Console
You cannot create a cluster and subnet simultaneously using the Google Cloud console. Instead, first create a subnet then create the cluster in an existing subnet.
API
To create a VPC-native cluster, define an IPAllocationPolicy object in your cluster resource:
{
"name": CLUSTER_NAME,
"description": DESCRIPTION,
...
"ipAllocationPolicy": {
"useIpAliases": true,
"createSubnetwork": true,
"subnetworkName": SUBNET_NAME
},
...
}
The createSubnetwork
field automatically creates and provisions a
subnetwork for the cluster. The subnetworkName
field is optional; if left
empty, a name is automatically chosen for the subnetwork.
Use non-RFC 1918 private IP address ranges
GKE clusters can use private IP address ranges outside of the RFC 1918 ranges for nodes, Pods, and Services. See valid ranges in the VPC network documentation for a list of non-RFC 1918 private ranges that can be used as internal IP addresses for subnet ranges.
Non-RFC 1918 private IP address ranges are compatible with both private clusters and non-private clusters.
Non-RFC 1918 private ranges are subnet ranges — you can use them exclusively or in conjunction with RFC 1918 subnet ranges. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. If you use non-RFC 1918 ranges, keep the following in mind:
Subnet ranges, even those using non-RFC 1918 ranges, must be assigned manually or by GKE before the cluster's nodes are created. You cannot switch to or cease using non-RFC 1918 subnet ranges unless you replace the cluster.
Internal TCP/UDP load balancers only use IP addresses from the subnet's primary IP address range. To create an internal TCP/UDP load balancer with a non-RFC 1918 address, your subnet's primary IP address range must be non-RFC 1918.
Destinations outside your cluster might have difficulties receiving traffic from private, non-RFC 1918 ranges. For example, RFC 1112 (class E) private ranges are typically used as multicast addresses. If a destination outside of your cluster cannot process packets whose sources are private IP addresses outside of the RFC 1918 range, you can do both of the following:
Use an RFC 1918 range for the subnet's primary IP address range. This way, nodes in the cluster use RFC 1918 addresses.
Ensure that your cluster is running the IP masquerade agent and that the destinations are not in the nonMasqueradeCIDRs list. This way, packets sent from Pods have their sources changed (SNAT) to node addresses, which are RFC 1918.
Enable privately used public IP address ranges
GKE clusters can privately use certain public IP address ranges as internal, subnet IP address ranges. You can privately use any public IP address range except for certain restricted ranges, as described in the VPC network documentation.
Your cluster must be a VPC-native cluster in order to use privately used public IP address ranges. Routes-based clusters are not supported.
Privately used public ranges are subnet ranges – you can use them exclusively or in conjunction with other subnet ranges that use private addresses. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. Keep the following in mind when re-using public IP addresses privately:
When you use a public IP address range as a subnet range, your cluster can no longer communicate with systems on the Internet that use that public range – the range becomes an internal IP address range in the cluster's VPC network.
Subnet ranges, even those that privately use public IP address ranges, must be assigned manually or by GKE before the cluster's nodes are created. You cannot switch to or cease using privately used public IP addresses unless you replace the cluster.
GKE by default implements SNAT on the nodes to public IP destinations. If you use privately used public IP address ranges for the Pod CIDR, the SNAT rules are applied to Pod-to-Pod traffic. To avoid this, you have two options:
- Create your cluster with the --disable-default-snat flag. For more details about this flag, refer to IP masquerading in GKE.
- Configure the ip-masq-agent configMap so that the nonMasqueradeCIDRs list includes at least the Pod CIDR, the Service CIDR, and the nodes subnet, as in the sketch after this list.
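A minimal sketch of such a configMap, using the illustrative ranges from the example cluster later in this section (replace them with your own nodes subnet, Pod CIDR, and Service CIDR):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/24   # nodes subnet
      - 5.0.0.0/16    # Pod CIDR
      - 5.1.0.0/16    # Service CIDR
    resyncInterval: 60s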
Example cluster with privately used public ranges
The following example uses the gcloud CLI to create a cluster that uses privately re-used public IP address ranges. You must use the following flag:
--enable-ip-alias
: This creates a VPC-native cluster, which is required when you privately use public IP address ranges.
The following command creates a VPC-native, private cluster with the following properties:
- Nodes use the 10.0.0.0/24 primary IP address range of the subnet.
- Pods privately use the 5.0.0.0/16 public IP address range as the subnet's secondary IP address range for Pods.
- Services privately use the 5.1.0.0/16 public IP address range as the subnet's secondary IP address range for Services.
- The internal IP address range for the control plane is 172.16.0.16/28.
gcloud container clusters create CLUSTER_NAME \
--enable-ip-alias \
--enable-private-nodes \
--disable-default-snat \
--zone=COMPUTE_ZONE \
--create-subnetwork name=cluster-subnet,range=10.0.0.0/24 \
--cluster-ipv4-cidr=5.0.0.0/16 \
--services-ipv4-cidr=5.1.0.0/16 \
--master-ipv4-cidr=172.16.0.16/28
Use an IPv4/IPv6 dual-stack network to create a dual-stack cluster
You can create a cluster with IPv4/IPv6 dual-stack networking on a new or existing dual-stack subnet. This section shows you how to complete all the following tasks:
- Create a dual-stack subnet (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).
- Update an existing subnet to a dual-stack subnet (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).
- Create a cluster with dual-stack networking (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later). GKE Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet. After cluster creation, you can update the Autopilot cluster to be IPv4-only.
- Create a dual-stack cluster and a dual-stack subnet simultaneously (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).
To learn more about the benefits and requirements of GKE clusters with dual-stack networking, see the VPC-native cluster overview.
Create a dual-stack subnet
To create a dual-stack subnet, run the following command:
gcloud compute networks subnets create SUBNET_NAME \
--stack-type=ipv4-ipv6 \
--ipv6-access-type=ACCESS_TYPE \
--network=NETWORK_NAME \
--range=PRIMARY_RANGE \
--region=COMPUTE_REGION
Replace the following:
- SUBNET_NAME: the name of the subnet that you choose.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- NETWORK_NAME: the name of the network that will contain the new subnet. This network must meet the following conditions:
  - It must be a VPC network.
  - It must be a custom mode network. For more information, see how to switch a VPC network from auto mode to custom mode.
  - If you replace ACCESS_TYPE with INTERNAL, the network must use Unique Local IPv6 Unicast Addresses (ULA).
- PRIMARY_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
- COMPUTE_REGION: the compute region for the cluster.
You can use this subnet when creating an Autopilot or a Standard cluster.
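For example, a sketch with illustrative values that creates an externally routable dual-stack subnet in a hypothetical custom mode network:

gcloud compute networks subnets create dual-stack-subnet \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=EXTERNAL \
    --network=my-custom-vpc \
    --range=10.10.0.0/20 \
    --region=us-central1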
Update an existing subnet to a dual-stack subnet
To update an existing subnet to a dual-stack subnet, run the following command. Updating a subnet does not affect any existing IPv4 clusters in the subnet.
gcloud compute networks subnets update SUBNET_NAME \
--stack-type=ipv4-ipv6 \
--ipv6-access-type=ACCESS_TYPE \
--region=COMPUTE_REGION
Replace the following:
- SUBNET_NAME: the name of the subnet.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- COMPUTE_REGION: the compute region for the cluster.
Create a cluster with dual-stack networking
To create a cluster with an existing dual-stack
subnet, you can use gcloud CLI
or the Google Cloud console:
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
    --region REGION \
    --cluster-version VERSION \
    --network NETWORK_NAME \
    --subnetwork SUBNET_NAME
Replace the following:
- CLUSTER_NAME: the name of your new Autopilot cluster.
- REGION: the region for your cluster, such as us-central1.
- VERSION: the latest version of GKE 1.25. Check the GKE release notes to learn the latest version released. You can also use the --release-channel option to select a release channel.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode network. For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the dual-stack subnet. To learn more, see how to create a dual-stack subnet. GKE Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet. After cluster creation, you can update the Autopilot cluster to be IPv4-only.
For Standard clusters, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --enable-dataplane-v2 \
    --stack-type=ipv4-ipv6 \
    --cluster-version=VERSION \
    --network=NETWORK_NAME \
    --subnetwork=SUBNET_NAME \
    --region=COMPUTE_REGION
Replace the following:
- CLUSTER_NAME: the name of the new cluster.
- VERSION: the latest version of GKE 1.24. Check the GKE release notes to learn the latest version released. You can also use the --release-channel option to select a release channel.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode network that uses Unique Local IPv6 Unicast Addresses (ULA). For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the subnet.
- COMPUTE_REGION: the compute region for the cluster.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
In the Standard or Autopilot section, click Configure.
Configure your cluster as needed.
From the navigation pane, under Cluster, click Networking.
In the Network list, select the name of your network.
In the Node subnet list, select the name of your dual-stack subnet.
For Standard clusters, select the IPv4 and IPv6 (dual stack) radio button. This option is available only if you selected a dual-stack subnet.
Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet.
Click Create.
Create a dual-stack cluster and a subnet simultaneously
You can create a subnet and a dual-stack cluster simultaneously. GKE creates a dual-stack subnet and assigns an external IPv6 primary range to the subnet.
For Autopilot clusters, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
    --region REGION \
    --cluster-version VERSION \
    --network NETWORK_NAME \
    --create-subnetwork name=SUBNET_NAME
Replace the following:
- CLUSTER_NAME: the name of your new Autopilot cluster.
- REGION: the region for your cluster, such as us-central1.
- VERSION: the latest version of GKE 1.25. Check the GKE release notes to learn the latest version released. You can also use the --release-channel option to select a release channel.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode network that uses Unique Local IPv6 Unicast Addresses (ULA). For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the new subnet. GKE can create the subnet based on your organization policies:
  - If your organization policies allow dual-stack, and the network is a custom mode network, GKE creates a dual-stack subnet and assigns an external IPv6 primary range to the subnet.
  - If your organization policies don't allow dual-stack, or if the network is in auto mode, GKE creates a single-stack (IPv4) subnet.
For Standard clusters, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=ACCESS_TYPE \
    --cluster-version=VERSION \
    --network=NETWORK_NAME \
    --create-subnetwork name=SUBNET_NAME,range=PRIMARY_RANGE \
    --region=COMPUTE_REGION
Replace the following:
- CLUSTER_NAME: the name of the new cluster that you choose.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for private clusters and EXTERNAL for public clusters. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- VERSION: the latest version of GKE 1.24. Check the GKE release notes to learn the latest version released. You can also use the --release-channel option to select a release channel.
- NETWORK_NAME: the name of the network that will contain the new subnet. This network must meet the following conditions:
  - It must be a VPC network.
  - It must be a custom mode network. For more information, see how to switch a VPC network from auto mode to custom mode.
  - If you replace ACCESS_TYPE with INTERNAL, the network must use Unique Local IPv6 Unicast Addresses (ULA).
- SUBNET_NAME: the name of the new subnet that you choose.
- PRIMARY_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
- COMPUTE_REGION: the compute region for the cluster.
Update the stack type on an existing cluster
You can change the stack type of an existing cluster. Before you change the stack type on an existing cluster, consider the following limitations:
- Changing the stack type is supported in Autopilot and Standard, in new GKE clusters created in version 1.25 or later. GKE clusters that have been upgraded from version 1.24 to version 1.25 or 1.26 might get validation errors when enabling dual-stack networking. In case of errors, contact the Google Cloud support team.
- Changing the stack type is a disruptive operation because GKE restarts components in both the control plane and nodes.
- GKE respects your configured maintenance windows when recreating nodes. This means that the new stack type won't be operational on the cluster until the next maintenance window occurs. If you prefer not to wait, you can manually upgrade the node pool by setting the --cluster-version flag to the same GKE version the control plane is already running. You must use the gcloud CLI if you use this workaround. For more information, see caveats for maintenance windows.
- Changing the stack type does not automatically change the IP family of existing Services. The following conditions apply:
  - If you change a single-stack cluster to dual-stack, the existing Services remain single stack.
  - If you change a dual-stack cluster to single stack, the existing Services with IPv6 addresses go into an error state. Delete the Service and create one with the correct ipFamilies, as in the sketch following this list. To learn more, see an example of how to set up a Deployment.
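A minimal sketch of a replacement Service that sets the IP family explicitly; the Service name, selector, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4                   # set to the family that matches the cluster's stack type
  selector:
    app: my-app              # illustrative selector
  ports:
    - port: 80
      targetPort: 8080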
To update an existing
VPC-native cluster, you can use gcloud CLI
or the Google Cloud console:
gcloud
Run the following command:
gcloud container clusters update CLUSTER_NAME \
    --stack-type=STACK_TYPE \
    --region=COMPUTE_REGION
Replace the following:
- CLUSTER_NAME: the name of the cluster you want to update.
- STACK_TYPE: the stack type. Replace with one of the following values:
  - ipv4: to update a dual-stack cluster to an IPv4-only cluster. GKE uses the primary IPv4 address range of the cluster's subnet.
  - ipv4-ipv6: to update an existing IPv4 cluster to dual-stack. You can only change a cluster to dual-stack if the underlying subnet supports dual-stack. To learn more, see Update an existing subnet to a dual-stack subnet.
- COMPUTE_REGION: the compute region for the cluster.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Next to the cluster you want to edit, click more_vert Actions, then click edit Edit.
In the Networking section, next to Stack type, click edit Edit.
In the Edit stack type dialog, select the checkbox for the cluster stack type you need.
Click Save Changes.
Verify the stack type, Pod, and Service IP address ranges
After you create a VPC-native cluster, you can verify its Pod and Service ranges.
gcloud
To verify the cluster, run the following command:
gcloud container clusters describe CLUSTER_NAME
The output has an ipAllocationPolicy
block. The stackType
field
describes the type of network definition. For each type,
you can see the following network information:
IPv4 network information:
- clusterIpv4Cidr is the secondary range for Pods.
- servicesIpv4Cidr is the secondary range for Services.
IPv6 network information (if a cluster has dual-stack networking):
- ipv6AccessType: the routability to the public internet. INTERNAL for private IPv6 addresses and EXTERNAL for public IPv6 addresses.
- subnetIpv6CidrBlock: the secondary IPv6 address range for the new subnet.
- servicesIpv6CidrBlock: the address range assigned for the IPv6 Services on the dual-stack cluster.
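To view only this block, you can pass a format projection to the describe command, as in this sketch (add --region or --zone to match your cluster's location):

gcloud container clusters describe CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --format="yaml(ipAllocationPolicy)"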
Console
To verify the cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to inspect.
The secondary ranges are displayed in the Networking section:
- Pod address range is the secondary range for Pods
- Service address range is the secondary range for Services
Troubleshooting
This section provides guidance for resolving issues related to VPC-native clusters. You can also view GKE IP address utilization insights.
The default network resource is not ready
- Symptoms
You get an error message similar to the following:
projects/[PROJECT_NAME]/regions/XXX/subnetworks/default
- Potential causes
There are parallel operations on the same subnet. For example, another VPC-native cluster is being created, or a secondary range is being added or deleted on the subnet.
- Resolution
Retry the command.
Invalid value for IPCidrRange
- Symptoms
You get an error message similar to the following:
resource.secondaryIpRanges[1].ipCidrRange': 'XXX'. Invalid IPCidrRange: XXX conflicts with existing subnetwork 'default' in region 'XXX'
- Potential causes
Another VPC-native cluster is being created at the same time and is attempting to allocate the same ranges in the same VPC network.
The same secondary range is being added to the subnetwork in the same VPC network.
- Resolution
If this error is returned on cluster creation when no secondary ranges were specified, retry the cluster creation command.
Not enough free IP space for Pods
- Symptoms
Cluster is stuck in a provisioning state for an extended period of time.
Cluster creation returns a Managed Instance Group (MIG) error.
When you add one or more nodes to a cluster, the following error appears:
[IP_SPACE_EXHAUSTED] Instance 'INSTANCE_NAME' creation failed: IP space of 'projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME-SECONDARY_RANGE_NAME' is exhausted.
- Potential causes
Unallocated space in the Pod IP address range is not large enough for the nodes requested in the cluster. For example, if a cluster's Pod IP address range has a netmask whose size is /23 (512 addresses), and the maximum Pods per node is 110, you cannot create any more than two nodes. Each node is assigned an alias IP address range with a netmask whose size is /24.
- Solutions
You can add Pod IP ranges to the cluster using discontiguous multi-Pod CIDR.
Create a replacement cluster after reviewing and planning appropriately sized primary and secondary IP address ranges. Refer to IP ranges for VPC-native clusters and IP range planning.
Create a new node pool with a smaller maximum number of Pods per node, as shown in the sketch after this list. If possible, migrate workloads to that node pool, and then delete the previous node pool. Reducing the maximum number of Pods per node allows you to support more nodes on a fixed secondary IP address range for Pods. Refer to Subnet secondary IP address range for Pods and Node limiting ranges for details about the calculations involved.
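A minimal sketch of creating such a node pool; the pool name and Pod limit are illustrative:

gcloud container node-pools create smaller-pods-pool \
    --cluster=CLUSTER_NAME \
    --region=COMPUTE_REGION \
    --max-pods-per-node=64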
Confirm whether default SNAT is disabled
Use the following command to check the status of default SNAT:
gcloud container clusters describe CLUSTER_NAME
Replace CLUSTER_NAME
with the name of your cluster.
The output is similar to the following:
networkConfig:
disableDefaultSnat: true
network: ...
Cannot use --disable-default-snat without --enable-ip-alias
This error message, and the related message must disable default sNAT (--disable-default-snat) before using public IP address privately in the cluster, mean that you should explicitly set the --disable-default-snat flag when you create the cluster, because you are using public IP addresses in your private cluster.
If you see error messages like cannot disable default sNAT ...
, this means
the default SNAT can't be disabled in your cluster. Please review your cluster
configuration.
Debugging Cloud NAT with default SNAT disabled
If you have a private cluster created with the --disable-default-snat
flag and
have set up Cloud NAT for internet access and you aren't seeing internet-bound
traffic from your Pods, make sure that the Pod range is included in the
Cloud NAT configuration.
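A sketch of inspecting that configuration (the router and NAT gateway names are hypothetical); check that the subnet's secondary range for Pods appears among the ranges that are translated:

gcloud compute routers nats describe NAT_NAME \
    --router=ROUTER_NAME \
    --region=COMPUTE_REGION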
If there is a problem with Pod to Pod communication, examine the iptables rules on the nodes to verify that the Pod ranges are not masqueraded by iptables rules. For more information, see the GKE IP masquerade documentation. If you have not configured an IP masquerade agent for the cluster, GKE automatically ensures that Pod to Pod communication is not masqueraded. However, if an IP masquerade agent is configured, it will override the default IP masquerade rules. Verify that additional rules are configured in the IP masquerade agent to ignore masquerading the Pod ranges.
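If the IP masquerade agent is running, its rules typically live in a dedicated chain in the nat table on each node. A sketch of checking them over SSH, assuming the chain uses the agent's usual default name IP-MASQ; Pod ranges that must not be masqueraded should appear as RETURN rules:

sudo iptables -t nat -L IP-MASQ -n -v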
The dual-stack cluster network communication is not working as expected
- Potential causes
- The firewall rules created by the GKE cluster don't include the allocated IPv6 addresses.
- Resolution
- You can validate the firewall rule by following these steps:
Verify the firewall rule content:
gcloud compute firewall-rules describe FIREWALL_RULE_NAME
Replace FIREWALL_RULE_NAME with the name of the firewall rule.
Each dual-stack cluster creates a firewall rule that allows nodes and Pods to communicate with each other. The firewall rule content is similar to the following:
allowed:
- IPProtocol: esp
- IPProtocol: ah
- IPProtocol: sctp
- IPProtocol: tcp
- IPProtocol: udp
- IPProtocol: '58'
creationTimestamp: '2021-08-16T22:20:14.747-07:00'
description: ''
direction: INGRESS
disabled: false
enableLogging: false
id: '7326842601032055265'
kind: compute#firewall
logConfig:
  enable: false
name: gke-ipv6-4-3d8e9c78-ipv6-all
network: https://www.googleapis.com/compute/alpha/projects/my-project/global/networks/alphanet
priority: 1000
selfLink: https://www.googleapis.com/compute/alpha/projects/my-project/global/firewalls/gke-ipv6-4-3d8e9c78-ipv6-all
selfLinkWithId: https://www.googleapis.com/compute/alpha/projects/my-project/global/firewalls/7326842601032055265
sourceRanges:
- 2600:1900:4120:fabf::/64
targetTags:
- gke-ipv6-4-3d8e9c78-node
The sourceRanges value must be the same as the subnetIpv6CidrBlock. The targetTags value must be the same as the tags on the GKE nodes. To fix this issue, update the firewall rule with the cluster's ipAllocationPolicy block information.
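For example, a sketch of updating the rule's source ranges with the cluster's IPv6 subnet range (both the rule name and the range are illustrative):

gcloud compute firewall-rules update FIREWALL_RULE_NAME \
    --source-ranges=2600:1900:4120:fabf::/64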
Restrictions
- You cannot convert a VPC-native cluster into a routes-based cluster, and you cannot convert a routes-based cluster into a VPC-native cluster.
- VPC-native clusters require VPC networks. Legacy networks are not supported.
- As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal TCP/UDP load balancer.
- If you use all of the Pod IP addresses in a subnet, you cannot replace the subnet's secondary IP address range without putting the cluster into an unstable state. However, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
What's next
- Read the GKE network overview.
- Learn about internal load balancing.
- Learn to set up a Multi Cluster Ingress.
- Learn about configuring authorized networks.
- Learn about creating cluster network policies.
- Learn how to create a routes-based cluster.