This page provides a general overview of VPC-native clusters in Google Kubernetes Engine (GKE).
In GKE, clusters can be distinguished according to the way they route traffic from one Pod to another Pod. A cluster that uses alias IP address ranges is called a VPC-native cluster. A cluster that uses custom static routes in a VPC network is called a routes-based cluster.
Benefits of VPC-native clusters
VPC-native clusters have several benefits:
- Pod IP addresses are reserved in the VPC network before the Pods are created in your cluster. This prevents conflicts with other resources in the VPC network and lets you better plan IP address allocations.
- Pod IP address ranges do not depend on custom static routes and do not consume the system-generated and custom static routes quota. Instead, automatically generated subnet routes handle routing for VPC-native clusters.
- You can create firewall rules that apply only to Pod IP address ranges instead of any IP address on the cluster's nodes.
- Pod IP address ranges, and subnet secondary IP address ranges in general, are accessible from on-premises networks connected with Cloud VPN or Cloud Interconnect using Cloud Routers.
Default cluster network mode
The default cluster network mode depends on how you create the cluster.
The following table lists the default cluster network mode for each cluster creation method.
| Cluster creation method | Cluster network mode |
| --- | --- |
| Google Cloud Console | VPC-native |
IP address ranges for VPC-native clusters
When you create a VPC-native cluster, you specify a subnet in a VPC network. The cluster uses three unique subnet IP address ranges:
- It uses the subnet's primary IP address range for all node IP addresses.
- It uses one secondary IP address range for all Pod IP addresses.
- It uses another secondary IP address range for all Service (cluster IP) addresses.
The following sections provide a summary of the IP address ranges for nodes, Pods, and Services:
Node IP addresses are assigned from the primary IP address range of the subnet associated with your cluster.
Both node IP addresses and the size of the subnet's secondary IP address range for Pods limit the number of nodes that a cluster can support. Refer to node limiting ranges for more information.
If you plan to create a 900-node cluster, the primary IP address range of the cluster's subnet must be at least a /22, which provides 2^(32-22) - 4 = 1,020 usable node IP addresses.
Pod IP addresses are taken from the cluster subnet's secondary IP address range for Pods. Unless you set a different maximum number of Pods per node, GKE allocates a /24 alias IP address range (256 addresses) to each node.
For a 900-node cluster supporting up to 110 Pods per node, you need 900 × 256 = 230,400 IP addresses for Pods, because each node is allocated an alias IP range whose netmask size is /24. This cluster requires a subnet whose secondary IP address range for Pods is at least a /14. A /14 secondary IP range provides 2^(32-14) = 2^18 = 262,144 IP addresses for Pods.
Refer to Subnet secondary IP address range for Pods for more information.
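The arithmetic above can be sketched in a few lines of Python. The helper name is illustrative, and it assumes the default alias-range sizing described in this section (a /24 per node at 110 Pods per node):

```python
import math

def pod_range_netmask(nodes, max_pods_per_node=110):
    """Return the largest netmask size (smallest range) whose secondary
    IP range can hold the Pod alias ranges for `nodes` nodes."""
    # Each node gets an alias IP range of size 31 - ceil(log2(Q));
    # for the default 110 Pods per node this is a /24 (256 addresses).
    per_node_mask = 31 - math.ceil(math.log2(max_pods_per_node))
    addresses_needed = nodes * 2 ** (32 - per_node_mask)
    return 32 - math.ceil(math.log2(addresses_needed))

# 900 nodes x 256 addresses = 230,400 addresses -> a /14 range
print(pod_range_netmask(900))  # prints 14
```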
Service (cluster IP) addresses are taken from the cluster's subnet's secondary IP address range for Services. You must ensure this range is large enough to provide addresses for all the Kubernetes Services you host in your cluster.
For a cluster that runs up to 3,000 Services, you need 3,000 cluster IP addresses, so you need a secondary range of size /20 or larger. A /20 range provides 2^(32-20) = 2^12 = 4,096 IP addresses.
Refer to Subnet secondary IP address range for Services for more information.
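As a minimal sketch, the same calculation in Python (the helper name is illustrative):

```python
import math

def service_range_netmask(num_services):
    """Largest netmask size whose range still holds `num_services`
    cluster IP (Service) addresses."""
    return 32 - math.ceil(math.log2(num_services))

# 3,000 Services fit in a /20, which provides 2^12 = 4,096 addresses.
print(service_range_netmask(3000))  # prints 20
```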
Internal IP addresses
The IP addresses you use for your VPC-native cluster's subnets must come from a valid subnet range. The valid ranges include private IP addresses (RFC 1918 and others) and privately used public IP addresses. See Valid ranges and Restricted ranges in the Virtual Private Cloud documentation for more information about valid subnet ranges.
See Using non-RFC 1918 private IP address ranges for instructions on enabling the use of these ranges.
Secondary range assignment methods
You can assign Pod IP address ranges and Service (ClusterIP) address ranges to a VPC-native cluster using one of two methods:
Managed by GKE (default)
GKE can create and manage the subnet's secondary ranges for you. When you create the cluster, you specify either a complete CIDR range or the size of a netmask for both the Pod and Service ranges. For example, you can specify 10.1.0.0/16 for Pods and 10.2.0.0/20 for Services, or you can specify a netmask size of /16 for Pods and /20 for Services.
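Before you pass complete CIDR ranges to GKE, you can sanity-check that they do not overlap using Python's standard `ipaddress` module. The CIDRs below are the example values from this section, not required ranges:

```python
import ipaddress

pods = ipaddress.ip_network("10.1.0.0/16")      # secondary range for Pods
services = ipaddress.ip_network("10.2.0.0/20")  # secondary range for Services

# Secondary ranges in the same subnet must not overlap each other.
print(pods.overlaps(services))                     # prints False
print(pods.num_addresses, services.num_addresses)  # prints 65536 4096
```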
If you create the cluster and subnet simultaneously, the Pod and Service IP address ranges are managed by GKE.
User-managed
You can create the subnet's secondary IP address ranges yourself, then create a cluster that uses those ranges. If you manually create the secondary ranges, you must manage them yourself.
The smallest IP address range you can create is a /28. However, you should use a range that is large enough to allow at least one node to be created. The minimum usable range depends on the maximum number of Pods per Node. Refer to the table in Optimizing IP address allocation for the minimum usable CIDR range for different values of Maximum Pods per Node.
If you exhaust your IP address range for Pods, you must either create a new cluster with a larger Pod address range or recreate your node pools after reducing the maximum number of Pods per node for the node pools.
Differences with routes-based clusters
The allocation scheme for Pod and Service (ClusterIP) addresses is different from the scheme used by a routes-based cluster. Instead of specifying a single CIDR range for Pods and Services together, you must choose or create two secondary IP address ranges in the cluster's subnet: one for Pods and another for Services.
Shared VPC considerations
When creating a VPC-native cluster in a Shared VPC environment, a project owner, editor, or IAM member with the Network Admin role in the Shared VPC host project must create the cluster's subnet and secondary IP address ranges manually. A service project admin who creates a cluster must have at least subnet-level permissions on the desired subnet in the Shared VPC host project.
In a Shared VPC environment, secondary IP address ranges cannot be managed by GKE. A Network Admin in the Shared VPC host project must create the subnet and secondary IP address ranges before you can create the cluster. For an example showing how to set up a VPC-native cluster in a Shared VPC network, refer to Setting up clusters with Shared VPC.
IP address range planning
Use the information in the following sections to help you calculate sizes for primary and secondary IP address ranges of the subnet used by your cluster.
Subnet primary IP address range
Every subnet must have a primary IP address range. You can expand the primary IP address range at any time, even when Google Cloud resources use the subnet; however, you cannot shrink or change a subnet's primary IP address range after the subnet has been created. The first two and last two IP addresses of a primary IP address range are reserved by Google Cloud.
The following table shows the maximum number of nodes you can create in all clusters that use the subnet, given the size of the subnet's primary IP address range.
| Subnet primary IP range | Maximum nodes |
| --- | --- |
| /29 (minimum size for a subnet's primary IP range) | 4 nodes |
| /20 (default size of a subnet's primary IP range in auto mode networks) | 4,092 nodes |
| /8 (maximum size for a subnet's primary IP range) | 16,777,212 nodes |
You can use the following formulas to:
Calculate the maximum number of nodes, N, that a given netmask can support. Use S for the size of the netmask, whose valid range is between 8 and 29:
N = 2^(32 - S) - 4
Calculate the size of the netmask, S, required to support a maximum of N nodes:
S = 32 - ⌈log2(N + 4)⌉
⌈ ⌉ is the ceiling (least integer) function, meaning round up to the next integer. The valid range for the size of the netmask, S, is between 8 and 29.
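Both formulas translate directly into Python; the function names here are illustrative:

```python
import math

def max_nodes(netmask_size):
    """N = 2^(32 - S) - 4; Google Cloud reserves four addresses
    in every subnet primary IP address range."""
    return 2 ** (32 - netmask_size) - 4

def required_netmask(nodes):
    """S = 32 - ceil(log2(N + 4))"""
    return 32 - math.ceil(math.log2(nodes + 4))

print(max_nodes(29))          # prints 4 (smallest usable primary range)
print(required_netmask(900))  # prints 22
```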
Subnet secondary IP address range for Pods
Carefully plan your secondary IP address range for Pods. Though it is possible to replace a subnet's secondary IP address range, doing so is not supported because it has the potential to put the cluster in an unstable state.
However, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
The following table shows the maximum number of nodes and Pods you can create in all clusters that use the subnet, given the size of the subnet's secondary IP address range used by Pods. This table assumes the maximum number of Pods per node is 110 (the default and largest possible Pod density).
| Subnet secondary IP range for Pods | Maximum Pod IP addresses | Maximum nodes | Maximum Pods |
| --- | --- | --- | --- |
| /24 (smallest possible Pod IP range when the secondary range assignment method is user-managed) | 256 addresses | 1 node | 110 Pods |
| /23 (only possible when the secondary range assignment method is user-managed) | 512 addresses | 2 nodes | 220 Pods |
| /22 (only possible when the secondary range assignment method is user-managed) | 1,024 addresses | 4 nodes | 440 Pods |
| /21 (smallest possible Pod IP range when the secondary range assignment method is managed by GKE) | 2,048 addresses | 8 nodes | 880 Pods |
| /20 | 4,096 addresses | 16 nodes | 1,760 Pods |
| /19 | 8,192 addresses | 32 nodes | 3,520 Pods |
| /18 | 16,384 addresses | 64 nodes | 7,040 Pods |
| /17 | 32,768 addresses | 128 nodes | 14,080 Pods |
| /16 | 65,536 addresses | 256 nodes | 28,160 Pods |
| /15 | 131,072 addresses | 512 nodes | 56,320 Pods |
| /14 (default size for the subnet's secondary IP range for Pods when the secondary range assignment method is managed by GKE) | 262,144 addresses | 1,024 nodes | 112,640 Pods |
| /13 | 524,288 addresses | 2,048 nodes | 225,280 Pods |
| /12 | 1,048,576 addresses | 4,096 nodes | 450,560 Pods |
| /11 | 2,097,152 addresses | 8,192 nodes | 901,120 Pods |
| /10 | 4,194,304 addresses | 16,384 nodes | 1,802,240 Pods |
| /9 (largest possible Pod address range) | 8,388,608 addresses | 32,768 nodes | 3,604,480 Pods |
If you have changed the maximum number of Pods per node, you can use the following formulas to calculate the maximum number of nodes and Pods that a subnet's secondary IP address range for Pods can support:
Calculate the size of the netmask of each node's Pod range, M.
M = 31 - ⌈log2(Q)⌉

where:
- Q is the number of Pods per node
- ⌈ ⌉ is the ceiling (least integer) function, meaning round up to the next integer
Calculate the maximum number of nodes, N, that the subnet's secondary IP address range for Pods can support:
N = 2^(M - S)

where:
- M is the size of the netmask of each node's alias IP address range for Pods, calculated in the first step
- S is the size of the netmask of the subnet's secondary IP address range for Pods
Calculate the maximum number of Pods, P, that the subnet's secondary IP address range for Pods can support:
P = N × Q

where:
- N is the maximum number of nodes, calculated in the previous step
- Q is the number of Pods per node
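The three steps above can be combined into one short Python sketch (the function name is illustrative):

```python
import math

def pod_capacity(secondary_range_size, pods_per_node):
    """Maximum nodes and Pods for a secondary Pod range of netmask
    size S with Q Pods per node."""
    # M: netmask size of each node's alias IP range for Pods
    m = 31 - math.ceil(math.log2(pods_per_node))
    # N: maximum number of nodes the secondary range can support
    n = 2 ** (m - secondary_range_size)
    # P = N x Q: maximum number of Pods
    return n, n * pods_per_node

# A /14 secondary range with the default 110 Pods per node:
print(pod_capacity(14, 110))  # prints (1024, 112640)
```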
Subnet secondary IP address range for Services
Carefully plan your secondary IP address range for Services. Because this is also a subnet secondary IP address range, you can only replace it when no Google Cloud resources use it. This range cannot be changed as long as a cluster uses it for Services (cluster IP addresses).
Unlike the node and Pod IP address ranges, the Service range cannot be shared: each cluster must have its own unique subnet secondary IP address range for Services, and Service addresses cannot be sourced from a shared primary or secondary IP range.
The following table shows the maximum number of Services you can create in a single cluster using the subnet, given the size of the subnet's secondary IP address range for Services.
| Secondary IP range for Services | Maximum number of Services |
| --- | --- |
| /27 (smallest possible Service address range) | 32 |
| /20 (default size for the subnet's secondary IP range for Services when the secondary range assignment method is managed by GKE) | 4,096 |
| /16 (largest possible Service address range) | 65,536 |
Node limiting ranges
The maximum number of Pods and Services for a given GKE cluster is limited by the size of the cluster's secondary ranges. The maximum number of nodes in the cluster is limited by the size of the cluster's subnet's primary IP address range and the cluster's Pod address range.
The Cloud Console shows error messages like the following to indicate that either the subnet's primary IP address range or the cluster's Pod IP address range (the subnet's secondary IP address range for Pods) has been exhausted:
Instance [node name] creation failed: IP space of [cluster subnet] is exhausted
What's next
- Learn more about VPC peering.
- Learn how to create a VPC-native cluster that uses internal HTTP(S) load balancing.