Guidelines for creating scalable clusters

This document provides guidance to help you decide how to create, configure, and operate Google Kubernetes Engine (GKE) clusters that can accommodate workloads approaching the known limits of Kubernetes.

What is scalability?

In a Kubernetes cluster, scalability refers to the ability of the cluster to grow while staying within its service-level objectives (SLOs). Kubernetes also has its own set of SLOs.

Kubernetes is a complex system, and its ability to scale is determined by multiple factors. Some of these factors include the type and number of nodes in a node pool, the types and numbers of node pools, the number of Pods available, how resources are allocated to Pods, and the number of Services or backends behind a Service.

Preparing for availability

Choosing a regional or zonal control plane

Due to architectural differences, regional clusters are better suited for high availability. Regional clusters have multiple control planes across multiple compute zones in a region, while zonal clusters have one control plane in a single compute zone.

When a zonal cluster is upgraded, its single control plane VM experiences downtime, during which the Kubernetes API is unavailable until the upgrade is complete.

In regional clusters, the control plane remains available during cluster maintenance like rotating IPs, upgrading control plane VMs, or resizing clusters or node pools. When upgrading a regional cluster, two out of three control plane VMs are always running during the rolling upgrade, so the Kubernetes API is still available. Similarly, a single-zone outage won't cause any downtime in the regional control plane.

However, the more highly available regional clusters come with certain trade-offs:

  • Changes to the cluster's configuration take longer because they must propagate across all control planes in a regional cluster instead of the single control plane in zonal clusters.

  • You might not be able to create or upgrade regional clusters as often as zonal clusters. If VMs cannot be created in one of the zones, whether from a lack of capacity or other transient problem, clusters cannot be created or upgraded.

Due to these trade-offs, zonal and regional clusters have different use cases:

  • Use zonal clusters to create or upgrade clusters rapidly when availability is less of a concern.
  • Use regional clusters when availability is more important than flexibility.

Carefully select the cluster type when you create a cluster because you cannot change it after the cluster is created. Instead, you must create a new cluster and then migrate traffic to it. Migrating production traffic between clusters is possible but difficult at scale.
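
For example, the cluster type is selected at creation time by passing either a region or a zone. A minimal sketch, assuming placeholder cluster names and locations:

    # Regional cluster: control plane replicas are spread across the region's zones.
    gcloud container clusters create my-regional-cluster \
        --region us-central1

    # Zonal cluster: a single control plane in one compute zone.
    gcloud container clusters create my-zonal-cluster \
        --zone us-central1-a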

Choosing multi-zonal or single-zone node pools

To achieve high availability, the Kubernetes control plane and its nodes need to be spread across different zones. GKE offers two types of node pools: single-zone and multi-zonal.

To deploy a highly available application, distribute your workload across multiple compute zones in a region by using multi-zonal node pools, which distribute nodes uniformly across zones.
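
For example, a minimal sketch of a multi-zonal node pool, assuming placeholder names and zones (the zones must belong to the cluster's region):

    # Nodes are distributed across the three listed zones.
    gcloud container node-pools create multi-zonal-pool \
        --cluster my-cluster \
        --region us-central1 \
        --node-locations us-central1-a,us-central1-b,us-central1-c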

If all your nodes are in the same zone, you won't be able to schedule Pods if that zone becomes unreachable. Using multi-zonal node pools has certain trade-offs:

  • GPUs are available only in specific zones. It may not be possible to get them in all zones in the region.

  • Round-trip latency between locations within a single region is expected to stay below 1 ms at the 95th percentile. The difference in latency between intra-zonal and inter-zonal traffic should be negligible.

  • The price of egress traffic between zones in the same region is available on the Compute Engine pricing page.

Preparing to scale

Base infrastructure

Kubernetes workloads require networking, compute, and storage. You need to provide enough CPU and memory to run Pods. However, other parameters of the underlying infrastructure can also influence the performance and scalability of a GKE cluster.

Cluster networking

GKE offers two types of cluster networking: the older routes-based type and the newer VPC-native type.

  • Routes-based cluster: Each time a node is added, a custom route is added to the routing table in the VPC network.

  • VPC-native cluster: In this mode, the VPC network has a secondary range for all Pod IP addresses. Each node is then assigned a slice of the secondary range for its own Pod IP addresses. This allows the VPC network to natively understand how to route traffic to Pods without relying on custom routes. A single VPC network can have up to 15,000 VMs. To learn more about the consequences of this limit, see the VMs per VPC network limit section.
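
For example, a VPC-native cluster is created by enabling alias IP ranges. A minimal sketch, assuming placeholder names; GKE picks the secondary ranges automatically unless you specify them:

    # --enable-ip-alias makes the cluster VPC-native (Pod IPs come from a secondary range).
    gcloud container clusters create my-vpc-native-cluster \
        --region us-central1 \
        --enable-ip-alias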

Private clusters

In regular GKE clusters, all nodes have public IP addresses. In private clusters, nodes have only internal IP addresses, which isolates nodes from inbound and outbound connectivity to the Internet. GKE uses VPC network peering to connect VMs running the Kubernetes API server with the rest of the cluster. This allows higher throughput between GKE control planes and nodes, as traffic is not routed via the public internet.

Using private clusters has the additional security benefit that nodes are not exposed to the Internet.
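
A minimal sketch of creating a private, VPC-native cluster, assuming placeholder names and ranges:

    # Nodes get only internal IP addresses; the control plane endpoint uses the given /28 range.
    gcloud container clusters create my-private-cluster \
        --region us-central1 \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr 172.16.0.32/28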

Cluster load balancing

GKE Ingress and Cloud Load Balancing configure and deploy load balancers to expose Kubernetes workloads outside the cluster and also to the public internet. The GKE Ingress and Service controllers deploy objects such as forwarding rules, URL maps, backend services, network endpoint groups, and more on behalf of GKE workloads. Each of these resources has inherent quotas and limits, and these limits also apply in GKE. When any particular Cloud Load Balancing resource reaches its quota, a given Ingress or Service will fail to deploy correctly, and errors will appear in the resource's events.

Here are some recommendations and scale limits to keep in mind when using GKE Ingress and Services:

  • Ingress (external and internal): We recommend using container-native load balancing with NEGs for Services exposed through GKE Ingress (see the example below).
  • Ingress resources that don't use NEGs are only supported on clusters with up to 1,000 nodes. Ingress with NEGs does not have a cluster node limit.
  • Internal load balancer Services: Services that deploy an internal TCP/UDP load balancer are only supported on clusters with up to 250 nodes.

If you need to scale higher, contact your Google Cloud sales team to increase this limit. Note that Ingress for Internal HTTP(S) Load Balancing does not have a per-cluster node limitation, so it can be used in place of the internal TCP/UDP load balancer for internal HTTP traffic.
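
For container-native load balancing, a Service is annotated so that GKE creates NEGs for it. A minimal sketch, assuming placeholder Service and app names (on recent GKE versions, VPC-native clusters may use NEGs for Ingress by default):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        cloud.google.com/neg: '{"ingress": true}'
    spec:
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    EOF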

DNS

Service discovery in GKE is provided through kube-dns, a centralized resource that provides DNS resolution to Pods running inside the cluster. This can become a bottleneck on very large clusters or for workloads that have a high request load. GKE automatically autoscales kube-dns based on the size of the cluster to increase its capacity. When this capacity is still not enough, GKE offers distributed, local resolution of DNS queries on each node with NodeLocal DNSCache. This provides a local DNS cache on each GKE node which answers queries locally, distributing the load and providing faster response times.
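
A minimal sketch of enabling NodeLocal DNSCache on an existing cluster, assuming placeholder names (the add-on can also be enabled at cluster creation with --addons NodeLocalDNS):

    gcloud container clusters update my-cluster \
        --region us-central1 \
        --update-addons NodeLocalDNS=ENABLED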

Managing IPs in VPC-native clusters

A VPC-native cluster uses the primary IP range for nodes and two secondary IP ranges for Pods and Services. The maximum number of nodes in VPC-native clusters can be limited by available IP addresses. The number of nodes is determined by both the primary range (node subnet) and the secondary range (Pod subnet). The maximum number of Pods and Services is determined by the size of the cluster's secondary ranges, Pod subnet and Service subnet, respectively.

By default:

  • The Pod secondary range defaults to /14 (262,144 IP addresses).
  • Each node has a /24 range assigned for its Pods (256 IP addresses).
  • The node subnet is /20 (4,092 usable IP addresses).

However, there must be enough addresses in both ranges (node and Pod) to provision a new node. With the defaults, only 1,024 nodes can be created, because the Pod secondary range runs out first: 262,144 Pod IP addresses divided by 256 Pod IP addresses per node allows 1,024 nodes.

By default, there can be a maximum of 110 Pods per node, and each node in the cluster has a /24 range allocated for its Pods. This results in 256 Pod IPs per node. By having approximately twice as many available IP addresses as possible Pods, Kubernetes can mitigate IP address reuse as Pods are added to and removed from a node. However, for applications that plan to schedule a smaller number of Pods per node, this is somewhat wasteful. The Flexible Pod CIDR feature allows the per-node CIDR block size for Pods to be configured so that each node uses fewer IP addresses.
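
One way to use this, shown as a minimal sketch with placeholder names and values, is to lower the maximum number of Pods per node at cluster or node pool creation time; GKE then assigns a correspondingly smaller per-node Pod range (for example, a /26 for up to 32 Pods per node):

    # Cluster-wide default (requires a VPC-native cluster).
    gcloud container clusters create my-cluster \
        --region us-central1 \
        --enable-ip-alias \
        --default-max-pods-per-node 32

    # The limit can also be set per node pool.
    gcloud container node-pools create small-pods-pool \
        --cluster my-cluster \
        --region us-central1 \
        --max-pods-per-node 32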

If you need more IP addresses than are available in the private space defined by RFC 1918, we recommend using non-RFC 1918 addresses.

By default, the secondary range for Services is set to /20 (4,096 IP addresses), limiting the number of Services in the cluster to 4,096.

Configuring nodes for better performance

GKE nodes are regular Google Cloud virtual machines. Some of their parameters, for example the number of cores or size of disk, can influence how GKE clusters perform.

Egress traffic

In Google Cloud, the machine type and the number of cores allocated to the instance determine its network capacity. Maximum egress bandwidth varies from 1 to 32 Gbps, while the maximum egress bandwidth for the default e2-medium machines is 2 Gbps. For details on bandwidth limits, see Shared-core machine types.

IOPS and disk throughput

In Google Cloud, the size of persistent disks determines the IOPS and throughput of the disk. GKE typically uses Persistent Disks as boot disks and to back Kubernetes' Persistent Volumes. Increasing disk size increases both IOPS and throughput, up to certain limits.

Each persistent disk write operation contributes to your virtual machine instance's cumulative network egress cap. Thus, the IOPS performance of disks, especially SSDs, also depends on the number of vCPUs in the instance, in addition to disk size. Lower-core VMs have lower write IOPS limits due to network egress limitations on write throughput.

If your virtual machine instance has insufficient CPUs, your application won't be able to get close to the IOPS limit. As a general rule, you should have one available CPU for every 2,000-2,500 IOPS of expected traffic.

Workloads that require high capacity or large numbers of disks need to consider the limits on how many Persistent Disks can be attached to a single VM. For regular VMs, that limit is 128 disks with a total size of 64 TB, while shared-core VMs have a limit of 16 PDs with a total size of 3 TB. Google Cloud enforces this limit, not Kubernetes.
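
A minimal sketch of a node pool tuned for disk performance, assuming placeholder names and sizes (larger SSD boot disks and more vCPUs raise the achievable IOPS, within the limits described above):

    gcloud container node-pools create high-iops-pool \
        --cluster my-cluster \
        --region us-central1 \
        --machine-type n2-standard-8 \
        --disk-type pd-ssd \
        --disk-size 500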

Limits and quotas relevant for scalability

VMs per VPC Network limit

A single VPC network can have up to 15,000 connected VMs, which exceeds the limit of 5,000 nodes per cluster in GKE 1.17 and earlier versions. For GKE 1.18 and later, which can support up to 15,000 nodes, a VPC-native cluster can have almost 15,000 nodes - "almost", because a small number of addresses must be reserved for GKE control plane needs.

Note that if you have configured network peering, the limit of 15,000 VMs will by default apply across peered networks.

Compute Engine API quota

GKE control planes use the Compute Engine API to discover metadata of nodes in the cluster and to attach persistent disks to VM instances.

By default, the Compute Engine API allows 2,000 read requests per 100 seconds, which may not be sufficient for clusters with more than 100 nodes.

If you are planning to create a cluster with more than 100 nodes, we recommend increasing Read requests per 100 seconds and Read requests per 100 seconds per user quotas to at least 4000. You can change these settings in the Google Cloud Console.

Logging and monitoring quota

The quotas for the Cloud Logging API (default is 60,000 log insertion requests per minute) and the Cloud Monitoring API (default is 6,000 time series insertion requests per minute) may need to be increased as your cluster grows.

Node quota

The number of nodes per cluster is limited to 5,000; from GKE 1.18 onwards, this limit can be increased on demand to 15,000. For more information, see the Clusters larger than 5,000 nodes section.

Understanding limits

Kubernetes, like any other system, has limits that need to be taken into account when designing applications and planning their growth.

GKE versions up to 1.17 support 5,000 nodes in a single cluster. GKE 1.18 and later support up to 15,000 nodes. However, Kubernetes is a complex system with a large feature surface. The number of nodes is only one of many dimensions on which Kubernetes can scale. Other dimensions include the total number of Pods, Services, or backends behind a Service. For considerations relevant to creating large clusters, see the Clusters larger than 5,000 nodes section.

You shouldn't stretch more than one dimension at a time. Stretching several dimensions at once can cause problems even in smaller clusters.

For example, trying to schedule 110 Pods per node in a 5,000-node cluster likely won't succeed because the number of Pods, the number of Pods per node, and the number of nodes would be stretched too far.

Dimension limits

You can read the official list of Kubernetes limits.

Neither this list nor the examples below form an exhaustive list of Kubernetes limits. Those numbers are obtained using a plain Kubernetes cluster with no extensions installed. Extending Kubernetes clusters with webhooks or CRDs is common but can constrain your ability to scale the cluster.

Most of those limits are not enforced, so you can go above them. Exceeding limits won't make the cluster instantly unusable; performance degrades (sometimes visible as failing SLOs) before failure occurs. Also, some of the limits are given for the largest possible cluster; in smaller clusters, the limits are proportionally lower.

  • Number of Pods per node. GKE has a hard limit of 110 Pods per node. This assumes an average of two or fewer containers per Pod. Having too many containers might reduce the limit of 110 because some resources are allocated per container.

  • Total number of Services should stay below 10,000. The performance of iptables degrades if there are too many Services or if there is a high number of backends behind a Service.

  • Number of Pods behind a single Service is safe if kept below 250. kube-proxy runs on every node and watches all changes to Endpoints and Services, so the bigger the cluster is, the more data has to be sent. The hard limit is around 5,000, depending on the lengths of the Pod names and their namespaces; if they are longer, more bytes are stored in etcd, and the Endpoints object can exceed the maximum size of an etcd row. With more backends, endpoint updates become large, especially in large clusters (500+ nodes). You can go slightly above this number as long as churn of the Pods behind the Service is kept to a minimum or the cluster is kept small.

  • Number of Services per namespace should not exceed 5,000. Beyond this, the number of Service environment variables outgrows shell limits, causing Pods to crash on startup. From Kubernetes 1.13 onwards, you can opt out of having those variables populated by setting enableServiceLinks in the Pod spec to false (see the example after this list).

  • Total number of objects. There is a limit on all objects in a cluster, both built-in ones and CRDs. It depends on their size (how many bytes are stored in etcd), how frequently they change, and on access patterns (for example, frequently reading all objects of a given type overloads etcd much faster).
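
For example, a minimal Pod sketch that opts out of Service environment variables; the Pod name and image are placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      enableServiceLinks: false
      containers:
      - name: app
        image: gcr.io/my-project/my-app:latest
    EOF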

Clusters larger than 5,000 nodes

As of release 1.18, GKE supports up to 15,000 nodes in a single cluster. Note, however, that all of the limits above remain in place, in particular the limit on the number of objects resulting from etcd limitations. As a result, not every workload that was deployed on 5,000 nodes will scale up to 15,000 nodes.

In addition, clusters larger than 5,000 nodes must be both regional and private.

Therefore, if you want to create a cluster larger than 5,000 nodes, you first need to create a support ticket requesting a quota increase above 5,000 nodes. The Google team will then contact you to learn more about your workload and advise whether it will scale as you need it to, or recommend how to modify the workload for better scalability. As part of this activity, we will also help you increase other relevant quotas appropriately.

Disable automounting default service account

When you create a Pod, it automatically has the default service account. You can use the default service account to access the Kubernetes API from inside a Pod using automatically mounted service account credentials.

For every mounted Secret, the kubelet watches the kube-apiserver for changes to that Secret. In large clusters, this translates to thousands of watches, which can put a significant load on the kube-apiserver. If the Pod is not going to access the Kubernetes API, you should disable automatic mounting of the default service account credentials (see the example below).
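
A minimal Pod sketch that disables automatic mounting of the service account credentials; the Pod name and image are placeholders (the same field can also be set on the ServiceAccount object):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      automountServiceAccountToken: false
      containers:
      - name: app
        image: gcr.io/my-project/my-app:latest
    EOF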

What's next?