This page describes best practices you can follow when planning and designing very large clusters.
Why plan for large GKE clusters
Every computer system, including Kubernetes, has architectural limits. Exceeding these limits can degrade your cluster's performance or, in some cases, even cause downtime. Follow these best practices and take the recommended actions to ensure your clusters run your workloads reliably at scale.
Best practices for splitting workloads between multiple clusters
You can run your workloads on a single, large cluster. This approach is easier to manage, more cost-efficient, and provides better resource utilization than multiple clusters. However, in some cases you should consider splitting your workload across multiple clusters:
- Review Multi-cluster use cases to learn more about general requirements and scenarios for using multiple clusters.
- In addition, from a scalability point of view, split your workload across multiple clusters when a single cluster could exceed one of the limits described in the following section or one of the GKE quotas. Lowering the risk of reaching GKE limits also reduces the risk of downtime and other reliability issues.
If you decide to split your cluster, use Fleet management to simplify management of a multi-cluster fleet.
Limits and best practices
To ensure that your architecture supports large-scale GKE clusters, review the following limits and related best practices. Exceeding these limits may degrade cluster performance or cause reliability issues.
These best practices apply to any default Kubernetes cluster with no extensions installed. Extending Kubernetes clusters with webhooks or custom resource definitions (CRDs) is common but can constrain your ability to scale the cluster.
|GKE limit||Description||Best practices|
|The etcd database size||The maximum size of the etcd database is 6 GB. If you are running a very large cluster with tens of thousands of resources, your etcd instances might exceed this limit. If you cross this limit, GKE might mark your etcd instances as unhealthy. This causes the cluster's control plane to be unresponsive.||Ensure your etcd instances stay under 6 GB to avoid losing access to the control plane. If you cross this limit, you might need to contact Google Cloud support.|
|Total size of etcd objects per type||The total size of all objects of a given resource type should not exceed 800 MB. For example, you can create 750 MB of Pod instances and 750 MB of Secrets, but you cannot create 850 MB of Secrets. Creating more than 800 MB of objects could prevent Kubernetes or your custom controllers from initializing and cause disruptions.||
Keep the total size of all objects of each type stored in etcd below 800 MB. This is especially applicable to clusters using many large-sized Secrets or ConfigMaps, or a high volume of CRDs.
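As a back-of-the-envelope check, the per-type budget can be sketched as follows; the 800 MB figure comes from the limit above, while the object counts and average sizes are illustrative assumptions:

```python
# Rough etcd budgeting sketch. The 800 MB per-type limit comes from the
# limit described above; object counts and average sizes are illustrative.
PER_TYPE_LIMIT_MB = 800

def fits_per_type_budget(object_count: int, avg_object_kb: float) -> bool:
    """True if all objects of one resource type stay under 800 MB in total."""
    total_mb = object_count * avg_object_kb / 1024
    return total_mb < PER_TYPE_LIMIT_MB

# ~879 MB of 1 KiB Secrets exceeds the budget; ~488 MB does not.
print(fits_per_type_budget(900_000, 1.0))  # False
print(fits_per_type_budget(500_000, 1.0))  # True
```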
|Number of Services||The performance of iptables used by kube-proxy degrades if there are too many Services or if the number of backends behind a Service is high.||
Keep the number of Services in the cluster below 10,000.
To learn more, see Exposing applications using services.
|Number of Services per namespace||The number of environment variables generated for Services might outgrow shell limits. This might cause Pods to crash on startup.||
Keep the number of Services per namespace below 5,000.
You can opt out of having those environment variables populated by setting enableServiceLinks to false in the Pod specification.
To learn more, see Exposing applications using Services.
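A minimal sketch of a Pod manifest that opts out of Service environment variables; enableServiceLinks is a standard Kubernetes Pod spec field, while the Pod name and container image here are illustrative:

```python
import json

# Minimal Pod manifest (as a Python dict) that opts out of Service
# environment variables via the standard enableServiceLinks field.
# The Pod name and image are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "no-service-links"},
    "spec": {
        "enableServiceLinks": False,
        "containers": [{"name": "app", "image": "nginx"}],
    },
}
print(json.dumps(pod, indent=2))
```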
|Number of Pods behind a single Service||
Every node runs a kube-proxy or GKE Dataplane V2 agent (anetd) that uses watches for monitoring any Service change. The larger a cluster, the more change-related data the agent processes. This is especially visible in clusters with more than 500 nodes.
In GKE version 1.18 and earlier, data is propagated using Endpoints objects, which do not scale well.
In GKE version 1.19 and later, information about endpoints is split into EndpointSlice objects, which scale better.
In GKE 1.22 and later, Endpoints objects are still available for components, but any Endpoints object with more than 1,000 Pods is automatically truncated.
In GKE 1.21 and earlier, keep the number of Pods behind a single Service lower than 1,000.
In GKE 1.22 and later, keep the number of Pods behind a single Service lower than 10,000.
The GKE version requirement applies to both the nodes and the control plane.
To learn more, see Exposing applications using Services.
|Number of all Service endpoints||The number of endpoints across all Services may hit limits. This may increase programming latency or result in an inability to program new endpoints at all.||
Keep the total number of endpoints across all Services below 64,000.
GKE Dataplane V2, which is becoming the default dataplane for GKE, relies on eBPF maps that are currently limited to 64,000 endpoints across all Services.
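The cluster-wide endpoint budget can be checked with simple addition; the 64,000 figure is the eBPF-map limit quoted above, and the Service names and Pod counts are illustrative assumptions:

```python
# Sketch: total endpoints across all Services vs. the 64,000 eBPF-map limit
# quoted above for GKE Dataplane V2. Service names and counts are illustrative.
EBPF_ENDPOINT_LIMIT = 64_000

pods_per_service = {"frontend": 1_500, "api": 4_000, "workers": 60_000}
total = sum(pods_per_service.values())

print(total)                          # 65500
print(total <= EBPF_ENDPOINT_LIMIT)  # False: over the limit
```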
|Number of Horizontal Pod Autoscaler objects per cluster||
Each Horizontal Pod Autoscaler (HPA) is processed every 15 seconds.
In GKE 1.21 and earlier, more than 100 HPA objects can cause linear degradation of performance.
In GKE 1.22 and later, more than 300 HPA objects can cause linear degradation of performance.
Keep the number of HPA objects within these limits; otherwise, the frequency of HPA processing degrades linearly. For example, in GKE 1.22 with 2,000 HPA objects, a single HPA is reprocessed every 1 minute and 40 seconds.
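The degradation above can be estimated with a short calculation, assuming the controller works through up to a fixed number of HPA objects per 15-second cycle (roughly 100 for GKE 1.21 and earlier, 300 for 1.22 and later, per the limits quoted above):

```python
# Estimated HPA reprocessing interval, assuming the controller handles up to
# `limit` HPA objects per 15-second cycle (limits quoted in the text above).
def hpa_reprocess_interval_s(num_hpas: int, limit: int = 300) -> float:
    cycle_s = 15.0
    # Below the limit, every HPA fits into a single 15 s cycle.
    return max(cycle_s, num_hpas * cycle_s / limit)

print(hpa_reprocess_interval_s(200))    # 15.0  -> within limit, normal cadence
print(hpa_reprocess_interval_s(2_000))  # 100.0 -> ~1 min 40 s per HPA
```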
|Number of Pods per node||GKE has a hard limit of 256 Pods per node. This assumes an average of two or fewer containers per Pod. If you increase the number of containers per Pod, this limit might be lower because GKE allocates more resources per container.||
If possible, upgrade your cluster version to 1.23.8-gke.400 or later. GKE versions 1.23.8-gke.400 and later contain memory improvements for GKE components. These improvements enable you to set maximum Pods per node over the default limit of 110.
We recommend using worker nodes with at least one vCPU for every 10 Pods.
To learn more, see Manually upgrading a cluster or node pool.
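A quick node-sizing sketch for the "one vCPU per 10 Pods" guideline above; the Pod counts are illustrative:

```python
import math

# Minimum vCPUs suggested for a node under the "at least one vCPU per
# 10 Pods" guideline from the text above. Pod counts are illustrative.
def min_vcpus_for_pods(max_pods_per_node: int) -> int:
    return math.ceil(max_pods_per_node / 10)

print(min_vcpus_for_pods(110))  # 11
print(min_vcpus_for_pods(256))  # 26
```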
|Rate of pod changes||
Kubernetes has internal limits that affect the rate of creating and deleting Pods (Pod churn) in response to scaling requests. Additional factors, such as deleting a Pod that is part of a Service, can also affect the Pod churn rate.
For clusters with up to 500 nodes, you can expect an average rate of 20 pods created per second and 10 pods deleted per second.
For clusters larger than 500 nodes, you can expect an average rate of 100 pods created per second and 50 pods deleted per second.
In GKE version 1.23 and later, the Pod deletion performance is faster. Deletion rates are similar to Pod creation rates.
Take the Pod creation and deletion rate limits into consideration when planning how to scale your workloads.
If pod deletion throughput is of concern, consider upgrading your cluster version to 1.23 or later.
Pods share deletion throughput with other resource types (for example, EndpointSlices), so defining Pods as part of a Service can reduce the effective Pod deletion throughput.
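A rough rollout-time estimate based on the average churn rates quoted above (creation of about 20 Pods per second on clusters up to 500 nodes, about 100 Pods per second on larger clusters); the Pod and node counts are illustrative:

```python
# Rough time to create N new Pods, using the average creation rates quoted
# above: ~20 Pods/s up to 500 nodes, ~100 Pods/s on larger clusters.
def estimated_creation_time_s(num_pods: int, nodes: int) -> float:
    rate = 20 if nodes <= 500 else 100
    return num_pods / rate

print(estimated_creation_time_s(2_000, nodes=400))    # 100.0 s
print(estimated_creation_time_s(2_000, nodes=1_000))  # 20.0 s
```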
|Number of open watches||
Having more than 200,000 watches per cluster might affect the initialization time of the cluster. This issue can cause the control plane to frequently restart.
Define larger nodes to decrease the likelihood and severity of issues caused by a large number of watches. Higher Pod density (fewer, larger nodes) might reduce the number of watches and mitigate the severity of the issue.
To learn more, see the machine series comparison.
|Number of Secrets per cluster if application-layer secrets encryption is enabled||A cluster must decrypt all Secrets during cluster startup when application-layer secrets encryption is enabled. If you store more than 30,000 secrets, your cluster might become unstable during startup or upgrades, causing workload outages.||
Store fewer than 30,000 Secrets when using application-layer secrets encryption.
To learn more, see Encrypt secrets at the application layer.
|Log bandwidth per node||
There is a limit on the maximum amount of logs that each node can send to the Cloud Logging API. The default limit varies between 100 Kbps and 500 Kbps depending on the load. You can raise the limit to 10 Mbps by deploying a high-throughput Logging agent configuration. Exceeding this limit may cause log entries to be dropped.
Configure your logging to stay within the default limits, or deploy a high-throughput Logging agent configuration.
To learn more, see Increasing Logging agent throughput.
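Whether a node's expected log volume fits within these limits can be estimated as follows; the limit values come from the row above, while the line rates and sizes are illustrative assumptions:

```python
# Sketch: does a node's expected log volume fit the per-node Logging limit
# quoted above (default up to ~500 Kbps; ~10 Mbps with the high-throughput
# agent configuration)? Line rates and sizes are illustrative.
def fits_log_limit(lines_per_s: float, avg_line_bytes: int,
                   limit_kbps: float = 500.0) -> bool:
    kbps = lines_per_s * avg_line_bytes * 8 / 1000
    return kbps <= limit_kbps

print(fits_log_limit(100, 200))                       # 160 Kbps   -> True
print(fits_log_limit(5_000, 200))                     # 8,000 Kbps -> False
print(fits_log_limit(5_000, 200, limit_kbps=10_000))  # fits with 10 Mbps
```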
|Backup for GKE limits||
You can use Backup for GKE to back up and restore your GKE workloads.
Backup for GKE is subject to limits that you need to keep in mind when defining your backup plans.
Review the limits of Backup for GKE.
If it's possible for your workload to exceed these limits, we recommend creating multiple backup plans to partition your backup and stay within the limits.