This page explains how regional clusters work in Google Kubernetes Engine (GKE).
Regional clusters increase the availability of a cluster by replicating the control plane and nodes across multiple zones in a region.
Regional clusters provide all of the advantages of multi-zonal clusters with the following additional benefits:
- Resilience from single zone failure: Regional clusters are available across a region rather than a single zone within a region. If a single zone becomes unavailable, your control plane and your resources are not impacted.
- Continuous control plane upgrades, control plane resizes, and reduced downtime from control plane failures. With redundant replicas of the control plane, regional clusters provide higher availability of the Kubernetes API, so you can access your control plane even during upgrades.
GKE Autopilot clusters are always regional. If you use GKE Standard, you can choose to create regional, zonal, or multi-zonal clusters. To learn about the different cluster availability types, see About cluster configuration choices.
In regional clusters, including Autopilot clusters, the control plane is replicated across three zones of a region. GKE automatically replicates nodes across zones in the same region. In Standard clusters and node pools, you can optionally specify the zones in which the nodes run. All zones must be within the same region as the control plane.
After creating a regional cluster, you cannot change it to a zonal cluster.
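For example, the following commands are a minimal sketch of creating a regional Autopilot cluster and a regional Standard cluster with manually specified node zones. The cluster names, region, and zones are placeholders for illustration; substitute your own values.

```
# Autopilot clusters are always regional; you only specify a region.
gcloud container clusters create-auto example-autopilot-cluster \
    --region=us-east1

# Standard regional cluster with node zones chosen explicitly.
# Omit --node-locations to let GKE place nodes in three zones of the region.
gcloud container clusters create example-standard-cluster \
    --region=us-east1 \
    --node-locations=us-east1-b,us-east1-c
```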
How regional clusters work
Regional clusters replicate the cluster's control plane and nodes across multiple zones within a single region. For example, using the default configuration, a regional cluster in the us-east1 region creates replicas of the control plane and nodes in three us-east1 zones: us-east1-b, us-east1-c, and us-east1-d. In the event of an infrastructure outage, Autopilot workloads continue to run and GKE automatically rebalances nodes. If you use Standard clusters, you must rebalance nodes manually or by using the cluster autoscaler.
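If you want to confirm which zones a regional cluster's nodes run in, one option is to read the cluster's locations field. The following sketch assumes a cluster named example-cluster in us-east1.

```
# List the zones (node locations) that a regional cluster's nodes span.
gcloud container clusters describe example-cluster \
    --region=us-east1 \
    --format="value(locations)"
```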
Limitations
The default node pool created for regional Standard clusters consists of nine nodes (three per zone) spread evenly across three zones in a region. This consumes nine IP addresses for clusters using public nodes. If needed, you can reduce the number of nodes to one per zone. Newly created Cloud Billing accounts are granted only eight IP addresses per region, so you might need to request an increase in your regional in-use IP address quota, depending on the size of your regional cluster. If you have too few available in-use IP addresses, cluster creation fails.
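As a rough sketch, you could check the region's in-use IP address quota before creating the cluster and keep a regional Standard cluster within a small quota by creating one node per zone. The cluster name and region below are placeholders.

```
# Show the in-use IP address quota for the region (limit and usage).
gcloud compute regions describe us-east1 | grep -B 1 -A 1 IN_USE_ADDRESSES

# Create a regional cluster with one node per zone (three nodes total)
# instead of the default three nodes per zone.
gcloud container clusters create example-cluster \
    --region=us-east1 \
    --num-nodes=1
```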
To run GPUs in your regional cluster, choose a region that has at least one zone where the requested GPUs are available. You must use the --node-locations flag when creating the node pool to specify the zone or zones containing the requested GPUs. If the region you choose doesn't have at least one zone where the requested GPUs are available, you might see an error like the following:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message= Accelerator type "nvidia-l4" does not exist in zone europe-west3-a.
For a complete list of regions and zones where GPUs are available, see GPUs on Compute Engine.
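For example, the following sketch adds a GPU node pool restricted to a single zone. The cluster name, zone, accelerator count, and machine type are illustrative assumptions; check GPU availability for your region and adjust accordingly.

```
# Create a node pool only in a zone that offers the requested GPU.
# nvidia-l4 GPUs attach to G2 machine types; adjust the zone and machine type
# to match availability in your region.
gcloud container node-pools create example-gpu-pool \
    --cluster=example-cluster \
    --region=us-east1 \
    --node-locations=us-east1-c \
    --machine-type=g2-standard-4 \
    --accelerator=type=nvidia-l4,count=1
```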
Zones for Standard mode node pools must be in the same region as the cluster's control plane. If you need to, you can change a cluster's zones, which causes all new and existing nodes to span those zones.
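As a sketch, you can change a Standard cluster's zones by updating its node locations; the cluster name, region, and zones below are placeholders.

```
# Change the set of zones that the cluster's nodes span.
# New and existing nodes are spread across these zones.
gcloud container clusters update example-cluster \
    --region=us-east1 \
    --node-locations=us-east1-b,us-east1-c,us-east1-d
```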
Pricing
All Autopilot clusters are regional, and are subject to the Autopilot pricing model.
In Standard mode, regional clusters require more of your project's regional quotas than a similar zonal or multi-zonal cluster. Ensure that you understand your quotas and Standard pricing before using regional clusters. If you encounter an Insufficient regional quota to satisfy request for resource error, your request exceeds your available quota in the current region.
Also, you are charged for node-to-node traffic across zones. For example, if a workload running in one zone needs to communicate with a workload in a different zone, the cross-zone traffic incurs cost. For more information, see Egress between zones in the same region (per GB) in the Compute Engine pricing page.
Persistent storage in regional clusters
Zonal persistent disks are zonal resources and regional persistent disks are multi-zonal resources. When you add persistent storage, GKE assigns the disk to a single, random zone unless you specify a zone. To learn how to control the zones, see Zones in persistent disks.
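One way to control which zones GKE provisions disks in is to constrain the StorageClass topology. The following is a minimal sketch that assumes the Compute Engine persistent disk CSI driver and uses illustrative zone and object names; adapt the zones to match your node locations.

```
# StorageClass that restricts dynamically provisioned persistent disks
# to specific zones of the region.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-zonal-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-east1-b
    - us-east1-c
EOF
```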
Autoscaling regional clusters
Keep the following considerations in mind when using the cluster autoscaler to automatically scale node pools in regional Standard mode clusters. These considerations apply only to Standard mode clusters that use the cluster autoscaler.
You can also learn more about Autoscaling limits for regional clusters or about how the cluster autoscaler balances across zones.
Overprovisioning scaling limits
To maintain capacity in the unlikely event of a zonal failure, you can allow GKE to overprovision your scaling limits. This ensures a minimum level of availability even when some zones are unavailable.
For example, if you overprovision a three-zone cluster to 150% (50% excess capacity), you can ensure that 100% of traffic is routed to available zones if one-third of the cluster's capacity is lost. You accomplish this by specifying a maximum of six nodes per zone rather than four. If one zone fails, the cluster scales up to 12 nodes in the remaining two zones, matching the original capacity of 12 nodes (four per zone) across three zones.
Similarly, if you overprovision a two-zone cluster to 200%, you can ensure that 100% of traffic is rerouted if half of the cluster's capacity is lost.
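A sketch of the three-zone example above, assuming a regional cluster named example-cluster with a node pool named default-pool: with gcloud, the autoscaler's --min-nodes and --max-nodes limits apply per zone, so setting the maximum to six per zone provides the 150% overprovisioning described.

```
# Allow up to 6 nodes per zone (18 total across 3 zones) so that the two
# remaining zones can absorb the full 12-node workload if one zone fails.
gcloud container clusters update example-cluster \
    --region=us-east1 \
    --enable-autoscaling \
    --node-pool=default-pool \
    --min-nodes=1 \
    --max-nodes=6
```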
You can learn more about the cluster autoscaler or read the FAQ for autoscaling in the Kubernetes documentation.
What's next
- Create a regional cluster.
- Learn more about the different types of clusters.
- Learn more about node pools.
- Learn more about cluster architecture.