By default, a cluster's control plane (master) and nodes all run in a single compute zone that you specify when you create the cluster. Regional clusters increase the availability of both a cluster's control plane (master) and its nodes by replicating them across multiple zones of a region. This provides the advantages of multi-zonal clusters, with the following additional benefits:
- If one or more (but not all) zones in a region experience an outage, the cluster's control plane remains accessible as long as one replica of the control plane remains available.
- During cluster maintenance such as a cluster upgrade, only one replica of the control plane is unavailable at a time, and the cluster is still operational.
By default, the control plane and each node pool are replicated across three zones of a region, but you can customize the number of replicas.
You cannot modify whether a cluster is zonal, multi-zonal, or regional after creating the cluster.
How regional clusters work
Regional clusters replicate cluster masters and nodes across multiple zones within a single region. For example, a regional cluster in the us-east1 region creates replicas of the control plane and nodes in three us-east1 zones: us-east1-b, us-east1-c, and us-east1-d. In the event of an infrastructure outage, your workloads continue to run, and nodes can be rebalanced manually or by using the cluster autoscaler.
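As a minimal sketch of creating a regional cluster (the cluster name my-regional-cluster is a placeholder), you specify a region rather than a zone:

```
gcloud container clusters create my-regional-cluster \
    --region us-east1
```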
Benefits of using regional clusters include:
- Resilience from single zone failure. Regional clusters are available across a region rather than a single zone within a region. If a single zone becomes unavailable, your Kubernetes control plane and your resources are not impacted.
- Continuous master upgrades, master resize, and reduced downtime from master failures. With redundant replicas of the control plane, regional clusters provide higher availability of the Kubernetes API, so you can access your control plane even during upgrades.
By default, regional clusters consist of nine nodes (three per zone) spread evenly across three zones in a region, which consumes nine IP addresses. You can reduce the number of nodes to as few as one per zone, if desired. Newly created Google Cloud accounts are granted only eight IP addresses per region, so you may need to request an increase in your quota for regional in-use IP addresses, depending on the size of your regional cluster. If too few in-use IP addresses are available, cluster creation fails.
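As a sketch, assuming the same placeholder cluster name, you can request one node per zone at creation time:

```
gcloud container clusters create my-regional-cluster \
    --region us-east1 \
    --num-nodes 1
```

Because the cluster is regional, --num-nodes applies per zone, so this example creates three nodes in total.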
For regional clusters that run GPUs, you must either choose a region that has GPUs in three zones, or specify zones using the --node-locations flag. Otherwise, you may see an error like the following:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message= (1) accelerator type "nvidia-tesla-k80" does not exist in zone us-west1-c. (2) accelerator type "nvidia-tesla-k80" does not exist in zone us-west1-a.
For a complete list of regions and zones where GPUs are available, refer to GPUs on Compute Engine.
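As an illustrative sketch, you might restrict a regional GPU cluster to a zone where the accelerator is available; the cluster name, zone, and GPU count are assumptions, and you should confirm accelerator availability for your region first:

```
gcloud container clusters create my-gpu-cluster \
    --region us-west1 \
    --node-locations us-west1-b \
    --accelerator type=nvidia-tesla-k80,count=1
```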
You can't create node pools in zones outside of the cluster's zones. However, you can change a cluster's zones, which causes all new and existing nodes to span those zones.
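For example, a sketch of changing an existing regional cluster's zones (the cluster name and zones are placeholders) looks like this:

```
gcloud container clusters update my-regional-cluster \
    --region us-east1 \
    --node-locations us-east1-b,us-east1-c,us-east1-d
```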
Using regional clusters requires more of your project's regional quotas than a similar zonal or multi-zonal cluster. Ensure that you understand your quotas and your usage before using regional clusters. If you encounter an Insufficient regional quota to satisfy request for resource error, your request exceeds your available quota in the current region.
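To check the current quota usage for a region, including in-use IP addresses, you can describe the region; us-east1 is a placeholder:

```
gcloud compute regions describe us-east1
```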
Also, you are charged for node-to-node traffic across zones. For example, if a workload running in one zone needs to communicate with a workload in a different zone, the cross-zone traffic incurs cost. For more information, see Egress between zones in the same region (per GB) on the Compute Engine pricing page.
Persistent storage in regional clusters
Zonal persistent disks are zonal resources and regional persistent disks are multi-zonal resources. When you add persistent storage, unless a zone is specified, GKE assigns the disk to a single, random zone. To learn how to control the zones, see Zones in persistent disks.
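If you want to see which zone GKE assigned to each dynamically provisioned disk, one option is to list the disks in your project; the output includes each disk's location:

```
gcloud compute disks list
```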
Autoscaling regional clusters
Keep the following considerations in mind when using the cluster autoscaler to automatically scale node pools in regional clusters.
Overprovisioning scaling limits
To maintain capacity in the unlikely event of zonal failure, you can allow GKE to overprovision your scaling limits, to guarantee a minimum level of availability even when some zones are unavailable.
For example, if you overprovision a three-zone cluster to 150% (50% excess capacity), you can ensure that 100% of traffic is routed to available zones if one-third of the cluster's capacity is lost. Suppose the cluster needs four nodes per zone (12 nodes total) to serve its traffic; you would accomplish this by specifying a maximum of six nodes per zone rather than four. If one zone fails, the cluster scales up to 12 nodes across the remaining two zones, preserving the original serving capacity.
Similarly, if you overprovision a two-zone cluster to 200%, you can ensure that 100% of traffic is rerouted if half of the cluster's capacity is lost.
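As a sketch of the three-zone example above (the cluster name is a placeholder and other settings are left at their defaults), you could set the autoscaler's per-zone maximum to six instead of four:

```
gcloud container clusters create my-regional-cluster \
    --region us-east1 \
    --num-nodes 4 \
    --enable-autoscaling \
    --min-nodes 1 \
    --max-nodes 6
```

For a regional cluster, --min-nodes and --max-nodes apply per zone, so the autoscaler can grow the node pool to 18 nodes across three zones, or to 12 nodes across two zones if one zone becomes unavailable.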