About cluster configuration choices


This page explains the main cluster configuration choices you can make when creating a cluster in Google Kubernetes Engine (GKE), whether you're using the Google Cloud console, the Google Cloud CLI, or Terraform. These options let you customize a wide range of cluster attributes and behavior to meet your needs, from whether the cluster is accessible from public networks to how you want it to receive version upgrades.

Many of the options discussed in this guide can't be changed after a cluster is created, including choices that affect a cluster's availability and networking. If you do need to change these options, you must create a new cluster and then migrate traffic to it, which might be disruptive.

Best practice:

Because many cluster configuration options can't be changed after cluster creation, plan and design your cluster configuration with your organization's Admins and architects, Cloud architects, Network administrators, and any other teams responsible for defining, implementing, and maintaining the GKE and Google Cloud architecture.

This page is for Admins and architects who define IT solutions and system architecture in accordance with company strategy. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.

Before reading this page, you should be familiar with basic Kubernetes concepts.

Level of cluster management

Before discussing cluster options, it's important to understand the level of flexibility, responsibility, and control that you require for your cluster. The level of control that you require determines the mode of operation to use in GKE, and the cluster configuration choices that you need to make.

When you create a cluster in GKE, you do so by using one of the following modes of operation:

  • Autopilot (recommended): Provides a fully provisioned and managed cluster configuration. Autopilot clusters are preconfigured with an optimized setup that is ready for production workloads.

  • Standard: Provides advanced configuration flexibility over the cluster's underlying infrastructure. For clusters created using the Standard mode, you determine the configuration needed for your production workloads.

For more information about these modes, and to learn more about Autopilot, see GKE modes of operation and the Autopilot overview.

You can find a detailed side-by-side comparison of the two modes in Compare GKE Standard and Autopilot.
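
For example, each mode uses a different gcloud CLI command at creation time. The following is a minimal sketch, assuming a project with default networking; the cluster names and region are placeholders:

```bash
# Autopilot: GKE provisions and manages the nodes for you.
gcloud container clusters create-auto example-autopilot-cluster \
    --region=us-central1

# Standard: you configure and manage node pools yourself.
gcloud container clusters create example-standard-cluster \
    --region=us-central1 \
    --num-nodes=1
```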

Cluster configuration choices

After you have chosen a mode of operation, you then select the configuration you require for your cluster. Because Autopilot clusters are more fully managed and configured by Google Cloud than Standard clusters, they have fewer available configuration choices.

Configuration options for all clusters fall into the following categories:

  • Name and other metadata: Each cluster must have an identifying name that is unique within its project. You can also optionally add a cluster description and labels.
  • Availability and scalability: Specify where you want your cluster control plane and nodes to run, and whether you want multiple control plane replicas. All Autopilot clusters are regional, which means they have multiple control plane replicas across multiple compute zones in a Google Cloud region.
  • Fleet membership: Choose whether you want your cluster to be a member of a fleet.
  • Networking: Networking options including the Virtual Private Cloud (VPC) network and subnet the cluster is in, and whether you want your cluster to be accessible from public networks.
  • Versions and upgrade management: Use release channels to choose your preferred balance between new features and stability when upgrading this cluster's software, and set maintenance windows and exclusions to choose when upgrades can and can't occur.
  • Security: This includes whether the cluster uses Workload Identity Federation for GKE and the service account used by the cluster's nodes to authenticate to Google Cloud.
  • Cluster features: Enable and configure additional GKE and Google Cloud features for this cluster, including backups and observability. Standard mode also lets you create short-lived alpha clusters to try out alpha Kubernetes features.

In addition to these, Standard clusters also have options in the following category:

  • Node pools: Specify details about your cluster's nodes, including node pools, node operating system, and node sizing.

The following sections look at some of these categories in more detail, particularly those with options where you can't change the setting after you create your cluster. For a complete list of configuration options, see Configuration reference.

The following table compares available options in some key areas for Autopilot and Standard clusters:

| Cluster choices | Autopilot mode | Standard mode |
| --- | --- | --- |
| Availability type | Regional | Regional or Zonal |
| Release channel | Rapid, Regular, or Stable | Any channel |
| Cluster versions | Default or another available version | Default or another available version |
| Network routing | VPC-native | VPC-native or Routes-based |
| Network isolation | Customizable | Customizable |
| Kubernetes features | Production | Production or Alpha |

Cluster availability type

With GKE, you can create a cluster tailored to the availability requirements of your workload and your budget. You can choose between regional clusters that have multiple control plane replicas across multiple compute zones in a Google Cloud region, or zonal clusters with a single control plane in a single zone. Autopilot clusters are always regional.

To help you choose which cluster type to create in Standard mode, see Choosing a regional or zonal control plane.

These settings can't be updated after cluster creation: a zonal cluster can't become a regional cluster, and a regional cluster can't become a zonal cluster.

Best practice:

For production workloads, use regional clusters because they offer higher availability than zonal clusters. For development environments, use regional clusters with zonal node pools. A cluster with a regional control plane and zonal node pools has the same costs as a zonal cluster.

Regional clusters

A regional cluster has multiple replicas of the control plane running in multiple zones within your specified Google Cloud region. You must specify a region when creating an Autopilot cluster or any other regional cluster.

For regional Standard clusters only, you can also choose the zones in which the cluster's nodes run. Nodes in a regional cluster can run in multiple zones or in a single zone, depending on the configured node locations. By default, GKE replicates each node pool across three zones of the control plane's region. When you create a Standard cluster or add a new node pool, you can change this default by specifying the zone or zones in which the cluster's nodes run. All zones must be within the same region as the control plane.

Use regional clusters to run your production workloads, as they offer higher availability than zonal clusters.

You can't change a regional cluster's region after cluster creation.
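
As a minimal sketch (the cluster name, region, and zones are placeholders), the following gcloud CLI command creates a regional Standard cluster and overrides the default node locations so that nodes run in only two zones of the control plane's region:

```bash
# Regional control plane in us-central1; node pools are replicated
# across the two listed zones instead of the default three.
gcloud container clusters create example-regional-cluster \
    --region=us-central1 \
    --node-locations=us-central1-a,us-central1-b
```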

Zonal clusters (Standard clusters only)

Zonal clusters have a single control plane in a single zone. Depending on your workload availability requirements, you can choose to distribute your nodes for your zonal cluster in a single zone or in multiple zones.

To create a zonal cluster, see Creating a zonal cluster.

Single-zone clusters

A single-zone cluster has a single control plane running in one zone. This control plane manages workloads on nodes running in the same zone. If you run a workload in a single zone, this workload is unavailable in the event of a zonal outage.

Multi-zonal clusters

A multi-zonal cluster has a single replica of the control plane running in a single zone, and nodes running in multiple zones. During an upgrade of the cluster, or during an outage of the zone where the control plane runs, workloads continue to run. However, the cluster, its nodes, and its workloads can't be configured until the control plane is available. Multi-zonal clusters balance availability and cost for consistent workloads. If you want to maintain availability and the number of your nodes and node pools changes frequently, consider using a regional cluster. If you run a workload in multiple zones and there is a zonal outage, the workload is disrupted in that zone but remains available in the other zones.
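
The following gcloud CLI sketch shows both zonal variants; the cluster names and zones are placeholders:

```bash
# Single-zone cluster: control plane and nodes all in us-central1-a.
gcloud container clusters create example-zonal-cluster \
    --zone=us-central1-a

# Multi-zonal cluster: control plane in us-central1-a, with nodes
# spread across three zones.
gcloud container clusters create example-multi-zonal-cluster \
    --zone=us-central1-a \
    --node-locations=us-central1-a,us-central1-b,us-central1-c
```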

Fleet membership

If your organization uses multiple clusters, you can simplify multi-cluster management by adding the clusters to a fleet: a logical grouping of Kubernetes clusters. Creating a fleet helps your organization uplevel management from individual clusters to entire groups of clusters, and lets you use fleet-enabled features such as Multi Cluster Ingress, Config Sync, and Policy Controller.

Although you can add clusters to a fleet at any time, if you have enabled GKE Enterprise, we strongly recommend registering new clusters to a fleet during cluster creation. These "born in the fleet" clusters are created with your chosen fleet-level default settings for a number of enterprise features, and with recommended logs and metrics already enabled.

This setting can be updated after cluster creation to register or unregister the cluster, although we don't recommend moving clusters with live workloads from one fleet to another.

You can learn much more about adding clusters to fleets in Create fleets to simplify multi-cluster management.
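
As a sketch, you can register a cluster to a fleet at creation time with the `--fleet-project` flag; `FLEET_HOST_PROJECT_ID` is a placeholder for the project that hosts your fleet:

```bash
# Create an Autopilot cluster that is a fleet member from the start,
# so that it picks up your fleet-level default settings.
gcloud container clusters create-auto example-cluster \
    --region=us-central1 \
    --fleet-project=FLEET_HOST_PROJECT_ID
```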

Networking settings

When creating a GKE cluster, you can specify a number of networking settings, including the network the cluster is in, the network routing mode, and whether you want your cluster nodes to be accessible from public networks.

If you are not a network administrator, consult your organization's Networking specialists before creating a production-ready cluster, because many of these options can't be changed after cluster creation. If you are a network administrator, you can find out much more about GKE networking in About GKE networking, with best practices for networking options in Best practices for GKE networking. This section describes only a subset of the available networking options.

Network and subnet

The Virtual Private Cloud (VPC) network that a cluster is in determines which other Compute Engine resources the cluster can communicate with. By default, GKE clusters are created in your project's default network, though you can choose another network if you or your administrator have created one. You can also specify a particular subnet in that network; otherwise, the default subnet is used. Optionally, you can specify the IP address ranges within that subnet to use for your Pods and Services.

These settings can't be updated after cluster creation.
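
For illustration, the following gcloud CLI sketch places a cluster in a specific network and subnet and uses named secondary ranges for Pods and Services. It assumes that the network, subnet, and secondary ranges already exist; all names are placeholders:

```bash
# Create a cluster in an existing VPC network and subnet, using
# pre-created secondary ranges for Pod and Service IP addresses.
gcloud container clusters create example-cluster \
    --region=us-central1 \
    --network=example-vpc \
    --subnetwork=example-subnet \
    --cluster-secondary-range-name=pods-range \
    --services-secondary-range-name=services-range
```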

Network isolation choices

You can customize network isolation in your cluster by considering the following two aspects:

  • Access to the control plane: By default, both the internal endpoint and external endpoint of the control plane are enabled, and the DNS-based endpoint is disabled. You can choose to:

    • Disable both the external and internal endpoints and use only the DNS-based endpoint.
    • Disable only the external endpoint to prevent access from external clients.
    • Enable authorized networks to control which IP addresses reach the control plane endpoints.
  • Cluster networking configuration: You can choose to enable private nodes in your cluster to completely isolate workloads from public networks. You can enable private nodes for entire clusters or at the node pool (for Standard) or workload (for Autopilot) level. Enabling private nodes at the node pool or workload level overrides any node configuration at the cluster level.

These settings can be changed after cluster creation.

For more information on network isolation, see About customizing network isolation.
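
As a hedged sketch, the following command creates a Standard cluster with private nodes, the external control plane endpoint disabled, and control plane access limited to one authorized network. Depending on your GKE version and network setup, additional flags might be required; the cluster name and CIDR range are placeholders:

```bash
# Private nodes, no external control plane endpoint, and access
# restricted to a single authorized IP range.
gcloud container clusters create example-private-cluster \
    --region=us-central1 \
    --enable-private-nodes \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/28
```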

Best practice:

Use Cloud NAT to provide GKE Pods with access to resources with public IP addresses. Cloud NAT improves the overall security posture of the cluster because Pods are not directly exposed to the internet, but are still able to access internet-facing resources.
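
For example, a minimal Cloud NAT setup for a cluster's VPC network might look like the following sketch; the router, gateway, and network names are placeholders:

```bash
# Cloud Router plus a Cloud NAT gateway so that private nodes and
# Pods can reach the internet without external IP addresses.
gcloud compute routers create example-router \
    --network=example-vpc \
    --region=us-central1

gcloud compute routers nats create example-nat \
    --router=example-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```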

VPC-native and routes-based clusters

In GKE, clusters are distinguished by the way they route traffic from one Pod to another. A cluster that uses alias IP address ranges is called a VPC-native cluster. A cluster that uses Google Cloud routes is called a routes-based cluster.

By default, all new GKE clusters use VPC-native routing, which is the recommended option. In Standard mode only, you can change this setting at cluster creation to create a routes-based cluster. This setting can't be updated after cluster creation.

You can learn more about VPC-native clusters and their benefits, including any special requirements they have, in VPC-native clusters.
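
New clusters are VPC-native by default, so no extra flag is needed for the recommended configuration. As a sketch, a routes-based Standard cluster is created by disabling alias IP addresses at creation time; the cluster name and zone are placeholders:

```bash
# Routes-based cluster (Standard mode only); not recommended for
# most workloads.
gcloud container clusters create example-routes-cluster \
    --zone=us-central1-a \
    --no-enable-ip-alias
```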

Best practice:

Use the VPC-native network mode for your clusters. This is the default for Autopilot clusters.

Versions and upgrades

With release channels, GKE picks software versions for a cluster based on your chosen balance between feature availability and stability. When you create a cluster, you can choose which release channel to use. New clusters (both Autopilot and Standard) are enrolled in the Regular release channel by default, though you can choose a specific version during cluster creation if required.

Autopilot clusters always use release channels. Standard clusters use release channels by default; however, you can choose not to enroll your cluster in a release channel, although we don't recommend this because opting out gives you more limited access to cluster features.

GKE automatically upgrades all clusters over time, regardless of release channel enrollment. For enrolled clusters, GKE upgrades the control plane and nodes as new versions become available in that release channel. You can control the timing and scope of upgrades with maintenance windows and exclusions.

You can change a cluster's release channel at any time.
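
As a sketch, the following commands enroll a new cluster in the Stable channel with a recurring weekend maintenance window, and later move it to the Regular channel; the cluster name, region, and window times are placeholders:

```bash
# Create a cluster on the Stable channel that only upgrades during
# a weekly Saturday/Sunday maintenance window.
gcloud container clusters create example-cluster \
    --region=us-central1 \
    --release-channel=stable \
    --maintenance-window-start=2026-01-03T22:00:00Z \
    --maintenance-window-end=2026-01-04T06:00:00Z \
    --maintenance-window-recurrence="FREQ=WEEKLY;BYDAY=SA,SU"

# Change the release channel later if your needs change.
gcloud container clusters update example-cluster \
    --region=us-central1 \
    --release-channel=regular
```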

For information regarding upcoming automatic upgrades, see the GKE release notes.

Best practice:

Choose a release channel for GKE to select versions for your cluster with your chosen balance between feature availability and stability. Use maintenance windows and exclusions to control the timing and scope of automatic upgrades.

Alpha features (Standard clusters only)

New features in Kubernetes are listed as Alpha, Beta, or Stable, depending on their development status. In most cases, Kubernetes features that are listed as Beta or Stable are included with GKE clusters.

If you want to experiment with very new features that are not production ready, alpha features are available in special GKE alpha clusters. An alpha cluster has all Kubernetes alpha APIs (sometimes called feature gates) enabled. You can use alpha clusters for early testing and validation of Kubernetes features. Alpha clusters are not supported for production workloads, can't be upgraded or added to release channels, and expire within 30 days.

Alpha features are not available for Autopilot clusters.

To create an alpha cluster, see Creating an alpha cluster.
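
As a sketch, creating an alpha cluster with the gcloud CLI looks like the following; because alpha clusters can't be upgraded, node auto-upgrade and auto-repair are disabled. The cluster name and zone are placeholders:

```bash
# Short-lived alpha cluster with all Kubernetes alpha APIs enabled.
gcloud container clusters create example-alpha-cluster \
    --zone=us-central1-a \
    --enable-kubernetes-alpha \
    --no-enable-autoupgrade \
    --no-enable-autorepair
```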

Security settings

GKE has a number of security settings that you can specify at cluster creation. These include encryption settings, security features such as Binary Authorization, the service account you want to use for the cluster's nodes (discussed in more detail in the next section), and whether your cluster uses Workload Identity Federation for GKE.

As with other settings, you should consult with expert colleagues — in this case, your organization's Security specialists — before creating a production-ready cluster. You can learn much more about GKE security in our Security overview and Harden your cluster security.
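
For example, Workload Identity Federation for GKE is enabled on a Standard cluster at creation time by setting the cluster's workload pool (it's enabled by default on Autopilot clusters). In this sketch, `PROJECT_ID` is a placeholder for your project ID:

```bash
# Enable Workload Identity Federation for GKE so that Kubernetes
# service accounts can authenticate as IAM identities.
gcloud container clusters create example-cluster \
    --region=us-central1 \
    --workload-pool=PROJECT_ID.svc.id.goog
```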

Service account for nodes

Each GKE node has a service account that it uses to authenticate to Google Cloud APIs and services. For Autopilot clusters, you specify this for the entire cluster. For Standard clusters, you specify this for each node pool. By default, GKE uses the Compute Engine default service account for nodes, though you can specify another custom service account.

This setting can't be changed after cluster creation (for Autopilot) or node pool creation (for Standard). Depending on the service account that you choose and your organization's policies, your choice might affect your cluster's ability to use Google Cloud services, including writing metrics. You can grant the service account additional permissions if you have the necessary permissions to do so yourself.

Best practice:

For improved security, don't use the Compute Engine default service account for your nodes if it has the IAM Editor role (the default behavior for organizations created before May 2024), because it has more permissions than you need to run a cluster. Instead, either modify the default service account's permissions or use a custom service account with less permissive settings, as shown in the sketch after this paragraph. Your organization might have already specified that default service accounts don't get Editor permissions, in which case you or your Identity and account admins can grant appropriate roles to the default service account.
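
The following sketch creates a minimally privileged custom service account and uses it for a cluster's nodes. The account name, project ID, and the exact set of roles your workloads need are assumptions; adjust them for your environment:

```bash
# Create a dedicated service account for nodes.
gcloud iam service-accounts create example-node-sa \
    --display-name="GKE node service account"

# Grant only the logging and monitoring roles that nodes typically
# need, rather than the broad Editor role.
for role in roles/logging.logWriter roles/monitoring.metricWriter \
    roles/monitoring.viewer; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:example-node-sa@PROJECT_ID.iam.gserviceaccount.com" \
      --role="$role"
done

# Use the custom service account for the cluster's nodes.
gcloud container clusters create example-cluster \
    --region=us-central1 \
    --service-account=example-node-sa@PROJECT_ID.iam.gserviceaccount.com
```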

Node pool settings (Standard clusters only)

As you know from the Cluster administration overview and GKE modes of operation, if you use Autopilot for your clusters you don't need to worry about node configuration because GKE configures your nodes for you. Autopilot cluster nodes are all fully managed by GKE and all use the same node operating system (OS).

If you choose to create a Standard cluster, you can specify a number of node options when creating a cluster, including:

  • The name, number, size, and location of node pools you want to use; node pools are groups of nodes within your cluster that share a common configuration.
  • The node OS you want to use for new nodes.
  • Whether you want to use ephemeral Spot VMs for nodes.
  • The Compute Engine machine type you want to use for nodes.
  • The service account you want to use for nodes, as described in Security settings.

Some of a cluster's node pool settings can be changed after cluster creation, though all Standard clusters require at least one node pool. You can learn much more about node pool configuration in Add and manage node pools.
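
As a sketch, adding a node pool with a specific machine type and Spot VMs to an existing Standard cluster looks like the following; the pool and cluster names are placeholders:

```bash
# Add a node pool of two e2-standard-4 Spot VMs per zone to an
# existing regional Standard cluster.
gcloud container node-pools create example-spot-pool \
    --cluster=example-standard-cluster \
    --region=us-central1 \
    --machine-type=e2-standard-4 \
    --num-nodes=2 \
    --spot
```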

Configuration reference

Find a complete list of possible configuration options in the reference documentation for the Google Cloud console, the gcloud CLI, and Terraform.

What's next