GKE On-Prem runs in your data center in a vSphere environment. This topic describes the requirements for your vSphere environment, including storage, CPU, RAM, and virtual networks.
For an introduction to GKE On-Prem and a technical overview of the product, see GKE On-Prem overview.
In addition to the minimum requirements for running ESXi, consider the physical CPU and RAM that you need to support the virtual machines (VMs) in your GKE On-Prem clusters.
VMware vSphere is a suite of virtualization software that includes VMware vCenter Server, a management tool, and VMware ESXi, a type-1 hypervisor. You install GKE On-Prem to a VMware vSphere 6.5 cluster that you have configured in your data center. To learn about installing vSphere, refer to Overview of the vSphere Installation and Setup Process in the VMware documentation.
License edition and version requirements
GKE On-Prem requires VMware vSphere 6.5. You need the following VMware licenses:
- A vSphere Enterprise Plus or vSphere Standard license. The Enterprise Plus license is recommended, because it allows you to enable the VMware Distributed Resource Scheduler (DRS). With DRS enabled, VMware automatically distributes your GKE cluster nodes across physical hosts in your data center. Pricing for this license is based on the number of physical CPUs on your ESXi hosts. This license is perpetual; that is, it never expires. Along with this license, you must purchase a support subscription for at least one year.
- A vCenter Server Standard license. Pricing for this license is based on the number of VMs that run the server. Along with this license, you must also purchase a support subscription for at least one year. See VMware vCenter Server Editions on the VMware pricing page.
vCenter user account permissions
The vSphere user account you use to install GKE On-Prem clusters needs to have sufficient permissions. vCenter's Administrator role provides its users complete access to all vCenter objects.
Alternatively, you can create a custom role that has the minimum set of required permissions, including permissions on Cloud Native Storage and the root vCenter Server object.
To learn how to manage permissions, refer to Managing Permissions for vCenter Components.
Resource requirements for admin and user clusters
When you install GKE On-Prem, you create two GKE clusters:
- An admin cluster
- A user cluster
The admin cluster runs the GKE On-Prem infrastructure, and the user cluster runs your workloads. After the initial installation, you can create additional user clusters.
Your vSphere environment must have enough storage, CPU, and RAM resources to fulfill the needs of your admin and user clusters. The resource needs of your user clusters depend on the type of workloads you intend to run.
Storage, vCPU, and RAM requirements for the admin cluster
The physical ESXi hosts in your data center must provide enough CPU and RAM to fulfill the needs of your admin cluster. Also, your vSphere environment must provide enough storage to fulfill the needs of the admin cluster. The admin cluster has the following storage needs:
- A 40 GB virtual disk for each node. You can change this value in the cluster configuration file, but typically the default value of 40 GB is sufficient.
- A 100 GB virtual disk to store object data.
- 950 GB of virtual disk space to fulfill PersistentVolumeClaims (PVCs) created by Prometheus and Stackdriver. Prometheus needs enough storage for a few days of metrics, and Stackdriver needs storage to buffer logs during a network outage.
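As a quick sanity check, the storage needs above can be totaled with a short sketch. Python is used here only for illustration; the node count depends on how many user cluster control planes the admin cluster hosts:

```python
def admin_cluster_storage_gb(num_admin_nodes: int, disk_per_node_gb: int = 40) -> int:
    """Estimate total admin cluster storage from the figures above."""
    node_disks = num_admin_nodes * disk_per_node_gb  # one boot disk per node (default 40 GB)
    object_data = 100                                # virtual disk for object data
    pvcs = 950                                       # PVCs for Prometheus and Stackdriver
    return node_disks + object_data + pvcs

print(admin_cluster_storage_gb(7))  # 1330 (matches the worked example later on this page)
```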
The following table describes the nodes in the admin cluster. Each admin cluster node has a 40 GB virtual disk and 4 vCPU:
|Name||Specifications||Purpose|
|Admin cluster control plane||One VM||Runs the admin control plane in the admin cluster.|
|Admin cluster add-ons||Two VMs||Run the admin control plane's add-ons in the admin cluster.|
|User cluster control plane||One or three VMs for each user cluster||Each user cluster has its own control plane. User control plane VMs run in the admin cluster. You can choose to create one or three user control planes for an individual user cluster.|
Storage, vCPU, and RAM requirements for a user cluster
For each user cluster that you intend to create, the physical ESXi hosts in your data center must provide enough CPU and RAM to fulfill the needs of the user cluster. Also, for each user cluster that you intend to create, your vSphere environment must provide enough storage to fulfill the needs of the cluster. A user cluster has the following storage needs:
- A virtual disk for each node. The default size is 40 GB, but you can change this value in the cluster configuration file, depending on the storage needs of the workloads you intend to run.
- 950 GB of storage to fulfill PVCs created by Prometheus and Stackdriver. Prometheus needs enough storage for a few days of metrics, and Stackdriver needs storage to buffer logs during a network outage.
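The same arithmetic applies to each user cluster. A minimal sketch, where the per-node disk size defaults to 40 GB but can be set in the cluster configuration file:

```python
def user_cluster_storage_gb(num_nodes: int, disk_per_node_gb: int = 40) -> int:
    """Estimate user cluster storage: one virtual disk per node plus PVC space."""
    pvcs = 950  # PVCs for Prometheus and Stackdriver
    return num_nodes * disk_per_node_gb + pvcs

print(user_cluster_storage_gb(20))  # 1750
print(user_cluster_storage_gb(8))   # 1270
```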
The following table describes the default values for storage, vCPU, and RAM for each node in a user cluster. Depending on the needs of your workloads, you might want to adjust these values. You can specify values for storage, vCPU, and RAM in the cluster configuration file:
|Name||Specifications||Purpose|
|User cluster worker nodes||Defaults for an individual worker node: 40 GB of storage, 4 vCPU, and 8192 MB of RAM||Runs your workloads.|
A user cluster node is a virtual machine where workloads run. When you create a user cluster, you decide how many nodes it should run. The configuration required for each node depends on the workloads you run.
For information on the maximum number of clusters and nodes you can create, see Quotas and limits.
Example of storage, vCPU, and RAM requirements
Suppose you intend to create the following clusters:
- An admin cluster
- A user cluster where you think each node will need 40 GB of disk space, 6 vCPU, and 16384 MB of RAM. This user cluster will have 20 nodes. You want the control plane for this user cluster to be highly available, so there will be three nodes in the admin cluster that run control plane components for this user cluster.
- A second user cluster where you think the default storage, vCPU, and RAM values will be appropriate. This user cluster will have eight nodes. You do not need the control plane for this user cluster to be highly available, so there will be only one node in the admin cluster that runs control plane components for this user cluster.
The admin cluster has a control plane node, two nodes for add-ons, three nodes for the control plane of your first user cluster, and one node for the control plane of your second user cluster. So the admin cluster has seven nodes.
Each node in the admin cluster requires 40 GB of disk space and 4 vCPU. Three of the admin cluster nodes require 16384 MB of RAM, and four of the admin cluster nodes require 8192 MB of RAM. The admin cluster needs a 100 GB persistent disk to store its object data. Also, the admin cluster requires 950 GB of disk space to fulfill PVCs created by Stackdriver and Prometheus.
The following table summarizes the storage, vCPU, and RAM requirements for the admin cluster:
|Example: Admin cluster requirements|
|Storage||7 x 40 + 100 + 950||1330 GB|
|vCPU||7 x 4||28 vCPU|
|RAM||3 x 16384 + 4 x 8192||81920 MB|
Each node in the first user cluster requires 40 GB of disk space, 6 vCPU, and 16384 MB of RAM. Also, the first user cluster requires 950 GB of disk space to fulfill PVCs created by Stackdriver and Prometheus.
The following table summarizes the storage, vCPU, and RAM requirements for the first user cluster:
|Example: First user cluster requirements|
|Storage||20 x 40 + 950||1750 GB|
|vCPU||20 x 6||120 vCPU|
|RAM||20 x 16384||327680 MB|
Each node in the second user cluster requires 40 GB of disk space, 4 vCPU, and 8192 MB of RAM. Also, the second user cluster requires 950 GB of disk space to fulfill PVCs created by Stackdriver and Prometheus.
The following table summarizes the storage, vCPU, and RAM requirements for the second user cluster:
|Example: Second user cluster requirements|
|Storage||8 x 40 + 950||1270 GB|
|vCPU||8 x 4||32 vCPU|
|RAM||8 x 8192||65536 MB|
|Example: Total requirements|
|Storage||1330 + 1750 + 1270||4350 GB|
|vCPU||28 + 120 + 32||180 vCPU|
|RAM||81920 + 327680 + 65536||475136 MB|
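As a cross-check, the per-cluster requirements and the overall totals can be reproduced with a few lines of arithmetic. This is illustrative only; every figure comes from the example above:

```python
# Figures taken from the per-cluster tables above.
admin = {
    "storage_gb": 7 * 40 + 100 + 950,   # node disks + object data + PVCs
    "vcpu": 7 * 4,
    "ram_mb": 3 * 16384 + 4 * 8192,
}
user1 = {"storage_gb": 20 * 40 + 950, "vcpu": 20 * 6, "ram_mb": 20 * 16384}
user2 = {"storage_gb": 8 * 40 + 950, "vcpu": 8 * 4, "ram_mb": 8 * 8192}

# Sum the three clusters to get the total vSphere environment requirements.
total = {key: admin[key] + user1[key] + user2[key] for key in admin}
print(total)  # {'storage_gb': 4350, 'vcpu': 180, 'ram_mb': 475136}
```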
Virtual network requirements
All of the VMs that get created as part of your GKE On-Prem infrastructure are connected to a vSphere virtual network in your data center. Your vSphere virtual network must meet the following requirements:
- Your network must have a running vCenter Server.
- Your network must be capable of supporting a load balancer. For information about setting up a load balancer, see Setting up your load balancer for GKE On-Prem.
- Your network must be capable of supporting the set of VMs that get created when you install GKE On-Prem.
- The vCenter Server, the load balancer, and all of the VMs that get created as part of your GKE On-Prem clusters must be able to route to each other.
If you choose to use the F5 BIG-IP load balancer, you need to have a user role that has sufficient permissions to set up and manage the load balancer. Either the Administrator role or the Resource Administrator role is sufficient. For more information, see F5 BIG-IP account permissions.
Allowlisting Google and HashiCorp addresses for your proxy
If your organization requires Internet access to pass through a firewall or an HTTP proxy server, allowlist the following Google and HashiCorp addresses in your firewall and proxy server:
- vCenter server IP address
For information about setting up your firewall rules, see Firewall rules.
Network Time Protocol
All VMs in your vSphere environment must use the same Network Time Protocol (NTP) server.
If your admin workstation and cluster nodes get their IP addresses from a DHCP server, you can configure the DHCP server to provide the address of an NTP server. If DHCP does not specify an NTP server, GKE On-Prem uses ntp.ubuntu.com by default.
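To spot-check that a node can reach an NTP server such as ntp.ubuntu.com, you can send a minimal SNTP client request by hand. This sketch follows the standard SNTP packet layout (RFC 4330) and is purely illustrative; it is not part of GKE On-Prem:

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request() -> bytes:
    # First byte: LI=0, version=3, mode=3 (client); the remaining 47 bytes are zero.
    return b"\x1b" + 47 * b"\x00"

def parse_transmit_timestamp(packet: bytes) -> float:
    # The Transmit Timestamp occupies bytes 40-47: 32-bit seconds, 32-bit fraction.
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

def query_ntp(server: str = "ntp.ubuntu.com", timeout: float = 5.0) -> float:
    """Return the server's clock as a Unix timestamp (requires network access)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_sntp_request(), (server, 123))
        packet, _ = sock.recvfrom(48)
        return parse_transmit_timestamp(packet)
```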
GKE On-Prem limitations
As you plan your resource needs, be aware of the limits on the number of clusters and nodes. The following table summarizes some of the limits for GKE On-Prem.
|Maximum and minimum limits for clusters and nodes||See Quotas and limits. Your environment's performance might impact these limits.|
|Unique names for user clusters||In a Google Cloud project, each user cluster must have a unique name.|
|Cannot deploy to more than one vSphere data center||A GKE On-Prem installation cannot span more than one vSphere data center. That is, an admin cluster and its associated user clusters must all run in the same data center.|
|Cannot declaratively change cluster configurations after creation||While you can create additional clusters and resize existing clusters, you cannot change an existing cluster through its configuration file.|