System requirements

GKE On-Prem requires the following:

  • A VMware vSphere 6.5 environment, which includes vSphere 6.5, vCenter 6.5, and vSphere ESXi 6.5.
  • A layer 4 network load balancer. By default, GKE On-Prem integrates with F5 BIG-IP versions 12.x or 13.x. You can also choose to enable manual load balancing and use your own L4 load balancer.

Before you begin, you should read the GKE On-Prem overview.

Hardware requirements

GKE On-Prem allows you to choose from a wide array of hardware options. To learn about hardware requirements inherited from vSphere, see VMware/vSphere 6.5 hardware specification.

Google is working with several OEM partners to provide validated hardware solutions. Choose options from these partners to get both known working solutions and the benefit of collaborative support (whereby Google and our partners work jointly to resolve installation issues).

Our OEM partners currently include the following; this list will be updated as new platforms and partners join the GKE Enterprise family of solutions:

  • Cisco
  • Dell
  • HPE
  • Intel
  • Lenovo

vSphere requirements

You install GKE On-Prem to a VMware vSphere 6.5 environment. To learn about installing vSphere, refer to Overview of the vSphere Installation and Setup Process in the VMware documentation.

vSphere 6.5 includes the following components:

  • VMware ESXi 6.5
  • VMware vCenter Server 6.5

Follow the sections below to learn how to configure an existing vSphere installation for use with GKE On-Prem.

License edition and version requirements

Currently, GKE On-Prem requires VMware vSphere 6.5. Specifically, you need the following licensed VMware products:

  • VMware ESXi 6.5, with an Enterprise Plus license edition, installed on each of the hosts in your data center.
  • VMware vCenter 6.5, with a Standard license edition, installed on one host in your data center.

For more information about these license editions, refer to VMware's licensing documentation.

vCenter user account privileges

The vSphere user account you use to install GKE On-Prem clusters needs to have sufficient privileges. vCenter's Administrator role provides its users complete access to all vCenter objects.

Alternatively, you can create a custom role with the minimum set of privileges required.

To learn how to manage permissions, refer to Managing Permissions for vCenter Components.

Virtual machines created during installation

During a new installation, GKE On-Prem creates several virtual machines (VMs, or "nodes") in vSphere. The following sections describe these VMs' specifications and their purposes.

Additionally, GKE On-Prem creates a 100 GB persistent disk to store the admin cluster's data.

Admin cluster

Admin cluster control plane

  • System prefix: gke-admin-master
  • Configuration field: N/A
  • Specifications: 4 vCPU, 16384 MB RAM, 40 GB hard disk space
  • Purpose: Runs the admin control plane in the admin cluster.

Add-ons VMs

  • System prefix: gke-admin-node
  • Configuration field: N/A
  • Specifications: two VMs, each with 4 vCPU, 16384 MB RAM, 40 GB hard disk space
  • Purpose: Run the admin control plane's add-ons in the admin cluster.

User cluster control plane

  • System prefix: [USER_CLUSTER_NAME]-user-N
  • Configuration field: usercluster.masternode
  • Specifications: 4 vCPU, 8192 MB RAM, 40 GB hard disk space
  • Purpose: Each user cluster has its own control plane. User control plane VMs run in the admin cluster. You can choose to create one or three user control planes. If you choose three, GKE On-Prem creates three VMs, one for each control plane, with these specifications.

User clusters

User cluster worker nodes

  • System prefix: [USER_CLUSTER_NAME]-user
  • Configuration field: usercluster.workernode
  • Default specifications: 4 vCPU, 8192 MB RAM, 40 GB hard disk space
  • Purpose: A user cluster "node" (also called a "machine") is a virtual machine where workloads run. When you create a user cluster, you decide how many nodes it should run. The configuration required for each node depends on the workloads you run.

For information on the maximum number of clusters and nodes you can create, see Quotas and limits.

You can add or remove VMs from an existing user cluster. See Resizing a Cluster.
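
For rough capacity planning, the following Python sketch (an illustration, not part of GKE On-Prem) totals the vSphere resources reserved by the VMs described above, using the default specifications and the admin cluster's 100 GB data disk; the cluster counts passed in are hypothetical.

```python
# Totals the vCPU, RAM, and disk reserved by GKE On-Prem VMs, based on the
# default specifications listed above.
def vsphere_footprint(num_user_clusters: int,
                      control_planes_per_user_cluster: int,
                      workers_per_user_cluster: int):
    # (description, VM count, vCPU per VM, RAM MB per VM, disk GB per VM)
    vms = [
        ("admin control plane", 1, 4, 16384, 40),
        ("admin add-ons", 2, 4, 16384, 40),
        ("user control planes",
         num_user_clusters * control_planes_per_user_cluster, 4, 8192, 40),
        ("user worker nodes",
         num_user_clusters * workers_per_user_cluster, 4, 8192, 40),
    ]
    total_vcpu = sum(count * vcpu for _, count, vcpu, _, _ in vms)
    total_ram_mb = sum(count * ram_mb for _, count, _, ram_mb, _ in vms)
    total_disk_gb = sum(count * disk_gb for _, count, _, _, disk_gb in vms)
    total_disk_gb += 100  # the admin cluster's 100 GB data disk
    return total_vcpu, total_ram_mb, total_disk_gb

# Example: one user cluster with one control plane and three worker nodes.
print(vsphere_footprint(1, 1, 3))  # -> (28, 81920, 380)
```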

PersistentVolumeClaims in admin and user clusters

Your vSphere datastore must have room to fulfill PersistentVolumeClaims (PVCs) made by Prometheus and Google Cloud Observability in the admin and user clusters. For example, Prometheus needs enough storage for a few days of metrics, and Google Cloud Observability needs storage to buffer logs during a network outage.

Each cluster has the following PVCs:

  • Local Prometheus statefulset: 250 GiB * 2 replicas = 500 GiB
  • Stackdriver Prometheus statefulset: 250 GiB * 1 replica = 250 GiB
  • Log Aggregator statefulset: 100 GiB * 2 replicas = 200 GiB
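
As a quick check of datastore headroom, this minimal sketch sums the PVC storage listed above for one cluster and scales it by the number of clusters sharing the datastore (the cluster count is an example value).

```python
# PVC storage per cluster, in GiB, taken from the list above.
PVCS_GIB = {
    "local-prometheus": 250 * 2,        # 2 replicas
    "stackdriver-prometheus": 250 * 1,  # 1 replica
    "log-aggregator": 100 * 2,          # 2 replicas
}

per_cluster_gib = sum(PVCS_GIB.values())  # 950 GiB per cluster
clusters_on_datastore = 3                 # example: 1 admin + 2 user clusters
print(f"PVC storage needed: {per_cluster_gib * clusters_on_datastore} GiB")
```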

Network Time Protocol

All VMs in your vSphere environment must use the same Network Time Protocol (NTP) server.

If your admin workstation and cluster nodes use static IP addresses, you must specify the IP address of an NTP server in the tod field of your hostconfig file.

If your admin workstation and cluster nodes get their IP addresses from a DHCP server, you can configure the DHCP server to provide the address of an NTP server. If DHCP does not specify an NTP server, GKE On-Prem uses ntp.ubuntu.com by default.
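
If you want to confirm that your NTP server (or the ntp.ubuntu.com default) is reachable before installation, a minimal SNTP query like the one below can be run from the admin workstation; the server name is a placeholder for your own.

```python
import socket
import struct
import time

NTP_SERVER = "ntp.ubuntu.com"  # replace with the NTP server you plan to use
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def query_ntp(server: str, timeout: float = 5.0) -> float:
    # Minimal SNTP request: LI=0, VN=3, Mode=3 (client) -> first byte 0x1B.
    packet = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp (whole seconds) is in bytes 40-43 of the reply.
    ntp_seconds = struct.unpack("!I", data[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET

server_time = query_ntp(NTP_SERVER)
print(f"{NTP_SERVER} reports {time.ctime(server_time)}; "
      f"offset from local clock: {abs(server_time - time.time()):.1f}s")
```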

Load balancer requirements

GKE On-Prem has two load balancing modes: Integrated and Manual. Integrated mode supports F5 BIG-IP. Manual mode supports the following load balancers:

  • Citrix
  • Seesaw

Regardless of which load balancer you choose, your environment should meet the following networking requirements.

Setting aside VIPs for load balancing

You need to set aside several VIPs that you intend to use for load balancing. Set aside a VIP for each of the following purposes:

  • Admin control plane: Provides access to the admin cluster's Kubernetes API server. The admin cluster's kubeconfig file references this VIP in its server field.
  • Admin control plane add-ons: Manages communication between the admin control plane add-on VMs.
  • Admin control plane ingress: Used to interact with Services running within the admin cluster. Istio manages ingress into the cluster, and Services need to be explicitly configured for external access.
  • Each user cluster control plane: Provides access to that user cluster's Kubernetes API server. Each user cluster's kubeconfig file references one of these VIPs.
  • Each user cluster ingress controller: Used to interact with Services running in the user clusters.
  • Cluster upgrades: Used by GKE On-Prem during cluster upgrades.
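
As a bookkeeping aid, the sketch below counts how many VIPs to set aside for a given number of user clusters. It assumes one VIP per item in the list above, with a single shared upgrade VIP; confirm that assumption against your version's documentation.

```python
def required_vips(num_user_clusters: int) -> int:
    # Admin control plane, admin add-ons, admin ingress, cluster upgrades.
    admin_vips = 4
    # Per user cluster: one control-plane VIP and one ingress VIP.
    per_user_cluster_vips = 2
    return admin_vips + per_user_cluster_vips * num_user_clusters

print(required_vips(3))  # example: 3 user clusters -> 10 VIPs
```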

Setting aside CIDR blocks

You need to set aside CIDR blocks for the following ranges:

Pod IP range

  • CIDR IPv4 block (/C) set aside for Kubernetes Pod IPs. From this range, smaller /24 ranges are assigned per node. If you need an N node cluster, ensure /C is large enough to support N /24 blocks.
  • You need to set aside one Pod IP range for your admin cluster, and one Pod IP range for each user cluster that you intend to create.
  • For example, if the cluster can grow up to 10 nodes, then /C needs to be a /20 block as it supports up to 16 /24 ranges under it.

Service IP range

  • CIDR IPv4 block (/S) set aside for Kubernetes Service IPs. The size of the block determines the number of services. One Service IP is needed for the Ingress controller itself, and 10 or more IPs for Kubernetes Services like cluster DNS, etc.
  • You need to set aside one Service IP range for the admin cluster and one Service IP range for each user cluster that you intend to create. You should not use the same Service IP range for the admin cluster and user clusters.
  • Use a /24 (256 IPs) or larger block for this. Ranges smaller than /24 can limit the number of services that can be hosted per cluster.
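
To sanity-check the blocks you plan to reserve, the following sketch uses Python's ipaddress module to confirm that a Pod range can hand out a /24 per node and that a Service range is at least a /24; the example CIDRs are placeholders.

```python
import ipaddress

def pod_range_ok(cidr: str, max_nodes: int) -> bool:
    """True if the Pod range contains at least one /24 per node."""
    block = ipaddress.ip_network(cidr, strict=True)
    if block.prefixlen > 24:
        return False
    return 2 ** (24 - block.prefixlen) >= max_nodes

def service_range_ok(cidr: str) -> bool:
    """True if the Service range is a /24 (256 IPs) or larger."""
    return ipaddress.ip_network(cidr, strict=True).prefixlen <= 24

print(pod_range_ok("192.168.0.0/20", max_nodes=10))  # True: 16 x /24 blocks
print(service_range_ok("10.96.0.0/24"))              # True: 256 IPs
```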

Open your firewall for cluster lifecycle events

Be sure your firewall allows the following traffic. Unless otherwise stated, the default protocol is TCP:

  • From: admin control plane, user control planes, user cluster nodes
    To: *.googleapis.com (port 443), which resolves to Google netblock IP ranges, for access to Google Cloud Storage and Container Registry
    Lifecycle events: cluster bootstrapping; adding or repairing cluster machines

  • From: admin and user control plane VMs
    To: vCenter IP (port 443)
    Lifecycle events: adding new machines

  • From: admin workstation VM
    To: vCenter IP (port 443)
    Lifecycle events: cluster bootstrapping

  • From: admin workstation VM
    To: admin and user control planes over SSH (port 22)
    Lifecycle events: cluster bootstrapping; control plane upgrades
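
To spot-check these rules before installation, you can run a simple TCP reachability test from each source machine; the endpoints below are placeholders for your environment.

```python
import socket

# Placeholder endpoints; substitute your vCenter address and node IPs.
CHECKS = [
    ("storage.googleapis.com", 443),  # Google Cloud Storage / Container Registry
    ("vcenter.example.local", 443),   # vCenter API
    ("10.0.0.10", 22),                # SSH to a control plane node
]

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    state = "open" if tcp_reachable(host, port) else "blocked or unreachable"
    print(f"{host}:{port} -> {state}")
```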

Requirements and constraints for Service and Pod connectivity

Consider the following connectivity areas while preparing for GKE On-Prem:

  • Routing: You need to make sure that vSphere VMs can route to each other.
  • IP ranges: For the admin cluster, and for each user cluster you intend to create, you need to set aside two distinct CIDR IPv4 blocks: one for Pod IPs and one for Service IPs.
  • SSL: N/A
  • Encapsulation: The CNI provider used is Calico.
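
To verify the routing requirement, you could run a quick reachability sweep across your vSphere VMs from any machine on the same network; the addresses are placeholders, and ICMP must be permitted for ping to succeed.

```python
import subprocess

# Placeholder VM addresses; substitute the IPs of your vSphere VMs.
NODE_IPS = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]

def reachable(ip: str) -> bool:
    # One ICMP echo with a 3-second timeout (Linux ping flags).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "3", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for ip in NODE_IPS:
    print(ip, "reachable" if reachable(ip) else "unreachable")
```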

Requirements and constraints for clusters

Consider the following areas while planning your GKE On-Prem clusters:

  • Routing:
      • F5 BIG-IP only: API access to an F5 BIG-IP control plane that supports using virtual IPs to load balance across vSphere VMs.
      • You need to configure your network to route the virtual IPs that you configure on the load balancer.
      • You need to configure your load balancer so that it can route to cluster nodes. For F5 BIG-IP, NodePort Services in clusters are exposed as L4 VIPs. For other load balancers, you can manually configure NodePort Services to be exposed as L4 VIPs.
  • Service discoverability: For each user cluster that uses ingress with HTTPS, add DNS entries that point to the VIP for the cluster's ingress controller.

F5 BIG-IP requirements

F5 BIG-IP integrates with GKE On-Prem, making it the recommended choice. Learn how to install F5 BIG-IP.

To configure the F5 BIG-IP load balancer, follow the instructions in the sections below.

F5 BIG-IP user account permissions

GKE On-Prem requires an F5 BIG-IP user account with the Administrator role. See F5's User roles documentation.

During installation, you provide GKE On-Prem with your F5 BIG-IP credentials. The account you provide must have the Administrator role per the F5 Container Connector User Guide, which states, "The BIG-IP Controller requires Administrator permissions in order to provide full functionality."

F5 BIG-IP partitions

Ahead of time, you need to create an F5 BIG-IP administrative partition for each cluster that you intend to create. Initially, you need at least two partitions: one for the admin cluster and one for a user cluster. See F5's Configuring Administrative Partitions to Control User Access documentation.

Do not use an administrative partition for anything other than its associated cluster.