Creating an admin cluster

This page shows how to create an admin cluster for GKE on VMware.

The instructions here are complete. For a shorter introduction to creating an admin cluster, see Create an admin cluster (quickstart).

Before you begin

Create an admin workstation.

Get an SSH connection to your admin workstation

Use SSH to connect to your admin workstation.

Recall that gkeadm activated your component access service account on the admin workstation.

Do all the remaining steps in this topic on your admin workstation in the home directory.

Credentials configuration file

When you used gkeadm to create your admin workstation, you filled in a credentials configuration file named credential.yaml. This file holds the username and password for your vCenter server.
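
For reference, the generated file has a shape similar to the following sketch. This is an assumption about the file layout for illustration; the username and password shown are placeholders:

apiVersion: v1
kind: CredentialFile
items:
- name: vCenter
  username: "vcenter-account"    # placeholder vCenter username
  password: "vcenter-password"   # placeholder vCenter password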

Admin cluster configuration file

When gkeadm created your admin workstation, it generated a configuration file named admin-cluster.yaml. This configuration file is for creating your admin cluster.

Filling in your configuration file


bundlePath

This field is already filled in for you.


vCenter

Most of the fields in this section are already filled in with values that you entered when you created your admin workstation. The exception is the dataDisk field, which you must fill in now.
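
For example, assuming vCenter.dataDisk takes the name of the virtual machine disk (VMDK) that holds Kubernetes object data, the section might look like this sketch (the disk name is a placeholder):

vCenter:
  # ...fields already filled in by gkeadm...
  dataDisk: "my-data-disk.vmdk"   # placeholder VMDK name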


network

Decide how you want your cluster nodes to get their IP addresses. The options are:

  • From a DHCP server. Set network.ipMode.type to "dhcp".

  • From a list of static IP addresses that you provide. Set network.ipMode.type to "static", and create an IP block file that provides the static IP addresses.

Provide values for the remaining fields in the network section.
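
For example, a static configuration might look like the following sketch, assuming the remaining fields include serviceCIDR and podCIDR. The CIDR ranges and file name are placeholders, and ipBlockFilePath applies only when type is "static":

network:
  ipMode:
    type: "static"                         # or "dhcp"
    ipBlockFilePath: "admin-ipblock.yaml"  # placeholder path to your IP block file
  serviceCIDR: "10.96.232.0/24"            # placeholder range for Kubernetes Services
  podCIDR: "192.168.0.0/16"                # placeholder range for Pods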

Regardless of whether you rely on a DHCP server or specify a list of static IP addresses, you need enough IP addresses to satisfy the following:

  • Three nodes in the admin cluster to run the admin cluster control plane and add-ons.

  • An additional node in the admin cluster to be used temporarily during upgrades.

  • For each user cluster that you intend to create, one or three nodes in the admin cluster to run the control-plane components for the user cluster. If you want the control plane for a user cluster to be highly available (HA), then you need three nodes in the admin cluster for the user cluster control plane. Otherwise, you need only one node in the admin cluster for the user cluster control plane.

For example, suppose you intend to create two user clusters: one with an HA control plane and one with a non-HA control plane. Then you would need eight IP addresses for the following nodes in the admin cluster:

  • Three nodes for the admin cluster control plane and add-ons
  • One temporary node
  • Three nodes for the HA user-cluster control plane
  • One node for the non-HA user-cluster control plane

As mentioned previously, if you want to use static IP addresses, then you need to provide an IP block file. Here is an example of an IP block file with eight hosts. The netmask and IP addresses shown are placeholders; replace them with values from your own network:

blocks:
  - netmask: 255.255.252.0
    ips:
    - ip: 172.16.20.10
      hostname: admin-host1
    - ip: 172.16.20.11
      hostname: admin-host2
    - ip: 172.16.20.12
      hostname: admin-host3
    - ip: 172.16.20.13
      hostname: admin-host4
    - ip: 172.16.20.14
      hostname: admin-host5
    - ip: 172.16.20.15
      hostname: admin-host6
    - ip: 172.16.20.16
      hostname: admin-host7
    - ip: 172.16.20.17
      hostname: admin-host8


loadBalancer

Set aside a VIP (virtual IP address) for the Kubernetes API server of your admin cluster. Set aside another VIP for the add-ons server. Provide your VIPs as values for loadBalancer.vips.controlPlaneVIP and loadBalancer.vips.addonsVIP.

Decide what type of load balancing you want to use. The options are:

  • Seesaw bundled load balancing. Set loadBalancer.kind to "Seesaw", and fill in the loadBalancer.seesaw section.

  • Integrated load balancing with F5 BIG-IP. Set loadBalancer.kind to "F5BigIP", and fill in the loadBalancer.f5BigIP section.

  • Manual load balancing. Set loadBalancer.kind to "ManualLB", and fill in the loadBalancer.manualLB section.
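
For example, if you choose the bundled Seesaw load balancer, the section might look like the following sketch. All addresses are placeholders, and the seesaw fields shown are illustrative assumptions; check your version of the configuration file for the exact field names:

loadBalancer:
  vips:
    controlPlaneVIP: "172.16.20.30"   # placeholder VIP for the Kubernetes API server
    addonsVIP: "172.16.20.31"         # placeholder VIP for the add-ons server
  kind: Seesaw
  seesaw:
    ipBlockFilePath: "admin-seesaw-ipblock.yaml"  # IP addresses for the Seesaw VMs
    vrid: 30                                      # virtual router ID; must be unique on the network
    masterIP: "172.16.20.32"                      # placeholder VIP announced by the Seesaw group
    cpus: 4
    memoryMB: 8192
    enableHA: false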


antiAffinityGroups

Set antiAffinityGroups.enabled to true or false according to your preference. When this field is true, GKE on VMware creates VMware DRS anti-affinity rules that spread your cluster nodes across physical hosts.
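
In the configuration file, this is a single field. For example:

antiAffinityGroups:
  enabled: true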


proxy

If the network that will have your admin cluster nodes is behind a proxy server, fill in the proxy section.
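
A minimal sketch, assuming the section uses url and noProxy fields; both values are placeholders:

proxy:
  url: "http://my-proxy.example.local:3128"   # placeholder address of your proxy server
  noProxy: "10.0.0.0/8,192.168.0.0/16"        # placeholder list of addresses that bypass the proxy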


privateRegistry

Decide where you want to keep container images for the GKE on VMware components. The options are:

  • Container Registry (gcr.io). This is the default; do not fill in the privateRegistry section.

  • Your own private Docker registry. Fill in the privateRegistry section.
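
If you use your own registry, the section might look like the following sketch. The field names here are assumptions that vary by version, and all values are placeholders:

privateRegistry:
  address: "registry.example.local"            # placeholder registry address
  credentials:
    username: "registry-user"                  # placeholder
    password: "registry-password"              # placeholder
  caCertPath: "/home/ubuntu/registry-ca.crt"   # placeholder path to the registry's CA certificate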


gcrKeyPath

Set gcrKeyPath to the path of the JSON key file for your component access service account.
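
For example (the path is a placeholder):

gcrKeyPath: "/home/ubuntu/component-access-key.json"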


stackdriver

Fill in the stackdriver section. This section configures Cloud Logging and Cloud Monitoring for your cluster.
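
A sketch with placeholder values, assuming the fields in this section include projectID, clusterLocation, enableVPC, and serviceAccountKeyPath:

stackdriver:
  projectID: "my-logging-project"   # placeholder Google Cloud project for logs and metrics
  clusterLocation: "us-central1"    # Google Cloud region where logs and metrics are kept
  enableVPC: false                  # set to true if the cluster's network is controlled by a VPC
  serviceAccountKeyPath: "/home/ubuntu/logging-monitoring-key.json"  # placeholder path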


cloudAuditLogging

If you want Kubernetes audit logs to be integrated with Cloud Audit Logs, fill in the cloudAuditLogging section.
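
A sketch with placeholder values, assuming the same field names as the stackdriver section:

cloudAuditLogging:
  projectID: "my-audit-project"    # placeholder Google Cloud project for audit logs
  clusterLocation: "us-central1"   # Google Cloud region where audit logs are kept
  serviceAccountKeyPath: "/home/ubuntu/audit-logging-key.json"  # placeholder path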


autoRepair

If you want to enable node auto repair, set autoRepair.enabled to true. Otherwise, set it to false.
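
For example, to enable node auto repair:

autoRepair:
  enabled: true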

Validating your configuration file

After you've filled in your admin cluster configuration file, run gkectl check-config to verify that the file is valid:

gkectl check-config --config [CONFIG_PATH]

where [CONFIG_PATH] is the path of your admin cluster configuration file.

If the command returns any failure messages, fix the issues and validate the file again.

If you want to skip the more time-consuming validations, pass the --fast flag. To skip individual validations, use the --skip-validation-xxx flags. To learn more about the check-config command, see Running preflight checks.
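
For example, to run only the fast validations against the generated configuration file:

gkectl check-config --config admin-cluster.yaml --fast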

Running gkectl prepare

Run gkectl prepare to initialize your vSphere environment:

gkectl prepare --config [CONFIG_PATH]

The gkectl prepare command performs the following preparatory tasks:

  • Imports the OS images to vSphere and marks them as VM templates.

  • If you are using a private Docker registry, pushes the container images to your registry.

  • Optionally, validates the container images' build attestations, thereby verifying that the images were built and signed by Google and are ready for deployment.

Creating a Seesaw load balancer for your admin cluster

If you have chosen to use the bundled Seesaw load balancer, complete the step in this section. Otherwise, skip this section.

Create and configure the VMs for your Seesaw load balancer:

gkectl create loadbalancer --config [CONFIG_PATH]

Creating the admin cluster

Create the admin cluster:

gkectl create admin --config [CONFIG_PATH]

where [CONFIG_PATH] is the path of your admin cluster configuration file.

The gkectl create admin command creates a kubeconfig file named kubeconfig in the current directory. You will need this kubeconfig file later to interact with your admin cluster.

Verifying that your admin cluster is running

Verify that your admin cluster is running:

kubectl get nodes --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]

where [ADMIN_CLUSTER_KUBECONFIG] is the path of your kubeconfig file.

The output shows the admin cluster nodes.


Troubleshooting

See Troubleshooting cluster creation and upgrade.

What's next

Creating a user cluster