Creating an admin cluster (basic)

This page shows how to create an admin cluster.

SSH into your admin workstation

In the previous topic, you used gkeadm to create an admin workstation. Recall that gkeadm activated your component access service account on the admin workstation.

SSH into your admin workstation by following the instructions in Getting an SSH connection to your admin workstation.
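
For example, if gkeadm generated an SSH key named gke-admin-workstation in the directory where you ran it, and your admin workstation has the IP address 172.16.20.49, the connection looks roughly like this (both values are illustrative, and ubuntu is the usual default login user for gkeadm-created workstations; follow the linked topic if your setup differs):

ssh -i gke-admin-workstation ubuntu@172.16.20.49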

Do all the remaining steps in this topic on your admin workstation.

Configuring static IPs for your admin cluster

To specify the static IP addresses that you want to use for your admin cluster, create an IP block file named admin-cluster-ipblock.yaml. For this exercise, you need to specify five IP addresses to be used by the admin cluster.

The following is an example of an IP block file with five hosts:

blocks:
  - netmask: 255.255.252.0
    gateway: 172.16.23.254
    ips:
    - ip: 172.16.20.10
      hostname: admin-host1
    - ip: 172.16.20.11
      hostname: admin-host2
    - ip: 172.16.20.12
      hostname: admin-host3
    - ip: 172.16.20.13
      hostname: admin-host4
    - ip: 172.16.20.14
      hostname: admin-host5

The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your admin cluster nodes.

In the IP block file, you also specify a subnet mask and a default gateway for the admin cluster nodes.
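
The addresses that you list must not already be in use on your network. As an informal spot check before you commit to an address (this is only a rough check, not a step required by GKE on-prem), you can ping it and confirm that nothing answers:

ping -c 1 172.16.20.10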

Creating a credentials configuration file

Create a credentials configuration file named admin-creds.yaml that holds the username and password of your vCenter user account. The user account should have the Administrator role or equivalent privileges.

Here's an example of a credentials configuration file:

apiVersion: v1
kind: "CredentialFile"
items:
- name: "vcenter-creds"
  username: "my-vcenter-account"
  password: "U$icUKEW#INE"
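
Because this file holds a plaintext password, consider restricting its permissions so that only your user can read it. For example, on the admin workstation:

chmod 600 admin-creds.yaml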

Populated fields in your GKE on-prem configuration file

Recall that when you created your admin workstation, you filled in a configuration file named admin-ws-config.yaml. The gkeadm command-line tool used your admin-ws-config.yaml file to create the admin workstation.

When gkeadm created your admin workstation, it generated another configuration file named admin-cluster.yaml. This configuration file, which is on your admin workstation, is for creating your admin cluster.

The admin-cluster.yaml file has several fields that are the same as, or strongly related to, certain fields in the admin-ws-config.yaml file. The values for those fields are already populated in your admin-cluster.yaml file.

These are the fields in admin-cluster.yaml that are already populated according to values that you entered in your admin-ws-config.yaml file:

vCenter:
  address:
  datacenter:
  cluster:
  resourcePool:
  datastore:
  caCertPath:
network:
  vCenter:
    networkName:
proxy:
  url:
gcrKeyPath:

Several other fields in admin-cluster.yaml are populated with default or generated values. For example:

bundlePath:
loadBalancer:
  seesaw:
    cpus:
    memoryMB:
    enableHA:
stackdriver:
  projectID:
  serviceAccountKeyPath:

In your admin-cluster.yaml file, leave all of the populated values unchanged.

Filling in the rest of your admin-cluster configuration file

Next, fill in the remaining fields in your admin-cluster.yaml file.

vCenter.credentials.fileRef.path

String. The path of your credentials configuration file. For example:

vCenter:
  credentials:
    fileRef:
      path: "admin-creds.yaml"

vCenter.credentials.fileRef.entry

String. The name of the credentials block, in your credentials configuration file, that holds the username and password of your vCenter user account. For example:

vCenter:
  credentials:
    fileRef:
      entry: "vcenter-creds"

vCenter.dataDisk

String. GKE on-prem creates a virtual machine disk (VMDK) to hold the Kubernetes object data for the admin cluster. The installer creates the VMDK for you, but you must provide a name for the VMDK in the vCenter.dataDisk field. For example:

vCenter:
  dataDisk: "my-disk.vmdk"

vSAN datastore: Creating a folder for the VMDK

If you are using a vSAN datastore, you need to put the VMDK in a folder, and you must create that folder manually ahead of time. For example, you can use govc to create the folder:

govc datastore.mkdir -namespace=true my-gke-on-prem-folder
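
Note that govc reads its vCenter connection settings from environment variables. A minimal sketch, using placeholder values that you would replace with your own:

export GOVC_URL=https://VCENTER_SERVER_ADDRESS
export GOVC_USERNAME=VCENTER_USERNAME
export GOVC_PASSWORD=VCENTER_PASSWORD
export GOVC_DATASTORE=VSAN_DATASTORE_NAME
export GOVC_INSECURE=true    # only if your vCenter presents a self-signed certificate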

Then set vCenter.dataDisk to the path of the VMDK, including the folder. For example:

vCenter:
  dataDisk: "my-gke-on-prem-folder/my-disk.vmdk"

network.ipMode.type

String. Set this to "static".

network:
  ipMode:
    type: "static"

network.ipBlockFilePath

String. Because you are using static IP addresses, you must have an IP block file as described in Configuring static IPs for your admin cluster. Set network.ipBlockFilePath to the path of your IP block file. For example:

network:
  ipBlockFilePath: "/my-config-directory/admin-cluster-ipblock.yaml"

network.serviceCIDR and network.podCIDR

Strings. The admin cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the network.serviceCIDR and network.podCIDR fields. These fields are populated with default values. If you like, you can change the populated values to values of your choice.

The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.

Example:

network:
  serviceCIDR: "10.96.232.0/24"
  podCIDR: "192.168.0.0/16"

loadBalancer.vips

Strings. Set the value of loadBalancer.vips.controlPlaneVIP to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the admin cluster. Set the value of loadBalancer.vips.addonsVIP to the IP address that you have chosen to configure on the load balancer for add-ons. For example:

loadBalancer:
  vips:
    controlPlaneVIP: "203.0.113.3"
    addonsVIP: "203.0.113.4"

loadBalancer.kind

String. Set this to "Seesaw". For example:

loadBalancer:
  kind: "Seesaw"

loadBalancer.seesaw.ipBlockFilePath

String. Set this to the path of the IP block file for your Seesaw VM.

For example:

loadBalancer:
  seesaw:
    ipBlockFilePath: "admin-seesaw-ipblock.yaml"
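
The Seesaw VM needs its own IP block file. A minimal sketch of admin-seesaw-ipblock.yaml, assuming it uses the same blocks format shown earlier for admin-cluster-ipblock.yaml, with one illustrative address and hostname for the load balancer VM:

blocks:
  - netmask: 255.255.252.0
    gateway: 172.16.23.254
    ips:
    - ip: 172.16.20.20
      hostname: admin-seesaw-vm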

loadBalancer.seesaw.vrid

Integer. The virtual router identifier of your Seesaw VM. This identifier must be unique in a VLAN. Valid range is 1-255. For example:

loadBalancer:
  seesaw:
    vrid: 125

loadBalancer.seesaw.masterIP

String. An IP address of your choice that your Seesaw VM will announce. For example:

loadBalancer:
  seesaw:
    masterIP: "172.16.20.21"

loadBalancer.seesaw.enableHA

Boolean. Set this to false. For example:

loadBalancer:
  seesaw:
    enableHA: false

proxy.url

String. If you entered a value for proxyURL in your admin workstation configuration file, this field is already populated with that same value.

If you intend for your admin and user clusters to be behind a different proxy from your admin workstation, then set this to the HTTP address of the proxy server that you want your clusters to use. You must include the port number even if it's the same as the scheme's default port. For example:

proxy:
  url: "http://my-proxy.example.local:80"

proxy.noProxy

String. A list of IP addresses, IP address ranges, host names, and domain names that should not go through the proxy server. When GKE on-prem sends a request to one of these addresses, hosts, or domains, that request is sent directly. For example:

proxy:
  noProxy: "10.151.222.0/24, my-host.example.local,10.151.2.1"

stackdriver.clusterLocation

String. The Google Cloud region where you want to store logs. It is a good idea to choose a region that is near your on-prem data center. For example:

stackdriver:
  clusterLocation: "us-central1"

stackdriver.enableVPC

Boolean. Set stackdriver.enableVPC to true if your cluster's network is controlled by a VPC. This ensures that all telemetry flows through Google's restricted IP addresses. Otherwise, set this to false. For example:

stackdriver:
  enableVPC: false

Additional fields in the admin cluster configuration file

The admin cluster configuration file has several fields in addition to the ones shown in this topic. For a complete description of the fields in the configuration file, see Admin cluster configuration file.

Validating the admin cluster configuration file

After you've modified the admin cluster configuration file, run gkectl check-config to verify that the file is valid and can be used for cluster creation:

gkectl check-config --config admin-cluster.yaml

If the command returns any FAILURE messages, fix the issues and then validate the file again.

If you want to skip the more time-consuming validations, pass the --fast flag. To skip individual validations, use the --skip-validation-xxx flags. To learn more about the check-config command, see Running preflight checks.
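
For example, to run only the faster validations:

gkectl check-config --config admin-cluster.yaml --fast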

Running gkectl prepare

Run gkectl prepare to initialize your vSphere environment:

gkectl prepare --config admin-cluster.yaml

Creating your load balancer

Create and configure the VM for your Seesaw load balancer:

gkectl create loadbalancer --config admin-cluster.yaml

Creating the admin cluster

Create your admin cluster:

gkectl create admin --config admin-cluster.yaml
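
When the command finishes, gkectl generates a kubeconfig file for the new admin cluster on your admin workstation. As a quick sanity check, you can point kubectl at that file to list the admin cluster nodes; the file name kubeconfig in the current directory is a common default, but use whatever path gkectl reports:

kubectl --kubeconfig kubeconfig get nodes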