Learn how to create an admin cluster and user cluster with basic configuration.
The steps in this topic assume that you used gkeadm to create your admin workstation. If you did not use gkeadm and instead followed the advanced topic Creating an admin workstation with a static IP address, you might need to make adjustments as explained in each individual step. For a DHCP admin workstation, you must use the corresponding DHCP installation guide.
If you are behind a proxy, all gkectl commands automatically use the same proxy that you set in your configuration file for internet requests from the admin workstation. If your admin workstation is not located behind the same proxy, refer to the "Manual proxy options" in the advanced Creating an admin workstation topics: Static IP | DHCP.
SSH into your admin workstation
SSH into your admin workstation by following the instructions in Getting an SSH connection to your admin workstation.
Your allowlisted service account is activated on your admin workstation. Do all the remaining steps in this topic on your admin workstation.
Configuring static IPs for your admin cluster
To specify the static IP addresses that you want to use for your admin cluster, create a host configuration file named admin-hostconfig.yaml. For this exercise, you need to specify five IP addresses to be used by the admin cluster.
The following is an example of a host configuration file with five hosts:
hostconfig:
  dns: 172.16.255.1
  tod: 216.239.35.0
  otherdns:
  - 8.8.8.8
  - 8.8.4.4
  othertod:
  - ntp.ubuntu.com
  searchdomainsfordns:
  - "my.local.com"
blocks:
- netmask: 255.255.252.0
  gateway: 172.16.23.254
  ips:
  - ip: 172.16.20.10
    hostname: admin-host1
  - ip: 172.16.20.11
    hostname: admin-host2
  - ip: 172.16.20.12
    hostname: admin-host3
  - ip: 172.16.20.13
    hostname: admin-host4
  - ip: 172.16.20.14
    hostname: admin-host5
The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your admin cluster nodes.
In the host configuration file, you also specify the addresses of the DNS servers, time servers, and default gateway that the admin cluster nodes will use.
The searchdomainsfordns field is an array of DNS search domains to use in the cluster. These domains are used as part of a domain search list.
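As a quick sanity check, a host configuration block like the one above can be validated with a short script. The sketch below is illustrative only and not part of GKE on-prem tooling; it checks that the example file's five node IPs are unique and fall inside the subnet implied by the gateway and netmask, using Python's standard ipaddress module.

```python
import ipaddress

def check_block(netmask, gateway, ips, expected_count):
    """Sanity-check one `blocks` entry from a host configuration file."""
    # Derive the subnet from the gateway address and the netmask.
    network = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    addresses = [ipaddress.ip_address(entry["ip"]) for entry in ips]
    assert len(addresses) == expected_count, "wrong number of node IPs"
    assert len(set(addresses)) == len(addresses), "duplicate node IPs"
    for addr in addresses:
        assert addr in network, f"{addr} is outside {network}"
    return network

# The example block from admin-hostconfig.yaml above.
block = {
    "netmask": "255.255.252.0",
    "gateway": "172.16.23.254",
    "ips": [{"ip": f"172.16.20.{n}", "hostname": f"admin-host{n - 9}"}
            for n in range(10, 15)],
}
net = check_block(block["netmask"], block["gateway"], block["ips"],
                  expected_count=5)
print(net)  # 172.16.20.0/22
```

The same check applies unchanged to the user cluster host configuration file, with an expected count of three.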
Configuring static IPs for your user cluster
To specify the static IP addresses that you want to use for your user cluster, create a host configuration file named user-hostconfig.yaml.
The following is an example of a host configuration file with three hosts:
hostconfig:
  dns: 172.16.255.1
  tod: 216.239.35.0
  otherdns:
  - 8.8.8.8
  - 8.8.4.4
  othertod:
  - ntp.ubuntu.com
  searchdomainsfordns:
  - "my.local.com"
blocks:
- netmask: 255.255.252.0
  gateway: 172.16.23.254
  ips:
  - ip: 172.16.20.15
    hostname: user-host1
  - ip: 172.16.20.16
    hostname: user-host2
  - ip: 172.16.20.17
    hostname: user-host3
The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your user cluster nodes.
In the host configuration file, you also specify the addresses of the DNS servers, time servers, and default gateway that the user cluster nodes will use.
The searchdomainsfordns field is an array of DNS search domains to use in the cluster. These domains are used as part of a domain search list.
Populated fields in your GKE on-prem configuration file
Recall that when you created your admin workstation, you filled in a configuration file named admin-ws-config.yaml. The gkeadm command-line tool used your admin-ws-config.yaml file to create the admin workstation.

When gkeadm created your admin workstation, it generated a second configuration file named config.yaml. This configuration file, which is on your admin workstation, is for creating GKE on-prem clusters.

The admin-ws-config.yaml and config.yaml files have several fields in common. The values for those common fields are already populated in your config.yaml file.
These are the fields that are already populated with values that you provided when you created your admin workstation:
vcenter:
  credentials:
    address:
    username:
    password:
  datacenter:
  datastore:
  cluster:
  network:
  resourcepool:
  cacertpath:
gkeconnect:
  projectid:
  registerserviceaccountkeypath:
  agentserviceaccountkeypath:
stackdriver:
  projectid:
  serviceaccountkeypath:
gcrkeypath:
Filling in the rest of your GKE on-prem configuration file
Next, you need to fill in the remaining fields in your config.yaml file.
bundlepath
The GKE on-prem bundle file contains all of the components in a particular release of GKE on-prem. Set the value of bundlepath to the path of the admin workstation's bundle file. For example:
bundlepath: /var/lib/gke/bundles/gke-onprem-vsphere-1.4.3-gke.3-full.tgz
vcenter.datadisk
GKE on-prem creates a virtual machine disk (VMDK) to hold the Kubernetes object data for the admin cluster. The installer creates the VMDK for you, but you must provide a name for the VMDK in the vcenter.datadisk field. For example:
vcenter:
  ...
  datadisk: "my-disk.vmdk"
vSAN datastore: Creating a folder for the VMDK

If you are using a vSAN datastore, you need to put the VMDK in a folder. You must manually create the folder ahead of time. To do so, you could use govc to create a folder:

govc datastore.mkdir -namespace=true my-gke-on-prem-folder

Then set vcenter.datadisk to the path of the VMDK, including the folder. For example:

vcenter:
  ...
  datadisk: "my-gke-on-prem-folder/my-disk.vmdk"
In version 1.1.1, a known issue requires that you provide the folder's universally unique identifier (UUID) instead of its path.
proxy
If your network is behind a proxy server, you must specify the proxy address and any addresses that should not go through the proxy server.
Set proxy.url to specify the HTTP address of your proxy server. You must include the port number even if it's the same as the scheme's default port.

The proxy server you specify here is used by your GKE on-prem clusters. Also, your admin workstation is automatically configured to use this same proxy server unless you set the HTTPS_PROXY environment variable on your admin workstation.

Set proxy.noproxy to define a list of IP addresses, IP address ranges, hostnames, and domain names that should not go through the proxy server. When GKE on-prem sends a request to one of these addresses, hosts, or domains, that request is sent directly.

Example:

proxy:
  url: "http://my-proxy.example.local:80"
  noproxy: "10.151.222.0/24, my-host.example.local,10.151.2.1"
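To illustrate, here is a sketch of how a noproxy list like the one above is conventionally interpreted: CIDR entries match by network membership, and name entries match the host exactly or as a domain suffix. This mirrors common proxy conventions; it is not GKE on-prem's actual implementation.

```python
import ipaddress

# The example noproxy value from the configuration above.
NOPROXY = "10.151.222.0/24, my-host.example.local,10.151.2.1"

def bypasses_proxy(target, noproxy=NOPROXY):
    """Return True if requests to `target` would skip the proxy."""
    for entry in (e.strip() for e in noproxy.split(",")):
        try:
            # CIDR or bare IP entry: match by network membership.
            net = ipaddress.ip_network(entry, strict=False)
            if ipaddress.ip_address(target) in net:
                return True
        except ValueError:
            # Not an IP entry: compare as hostname or domain suffix.
            if target == entry or target.endswith("." + entry):
                return True
    return False

print(bypasses_proxy("10.151.222.57"))         # True: inside 10.151.222.0/24
print(bypasses_proxy("my-host.example.local")) # True: exact name match
print(bypasses_proxy("8.8.8.8"))               # False: goes through the proxy
```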
admincluster.ipblockfilepath
Because you are using static IP addresses, you must have a host configuration file as described in Configuring static IPs. Provide the path to your host configuration file in the admincluster.ipblockfilepath field. For example:

admincluster:
  ipblockfilepath: "/my-config-directory/admin-hostconfig.yaml"
admincluster.bigip.credentials
GKE on-prem needs to know the IP address or hostname, username, and password of your F5 BIG-IP load balancer. Set the values under admincluster.bigip to provide this information. Put an anchor, &bigip-credentials, after credentials so you don't have to repeat this information in the usercluster section. For example:

admincluster:
  ...
  bigip:
    credentials: &bigip-credentials
      address: "203.0.113.2"
      username: "my-admin-f5-name"
      password: "rJDlm^%7aOzw"
admincluster.bigip.partition
Previously, you created a BIG-IP partition for your admin cluster. Set admincluster.bigip.partition to the name of your partition. For example:

admincluster:
  ...
  bigip:
    partition: "my-admin-f5-partition"
admincluster.vips
Set the value of admincluster.vips.controlplanevip to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the admin cluster. Set the value of ingressvip to the IP address you have chosen to configure on the load balancer for the admin cluster's ingress service. For example:

admincluster:
  ...
  vips:
    controlplanevip: 203.0.113.3
    ingressvip: 203.0.113.4
admincluster.serviceiprange and admincluster.podiprange

The admin cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the admincluster.serviceiprange and admincluster.podiprange fields. These fields are populated when you run gkectl create-config. If you like, you can change the populated values to values of your choice.

The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.

Example:

admincluster:
  ...
  serviceiprange: 10.96.232.0/24
  podiprange: 192.168.0.0/16
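The non-overlap requirement can be checked mechanically. The sketch below uses Python's standard ipaddress module with the example admin cluster values above; the node address is a sample, so substitute the IPs from your own host configuration file.

```python
import ipaddress

# Example ranges from the admincluster configuration above.
service_range = ipaddress.ip_network("10.96.232.0/24")
pod_range = ipaddress.ip_network("192.168.0.0/16")

# Sample node address; replace with the IPs from your host config file.
node_ips = [ipaddress.ip_address("172.16.20.10")]

# Service and Pod ranges must not overlap each other.
assert not service_range.overlaps(pod_range), "Service and Pod ranges overlap"

# Neither range may contain any node IP address.
for node in node_ips:
    assert node not in service_range and node not in pod_range, \
        f"node IP {node} collides with a cluster range"

print("ranges OK")
```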
usercluster.ipblockfilepath
Because you are using static IP addresses, you must have a host configuration file as described in Configuring static IPs. Provide the path to your host configuration file in the usercluster.ipblockfilepath field. For example:

usercluster:
  ipblockfilepath: "/my-config-directory/user-hostconfig.yaml"
usercluster.bigip.credentials
Put a reference, *bigip-credentials, after usercluster.bigip.credentials to use the same address, username, and password that you specified in admincluster.bigip.credentials. For example:

usercluster:
  ...
  bigip:
    credentials: *bigip-credentials
usercluster.bigip.partition
Previously, you created a BIG-IP partition for your user cluster. Set usercluster.bigip.partition to the name of your partition. For example:

usercluster:
  ...
  bigip:
    partition: "my-user-f5-partition"
  ...
usercluster.vips
Set the value of usercluster.vips.controlplanevip to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster. Set the value of ingressvip to the IP address you have chosen to configure on the load balancer for the user cluster's ingress service. For example:

usercluster:
  ...
  vips:
    controlplanevip: 203.0.113.6
    ingressvip: 203.0.113.7
usercluster.serviceiprange and usercluster.podiprange

The user cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the usercluster.serviceiprange and usercluster.podiprange fields. These fields are populated when you run gkectl create-config. If you prefer, you can change the populated values to values of your choice.

The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.

Example:

usercluster:
  ...
  serviceiprange: 10.96.233.0/24
  podiprange: 172.16.0.0/12
Disabling VMware DRS anti-affinity rules
GKE on-prem automatically creates VMware Distributed Resource Scheduler (DRS) anti-affinity rules for your user cluster's nodes, causing them to be spread across at least three physical hosts in your datacenter.
This feature requires that your vSphere environment meets the following conditions:
- VMware DRS is enabled. VMware DRS requires the vSphere Enterprise Plus license edition. To learn how to enable DRS, see Enabling VMware DRS in a cluster.
- The vSphere user account provided in the vcenter field has the Host.Inventory.EditCluster permission.
- There are at least three physical hosts available.
Recall that if you have a vSphere Standard license, you cannot enable VMware DRS.
If you do not have DRS enabled, or if you do not have at least three hosts to which vSphere VMs can be scheduled, add usercluster.antiaffinitygroups.enabled: false to your configuration file. For example:

usercluster:
  ...
  antiaffinitygroups:
    enabled: false
For more information, see the release notes for version 1.1.0-gke.6.
lbmode
Set lbmode to "Integrated". For example:

lbmode: "Integrated"
stackdriver.clusterlocation
Set stackdriver.clusterlocation to a Google Cloud region where you want to store logs. It is a good idea to choose a region that is near your on-prem data center.
stackdriver.enablevpc
Set stackdriver.enablevpc to true if your cluster's network is controlled by a VPC. This ensures that all telemetry flows through Google's restricted IP addresses.
Additional fields in the configuration file
The GKE on-prem configuration file has several fields in addition to the ones shown in this topic. For example, you can use the manuallbspec field to configure GKE on-prem to run in manual load balancing mode.
For a complete description of the fields in the configuration file, see Installing using DHCP and Installing using static IP addresses.
Validating the configuration file
After you've modified the configuration file, run gkectl check-config to verify that the file is valid and can be used for installation:

gkectl check-config --config config.yaml
If the command returns any FAILURE messages, fix the issues and validate the file again.
If you want to skip the more time-consuming validations, pass the --fast flag. To skip individual validations, use the --skip-validation-xxx flags. To learn more about the check-config command, see Running preflight checks.
Running gkectl prepare
Run gkectl prepare to initialize your vSphere environment:

gkectl prepare --config config.yaml --skip-validation-all
Creating the admin and user clusters
Create the admin cluster and the user cluster by running the gkectl create cluster command:

gkectl create cluster --config config.yaml --skip-validation-all

The gkectl create cluster command creates kubeconfig files named [CLUSTER_NAME]-kubeconfig in the current directory, where [CLUSTER_NAME] is the name that you set for the cluster. Example: MY-CLUSTER-kubeconfig
The GKE on-prem documentation uses the following placeholders to refer to these kubeconfig files:

- Admin cluster: [ADMIN_CLUSTER_KUBECONFIG]
- User cluster: [USER_CLUSTER_KUBECONFIG]
Verify that the clusters are created and running:
To verify the admin cluster, run the following command:
kubectl get nodes --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]
The output shows the admin cluster nodes.
To verify the user cluster, run the following command:
kubectl get nodes --kubeconfig [USER_CLUSTER_KUBECONFIG]
The output shows the user cluster nodes. For example:
NAME                        STATUS   ROLES    AGE   VERSION
xxxxxx-1234-ipam-15008527   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-1500852a   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-15008536   Ready    <none>   12m   v1.14.7-gke.24
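If you want to script this verification, the sketch below parses kubectl output like the sample above and confirms that every node reports a Ready status. It is illustrative only; the node names are taken from the example output.

```python
# Sample `kubectl get nodes` output, as shown above.
SAMPLE = """\
NAME                        STATUS   ROLES    AGE   VERSION
xxxxxx-1234-ipam-15008527   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-1500852a   Ready    <none>   12m   v1.14.7-gke.24
xxxxxx-1234-ipam-15008536   Ready    <none>   12m   v1.14.7-gke.24
"""

def all_nodes_ready(output):
    """Return True if every row in the node listing has STATUS Ready."""
    rows = output.strip().splitlines()[1:]  # skip the header row
    statuses = [line.split()[1] for line in rows]
    return bool(statuses) and all(s == "Ready" for s in statuses)

print(all_nodes_ready(SAMPLE))  # True
```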
Continue to the next section to learn how to deploy an application to your user cluster.