This page shows how to create a user cluster.
The steps in this topic are written with the assumption that you used gkeadm to create your admin workstation. If you did not use gkeadm, but instead followed the steps in Creating an admin workstation with a static IP address, then you need to make some adjustments as you go through this topic. The individual steps explain any needed adjustments.
If you are behind a proxy, all gkectl commands automatically use the same proxy that you set in your configuration file for internet requests from the admin workstation. If your admin workstation is not located behind the same proxy, refer to the "Manual proxy options" in the advanced Creating an admin workstation topics: Static IP | DHCP.
SSH into your admin workstation
SSH into your admin workstation by following the instructions in Getting an SSH connection to your admin workstation.
Your allowlisted service account is activated on your admin workstation. Do all the remaining steps in this topic on your admin workstation.
Configuring static IPs for your user cluster
To specify the static IP addresses that you want to use for your user cluster, create a host configuration file named user-hostconfig.yaml. For this exercise, you need to specify three IP addresses to be used by the user cluster.
The following is an example of a host configuration file with three hosts:
hostconfig:
  dns: 172.16.255.1
  tod: 216.239.35.0
  otherdns:
  - 8.8.8.8
  - 8.8.4.4
  othertod:
  - ntp.ubuntu.com
  searchdomainsfordns:
  - "my.local.com"
blocks:
- netmask: 255.255.252.0
  gateway: 172.16.23.254
  ips:
  - ip: 172.16.20.15
    hostname: user-host1
  - ip: 172.16.20.16
    hostname: user-host2
  - ip: 172.16.20.17
    hostname: user-host3
The ips field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that GKE on-prem will assign to your user cluster nodes.
In the host configuration file, you also specify the addresses of the DNS servers, time servers, and default gateway that the user cluster nodes will use.
The searchdomainsfordns field is an array of DNS search domains to use in the cluster. These domains are used as part of a domain search list.
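As a sanity check before you continue, you can verify that each static IP in your host configuration file actually falls inside the subnet implied by the netmask and gateway. The following is a small sketch using Python's standard ipaddress module, with the values from the example file above:

```python
import ipaddress

# Values from the example user-hostconfig.yaml above.
netmask = "255.255.252.0"
gateway = "172.16.23.254"
ips = ["172.16.20.15", "172.16.20.16", "172.16.20.17"]

# Derive the subnet from the gateway and netmask
# (strict=False allows host bits to be set in the gateway address).
subnet = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)

for ip in ips:
    assert ipaddress.ip_address(ip) in subnet, f"{ip} is outside {subnet}"

print(subnet)  # 172.16.20.0/22
```

If an address falls outside the subnet, fix the host configuration file before creating the cluster.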
Populated fields in your configuration file
Recall that when you created your admin workstation, you filled in a configuration file named admin-ws-config.yaml. The gkeadm command-line tool used your admin-ws-config.yaml file to create the admin workstation.
When gkeadm created your admin workstation, it generated another configuration file named user-cluster.yaml. This configuration file, which is on your admin workstation, is for creating your user cluster.
The admin-ws-config.yaml and user-cluster.yaml files have several fields in common. The values for those common fields are already populated in your user-cluster.yaml file.
These are the fields that are already populated with values that you provided when you created your admin workstation:
stackdriver:
  projectID:
  serviceAccountKeyPath:
gkeConnect:
  projectID:
  registerServiceAccountKeyPath:
  agentServiceAccountKeyPath:
Filling in the rest of your configuration file
Next you need to fill in the remaining fields in your user-cluster.yaml file.
name
String. A name of your choice for your user cluster. For example:
name: "my-user-cluster"
gkeOnPremVersion
String. The GKE on-prem version for your user cluster. For example:
gkeOnPremVersion: "1.4.3-gke.3"
network.ipMode.type
String. Set this to "static". For example:
network:
  ipMode:
    type: "static"
network.ipBlockFilePath
String. Because you are using static IP addresses, you must have a host configuration file as described in Configuring static IPs. Set network.ipBlockFilePath to the path of your host configuration file. For example:
network:
  ipBlockFilePath: "/my-config-directory/user-hostconfig.yaml"
network.serviceCIDR and network.podCIDR
Strings. The user cluster must have a range of IP addresses to use for Services and a range of IP addresses to use for Pods. These ranges are specified by the network.serviceCIDR and network.podCIDR fields. These fields are populated with default values. If you like, you can change the populated values to values of your choice.
The Service and Pod ranges must not overlap. Also, the Service and Pod ranges must not overlap with IP addresses that are used for nodes in any cluster.
Example:
network:
  ...
  serviceCIDR: "10.96.232.0/24"
  podCIDR: "192.168.0.0/16"
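The non-overlap requirement can be checked mechanically. Here is a small sketch using Python's standard ipaddress module, with the example Service and Pod ranges above and an illustrative node subnet (replace the node range with the one your nodes actually use):

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.96.232.0/24")  # network.serviceCIDR
pod_cidr = ipaddress.ip_network("192.168.0.0/16")      # network.podCIDR
node_subnet = ipaddress.ip_network("172.16.20.0/22")   # illustrative node range

# Service and Pod ranges must not overlap each other or any node range.
ranges = [service_cidr, pod_cidr, node_subnet]
for i, a in enumerate(ranges):
    for b in ranges[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

print("no overlaps")
```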
network.vCenter.networkName
String. The name of the vSphere network for your cluster nodes.
If the name contains a special character, you must use an escape sequence for it.
Special character | Escape sequence
---|---
Slash (/) | %2f
Backslash (\) | %5c
Percent sign (%) | %25
If the network name is not unique, you can specify a path to the network, such as /DATACENTER/network/NETWORK_NAME.
For example:
network:
  vCenter:
    networkName: "MY-CLUSTER-NETWORK"
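The escape rules above amount to percent-encoding three characters. The following sketch shows the transformation; note that the percent sign must be escaped first, so that the %2f and %5c sequences you insert are not themselves re-escaped. The network name used here is purely hypothetical:

```python
def escape_network_name(name: str) -> str:
    # Escape % first; otherwise the %2f/%5c inserted below would be
    # re-escaped into %252f/%255c.
    name = name.replace("%", "%25")
    name = name.replace("/", "%2f")
    name = name.replace("\\", "%5c")
    return name

# A hypothetical network name containing a slash and a percent sign.
print(escape_network_name("DC/NET%1"))  # DC%2fNET%251
```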
loadBalancer.vips
Strings. Set the value of loadBalancer.vips.controlPlaneVIP to the IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster. Set the value of loadBalancer.vips.ingressVIP to the IP address you have chosen to configure on the load balancer for the ingress service in your user cluster. For example:
loadBalancer:
  ...
  vips:
    controlPlaneVIP: "203.0.113.5"
    ingressVIP: "203.0.113.6"
loadBalancer.kind
String. Set this to "Seesaw". For example:
loadBalancer:
  kind: "Seesaw"
loadBalancer.seesaw.ipBlockFilePath
String. Set this to the path of the hostconfig file for your Seesaw VM. For example:
loadBalancer:
  ...
  seesaw:
    ipBlockFilePath: "user-seesaw-hostconfig.yaml"
loadBalancer.seesaw.vrid
Integer. The virtual router identifier of your Seesaw VM. This identifier must be unique in a VLAN. The valid range is 1-255. For example:
loadBalancer:
  seesaw:
    vrid: 126
loadBalancer.seesaw.masterIP
String. The control plane IP address for the Seesaw VM. For example:
loadBalancer:
  seesaw:
    masterIP: "203.0.113.7"
loadBalancer.seesaw.cpus
Integer. The number of CPUs for your Seesaw VM. For example:
loadBalancer:
  seesaw:
    cpus: 8
loadBalancer.seesaw.memoryMB
Integer. The number of megabytes of memory for your Seesaw VM. For example:
loadBalancer:
  seesaw:
    memoryMB: 8192
loadBalancer.seesaw.enableHA
Boolean. Set this to false. For example:
loadBalancer:
  seesaw:
    enableHA: false
nodePools.name
String. A name of your choice for a node pool. For example:
nodePools:
- name: "my-user-pool"
nodePools.replicas
Integer. The number of VMs in your node pool. Set this to 3. For example:
nodePools:
- name: "my-user-pool"
  replicas: 3
stackdriver.clusterLocation
String. The Google Cloud region where you want to store logs. It is a good idea to choose a region that is near your on-prem data center.
stackdriver.enableVPC
Boolean. Set stackdriver.enableVPC to true if your cluster's network is controlled by a VPC. This ensures that all telemetry flows through Google's restricted IP addresses. Otherwise, set this to false. For example:
stackdriver:
  enableVPC: false
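Putting the pieces together, the fields you fill in throughout this topic nest as shown in the following sketch. The values are the illustrative ones from the per-field examples above, not defaults, and the already-populated fields from the previous section are omitted:

```yaml
name: "my-user-cluster"
gkeOnPremVersion: "1.4.3-gke.3"
network:
  ipMode:
    type: "static"
  ipBlockFilePath: "/my-config-directory/user-hostconfig.yaml"
  serviceCIDR: "10.96.232.0/24"
  podCIDR: "192.168.0.0/16"
  vCenter:
    networkName: "MY-CLUSTER-NETWORK"
loadBalancer:
  vips:
    controlPlaneVIP: "203.0.113.5"
    ingressVIP: "203.0.113.6"
  kind: "Seesaw"
  seesaw:
    ipBlockFilePath: "user-seesaw-hostconfig.yaml"
    vrid: 126
    masterIP: "203.0.113.7"
    cpus: 8
    memoryMB: 8192
    enableHA: false
nodePools:
- name: "my-user-pool"
  replicas: 3
stackdriver:
  clusterLocation: "us-central1"
  enableVPC: false
```

The clusterLocation value shown here is only an example region; choose one near your on-prem data center.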
Additional fields in the configuration file
The GKE on-prem configuration file has several fields in addition to the ones shown in this topic. For example, you can use the manuallbspec field to configure GKE on-prem to run in manual load balancing mode.
For a complete description of the fields in the configuration file, see Installing using DHCP and Installing using static IP addresses.
Validating the configuration file
After you've modified the configuration file, run gkectl check-config to verify that the file is valid and can be used for installation:
gkectl check-config --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --config [CONFIG_FILE]
where:
[ADMIN_CLUSTER_KUBECONFIG] is the path of the kubeconfig file for your admin cluster.
[CONFIG_FILE] is the path of your user cluster configuration file.
If the command returns any FAILURE messages, fix the issues and validate the file again.
If you want to skip the more time-consuming validations, pass the --fast flag. To skip individual validations, use the --skip-validation-xxx flags. To learn more about the check-config command, see Running preflight checks.
Creating your load balancer
Create and configure the VM for your Seesaw load balancer:
gkectl create loadbalancer --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --config [CONFIG_FILE]
where:
[ADMIN_CLUSTER_KUBECONFIG] is the path of the kubeconfig file for your admin cluster.
[CONFIG_FILE] is the path of your user cluster configuration file.
Creating the user cluster
Create your user cluster:
gkectl create cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --config [CONFIG_FILE]
where:
[ADMIN_CLUSTER_KUBECONFIG] is the path of the kubeconfig file for your admin cluster.
[CONFIG_FILE] is the path of your user cluster configuration file.