This page explains how to create additional user clusters. To create an additional user cluster, you make a copy of the GKE on-prem configuration file that you used to deploy your clusters. You modify the copy to describe the new user cluster, and then you use the modified file to create the cluster.
You need to copy and modify a GKE on-prem configuration file for each additional user cluster you want to create.
Before you begin
- Be sure that an admin cluster is running. You created an admin cluster when you installed GKE on-prem.
- Locate the config.yaml file that was generated by gkectl during installation. This file defines specifications for the admin cluster and user clusters. You'll copy and modify this file to create an additional user cluster.
- Locate the admin cluster's kubeconfig file. You reference this file when you copy and modify config.yaml.
Limitations

| Limitation | Description |
|---|---|
| Maximum and minimum limits for clusters and nodes | See Quotas and limits. Your environment's performance might impact these limits. |
| Uniqueness for user cluster names | All user clusters registered to the same Google Cloud project must have unique names. |
| Cannot deploy to more than one vCenter and/or vSphere datacenter | Currently, you can only deploy an admin cluster and a set of associated user clusters to a single vCenter and/or vSphere datacenter. You cannot deploy the same admin and user clusters to more than one vCenter and/or vSphere datacenter. |
| Cannot declaratively change cluster configurations after creation | While you can create additional clusters and resize existing clusters, you cannot change an existing cluster through its configuration file. |
Verify that enough IP addresses are available
Be sure that you have enough IP addresses allocated for the new user cluster. How you verify this depends on whether you're using a DHCP server or static IPs.
In addition, be sure that you have enough IP addresses allocated for your admin cluster. The admin cluster runs one or three control-plane nodes for each user cluster, so it needs one or three additional control-plane nodes for the user cluster you want to create. There must be enough IP addresses available for all of those control-plane nodes. To give your admin cluster more IP addresses, see Upgrading: Verify that enough IP addresses are available.
If you're using DHCP, check that the DHCP server on the network where the cluster will be created has enough IP addresses: there should be more IP addresses than the number of nodes that will run in the user cluster.
If you're using static IPs, check that you've allocated enough IP addresses, including the addresses for your load balancer, and be sure to specify these IP addresses during cluster creation.
Copy the configuration file
To copy the configuration file, run the following command:
cp [CONFIG_FILE] [NEW_USER_CLUSTER_CONFIG]
where [CONFIG_FILE] is the configuration file generated during installation, and [NEW_USER_CLUSTER_CONFIG] is the name you choose for the copy. For the purpose of these instructions, we'll call the copy create-user-cluster.yaml.
In create-user-cluster.yaml, you need to change the following fields:
- admincluster, the specification for the admin cluster. You remove the admincluster specification entirely from the file.
- usercluster, the specification for a user cluster.
In the following sections, you modify create-user-cluster.yaml, and then you use the file to create an additional user cluster.
Remove the admincluster specification
To create additional user clusters from the existing admin cluster, you need to delete the entire admincluster specification. To do so, simply delete the specification and all of its subfields.
Be sure not to delete the usercluster specification or any of its subfields.
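For illustration, here is a minimal sketch of what the top of create-user-cluster.yaml might look like after the admincluster specification has been removed; the name shown is a placeholder:
# Sketch of create-user-cluster.yaml after editing.
# The entire admincluster specification has been deleted.
usercluster:
  clustername: "user-cluster-2"  # placeholder name
  # ...remaining usercluster subfields, modified only as described
  # in the following sections...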
Make changes to the usercluster fields as described in the following sections.
Change the user cluster's name
Change the user cluster's name in the usercluster.clustername field. New user clusters must have names different from existing user clusters.
Reserve IP addresses for the user cluster's nodes
If you're using DHCP, make sure that you have enough IP addresses for the nodes to be created.
If you're using static IPs, modify the file provided to usercluster.ipblockfilepath, which contains the predefined IP addresses for the user cluster, or provide a different static IP YAML file with the IP addresses you want.
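As a sketch only (the exact schema depends on your GKE on-prem version, and the netmask, gateway, addresses, and hostnames below are all placeholders), a static IP block file takes roughly this shape:
# Hypothetical static IP file referenced by usercluster.ipblockfilepath.
blocks:
  - netmask: 255.255.252.0     # placeholder
    gateway: 172.16.23.254     # placeholder
    ips:
      - ip: 172.16.20.21       # placeholder
        hostname: user-node-1  # placeholder
      - ip: 172.16.20.22       # placeholder
        hostname: user-node-2  # placeholder
Allocate at least as many addresses as the number of nodes the user cluster will run.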
Reserve IP addresses for the load balancer
If you're using the F5 BIG-IP load balancer, be sure to reserve two IP addresses for the user cluster's load balancer: one for the control plane and one for ingress. Set the corresponding fields in the configuration file to these addresses.
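These addresses are typically set under usercluster.vips; treat the exact field names below as assumptions to verify against your version's configuration reference, and the addresses as placeholders:
usercluster:
  vips:
    controlplanevip: 172.16.21.40  # placeholder control-plane VIP
    ingressvip: 172.16.21.41       # placeholder ingress VIP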
Change the machine requirements (optional)
If you need this user cluster's control plane or worker nodes to use a different amount of CPU or memory, or if you need the cluster to run more or fewer nodes, set values for the following fields (a sketch with placeholder values follows the list):
- usercluster.masternode.cpus: Number of CPU cores to use.
- usercluster.masternode.memorymb: Number of MB of memory to use.
- usercluster.masternode.replicas: Number of nodes of this type to run. The value must be 1 or 3.
- usercluster.workernode.cpus: Number of CPU cores to use.
- usercluster.workernode.memorymb: Number of MB of memory to use.
- usercluster.workernode.replicas: Number of nodes of this type to run.
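For example, a sketch with placeholder sizes:
usercluster:
  masternode:
    cpus: 4          # placeholder: CPU cores per control-plane node
    memorymb: 8192   # placeholder: memory in MB per control-plane node
    replicas: 1      # 1 or 3
  workernode:
    cpus: 4          # placeholder: CPU cores per worker node
    memorymb: 8192   # placeholder: memory in MB per worker node
    replicas: 3      # placeholder: number of worker nodes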
Change your vSphere environment (optional)
If you want to change certain aspects of your vSphere environment for your new cluster, you can modify the vSphere-related fields under the usercluster specification. Do not modify the fields that describe the vSphere environment already in use by your admin cluster.
Create the user cluster
Now that you've populated a
create-user-cluster.yaml file, you're ready to use
that file to create an additional user cluster.
Run the following command:
gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]
- create-user-cluster.yaml is the configuration file you just created. You might have chosen a different name for this file.
- [ADMIN_CLUSTER_KUBECONFIG] points to the existing admin cluster's kubeconfig file.
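When the command completes, you can do a quick sanity check with kubectl. Here [USER_CLUSTER_KUBECONFIG] is assumed to be the kubeconfig file generated for the new user cluster:
kubectl --kubeconfig [USER_CLUSTER_KUBECONFIG] get nodes
Each worker node you configured should appear in the output.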
Known issues
Version 1.1: Creating a second user cluster fails when using a vSAN datastore. Refer to the GKE on-prem release notes.
For more information, refer to Troubleshooting.
Diagnosing cluster issues using gkectl
Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
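For example, a sketch of diagnosing a specific cluster from the admin workstation; verify the flag spellings against your version's gkectl reference:
gkectl diagnose cluster --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --cluster-name [CLUSTER_NAME]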
Default logging behavior
For gkectl and gkeadm, it is sufficient to use the default logging settings.
By default, log entries are saved as follows:
- For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
- For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
- All log entries are saved in the log file, even if they are not printed in the terminal (when --alsologtostderr is false).
- The -v5 verbosity level (default) covers all the log entries needed by the support team.
- The log file also contains the command executed and the failure message.
We recommend that you send the log file to the support team when you need help.
Specifying a non-default location for the log file
To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.
To specify a non-default location for the gkeadm log file, use the --log_file flag.
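For example, a sketch with a placeholder path:
gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --log_file /var/log/gkectl/create-user-cluster.log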
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:
Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
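For example, to filter for error lines (the pattern is only an illustration):
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error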