This page explains how to create additional user clusters. To create an additional user cluster, you make a copy of the GKE On-Prem configuration file that was used to deploy your clusters. You modify the copied file to match the specifications of the new user cluster, and then you use the file to create the cluster.
You need to copy and modify a GKE On-Prem configuration file for each additional user cluster you want to create.
Before you begin
- Be sure that an admin cluster is running. You created an admin cluster when you installed GKE On-Prem.
- Locate the config.yaml file that was generated by gkectl during installation. This file defines specifications for the admin cluster and user clusters. You'll copy and modify this file to create additional user clusters.
- Locate the admin cluster's kubeconfig file. You reference this file when you run gkectl to create each additional user cluster.
Copy configuration file

cp [CONFIG_FILE] [NEW_USER_CLUSTER_CONFIG]

where [NEW_USER_CLUSTER_CONFIG] is the name you choose for the copy of the configuration file. For the purposes of these instructions, we'll call the copy create-user-cluster.yaml.

In create-user-cluster.yaml, you need to change the following fields:

- admincluster, the specification for the admin cluster. You completely remove the admincluster specification from the file.
- usercluster, the specification for a user cluster.

In the following sections, you modify create-user-cluster.yaml, and then use the file to create additional user clusters.
In the copied file, you may also need to change the following field:

- gkeplatformversion, which specifies the Kubernetes version to be run in the clusters. (This is not the GKE On-Prem platform version. In a future release, this field will be renamed.)
If you want to create additional user clusters from the existing admin cluster, you need to delete the entire admincluster specification. To do so, simply delete the specification and all of its subfields.

Be sure not to delete the usercluster specification or any of its subfields.
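After the deletion, the copied file retains only the fields that apply to the user cluster. A minimal sketch of the resulting shape, using this document's placeholder convention (the values are hypothetical, and any subfields not shown stay where they were):

```yaml
# create-user-cluster.yaml (sketch): the entire admincluster block has
# been removed. Only usercluster and any shared top-level fields, such
# as gkeplatformversion, remain.
gkeplatformversion: [KUBERNETES_VERSION]
usercluster:
  clustername: [USER_CLUSTER_NAME]   # must differ from existing clusters
  # ... remaining usercluster subfields are kept unchanged ...
```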
Make changes to the
usercluster fields as described in the following sections.
Change the user cluster's name
Change the user cluster name in the usercluster.clustername field. New user clusters must have names different from existing user clusters.
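For example, the relevant lines of the copied file might look like this (the name shown is a placeholder):

```yaml
usercluster:
  # The new name must not collide with any existing user cluster's name.
  clustername: "my-user-cluster-2"
```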
Reserve IP addresses for the user cluster's nodes
If you're using DHCP, make sure that you have enough IPs for the nodes to be created.
If you're using static IPs, modify the file referenced by usercluster.ipblockfilepath, which contains the predefined IP addresses for the user cluster, or provide a different static IP YAML file with the IPs you want.
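If you provide a different static IP file, point usercluster.ipblockfilepath at it. A sketch, with a hypothetical path:

```yaml
usercluster:
  # Path to a YAML file listing the static IPs reserved for this
  # cluster's nodes. The path shown here is a placeholder.
  ipblockfilepath: "/my/path/user-cluster-2-ipblock.yaml"
```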
Reserve IP addresses for the load balancer
If you're using the F5 BIG-IP load balancer, be sure to reserve two IP addresses for the user cluster's load balancer: one for the control plane and one for ingress. The corresponding fields in the copied configuration file must be set to these reserved addresses.
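As a sketch, the two reserved addresses might appear under the usercluster specification like this. Note that the field names shown (vips, controlplanevip, ingressvip) are an assumption based on typical GKE On-Prem configuration layouts, not confirmed by this page, and the addresses are placeholders:

```yaml
usercluster:
  vips:                             # field names assumed, not from this page
    controlplanevip: 203.0.113.10   # placeholder: control plane VIP
    ingressvip: 203.0.113.11        # placeholder: ingress VIP
```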
Change the machine requirements (optional)
If you need this user cluster's control plane or worker nodes to use a different amount of CPU or memory, or if you need the cluster to run more or fewer nodes, set values for the following fields:
- usercluster.masternode.cpus: Number of CPU cores to use.
- usercluster.masternode.memorymb: Number of MB of memory to use.
- usercluster.masternode.replicas: Number of nodes of this type to run.
- usercluster.workernode.cpus: Number of CPU cores to use.
- usercluster.workernode.memorymb: Number of MB of memory to use.
- usercluster.workernode.replicas: Number of nodes of this type to run.
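Taken together, these fields might be set like this (the numbers are placeholders, not sizing recommendations):

```yaml
usercluster:
  masternode:
    cpus: 4          # CPU cores per control plane node
    memorymb: 8192   # memory per control plane node, in MB
    replicas: 1      # number of control plane nodes
  workernode:
    cpus: 4          # CPU cores per worker node
    memorymb: 8192   # memory per worker node, in MB
    replicas: 3      # number of worker nodes
```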
Create the user cluster
Now that you've populated a
create-user-cluster.yaml file, you're ready to use
that file to create an additional user cluster.
Run the following command:
gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]
- create-user-cluster.yaml is the configuration file you just created. You might have chosen a different name for this file.
- [ADMIN_CLUSTER_KUBECONFIG] points to the existing admin cluster's kubeconfig file.
For more information, refer to Troubleshooting.
Diagnosing cluster issues using gkectl diagnose

Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
Default logging behavior

For gkectl and gkeadm, it is sufficient to use the default logging settings.
By default, log entries are saved as follows:

- For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
- For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
- All log entries are saved in the log file, even if they are not printed in the terminal.
- The -v5 verbosity level (default) covers all the log entries needed by the support team.
- The log file also contains the command executed and the failure message.
We recommend that you send the log file to the support team when you need help.
Specifying a non-default location for the log file
To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.

To specify a non-default location for the gkeadm log file, use the --log_file flag.
Locating Cluster API logs in the admin cluster
If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:
Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors:
kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager