Creating additional user clusters

This page explains how to create additional user clusters. To create an additional user cluster, you make a copy of the GKE On-Prem configuration file that was used to deploy your existing clusters, modify the copy to describe the new user cluster, and then use the modified file to create the cluster.

You need to copy and modify a GKE On-Prem configuration file for each additional user cluster you want to create.

Before you begin

  • Be sure that an admin cluster is running. You created an admin cluster when you installed GKE On-Prem.
  • Locate the config.yaml file that was generated by gkectl during installation. This file defines specifications for the admin cluster and user clusters. You'll copy and modify this file to create additional user clusters.
  • Locate the admin cluster's kubeconfig file. You reference this file when you copy and modify config.yaml.

Copy configuration file

Copy the GKE On-Prem configuration file that you generated using gkectl create-config and modified for your environment, and give the copy a new filename:

cp [CONFIG_FILE] [NEW_USER_CLUSTER_CONFIG]

where [NEW_USER_CLUSTER_CONFIG] is the name you choose for the copy of the configuration file. For the purpose of these instructions, we'll call this file create-user-cluster.yaml.
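For example, if your original configuration file is named config.yaml, as generated by gkectl during installation, the command looks like this:

cp config.yaml create-user-cluster.yaml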

In create-user-cluster.yaml, you need to change the following fields:

  • admincluster, the specification for the admin cluster. You completely remove the admincluster specification from the file.
  • usercluster, the specification for a user cluster.

In the following sections, you remove the admincluster specification from create-user-cluster.yaml, modify its usercluster fields, and then use the file to create an additional user cluster.

In the copied file, you may need to change the following field:

  • gkeplatformversion, which specifies the Kubernetes version to be run in the clusters. (This is not the GKE On-Prem platform version. In a future release, this field will be renamed.)

Delete the admincluster specification

If you want to create additional user clusters from the existing admin cluster, you need to delete the entire admincluster specification.

To do so, simply delete the specification and all of its subfields.

Be sure not to delete the usercluster specification or any of its subfields.
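To illustrate, here's a minimal sketch of how create-user-cluster.yaml might begin after the admincluster specification is removed. Only fields named on this page are shown, the values are placeholders, and the exact set and order of top-level fields depends on the file generated for your environment:

gkeplatformversion: 1.x.x-gke.x   # Kubernetes version run in the clusters
usercluster:
  clustername: "user-cluster-2"   # must differ from existing user clusters
  # ...remaining usercluster subfields, modified as described in the sections below...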

Modify the usercluster specification

Make changes to the usercluster fields as described in the following sections.

Change the user cluster's name

Change the user cluster's name in the usercluster.clustername field. Each new user cluster must have a name that is different from the names of existing user clusters.
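For example, assuming the rest of the usercluster specification stays as copied, the change might look like this (user-cluster-2 is just an example name):

usercluster:
  # Name of the new user cluster; must not match any existing user cluster.
  clustername: "user-cluster-2"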

Reserve IP addresses for the user cluster's nodes

If you're using DHCP, make sure that enough IP addresses are available for the nodes that will be created.

If you're using static IPs, modify the file referenced by usercluster.ipblockfilepath, which contains the predefined IP addresses for the user cluster, or point the field to a different static IP YAML file that contains the addresses you want.
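For example, to point the cluster at a different static IP file, you might set the field as follows. The path is hypothetical, and the file it references must use the same format as the static IP file generated during installation:

usercluster:
  # Path to a static IP YAML file listing the addresses reserved
  # for this user cluster's nodes.
  ipblockfilepath: "/home/ubuntu/user-cluster-2-ipblock.yaml"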

Reserve IP addresses for the load balancer

If you're using the F5 BIG-IP load balancer, be sure to reserve two IP addresses for the user cluster: one for the control plane and one for ingress. The corresponding fields are usercluster.vips.controlplanevip and usercluster.vips.ingressvip.
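For example, the reserved addresses might be set like this; the values shown are placeholders, so substitute addresses reserved on your own network:

usercluster:
  vips:
    # VIP for the user cluster's control plane.
    controlplanevip: "203.0.113.5"
    # VIP for the user cluster's ingress.
    ingressvip: "203.0.113.6"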

Change the machine requirements (optional)

If you need this user cluster's control plane or worker nodes to use a different amount of CPU or memory, or if you need the cluster to run more or fewer nodes, set values for the following fields, as shown in the example that follows these lists:

usercluster.masternode

  • usercluster.masternode.cpus: Number of CPU cores to use.
  • usercluster.masternode.memorymb: Number of MB of memory to use.
  • usercluster.masternode.replicas: Number of nodes of this type to run. Value must be 1 or 3.

usercluster.workernode

  • usercluster.workernode.cpus: Number of CPU cores to use.
  • usercluster.workernode.memorymb: Number of MB of memory to use.
  • usercluster.workernode.replicas: Number of nodes of this type to run.
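
For example, here's a hedged sketch that gives each control plane node 4 CPUs and 8192 MB of memory with a single replica, and runs five worker nodes. All values are illustrative, not recommendations:

usercluster:
  masternode:
    cpus: 4          # CPU cores per control plane node
    memorymb: 8192   # memory per control plane node, in MB
    replicas: 1      # must be 1 or 3
  workernode:
    cpus: 4          # CPU cores per worker node
    memorymb: 8192   # memory per worker node, in MB
    replicas: 5      # number of worker nodes to run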

Create the user cluster

Now that you've populated a create-user-cluster.yaml file, you're ready to use that file to create an additional user cluster.

Run the following command:

gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]

where:

  • create-user-cluster.yaml is the configuration file you just created. You might have chosen a different name for this file.
  • [ADMIN_CLUSTER_KUBECONFIG] points to the existing admin cluster's kubeconfig.
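If you want to confirm that the admin cluster is reconciling the new user cluster, one possible check, assuming user clusters are represented as Cluster API objects in the admin cluster (as the troubleshooting steps below suggest), is to list those objects:

kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] get clusters --all-namespaces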

Troubleshooting

For more information, refer to Troubleshooting.

Diagnosing cluster issues using gkectl

Use gkectl diagnose commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.

Default logging behavior

For gkectl and gkeadm, the default logging settings are sufficient:

  • By default, log entries are saved as follows:

    • For gkectl, the default log file is /home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log, and the file is symlinked with the logs/gkectl-$(date).log file in the local directory where you run gkectl.
    • For gkeadm, the default log file is logs/gkeadm-$(date).log in the local directory where you run gkeadm.
  • All log entries are saved in the log file, even if they are not printed in the terminal (when --alsologtostderr is false).
  • The -v5 verbosity level (default) covers all the log entries needed by the support team.
  • The log file also contains the command executed and the failure message.

We recommend that you send the log file to the support team when you need help.

Specifying a non-default location for the log file

To specify a non-default location for the gkectl log file, use the --log_file flag. The log file that you specify will not be symlinked with the local directory.

To specify a non-default location for the gkeadm log file, use the --log_file flag.
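For example, to write the log from a cluster creation run to a location of your choosing (the path shown is only an example):

gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] --log_file /var/log/gke-on-prem/create-user-cluster.log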

Locating Cluster API logs in the admin cluster

If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:

  1. Find the name of the Cluster API controllers Pod in the kube-system namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers
  2. Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use grep or a similar tool to search for errors, as shown in the example after these steps:

    kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager
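
For example, to filter the controller logs for error messages, you could pipe the output through grep:

kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager | grep -i error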