This page explains how to create additional user clusters. To create an additional user cluster, you make a copy of the GKE on-prem configuration file that you used to deploy your clusters, modify the copy to describe the new user cluster, and then use the modified file to create the cluster.

You need to copy and modify the configuration file for each additional user cluster you want to create.
Before you begin
- Be sure that an admin cluster is running. You created an admin cluster when you installed GKE on-prem.
- Locate the `config.yaml` file that was generated by `gkectl` during installation. This file defines specifications for the admin cluster and user clusters. You'll copy and modify this file to create additional user clusters.
- Locate the admin cluster's `kubeconfig` file. You reference this file when you copy and modify `config.yaml`.
Limitations
| Limitation | Description |
| --- | --- |
| Maximum and minimum limits for clusters and nodes | See Quotas and limits. Your environment's performance might impact these limits. |
| Uniqueness for user cluster names | All user clusters registered to the same Google Cloud project must have unique names. |
| Cannot deploy to more than one vCenter and/or vSphere datacenter | Currently, you can only deploy an admin cluster and a set of associated user clusters to a single vCenter and/or vSphere datacenter. You cannot deploy the same admin and user clusters to more than one vCenter and/or vSphere datacenter. |
| Cannot declaratively change cluster configurations after creation | While you can create additional clusters and resize existing clusters, you cannot change an existing cluster through its configuration file. |
Verify that enough IP addresses are available
Be sure that you have enough IP addresses allocated for the new user cluster. Verifying that you have enough IP addresses depends on whether you're using a DHCP server or static IPs.
In addition, be sure that you have enough IP addresses allocated for your admin cluster. The admin cluster runs one or three control-plane nodes for each user cluster, so it needs one or three additional control-plane nodes for the user cluster you want to create. There must be enough IP addresses available for all of those control-plane nodes. To add more IP addresses to your admin cluster, see Upgrading: Verify that enough IP addresses are available.
DHCP
Check that the DHCP server in the network in which the cluster will be created has enough IP addresses. There should be more IP addresses than there will be nodes running in the user cluster.
Static IPs
Check that you've allocated enough IP addresses on your load balancer, and be sure to specify these IP addresses during cluster creation.
Copy configuration file
Copy the GKE on-prem configuration file that you generated using `gkectl create-config` and modified to be appropriate for your environment. Rename the copy to use another filename:

`cp [CONFIG_FILE] [NEW_USER_CLUSTER_CONFIG]`

where [NEW_USER_CLUSTER_CONFIG] is the name you choose for the copy of the configuration file. For the purpose of these instructions, we'll call this file `create-user-cluster.yaml`.
In `create-user-cluster.yaml`, you need to change the following fields:

- `admincluster`, the specification for the admin cluster. You completely remove the `admincluster` specification from the file.
- `usercluster`, the specification for a user cluster.

In the following sections, you modify the `admincluster` and `usercluster` fields of `create-user-cluster.yaml`, then use the file to create additional user clusters.
Delete the `admincluster` specification

If you want to create additional user clusters from the existing admin cluster, you need to delete the entire `admincluster` specification. To do so, delete the specification and all of its subfields. Be sure not to delete the `usercluster` specification or any of its subfields.
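As a rough sketch only, with placeholder values rather than anything from your environment, the top level of the copied file ends up with no `admincluster` block at all:

```yaml
# create-user-cluster.yaml (sketch): the entire admincluster block has been removed.
# All other top-level sections from your original config.yaml stay in place.
vcenter:
  # ... unchanged, or updated as described later on this page ...
usercluster:
  clustername: "my-new-user-cluster"   # placeholder name
  # ... remaining usercluster subfields, covered in the sections below ...
```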
Modify the `usercluster` specification

Make changes to the `usercluster` fields as described in the following sections.
Change the user cluster's name

Change the user cluster's name in the `usercluster.clustername` field. The new user cluster's name must be different from the names of your existing user clusters.
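For example, assuming your existing user cluster has a name like `my-user-cluster-1` (a placeholder for this illustration), the copy might set:

```yaml
usercluster:
  clustername: "my-user-cluster-2"   # placeholder; must not match any existing user cluster name
```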
Reserve IP addresses for the user cluster's nodes

If you're using DHCP, make sure that you have enough IP addresses for the nodes to be created.

If you're using static IPs, modify the file referenced by `usercluster.ipblockfilepath`, which contains the predefined IP addresses for the user cluster, or provide a different static IP YAML file with the IP addresses you want.
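For instance, here is a sketch that points the new cluster at its own static IP block file. The path is a placeholder, and the file itself uses the same static IP YAML format you used during installation:

```yaml
usercluster:
  ipblockfilepath: "/my-config-directory/user-cluster-2-ipblock.yaml"   # placeholder path
```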
Reserve IP addresses for the load balancer

If you're using the F5 BIG-IP load balancer, be sure to reserve two IP addresses for the user cluster: one for its control plane and one for ingress. The corresponding fields are `usercluster.vips.controlplanevip` and `usercluster.vips.ingressvip`.
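For example (the addresses below are documentation placeholders; use VIPs you have actually reserved for this cluster):

```yaml
usercluster:
  vips:
    controlplanevip: "203.0.113.10"   # placeholder control-plane VIP
    ingressvip: "203.0.113.11"        # placeholder ingress VIP
```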
Change the machine requirements (optional)

If you need this user cluster's control plane or worker nodes to use a different amount of CPU or memory, or if you need the cluster to run more or fewer nodes, set values for the following fields (a sketch with placeholder values follows the lists):

`usercluster.masternode`

- `usercluster.masternode.cpus`: Number of CPU cores to use.
- `usercluster.masternode.memorymb`: Number of MB of memory to use.
- `usercluster.masternode.replicas`: Number of nodes of this type to run. Value must be 1 or 3.

`usercluster.workernode`

- `usercluster.workernode.cpus`: Number of CPU cores to use.
- `usercluster.workernode.memorymb`: Number of MB of memory to use.
- `usercluster.workernode.replicas`: Number of nodes of this type to run.
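For instance, a sketch with placeholder sizes (these numbers are illustrations, not recommendations for your workload):

```yaml
usercluster:
  masternode:
    cpus: 4          # placeholder
    memorymb: 8192   # placeholder
    replicas: 1      # must be 1 or 3
  workernode:
    cpus: 4          # placeholder
    memorymb: 8192   # placeholder
    replicas: 3      # placeholder
```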
Update the `vcenter` specification

If you want to change certain aspects of your vSphere environment for your new cluster, you can modify any of the following fields under `vcenter` (a placeholder sketch follows the lists):

- `credentials.username`
- `credentials.password`
- `datastore`
- `network`
- `resourcepool`

Do not modify the following fields:

- `credentials.address`
- `datacenter`
- `cluster`
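As an illustration only, with every value a placeholder, a modified `vcenter` block might look like this:

```yaml
vcenter:
  credentials:
    address: "vcenter.example.local"      # do not change
    username: "new-user@vsphere.local"    # placeholder; may differ for the new cluster
    password: "new-password"              # placeholder
  datacenter: "my-datacenter"             # do not change
  cluster: "my-vsphere-cluster"           # do not change
  datastore: "my-other-datastore"         # placeholder; may differ for the new cluster
  network: "my-other-network"             # placeholder; may differ for the new cluster
  resourcepool: "my-other-resource-pool"  # placeholder; may differ for the new cluster
```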
Create the user cluster

Now that you've populated the `create-user-cluster.yaml` file, you're ready to use that file to create an additional user cluster.

Run the following command:

`gkectl create cluster --config create-user-cluster.yaml --kubeconfig [ADMIN_CLUSTER_KUBECONFIG]`

where:

- `create-user-cluster.yaml` is the configuration file you just created. You might have chosen a different name for this file.
- [ADMIN_CLUSTER_KUBECONFIG] points to the existing admin cluster's kubeconfig file.
Known issues
Version 1.1: Creating a second user cluster fails when using a vSAN datastore
Refer to the GKE on-prem release notes.
Troubleshooting

For more information, refer to Troubleshooting.
Diagnosing cluster issues using `gkectl`

Use `gkectl diagnose` commands to identify cluster issues and share cluster information with Google. See Diagnosing cluster issues.
Default logging behavior

For `gkectl` and `gkeadm` it is sufficient to use the default logging settings:

- By default, log entries are saved as follows:
  - For `gkectl`, the default log file is `/home/ubuntu/.config/gke-on-prem/logs/gkectl-$(date).log`, and the file is symlinked with the `logs/gkectl-$(date).log` file in the local directory where you run `gkectl`.
  - For `gkeadm`, the default log file is `logs/gkeadm-$(date).log` in the local directory where you run `gkeadm`.
- All log entries are saved in the log file, even if they are not printed in the terminal (when `--alsologtostderr` is `false`).
- The `-v5` verbosity level (default) covers all the log entries needed by the support team.
- The log file also contains the command executed and the failure message.

We recommend that you send the log file to the support team when you need help.
Specifying a non-default location for the log file

To specify a non-default location for the `gkectl` log file, use the `--log_file` flag. The log file that you specify will not be symlinked with the local directory.

To specify a non-default location for the `gkeadm` log file, use the `--log_file` flag.
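For example, a sketch of redirecting the `gkectl` log for this cluster-creation run; the log path is just an illustration:

```
gkectl create cluster \
    --config create-user-cluster.yaml \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --log_file /var/tmp/gkectl-create-user-cluster.log
```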
Locating Cluster API logs in the admin cluster

If a VM fails to start after the admin control plane has started, you can try debugging this by inspecting the Cluster API controllers' logs in the admin cluster:

1. Find the name of the Cluster API controllers Pod in the `kube-system` namespace, where [ADMIN_CLUSTER_KUBECONFIG] is the path to the admin cluster's kubeconfig file:

   `kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system get pods | grep clusterapi-controllers`

2. Open the Pod's logs, where [POD_NAME] is the name of the Pod. Optionally, use `grep` or a similar tool to search for errors:

   `kubectl --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] -n kube-system logs [POD_NAME] vsphere-controller-manager`