Create a user cluster in the Google Cloud console

This page describes how to create a GKE on VMware user cluster by using the Google Cloud console. When you create a user cluster, the console automatically enables the Anthos On-Prem API in the Google Cloud project that you select for the cluster. The Anthos On-Prem API runs in Google Cloud's infrastructure, and the console uses this API to create the cluster in your vSphere data center. To manage your clusters, the Anthos On-Prem API must store metadata about your cluster's state in the Google Cloud region that you specify when creating the cluster. This metadata lets the Anthos On-Prem API manage the user cluster lifecycle and doesn't include workload-specific data.

If you prefer, you can create a user cluster by creating a user cluster configuration file and using gkectl, as described in Creating a user cluster.

If you want to use the Google Cloud console to manage the lifecycle of clusters that were created using gkectl, see Configure a user cluster to be managed by the Anthos On-Prem API.

Before you begin

This section describes the requirements for creating a user cluster in the Google Cloud console.

Grant IAM permissions

User cluster creation in the Google Cloud console is controlled by Identity and Access Management (IAM). If you aren't a project owner, you must, at a minimum, be granted the following role in addition to the roles needed to view clusters and container resources in the Google Cloud console:

  • roles/gkeonprem.admin: For details on the permissions included in this role, see GKE on-prem roles in the IAM documentation.

After the cluster is created, if you aren't a project owner and you want to use the Connect gateway to connect to the user cluster from the command line, the following roles are required:

  • roles/gkehub.gatewayAdmin: This role lets you access the Connect gateway API. If you only need read-only access to the cluster, the roles/gkehub.gatewayReader role is sufficient.

  • roles/gkehub.viewer: This role lets you retrieve cluster credentials.

For details about the permissions included in these roles, see GKE Hub roles in the IAM documentation.

For information on granting the roles, see Manage access to projects, folders, and organizations.
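
For example, a project owner can grant the cluster creation role with the gcloud CLI. The following command is only a sketch; replace the placeholders with your fleet host project ID and the user's email address:

gcloud projects add-iam-policy-binding FLEET_HOST_PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/gkeonprem.admin"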

Register the admin cluster

You must have an admin cluster and it must be registered to a fleet before you can create user clusters in the Google Cloud console.
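
To confirm that the admin cluster is registered, you can list the memberships in the fleet host project. This is an optional check; on older versions of the gcloud CLI, the command group is gcloud container hub rather than gcloud container fleet:

gcloud container fleet memberships list --project=FLEET_HOST_PROJECT_ID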

Enable Admin Activity audit logs for the admin cluster

Admin activity logging to Cloud Audit Logs must be enabled on the admin cluster.

Enable system-level logging and monitoring for the admin cluster

Cloud Logging and Cloud Monitoring must be enabled on the admin cluster.
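
As an informal spot check, you can list the logging and monitoring Pods on the admin cluster. This sketch assumes that the agents run in the kube-system namespace and have "stackdriver" in their names, which can vary by version:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get pods -n kube-system | grep -i stackdriver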

Supported admin cluster versions

Your admin cluster must be at version 1.10.0 or higher to create a version 1.11 user cluster, with the exception that admin cluster version 1.11.0 isn't supported (1.11.1 and higher are supported). If you want to upgrade your admin cluster, see Upgrading GKE on VMware.
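
If you aren't sure which version your admin cluster is running, one way to check is to inspect the OnPremAdminCluster resource from your admin workstation. The grep below is only a convenience; the exact version fields can vary by release:

kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get OnPremAdminCluster \
    -n kube-system -o yaml | grep -i version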

Required Google APIs

Make sure that all the required Google APIs are enabled in the fleet host project.
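
To see which APIs are already enabled in the fleet host project, you can list the enabled services. The API names in the grep pattern below are examples only; check the documentation for the complete list of required APIs for your version:

gcloud services list --enabled --project=FLEET_HOST_PROJECT_ID | \
    grep -E 'gkeonprem|gkehub|gkeconnect'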

After the cluster is created, if you want to use the Connect gateway to connect to the user cluster from the command line, enable the Connect gateway API:

gcloud services enable --project=FLEET_HOST_PROJECT_ID  \
    connectgateway.googleapis.com

Command line access

After the cluster is created, if you want to use the Connect gateway to connect to the user cluster from the command line, do the following:

Ensure that you have the following command-line tools installed:

  • The latest version of the gcloud CLI.
  • kubectl for running commands against Kubernetes clusters. If you need to install kubectl, follow these instructions.

If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.
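
A quick way to confirm that both tools are installed is to check their versions:

gcloud version
kubectl version --client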

Create a user cluster

Most of the fields in the Google Cloud console correspond to the fields in the user cluster configuration file.

  1. In the Google Cloud console, go to the GKE Enterprise clusters page.

    Go to the GKE Enterprise clusters page

  2. Select the Google Cloud project that you want to create the cluster in. The selected project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet.

  3. Click Create Cluster.

  4. In the dialog box, click On-premises.

  5. Next to VMware vSphere, click Configure.

The following sections guide you through configuring the user cluster.

Prerequisites

This section in the Google Cloud console provides an overview of everything that must be done before you can create a user cluster. When you have confirmed that you have all the information that you need to create a user cluster, click Continue.

Cluster basics

Enter basic information about the cluster.

  1. Enter a Name for the user cluster.
  2. Under Admin cluster, select the admin cluster from the list. If you didn't specify a name for the admin cluster when you created it, the name is generated in the form gke-admin-[HASH]. If you don't recognize the admin cluster name, run the following command on your admin workstation:

    kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get OnPremAdminCluster \
        -n kube-system -o=jsonpath='{.items[0].metadata.name}'
    

    If the admin cluster that you want to use isn't displayed, see the troubleshooting section The admin cluster isn't displayed on the Cluster basics drop-down list.

  3. In the GCP API Location field, select the Google Cloud region from the list. In addition to controlling the region where the Anthos On-Prem API runs, this setting controls the region in which the following is stored:

    • The user cluster metadata that the Anthos On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Activity audit logs created by Cloud Audit Logs
  4. Select the GKE on VMware version for your user cluster.

  5. As the cluster creator, you are granted cluster admin privileges to the cluster. Optionally, enter the email address of another user who will administer the cluster in the Cluster admin user field in the Authorization section.

    When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.

  6. Click Continue to go to the Networking section.

Networking

In this section, you specify the IP addresses for your cluster's nodes, Pods, and Services. A user cluster needs one IP address for each node and an additional IP address for a temporary node that is used during user cluster upgrades. For example, a user cluster with three worker nodes needs at least four node IP addresses. For more information, see How many IP addresses does a user cluster need?.

  1. In the Node IPs section, select the IP mode for the user cluster. Select one of the following:

    • DHCP: Choose DHCP if you want your cluster nodes to get their IP address from a DHCP server.

    • Static: Choose Static if you want to provide static IP addresses for your cluster nodes, or if you want to set up manual load-balancing.

  2. If you selected DHCP, skip to the next step to specify the Service and Pod CIDRs. For Static IP mode, provide the following information:

    1. Enter the IP address of the Gateway for the user cluster.
    2. Enter the Subnet mask for the user cluster nodes.

    3. In the IP Addresses section, enter the IP addresses and, optionally, the hostnames for the nodes in the user cluster. You can enter either an individual IPv4 address (such as 192.0.2.1) or a CIDR block of IPv4 addresses (such as 192.0.2.0/24).

      • If you enter a CIDR block, don't enter a hostname.
      • If you enter an individual IP address, you can optionally enter a hostname. If you don't enter a hostname, GKE on VMware uses the VM's name from vSphere as the hostname.
    4. Click + Add IP Address as needed to enter more IP addresses.

  3. In the Service and Pod CIDRs section, the console provides the following address ranges for your Kubernetes Services and Pods:

    • Service CIDR: 10.96.0.0/20
    • Pod CIDR: 192.168.0.0/16

    If you prefer to enter your own address ranges, see IP addresses for Pods and Services for best practices.

  4. If you selected Static IP mode, specify the following information in the Host config section:

    1. Enter the IP addresses of the DNS servers.
    2. Enter the IP addresses of the NTP servers.
    3. Optionally, enter DNS search domains.
  5. Click Continue to go to the Load balancer section.

Load balancer

Choose the load balancer to set up for your cluster. See load balancer overview for more information.

Select the Load balancer type from the list.

Bundled with MetalLB

Configure bundled load balancing with MetalLB. This option requires minimal configuration. MetalLB runs directly on your cluster nodes and doesn't require extra VMs. For more information about the benefits of using MetalLB and how it compares to the other load balancing options, see Bundled load balancing with MetalLB.

  1. In the Address pools section, configure at least one address pool, as follows:

    1. Enter a name for the address pool.

    2. Enter an IP address range that contains the ingress VIP in either CIDR notation (for example, 192.0.2.0/26) or range notation (for example, 192.0.2.64-192.0.2.72). To specify a single IP address in a pool, use /32 in CIDR notation (for example, 192.0.2.1/32).

    3. If the IP addresses for your Services of type LoadBalancer aren't in the same IP address range as the ingress VIP, click + Add IP Address Range and enter another address range.

      The IP addresses in each pool cannot overlap, and must be in the same subnet as the cluster nodes.

    4. Under Assignment of IP addresses, select one of the following:

      • Automatic: Choose this option if you want the MetalLB controller to automatically assign IP addresses from the address pool to Services of type LoadBalancer.
      • Manual: Choose this option if you intend to use addresses from the pool to manually specify addresses for Services of type LoadBalancer.
    5. Click Avoid buggy IP addresses if you want the MetalLB controller to not use addresses from the pool that end in .0 or .255. This avoids the problem of buggy consumer devices mistakenly dropping traffic sent to those special IP addresses.

    6. When you're finished, click Done.

  2. If needed, click Add Address Pool.

  3. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server of the user cluster. The Kubernetes API server for the user cluster runs on a node in the admin cluster. This IP address must be in the same L2 domain as the admin cluster nodes. Don't add this address in the Address pools section.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy. You must add this to an address pool in the Address pools section.

  4. Click Continue.

F5 Big-IP load balancer

You can use F5 for the user cluster only if your admin cluster is using F5. Be sure to install and configure the F5 BIG-IP ADC before integrating it with GKE on VMware.

  1. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy.

  2. In the Address field, enter the address of your F5 BIG-IP load balancer.

  3. In the Partition field, enter the name of a BIG-IP partition that you created for your user cluster.

  4. In the SNAT pool name field, enter the name of your SNAT pool, if applicable.

  5. Click Continue.

Manual load balancer

Configure manual load balancing. In GKE on VMware, the Kubernetes API server, ingress proxy, and the add-on service for log aggregation are each exposed by a Kubernetes Service of type LoadBalancer. Choose your own nodePort values in the 30000 - 32767 range for these Services. For the ingress proxy, choose a nodePort value for both HTTP and HTTPS traffic. See Enabling manual load balancing mode for more information.

  1. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy.

  2. In the Control-plane node port field, enter a nodePort value for the Kubernetes API server. The Kubernetes API server of a user cluster runs on a node in the admin cluster.

  3. In the Ingress HTTP node port field, enter a nodePort value for HTTP traffic to the ingress proxy.

  4. In the Ingress HTTPS node port field, enter a nodePort value for HTTPS traffic to the ingress proxy.

  5. In the Konnectivity server node port field, enter a nodePort value for the Konnectivity server.

  6. Click Continue.
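
After the cluster is created, one way to confirm the nodePort values on the exposed Services is to list them with kubectl. This is only a sketch; it assumes that you already have a kubeconfig file for the user cluster (see Connect to the user cluster later on this page):

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get services --all-namespaces | grep LoadBalancer

In the PORT(S) column, the value after the colon (for example, 443:30556/TCP) is the nodePort.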

Control Plane

Review the default values configured for the nodes in the admin cluster that run the control-plane components for your user cluster and adjust the values as needed.

  1. In the Control-plane node vCPUs field, enter the number of vCPUs (minimum 4) for each admin cluster node that serves as a control plane for your user cluster.
  2. In the Control-plane node memory field, enter the memory size in MiB (minimum 8192 and must be a multiple of 4) for each admin cluster node that serves as a control plane for your user cluster.
  3. Under Control-plane nodes, select the number of control-plane nodes for your user cluster. For example, you might select 1 control-plane node for a development environment and 3 control-plane nodes for high availability (HA), production environments. The user cluster's control plane resides on admin cluster nodes, so make sure that you have set aside enough IP addresses for each control-plane node. For more information, see How many IP addresses does an admin cluster need?.

  4. Click Continue to go to the Features section.

Features

This section displays the features and operations that are enabled on the cluster.

The following are enabled automatically and can't be disabled:

  • Cloud Logging
  • Cloud Monitoring
  • Cloud Audit Logs

The following are enabled by default, but you can disable them:

  • Enable vSphere CSI driver: Also called the vSphere Container Storage Plug-in. The Container Storage Interface (CSI) driver runs in a native Kubernetes cluster deployed in vSphere to provision persistent volumes on vSphere storage. For more information, see Using the vSphere Container Storage Interface driver. A quick way to verify the driver after cluster creation is shown after this list.
  • Enable anti-affinity groups: VMware Distributed Resource Scheduler (DRS) anti-affinity rules are automatically created for your user cluster's nodes, causing them to be spread across at least 3 physical hosts in your data center. Make sure that your vSphere environment meets the requirements.
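
If you leave the vSphere CSI driver enabled, one way to verify it after the cluster is created is to list the StorageClasses and check the PROVISIONER column. This is an illustrative check only; csi.vsphere.vmware.com is the provisioner name that the vSphere Container Storage Plug-in typically registers:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get storageclass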

Node pools

Your cluster will be created with at least one node pool. A node pool is a template for the groups of nodes created in this cluster. For more information, see Creating and managing node pools.

  1. In the Node pool defaults section, complete the following:

    1. Enter the Node pool name or accept "default-pool" as the name.
    2. Enter the number of vCPUs for each node in the pool (minimum 4 per user cluster worker node).
    3. Enter the memory size in mebibytes (MiB) for each node in the pool (minimum 8192 MiB per user cluster worker node and must be a multiple of 4).
    4. In the Nodes field, enter the number of nodes in the pool (minimum of 3). If you entered static IP addresses for the Node IPs in the Networking section, make sure that you entered enough IP addresses to accommodate these user cluster nodes.
    5. Select the OS image type: Ubuntu, Ubuntu Containerd, or COS.
    6. Enter the Boot disk size in gibibytes (GiB) (minimum 40 GiB).
    7. If you are using MetalLB as the load balancer, MetalLB must be enabled in at least one node pool. Either leave Use this node pool for MetalLB load balancing selected, or add another node pool to use for MetalLB.
  2. In the Node pool metadata (optional) section, if you want to add Kubernetes labels and taints, do the following:

    1. Click + Add Kubernetes Labels. Enter the Key and Value for the label. Repeat as needed.
    2. Click + Add Taint. Enter the Key, Value, and Effect for the taint. Repeat as needed.

Verify and complete

Click Verify and Complete to create the user cluster. It takes 15 minutes or more to create the user cluster. The console displays status messages as it verifies the settings and creates the cluster in your data center.

If an error is encountered while verifying the settings, the console displays an error message that describes the problem so that you can fix the configuration issue and try creating the cluster again.

For more information about possible errors and how to fix them, see Troubleshoot user cluster creation in the Google Cloud console.
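
If your version of the gcloud CLI includes the gcloud container vmware command group, you can also check on the cluster from the command line after creation. Treat the following as a sketch; replace the placeholders with your cluster name, fleet host project ID, and the region that you selected for the Anthos On-Prem API:

gcloud container vmware clusters describe CLUSTER_NAME \
    --project=FLEET_HOST_PROJECT_ID \
    --location=REGION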

Connect to the user cluster

When you create a user cluster in the console, the cluster is configured with the Kubernetes role-based access control (RBAC) policies so that you can log in to the cluster using your Google Cloud identity. To configure the cluster and project to let other operators log in using their Google Cloud identity, see Setting up the Connect gateway.

All clusters have a canonical endpoint. The endpoint exposes the Kubernetes API server that kubectl and other services use to communicate with your cluster's control plane over TCP port 443. This endpoint is not accessible on the public internet. If you have access to your cluster's private endpoint through your VPC, you can connect directly to the private endpoint and generate a kubeconfig file. Otherwise, you can use the Connect gateway. In this case, kubectl uses Connect, which securely forwards the traffic to the private endpoint on your behalf.

To access the user cluster from the command line, you need a kubeconfig file. There are two ways to get a kubeconfig file:

  • Use the Connect gateway to access the cluster from a computer that has the Google Cloud CLI installed on it. The Connect gateway lets you easily and securely manage your clusters. For more information, see Connecting to registered clusters with the Connect gateway.

  • For direct access to private endpoints, create a kubeconfig file on your admin workstation and manage the cluster from your admin workstation.

Be sure to wait until the console indicates that the user cluster status is healthy.

Connect gateway

  1. Either initialize the gcloud CLI for use with the fleet host project, or run the following commands to log in with your Google account, set your fleet host project as the default, and update components:

    gcloud auth login
    gcloud config set project FLEET_HOST_PROJECT_ID
    gcloud components update
    
  2. Fetch the cluster credentials used to interact with the Connect gateway. In the following command, replace MEMBERSHIP_NAME with your cluster's name. In GKE on VMware, the membership name is the same as the cluster name.

    gcloud container fleet memberships get-credentials MEMBERSHIP_NAME
    

    This command returns a special Connect gateway-specific kubeconfig that lets you connect to the cluster through the gateway.

    After you have the necessary credentials, you can run commands using kubectl as you normally would for any Kubernetes cluster, and you don't need to specify the name of the kubeconfig file, for example:

    kubectl get namespaces
    

Admin workstation

To create the user cluster kubeconfig file on your admin workstation, run the following command to save a new kubeconfig file for the user cluster locally. Replace the following:

  • CLUSTER_NAME: the name of the newly-created user cluster
  • ADMIN_CLUSTER_KUBECONFIG: the path to the admin cluster kubeconfig file
  • USER_CLUSTER_KUBECONFIG: the name of the user cluster kubeconfig file that the command outputs

kubectl get secret admin \
  --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
  -n CLUSTER_NAME \
  -o=jsonpath='{.data.admin\.conf}' | base64 -d > USER_CLUSTER_KUBECONFIG

After the file has been saved, you can begin accessing the user cluster using kubectl on the admin workstation, for example:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get namespaces

What's next