Create a user cluster using Anthos On-Prem API clients

This page describes how to create a user cluster by using the Google Cloud console or the Google Cloud CLI (gcloud CLI). Both of these standard Google Cloud clients use the Anthos On-Prem API to create the cluster.

What is the Anthos On-Prem API?

The Anthos On-Prem API is a Google Cloud-hosted API that lets you manage the lifecycle of your on-premises user clusters using standard Google Cloud applications. The Anthos On-Prem API runs in Google Cloud's infrastructure. The Google Cloud console and the gcloud CLI are clients of the API, and they use the API to create the cluster in your vSphere data center. To manage the lifecycle of your clusters, the Anthos On-Prem API must store metadata about your cluster's state in the Google Cloud region that you specify when creating the cluster. This metadata lets the API manage the user cluster lifecycle and doesn't include workload-specific data.

If you prefer, you can create a user cluster by creating a user cluster configuration file and using gkectl, as described in Creating a user cluster using gkectl.

If you want to use the Google Cloud console or the gcloud CLI to manage the lifecycle of clusters that were created using gkectl, see Configure a user cluster to be managed by the Anthos On-Prem API.

Before you begin

This section describes the requirements for creating a user cluster using Anthos On-Prem API clients.

Grant IAM permissions

User cluster creation in the Google Cloud console is controlled by Identity and Access Management (IAM). If you aren't a project owner, minimally, you must be granted the following role in addition to the roles needed to view clusters and container resources in the Google Cloud console:

  • roles/gkeonprem.admin. For details on the permissions included in this role, see GKE on-prem roles in the IAM documentation.

After the cluster is created, if you aren't a project owner and you want to use the Connect gateway to connect to the user cluster from the command line, the following roles are required:

  • roles/gkehub.gatewayAdmin: This role lets you access the Connect gateway API. If you only need read-only access to the cluster, the roles/gkehub.gatewayReader role is sufficient.

  • roles/gkehub.viewer: This role lets you retrieve cluster credentials.

For details about the permissions included in these roles, see GKE Hub roles in the IAM documentation.

For information on granting the roles, see Manage access to projects, folders, and organizations.
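As an illustration, the admin role above can be granted with the gcloud CLI; FLEET_HOST_PROJECT_ID and USER_EMAIL are placeholders for your project ID and the user's email address:

```shell
# Grant the GKE on-prem admin role to a user on the fleet host project.
# Replace FLEET_HOST_PROJECT_ID and USER_EMAIL with your own values.
gcloud projects add-iam-policy-binding FLEET_HOST_PROJECT_ID \
  --member=user:USER_EMAIL \
  --role=roles/gkeonprem.admin
```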

Register the admin cluster

You must have an admin cluster and it must be registered to a fleet before you can create user clusters using Anthos On-Prem API clients.
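You can confirm the registration, for instance, by listing the fleet memberships in the fleet host project (FLEET_HOST_PROJECT_ID is a placeholder):

```shell
# List fleet memberships; the admin cluster should appear in the output.
gcloud container fleet memberships list \
  --project=FLEET_HOST_PROJECT_ID
```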

Enable Admin Activity audit logs for the admin cluster

Admin activity logging to Cloud Audit Logs must be enabled on the admin cluster.

Enable system-level logging and monitoring for the admin cluster

Cloud Logging and Cloud Monitoring must be enabled on the admin cluster.

Required Google APIs

Make sure that all the required Google APIs are enabled in the fleet host project.

If you will be using the gcloud CLI to create the cluster, you must enable the Anthos On-Prem API:

gcloud services enable --project FLEET_HOST_PROJECT_ID \
    gkeonprem.googleapis.com
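To confirm that the API is enabled, you can list the enabled services and filter on the API name, for example:

```shell
# Verify that the Anthos On-Prem API is enabled in the fleet host project.
gcloud services list --enabled \
  --project=FLEET_HOST_PROJECT_ID \
  --filter=gkeonprem.googleapis.com
```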

Command line access

After the cluster is created, if you want to use the Connect gateway to connect to the user cluster from the command line, do the following:

Ensure that you have the following command-line tools installed:

  • The latest version of the gcloud CLI.
  • kubectl for running commands against Kubernetes clusters. If you need to install kubectl, follow these instructions.

If you are using Cloud Shell as your shell environment for interacting with Google Cloud, these tools are installed for you.
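Once the cluster exists and the roles described earlier are granted, connecting through the Connect gateway typically looks like the following sketch (USER_CLUSTER_NAME and FLEET_HOST_PROJECT_ID are placeholders):

```shell
# Fetch a kubeconfig entry that routes through the Connect gateway.
gcloud container fleet memberships get-credentials USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID

# Then run kubectl commands against the user cluster as usual.
kubectl get nodes
```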

Create a user cluster

You can use either the Google Cloud console or the Google Cloud CLI (gcloud CLI) to create a user cluster that is managed by the Anthos On-Prem API. If you are just getting started learning about installing Anthos clusters on VMware, you might find the console easier to use than the gcloud CLI.

After you are more familiar with the information that you need to provide for creating clusters, you might find the gcloud CLI more convenient because you can save the command with its arguments to a text file. If you are using a CI/CD tool, such as Cloud Build, you can use the gcloud commands to create a cluster and node pool and specify the --async and --impersonate-service-account flags to automate the creation.

Console

Most of the fields in the Google Cloud console correspond to the fields in the user cluster configuration file.

  1. In the Google Cloud console, go to the Anthos clusters page.

    Go to the Anthos clusters page

  2. Select the Cloud project that you want to create the cluster in. The selected project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet.

  3. Click Create Cluster.

  4. In the dialog box, click On-premises.

  5. Next to VMware vSphere, click Configure.

The following sections guide you through configuring the user cluster.

gcloud CLI

You use the following command to create a user cluster:

gcloud alpha container vmware clusters create

After creating the cluster, you need to create at least one node pool using the following command:

gcloud alpha container vmware node-pools create

Most of the flags for creating the cluster and the node pool correspond to the fields in the user cluster configuration file. To help you get started, you can test the complete command in the examples section. For more information about the flags, see the sections that follow the examples, or refer to the gcloud CLI reference.

Before you begin

  1. Log in with your Google account.

    gcloud auth login
    
  2. Make sure to update components:

    gcloud components update
    

Examples

This section provides two examples of commands that create clusters using the MetalLB load balancer. The examples don't include the flags for configuring the control plane nodes, which are set to the default values for vCPUs (4), memory (8192), and number of nodes (1). After the cluster is running, you must add a node pool before deploying workloads.

DHCP

If needed, scroll right to fill in the ADMIN_CLUSTER_NAME placeholder for the --admin-cluster-membership flag.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID \
  --admin-cluster-membership=projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME \
  --location=REGION \
  --version=VERSION \
  --admin-users=YOUR_EMAIL_ADDRESS \
  --admin-users=ANOTHER_EMAIL_ADDRESS \
  --enable-dhcp \
  --service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
  --pod-address-cidr-blocks=POD_CIDR_BLOCK \
  --metal-lb-config-from-file=METAL_LB_CONFIG_FROM_FILE \
  --control-plane-vip=CONTROL_PLANE_VIP \
  --ingress-vip=INGRESS_VIP

Replace the following:

  • USER_CLUSTER_NAME: A name of your choice for your user cluster. The name can't be changed after the cluster is created. The name must:
    • contain at most 40 characters
    • contain only lowercase alphanumeric characters or a hyphen (-)
    • start with an alphabetic character
    • end with an alphanumeric character
  • FLEET_HOST_PROJECT_ID: The ID of the project that you want to create the cluster in. The specified project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet. The fleet host project can't be changed after the cluster is created.
  • ADMIN_CLUSTER_NAME: The name of the admin cluster that manages the user cluster. The admin cluster name is the last segment of the fully-specified cluster name that uniquely identifies the cluster in Google Cloud: projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME
  • REGION: The Google Cloud region in which the Anthos On-Prem API runs. Specify us-west1 or another supported region. The region can't be changed after the cluster is created. This setting specifies the region where the following are stored:
    • The user cluster metadata that the Anthos On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Audit log created by Cloud Audit Logs

    The cluster name, project, and location together uniquely identify the cluster in Google Cloud.

  • VERSION: The Anthos clusters on VMware version for your user cluster.
  • YOUR_EMAIL_ADDRESS and ANOTHER_EMAIL_ADDRESS: If you don't include the --admin-users flag, you, as the cluster creator, are granted cluster admin privileges by default. If you include --admin-users to designate another user as an administrator, you override the default and must include both your own email address and the email address of the other administrator. For example, to add two administrators:
            --admin-users=sara@example.com \
            --admin-users=amal@example.com

    When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.

  • SERVICE_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Services in your cluster.

  • POD_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Pods in your cluster.

  • METAL_LB_CONFIG_FROM_FILE: The path to a MetalLB address pool configuration file. The file contains an array of objects, each of which holds information about an address pool to be used by the MetalLB load balancer.

    Following is an example of the configuration file:

      metalLBConfig:
        addressPools:
        - pool: lb-pool-1
          avoidBuggyIps: True
          addresses:
          - 10.251.133.0/24
          - 10.251.134.80/32
          - 10.251.134.81/32
        - pool: lb-pool-2
          manualAssign: True
          addresses:
          - 10.251.131.70/32
    
  • CONTROL_PLANE_VIP: The IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.

  • INGRESS_VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

Static IPs

If needed, scroll right to fill in the ADMIN_CLUSTER_NAME placeholder for the --admin-cluster-membership flag.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID \
  --admin-cluster-membership=projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME \
  --location=REGION \
  --version=VERSION \
  --admin-users=YOUR_EMAIL_ADDRESS \
  --admin-users=ANOTHER_EMAIL_ADDRESS \
  --static-ip-config-from-file=STATIC_IP_CONFIG_FROM_FILE \
  --dns-servers=DNS_SERVER,... \
  --dns-search-domains=DNS_SEARCH_DOMAIN,... \
  --ntp-servers=NTP_SERVER,... \
  --service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
  --pod-address-cidr-blocks=POD_CIDR_BLOCK \
  --metal-lb-config-from-file=METAL_LB_CONFIG_FROM_FILE \
  --control-plane-vip=CONTROL_PLANE_VIP \
  --ingress-vip=INGRESS_VIP

Replace the following:

  • USER_CLUSTER_NAME: A name of your choice for your user cluster. The name can't be changed after the cluster is created. The name must:
    • contain at most 40 characters
    • contain only lowercase alphanumeric characters or a hyphen (-)
    • start with an alphabetic character
    • end with an alphanumeric character
  • FLEET_HOST_PROJECT_ID: The ID of the project that you want to create the cluster in. The specified project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet. The fleet host project can't be changed after the cluster is created.
  • ADMIN_CLUSTER_NAME: The name of the admin cluster that manages the user cluster. The admin cluster name is the last segment of the fully-specified cluster name that uniquely identifies the cluster in Google Cloud: projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME
  • REGION: The Google Cloud region in which the Anthos On-Prem API runs. Specify us-west1 or another supported region. The region can't be changed after the cluster is created. This setting specifies the region where the following are stored:
    • The user cluster metadata that the Anthos On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Audit log created by Cloud Audit Logs

    The cluster name, project, and location together uniquely identify the cluster in Google Cloud.

  • VERSION: The Anthos clusters on VMware version for your user cluster.
  • YOUR_EMAIL_ADDRESS and ANOTHER_EMAIL_ADDRESS: If you don't include the --admin-users flag, you, as the cluster creator, are granted cluster admin privileges by default. If you include --admin-users to designate another user as an administrator, you override the default and must include both your own email address and the email address of the other administrator. For example, to add two administrators:
            --admin-users=sara@example.com \
            --admin-users=amal@example.com

    When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.

  • STATIC_IP_CONFIG_FROM_FILE: The path to the static IP configuration file for your cluster.

Following is an example of a static IP configuration file:

  staticIPConfig:
    ipBlocks:
    - gateway: "172.16.23.254"
      netmask: "255.255.252.0"
      ips:
      - hostname: user-vm-1
        ip: 172.16.20.10
      - hostname: user-vm-2
        ip: 172.16.20.11
    - gateway: "172.16.23.255"
      netmask: "255.255.252.0"
      ips:
      - hostname: user-vm-3
        ip: 172.16.20.12
      - hostname: extra-vm
        ip: 172.16.20.13
  • DNS_SERVER: A comma-separated list of the IP addresses of DNS servers for the VMs.

  • DNS_SEARCH_DOMAIN: A comma-separated list of the DNS search domains for the hosts to use.

  • NTP_SERVER: A comma-separated list of the IP addresses of time servers for the VMs to use.

  • SERVICE_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Services in your cluster.

  • POD_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Pods in your cluster.

  • METAL_LB_CONFIG_FROM_FILE: The path to a MetalLB address pool configuration file. The file contains an array of objects, each of which holds information about an address pool to be used by the MetalLB load balancer.

    Following is an example of the configuration file:

      metalLBConfig:
        addressPools:
        - pool: lb-pool-1
          avoidBuggyIps: True
          addresses:
          - 10.251.133.0/24
          - 10.251.134.80/32
          - 10.251.134.81/32
        - pool: lb-pool-2
          manualAssign: True
          addresses:
          - 10.251.131.70/32
    
  • CONTROL_PLANE_VIP: The IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.

  • INGRESS_VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

The following sections provide more information about the flags for creating a cluster. The information about the flags is grouped by the corresponding settings in the console.

After you create the cluster, run the node pool command to create a node pool for your newly created cluster.
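As a sketch, a minimal node pool command might look like the following; the --image-type and --replicas values are illustrative, so check `gcloud alpha container vmware node-pools create --help` for the full flag list:

```shell
# Create a node pool in the newly created user cluster.
# All uppercase values are placeholders; the --image-type and
# --replicas values shown are illustrative defaults, not requirements.
gcloud alpha container vmware node-pools create NODE_POOL_NAME \
  --cluster=USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID \
  --location=REGION \
  --image-type=ubuntu_containerd \
  --replicas=3
```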

Cluster basics

Enter basic information about the cluster.

Console

  1. Enter a Name for the user cluster.
  2. Under Admin cluster, select the admin cluster from the list. If you didn't specify a name for the admin cluster when you created it, the name is generated in the form gke-admin-[HASH]. If you don't recognize the admin cluster name, run the following command on your admin workstation:

    KUBECONFIG=ADMIN_CLUSTER_KUBECONFIG kubectl get OnPremAdminCluster \
      -n kube-system -o=jsonpath='{.items[0].metadata.name}'
    

    If the admin cluster that you want to use isn't displayed, see the troubleshooting section The admin cluster isn't displayed on the Cluster basics drop-down list.

  3. In the GCP API Location field, select the Google Cloud region from the list. This setting specifies the region where the Anthos On-Prem API runs, and the region in which the following are stored:

    • The user cluster metadata that the Anthos On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Audit log created by Cloud Audit Logs

    The cluster name, project, and location together uniquely identify the cluster in Google Cloud.

  4. Select the Anthos clusters on VMware version for your user cluster.

  5. As the cluster creator, you are granted cluster admin privileges to the cluster. Optionally, enter the email address of another user who will administer the cluster in the Cluster admin user field in the Authorization section.

    When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.

  6. Click Continue to go to the Networking section.

gcloud CLI

This section describes the flags that provide basic information about the cluster.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID \
  --admin-cluster-membership=projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME \
  --location=REGION \
  --version=VERSION \
  --admin-users=YOUR_EMAIL_ADDRESS \
  --admin-users=ANOTHER_EMAIL_ADDRESS \
  --NETWORKING_FLAGS \
  --LOAD_BALANCER_FLAGS \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Replace the following:

  • USER_CLUSTER_NAME: A name of your choice for your user cluster. The name can't be changed after the cluster is created. The name must:
    • contain at most 40 characters
    • contain only lowercase alphanumeric characters or a hyphen (-)
    • start with an alphabetic character
    • end with an alphanumeric character
  • FLEET_HOST_PROJECT_ID: The ID of the project that you want to create the cluster in. The specified project is also used as the fleet host project. This must be the same project that the admin cluster is registered to. After the user cluster is created, it is automatically registered to the selected project's fleet. The fleet host project can't be changed after the cluster is created.
  • ADMIN_CLUSTER_NAME: The name of the admin cluster that manages the user cluster. The admin cluster name is the last segment of the fully-specified cluster name that uniquely identifies the cluster in Google Cloud: projects/FLEET_HOST_PROJECT_ID/locations/global/memberships/ADMIN_CLUSTER_NAME
  • REGION: The Google Cloud region in which the Anthos On-Prem API runs. Specify us-west1 or another supported region. The region can't be changed after the cluster is created. This setting specifies the region where the following are stored:
    • The user cluster metadata that the Anthos On-Prem API needs to manage the cluster lifecycle
    • The Cloud Logging and Cloud Monitoring data of system components
    • The Admin Audit log created by Cloud Audit Logs

    The cluster name, project, and location together uniquely identify the cluster in Google Cloud.

  • VERSION: The Anthos clusters on VMware version for your user cluster.
  • YOUR_EMAIL_ADDRESS and ANOTHER_EMAIL_ADDRESS: If you don't include the --admin-users flag, you, as the cluster creator, are granted cluster admin privileges by default. If you include --admin-users to designate another user as an administrator, you override the default and must include both your own email address and the email address of the other administrator. For example, to add two administrators:
            --admin-users=sara@example.com \
            --admin-users=amal@example.com

    When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant you and other admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces.

Networking

Console

In this section, you specify the IP addresses for your cluster's nodes, Pods, and Services. A user cluster needs to have one IP address for each node and an additional IP address for a temporary node that is needed during cluster upgrades, updates, and auto repair. For more information, see How many IP addresses does a user cluster need?.

  1. In the Node IPs section, select the IP mode for the user cluster. Select one of the following:

    • DHCP: Choose DHCP if you want your cluster nodes to get their IP address from a DHCP server.

    • Static: Choose Static if you want to provide static IP addresses for your cluster nodes, or if you want to set up manual load-balancing.

  2. If you selected DHCP, skip to the next step to specify Service and Pod CIDRs. For Static IP mode, provide the following information:

    1. Enter the IP address of the Gateway for the user cluster.
    2. Enter the Subnet mask for the user cluster nodes.

    3. In the IP Addresses section, enter the IP addresses and, optionally, the hostnames for the nodes in the user cluster. You can enter either an individual IPv4 address (such as 192.0.2.1) or a CIDR block of IPv4 addresses (such as 192.0.2.0/24).

      • If you enter a CIDR block, don't enter a hostname.
      • If you enter an individual IP address, you can optionally enter a hostname. If you don't enter a hostname, Anthos clusters on VMware uses the VM's name from vSphere as the hostname.
    4. Click + Add IP Address as needed to enter more IP addresses.

  3. In the Service and Pod CIDRs section, the console provides the following address ranges for your Kubernetes Services and Pods:

    • Service CIDR: 10.96.0.0/20
    • Pod CIDR: 192.168.0.0/16

    If you prefer to enter your own address ranges, see IP addresses for Pods and Services for best practices.

  4. If you selected Static IP mode, specify the following information in the Host config section:

  5. Enter the IP addresses of the DNS servers.

  6. Enter the IP addresses of the NTP servers.

  7. Optionally, enter DNS search domains.

  8. Click Continue to go to the Load balancer section.

gcloud CLI

This section describes the networking flags. You use these flags to specify the IP addresses for your cluster's nodes, Pods, and Services. A user cluster needs to have one IP address for each node and an additional IP address for a temporary node that is needed during cluster upgrades, updates, and auto repair. For more information, see How many IP addresses does a user cluster need?.

DHCP

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --enable-dhcp \
  --service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
  --pod-address-cidr-blocks=POD_CIDR_BLOCK \
  --LOAD_BALANCER_FLAGS \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Include --enable-dhcp if you want your cluster nodes to get their IP address from a DHCP server that you provide. Don't include this flag if you want to provide static IP addresses for your cluster nodes, or if you want to set up manual load balancing.

Static IPs

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --static-ip-config-from-file=STATIC_IP_CONFIG_FROM_FILE \
  --dns-servers=DNS_SERVER,... \
  --dns-search-domains=DNS_SEARCH_DOMAIN,... \
  --ntp-servers=NTP_SERVER,... \
  --service-address-cidr-blocks=SERVICE_CIDR_BLOCK \
  --pod-address-cidr-blocks=POD_CIDR_BLOCK \
  --LOAD_BALANCER_FLAGS \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Replace the following:

  • STATIC_IP_CONFIG_FROM_FILE: The path to the static IP configuration file for your cluster. With the exception of the first line, the structure of this file is the same as the IP block file. The path is relative to your current working directory. For example, to specify a file in your current working directory: --static-ip-config-from-file static-ip-config.yaml

Following is an example of a static IP configuration file:

  staticIPConfig:
    ipBlocks:
    - gateway: "172.16.23.254"
      netmask: "255.255.252.0"
      ips:
      - hostname: user-vm-1
        ip: 172.16.20.10
      - hostname: user-vm-2
        ip: 172.16.20.11
    - gateway: "172.16.23.255"
      netmask: "255.255.252.0"
      ips:
      - hostname: user-vm-3
        ip: 172.16.20.12
      - hostname: extra-vm
        ip: 172.16.20.13
  • DNS_SERVER: A comma-separated list of the IP addresses of DNS servers for the VMs.

  • DNS_SEARCH_DOMAIN: A comma-separated list of the DNS search domains for the hosts to use. These domains are used as part of a domain search list.

  • NTP_SERVER: A comma-separated list of the IP addresses of time servers for the VMs to use.

For both DHCP and static IPs, replace the following:

  • SERVICE_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Services in your cluster. Must be at least a /24 range.

    Example: --service-address-cidr-blocks=10.96.0.0/20

  • POD_CIDR_BLOCK: A range of IP addresses, in CIDR format, to be used for Pods in your cluster. Must be at least a /18 range.

    Example: --pod-address-cidr-blocks=192.168.0.0/16
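The prefix-length requirements above can be sanity-checked locally before you run the create command. A minimal shell sketch (check_prefix is a hypothetical helper, not part of the gcloud CLI):

```shell
# Check that a CIDR block's prefix length is at most MAX, i.e. the
# range is large enough. A /20 is larger than a /24, so it passes.
check_prefix() {
  local cidr=$1 max=$2
  local prefix=${cidr#*/}
  if [ "$prefix" -le "$max" ]; then
    echo "OK: $cidr"
  else
    echo "Too small: $cidr (prefix must be /$max or lower)"
  fi
}

check_prefix 10.96.0.0/20 24    # Service CIDR: must be at least a /24
check_prefix 192.168.0.0/16 18  # Pod CIDR: must be at least a /18
```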

Load balancer

Console

Choose the load balancer to set up for your cluster. See load balancer overview for more information.

Select the Load balancer type from the list.

Bundled with MetalLB

Configure bundled load balancing with MetalLB. You can use MetalLB for the user cluster only if your admin cluster is using SeeSaw or MetalLB. This option requires minimal configuration. MetalLB runs directly on your cluster nodes and doesn't require extra VMs. For more information about the benefits of using MetalLB and how it compares to the other load balancing options, see Bundled load balancing with MetalLB.

  1. In the Address pools section, configure at least one address pool, as follows:

    1. Enter a name for the address pool.

    2. Enter an IP address range that contains the ingress VIP in either CIDR notation (ex. 192.0.2.0/26) or range notation (ex. 192.0.2.64-192.0.2.72). To specify a single IP address in a pool, use /32 in the CIDR notation (ex. 192.0.2.1/32).

    3. If the IP addresses for your Services of type LoadBalancer aren't in the same IP address range as the ingress VIP, click + Add IP Address Range and enter another address range.

      The IP addresses in each pool cannot overlap, and must be in the same subnet as the cluster nodes.

    4. Under Assignment of IP addresses, select one of the following:

      • Automatic: Choose this option if you want the MetalLB controller to automatically assign IP addresses from the address pool to Services of type LoadBalancer
      • Manual: Choose this option if you intend to use addresses from the pool to manually specify addresses for Services of type LoadBalancer
    5. Click Avoid buggy IP addresses if you want the MetalLB controller to not use addresses from the pool that end in .0 or .255. This avoids the problem of buggy consumer devices mistakenly dropping traffic sent to those special IP addresses.

    6. When you're finished, click Done.

  2. If needed, click Add Address Pool.

  3. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server of the user cluster. The Kubernetes API server for the user cluster runs on a node in the admin cluster. This IP address must be in the same L2 domain as the admin cluster nodes. Don't add this address in the Address pools section.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy. You must add this to an address pool in the Address pools section.

  4. Click Continue.

F5 Big-IP load balancer

You can use F5 for the user cluster only if your admin cluster is using F5. Be sure to install and configure the F5 BIG-IP ADC before integrating it with Anthos clusters on VMware. See F5 BIG-IP ADC installation guide for more information. The F5 username and password are inherited from the admin cluster.

  1. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy.

  2. In the Address field, enter the address of your F5 BIG-IP load balancer.

  3. In the Partition field, enter the name of a BIG-IP partition that you created for your user cluster.

  4. In the sNAT pool name field, enter the name of your SNAT pool, if applicable.

  5. Click Continue.

Manual load balancer

Configure manual load balancing. You can use a manual load balancer for the user cluster only if your admin cluster uses a manual load balancer. In Anthos clusters on VMware, the Kubernetes API server, ingress proxy, and the add-on service for log aggregation are each exposed by a Kubernetes Service of type LoadBalancer. Choose your own nodePort values in the 30000 - 32767 range for these Services. For the ingress proxy, choose a nodePort value for both HTTP and HTTPS traffic. See Enabling manual load balancing mode for more information.

  1. In the Virtual IPs section, enter the following:

    • Control plane VIP: The destination IP address to be used for traffic sent to the Kubernetes API server.

    • Ingress VIP: The IP address to be configured on the load balancer for the ingress proxy.

  2. In the Control-plane node port field, enter a nodePort value for the Kubernetes API server. The Kubernetes API server of a user cluster runs on the admin cluster.

  3. In the Ingress HTTP node port field, enter a nodePort value for HTTP traffic to the ingress proxy.

  4. In the Ingress HTTPS node port field, enter a nodePort value for HTTPS traffic to the ingress proxy.

  5. In the Konnectivity server node port field, enter a nodePort value for the Konnectivity server.

  6. Click Continue.

gcloud CLI

Choose the load balancer to set up for your cluster. See load balancer overview for more information.

Bundled with MetalLB

Configure bundled load balancing with MetalLB. You can use MetalLB for the user cluster only if your admin cluster is using SeeSaw or MetalLB. This option requires minimal configuration. MetalLB runs directly on your cluster nodes and doesn't require extra VMs. For more information about the benefits of using MetalLB and how it compares to the other load balancing options, see Bundled load balancing with MetalLB.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --NETWORKING_FLAGS \
  --metal-lb-config-from-file=METAL_LB_CONFIG_FROM_FILE \
  --control-plane-vip=CONTROL_PLANE_VIP \
  --ingress-vip=INGRESS_VIP \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Replace the following:

  • METAL_LB_CONFIG_FROM_FILE: The path to a MetalLB address pool configuration file. The file contains an array of objects, each of which holds information about an address pool to be used by the MetalLB load balancer. The path is relative to your current working directory, for example: --metal-lb-config-from-file=metal-lb-config.yaml

Following is an example of the configuration file:

  metalLBConfig:
    addressPools:
    - pool: lb-pool-1
      avoidBuggyIPs: True
      addresses:
      - 10.251.133.0/24
      - 10.251.134.80/32
      - 10.251.134.81/32
    - pool: lb-pool-2
      manualAssign: True
      addresses:
      - 10.251.131.70/32
  • metalLBConfig.addressPools[i].pool: A name of your choice for the pool.
  • metalLBConfig.addressPools[i].addresses: An array of strings, each of which is a range of addresses. Each range must be in CIDR format or hyphenated-range format. To specify a single IP address in a pool, use /32 in the CIDR notation (for example, 192.0.2.1/32).
  • metalLBConfig.addressPools[i].avoidBuggyIPs: If you set this to True, the MetalLB controller doesn't assign IP addresses ending in .0 or .255 to Services. This avoids the problem of buggy consumer devices mistakenly dropping traffic sent to those special IP addresses. If not specified, this defaults to False.
  • metalLBConfig.addressPools[i].manualAssign: If you don't want the MetalLB controller to automatically assign IP addresses from this pool to Services, set this to True. Then a developer can create a Service of type LoadBalancer and manually specify one of the addresses from the pool. If not specified, manualAssign is set to False.

  • CONTROL_PLANE_VIP: The IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.

    Example: --control-plane-vip=203.0.113.3

  • INGRESS_VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

    Example: --ingress-vip=10.251.134.80
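The address pool semantics described above (CIDR or hyphenated ranges, and the avoidBuggyIPs behavior) can be sketched with a small helper that expands a pool into the individual addresses MetalLB can assign. This is an illustration of the documented behavior only, not code from the product:

```python
import ipaddress

def expand_pool(addresses, avoid_buggy_ips=False):
    """Expand CIDR or hyphenated ranges into individual IP addresses,
    optionally skipping addresses ending in .0 or .255 (avoidBuggyIPs)."""
    ips = []
    for entry in addresses:
        if "-" in entry:
            # Hyphenated range, for example "10.0.0.1-10.0.0.5".
            start, end = (ipaddress.ip_address(p.strip()) for p in entry.split("-"))
            ips.extend(ipaddress.ip_address(i) for i in range(int(start), int(end) + 1))
        else:
            # CIDR range; a /32 contributes exactly one address.
            ips.extend(ipaddress.ip_network(entry, strict=False))
    if avoid_buggy_ips:
        ips = [ip for ip in ips if int(ip) % 256 not in (0, 255)]
    return ips

# Two /32 entries yield exactly two assignable addresses.
print(len(expand_pool(["10.251.134.80/32", "10.251.134.81/32"])))  # 2
# A /24 holds 256 addresses; avoidBuggyIPs drops the .0 and .255 ones.
print(len(expand_pool(["10.251.133.0/24"], avoid_buggy_ips=True)))  # 254
```

A check like this can help confirm that a pool is large enough for the VIPs and LoadBalancer Services you plan to create before you run the create command.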

F5 Big-IP load balancer

You can use F5 for the user cluster only if your admin cluster is using F5. Be sure to install and configure the F5 BIG-IP ADC before integrating it with Anthos clusters on VMware. See F5 BIG-IP ADC installation guide for more information. The F5 username and password are inherited from the admin cluster.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --NETWORKING_FLAGS \
  --f5-config-address=F5_CONFIG_ADDRESS \
  --f5-config-partition=F5_CONFIG_PARTITION \
  --f5-config-snat-pool=F5_CONFIG_SNAT_POOL \
  --control-plane-vip=CONTROL_PLANE_VIP \
  --ingress-vip=INGRESS_VIP \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Replace the following:

  • F5_CONFIG_ADDRESS: The address of your F5 BIG-IP load balancer.

  • F5_CONFIG_PARTITION: The name of a BIG-IP partition that you created for your user cluster.

  • F5_CONFIG_SNAT_POOL: The name of your SNAT pool, if applicable.

  • CONTROL_PLANE_VIP: The IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.

    Example: --control-plane-vip=203.0.113.3

  • INGRESS_VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

    Example: --ingress-vip=10.251.132.9

Manual load balancer

Configure manual load balancing. You can use a manual load balancer for the user cluster only if your admin cluster uses a manual load balancer. In Anthos clusters on VMware, the Kubernetes API server, ingress proxy, and the add-on service for log aggregation are each exposed by a Kubernetes Service of type LoadBalancer. Choose your own nodePort values in the 30000 - 32767 range for these Services. For the ingress proxy, choose a nodePort value for both HTTP and HTTPS traffic. See Enabling manual load balancing mode for more information.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --NETWORKING_FLAGS \
  --control-plane-vip=CONTROL_PLANE_VIP \
  --control-plane-node-port=CONTROL_PLANE_NODE_PORT \
  --ingress-vip=INGRESS_VIP \
  --ingress-http-node-port=INGRESS_HTTP_NODE_PORT \
  --ingress-https-node-port=INGRESS_HTTPS_NODE_PORT \
  --konnectivity-server-node-port=KONNECTIVITY_SERVER_NODE_PORT \
  --CONTROL_PLANE_FLAGS \
  --FEATURES_FLAGS

Replace the following:

  • CONTROL_PLANE_VIP: The IP address that you have chosen to configure on the load balancer for the Kubernetes API server of the user cluster.

    Example: --control-plane-vip=203.0.113.3

  • CONTROL_PLANE_NODE_PORT: A nodePort value for the Kubernetes API server.

  • INGRESS_VIP: The IP address that you have chosen to configure on the load balancer for the ingress proxy.

    Example: --ingress-vip=203.0.113.4

  • INGRESS_HTTP_NODE_PORT: A nodePort value for HTTP traffic to the ingress proxy.

  • INGRESS_HTTPS_NODE_PORT: A nodePort value for HTTPS traffic to the ingress proxy.

  • KONNECTIVITY_SERVER_NODE_PORT: A nodePort value for the Konnectivity server.
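As a quick sanity check before running the command, the nodePort constraints above (each value in the 30000 - 32767 range, no two Services sharing a port) can be validated locally. The port values below are illustrative only; choose ports that are free in your environment:

```python
def validate_node_ports(ports):
    """Check that manual load balancing nodePort values are in the
    Kubernetes NodePort range (30000-32767) and don't collide."""
    for name, port in ports.items():
        if not 30000 <= port <= 32767:
            raise ValueError(f"{name}={port} is outside the 30000-32767 range")
    if len(set(ports.values())) != len(ports):
        raise ValueError("nodePort values must be unique")
    return True

# Hypothetical example values for the four flags shown above.
print(validate_node_ports({
    "control-plane-node-port": 30968,
    "ingress-http-node-port": 32527,
    "ingress-https-node-port": 30139,
    "konnectivity-server-node-port": 30969,
}))  # True
```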

Control Plane

Console

Review the default values configured for the nodes in the admin cluster that run the control-plane components for your user cluster and adjust the values as needed.

  1. In the Control-plane node vCPUs field, enter the number of vCPUs (minimum 4) for each admin cluster node that serves as a control plane for your user cluster.
  2. In the Control-plane node memory field, enter the memory size in MiB (minimum 8192 and must be a multiple of 4) for each admin cluster node that serves as a control plane for your user cluster.
  3. Under Control-plane nodes, select the number of control-plane nodes for your user cluster. For example, you might select 1 control-plane node for a development environment and 3 control-plane nodes for high-availability (HA) production environments. The user control plane resides on an admin cluster node, so make sure you have set aside enough IP addresses for each control-plane node. For more information, see How many IP addresses does an admin cluster need?.

  4. Click Continue to go to the Features section.

gcloud CLI

This section describes the flags for the nodes in the admin cluster that serve as control-plane nodes for this user cluster.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --NETWORKING_FLAGS \
  --LOAD_BALANCER_FLAGS \
  --cpus=vCPUS \
  --memory=MEMORY \
  --replicas=NODES \
  --enable-auto-resize \
  --FEATURES_FLAGS

Replace the following:

  • vCPUS: The number of vCPUs (minimum 4) for each admin cluster node that serves as a control plane for your user cluster. If not specified, the default is 4 vCPUs.

  • MEMORY: The memory size in mebibytes (MiB) for each admin cluster node that serves as a control plane for your user cluster. The minimum value is 8192 and it must be a multiple of 4. If not specified, the default is 8192.

  • NODES: The number of control-plane nodes for your user cluster. For example, you might select 1 control-plane node for a development environment and 3 control-plane nodes for high-availability (HA) production environments. The user control plane resides on an admin cluster node, so make sure you have set aside enough IP addresses for each control-plane node. For more information, see How many IP addresses does an admin cluster need?. If not specified, the default is 1.

  • Optional: If you want to enable automatic resizing of the control-plane nodes for the user cluster, include --enable-auto-resize.

    For more information, see Enable node resizing for the control-plane nodes of a user cluster.
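The sizing constraints above (vCPUs minimum 4, memory minimum 8192 MiB and a multiple of 4) can be expressed as a small pre-flight check. The defaults here mirror the ones documented for these flags; the check itself is an illustrative sketch, not part of the gcloud CLI:

```python
def check_control_plane(cpus=4, memory=8192, replicas=1):
    """Validate control-plane node sizing against the documented minimums."""
    if cpus < 4:
        raise ValueError("cpus must be at least 4")
    if memory < 8192 or memory % 4 != 0:
        raise ValueError("memory must be >= 8192 MiB and a multiple of 4")
    if replicas < 1:
        raise ValueError("replicas must be at least 1")
    return cpus, memory, replicas

print(check_control_plane())             # (4, 8192, 1) - the documented defaults
print(check_control_plane(8, 16384, 3))  # (8, 16384, 3) - an HA-style setup
```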

Features

Console

This section displays the features and operations that are enabled on the cluster.

The following are enabled automatically and can't be disabled:

The following are enabled by default, but you can disable them:

  • Enable vSphere CSI driver: Also called the vSphere Container Storage Plug-in. The Container Storage Interface (CSI) driver runs in a native Kubernetes cluster deployed in vSphere to provision persistent volumes on vSphere storage. For more information, see Using the vSphere Container Storage Interface driver.
  • Enable anti-affinity groups: VMware Distributed Resource Scheduler (DRS) anti-affinity rules are automatically created for your user cluster's nodes, causing them to be spread across at least 3 physical hosts in your data center. Make sure that your vSphere environment meets the requirements.

gcloud CLI

This section describes how to disable some vSphere features.

gcloud alpha container vmware clusters create USER_CLUSTER_NAME \
  --CLUSTER_BASICS_FLAGS \
  --NETWORKING_FLAGS \
  --LOAD_BALANCER_FLAGS \
  --CONTROL_PLANE_FLAGS \
  --disable-aag-config \
  --disable-vsphere-csi

Specify the following if needed:

  • Optional: --disable-aag-config: If you don't include this flag, VMware Distributed Resource Scheduler (DRS) anti-affinity rules are automatically created for your user cluster's nodes, causing them to be spread across at least 3 physical hosts in your data center. Make sure that your vSphere environment meets the requirements. If your cluster doesn't meet the requirements, include this flag.

  • Optional: --disable-vsphere-csi: If you don't include this flag, the vSphere Container Storage Interface (CSI) components are deployed in the user cluster. The CSI driver runs in a native Kubernetes cluster deployed in vSphere to provision persistent volumes on vSphere storage. For more information, see Using the vSphere Container Storage Interface driver. If you don't want to deploy the CSI components, include this flag.

Create the cluster and node pools

Your cluster must have at least one node pool for worker nodes. A node pool is a template for the groups of worker nodes created in this cluster.

In the console, you configure at least one node pool (or accept the default values) and then create the cluster. You can add additional node pools after the cluster is created. With the gcloud CLI, you create the cluster first and then add one or more node pools to the newly-created cluster.

Console

Your cluster will be created with at least one node pool. A node pool is a template for the groups of nodes created in this cluster. For more information, see Creating and managing node pools.

  1. In the Node pool defaults section, complete the following:

    1. Enter the Node pool name or accept "default-pool" as the name.
    2. Enter the number of vCPUs for each node in the pool (minimum 4 per user cluster worker node).
    3. Enter the memory size in mebibytes (MiB) for each node in the pool (minimum 8192 MiB per user cluster worker node and must be a multiple of 4).
    4. In the Nodes field, enter the number of nodes in the pool (minimum of 3). If you entered static IP addresses for the Node IPs in the Networking section, make sure that you entered enough IP addresses to accommodate these user cluster nodes.
    5. Select the OS image type: Ubuntu, Ubuntu Containerd, or COS.
    6. Enter the Boot disk size in gibibytes (GiB) (minimum 40 GiB).
    7. If you are using MetalLB as the load balancer, MetalLB must be enabled in at least one node pool. Either leave Use this node pool for MetalLB load balancing selected, or add another node pool to use for MetalLB.
  2. In the Node pool metadata (optional) section, if you want to add Kubernetes labels and taints, do the following:

    1. Click + Add Kubernetes Labels. Enter the Key and Value for the label. Repeat as needed.
    2. Click + Add Taint. Enter the Key, Value, and Effect for the taint. Repeat as needed.
  3. Click Verify and Complete to create the user cluster. It takes 15 minutes or more to create the user cluster. The console displays status messages as it verifies the settings and creates the cluster in your data center.

    If an error is encountered verifying the settings, the console displays an error message that should be clear enough for you to fix the configuration issue and try again to create the cluster.

    For more information about possible errors and how to fix them, see Troubleshoot user cluster creation in the Google Cloud console.

gcloud CLI

Before running the gcloud command to create the cluster, you might want to include --validate-only to validate the configuration that you specified in the flags. When you are ready to create the cluster, remove this flag and run the command. It takes 15 minutes or more to create the user cluster.

After the cluster is created, you need to create at least one node pool before deploying workloads.

gcloud alpha container vmware node-pools create NODE_POOL_NAME \
  --cluster=USER_CLUSTER_NAME \
  --project=FLEET_HOST_PROJECT_ID \
  --location=REGION \
  --image-type=IMAGE_TYPE \
  --boot-disk-size=BOOT_DISK_SIZE \
  --cpus=vCPUS \
  --memory=MEMORY \
  --replicas=NODES \
  --enable-load-balancer

Replace the following:

  • NODE_POOL_NAME: A name of your choice for the node pool. The name must:

    • contain at most 40 characters
    • contain only lowercase alphanumeric characters or a hyphen (-)
    • start with an alphabetic character
    • end with an alphanumeric character
  • USER_CLUSTER_NAME: The name of the newly-created user cluster.

  • FLEET_HOST_PROJECT_ID: The ID of the project that the cluster is registered to.

  • REGION: The Google Cloud location that you specified when you created the cluster.

  • IMAGE_TYPE: The type of OS image to run on the VMs in the node pool. Set to one of the following: ubuntu_containerd or cos.

  • BOOT_DISK_SIZE: The size of the boot disk in gibibytes (GiB) for each node in the pool. The minimum is 40 GiB.

  • vCPUS: The number of vCPUs for each node in the node pool. The minimum is 4.

  • MEMORY: The memory size in mebibytes (MiB) for each node in the pool. The minimum is 8192 MiB per user cluster worker node and the value must be a multiple of 4.

  • NODES: The number of nodes in the node pool. The minimum is 3.

  • If you are using MetalLB as the load balancer, optionally, include --enable-load-balancer if you want to allow the MetalLB speaker to run on the nodes in the pool. MetalLB must be enabled in at least one node pool. If you don't include this flag, you must create another node pool to use for MetalLB.

For information about optional flags, see Add a node pool.
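The node pool naming rules above translate naturally into a regular expression. The check below is an illustration of the documented rules, not code taken from the gcloud CLI:

```python
import re

# At most 40 characters, lowercase alphanumerics or hyphens only,
# starts with an alphabetic character, ends with an alphanumeric character.
NODE_POOL_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,38}[a-z0-9]$|^[a-z]$")

def is_valid_node_pool_name(name):
    return bool(NODE_POOL_NAME_RE.match(name))

print(is_valid_node_pool_name("default-pool"))  # True
print(is_valid_node_pool_name("Pool-1"))        # False (uppercase letter)
print(is_valid_node_pool_name("-pool"))         # False (starts with a hyphen)
```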

Connect to the user cluster

When you create a user cluster in the console, the cluster is configured with the Kubernetes role-based access control (RBAC) policies so that you can log in to the cluster using your Google Cloud identity. When you create a user cluster with the gcloud CLI, by default you are granted these RBAC policies if you don't include the --admin-users flag. If you include --admin-users to designate another user as an administrator, you override the default and you need to include both your email address and the email address of the other administrator. For more information about the required IAM and RBAC policies, see Set up Google identity authentication.

All clusters have a canonical endpoint. The endpoint exposes the Kubernetes API server that kubectl and other services use to communicate with your cluster control plane over TCP port 443. This endpoint is not accessible on the public internet. If you have access to your cluster's private endpoint through your VPC, you can connect directly to the private endpoint and generate a kubeconfig file directly. Otherwise, you can use the Connect gateway. In this case kubectl uses Connect, which then securely forwards the traffic to the private endpoint on your behalf.

To access the user cluster from the command line, you need a kubeconfig file. There are two ways to get a kubeconfig file:

  • Use the Connect gateway to access the cluster from a computer that has the Google Cloud CLI installed on it. The Connect gateway lets you easily and securely manage your clusters. For more information, see Connecting to registered clusters with the Connect gateway.

  • For direct access to private endpoints, create a kubeconfig file on your admin workstation and manage the cluster from your admin workstation.

Be sure to wait until the console indicates that the user cluster status is healthy.

Connect gateway

  1. Either initialize the gcloud CLI for use with the fleet host project, or run the following commands to log in with your Google account, set your fleet host project as the default, and update components:

    gcloud auth login
    gcloud config set project FLEET_HOST_PROJECT_ID
    gcloud components update
    
  2. Fetch the cluster credentials used to interact with Connect gateway. In the following command, replace MEMBERSHIP_NAME with your cluster's name. In Anthos clusters on VMware, the membership name is the same as the cluster name.

    gcloud container fleet memberships get-credentials MEMBERSHIP_NAME
    

    This command returns a special Connect gateway-specific kubeconfig that lets you connect to the cluster through the gateway.

    After you have the necessary credentials, you can run commands using kubectl as you normally would for any Kubernetes cluster, and you don't need to specify the name of the kubeconfig file, for example:

    kubectl get namespaces
    

Admin workstation

To create the user cluster kubeconfig file on your admin workstation, run the following command to save a new kubeconfig file for the user cluster locally. Replace the following:

  • CLUSTER_NAME: the name of the newly-created user cluster
  • ADMIN_CLUSTER_KUBECONFIG: the path to the admin cluster kubeconfig file
  • USER_CLUSTER_KUBECONFIG: the name of the user cluster kubeconfig file that the command outputs
kubectl get secret admin \
  --kubeconfig ADMIN_CLUSTER_KUBECONFIG \
  -n CLUSTER_NAME \
  -o=jsonpath='{.data.admin\.conf}' | base64 -d > USER_CLUSTER_KUBECONFIG

After the file has been saved, you can begin accessing the user cluster using kubectl on the admin workstation, for example:

kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get namespaces

What's next