Preview: This feature is subject to the "Pre-GA Offerings Terms" in the General
Service Terms section of the Service Specific Terms. Pre-GA features are
available "as is" and might have limited support. For more information, see the
launch stage descriptions.

This document gives an example of how to configure a GKE Enterprise cluster on
VMware to use VM-Host Group affinity.
VM-Host Group affinity is one of the mechanisms that Google Distributed Cloud provides
to ensure high availability. With VM-Host Group affinity, you create groups of
physical ESXi hosts. Then you configure your cluster to associate VM groups with
host groups.
For example, you could configure all VMs in one node pool to run on a particular
host group, and all VMs in a second node pool to run on a different host group.
You could then treat each node pool as a failure domain. To differentiate the
failure domains, you could add labels to the VMs in the various node pools, as
the configuration excerpt below illustrates.
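In the user cluster configuration file, the nodePools section is where you
associate each pool with a host group and label its nodes with a failure
domain. The following is a trimmed excerpt of the full example configuration
shown later in this document:

```
nodePools:
- name: "worker-pool-1"
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-1"
  labels:
    failuredomain: "failuredomain-1"
- name: "worker-pool-2"
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-2"
  labels:
    failuredomain: "failuredomain-2"
```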
Before you begin
For this exercise, you need to have at least six ESXi hosts in your vSphere
environment.
Create host groups
Create two or more host DRS groups in your vSphere environment. For this
exercise, two host groups with three hosts each would be appropriate. For
instructions, see
Create a Host DRS Group.
Create a user cluster
This section gives an example of how to create a user cluster that uses VM-Host
Group affinity. The cluster in this example uses Controlplane V2. The cluster has a
high-availability control plane, so there are three control-plane nodes. In
addition to the control-plane nodes, there are six worker nodes: three in one
node pool and three in a second node pool. All nodes use static IP addresses.
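Start by following the instructions in
Create a user cluster (Controlplane V2).

As you fill in your user cluster configuration file, specify two node pools for
worker nodes. For each node pool, set replicas to 3, and provide the name of an
existing host group.

Example configuration file

Here is an example of an IP block file and a portion of a user cluster
configuration file.

user-ipblock.yaml

```
blocks:
  - netmask: 255.255.255.0
    gateway: 172.16.21.1
    ips:
    - ip: 172.16.21.2
    - ip: 172.16.21.3
    - ip: 172.16.21.4
    - ip: 172.16.21.5
    - ip: 172.16.21.6
    - ip: 172.16.21.7
    - ip: 172.16.21.8
```

user-cluster.yaml

```
apiVersion: v1
kind: UserCluster
...
network:
  hostConfig:
    dnsServers:
    - "203.0.113.2"
    - "198.51.100.2"
    ntpServers:
    - "216.239.35.4"
  ipMode:
    type: "static"
    ipBlockFilePath: "user-ipblock.yaml"
  controlPlaneIPBlock:
    netmask: "255.255.255.0"
    gateway: "172.16.21.1"
    ips:
    - ip: "172.16.21.9"
      hostname: "cp-vm-1"
    - ip: "172.16.21.10"
      hostname: "cp-vm-2"
    - ip: "172.16.21.11"
      hostname: "cp-vm-3"
loadBalancer:
  vips:
    controlPlaneVIP: "172.16.21.40"
    ingressVIP: "172.16.21.30"
  kind: MetalLB
  metalLB:
    addressPools:
    - name: "address-pool-1"
      addresses:
      - "172.16.21.30-172.16.21.39"
...
enableControlplaneV2: true
masterNode:
  cpus: 4
  memoryMB: 8192
  replicas: 3
nodePools:
- name: "worker-pool-1"
  enableLoadBalancer: true
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-1"
  labels:
    failuredomain: "failuredomain-1"
- name: "worker-pool-2"
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-2"
  labels:
    failuredomain: "failuredomain-2"
...
```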
These are the important points to understand in the preceding example:
- The static IP addresses for the worker nodes are specified in an IP block
  file. The IP block file has seven addresses even though there are only six
  worker nodes. The extra IP address is needed during cluster upgrade, update,
  and auto repair.

- The static IP addresses for the three control-plane nodes are specified in
  the network.controlPlaneIPBlock section of the user cluster configuration
  file. There is no need for an extra IP address in this block.

- The masterNode.replicas field is set to 3, so there will be three
  control-plane nodes.

- A cluster controller will create a VM DRS group that has the three nodes in
  the worker-pool-1 node pool. A controller will also create a VM-Host affinity
  rule that ensures nodes in worker-pool-1 will run on hosts that are in
  hostgroup-1. The nodes in worker-pool-1 have the label
  failuredomain: "failuredomain-1".

- A cluster controller will create a VM DRS group that has the three nodes in
  the worker-pool-2 node pool. A controller will also create a VM-Host affinity
  rule that ensures nodes in worker-pool-2 will run on hosts that are in
  hostgroup-2. The nodes in worker-pool-2 have the label
  failuredomain: "failuredomain-2".
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-03 UTC."],[[["\u003cp\u003eThis document demonstrates configuring a GKE Enterprise cluster on VMware to utilize VM-Host Group affinity for enhanced high availability.\u003c/p\u003e\n"],["\u003cp\u003eVM-Host Group affinity allows grouping physical ESXi hosts and associating VM groups with these host groups, enabling the designation of node pools as failure domains.\u003c/p\u003e\n"],["\u003cp\u003eTo implement this, at least six ESXi hosts are required, divided into two or more host DRS groups within the vSphere environment.\u003c/p\u003e\n"],["\u003cp\u003eCreating a user cluster with Controlplane V2 involves specifying node pools, setting replicas, and linking each pool to a designated host group in the configuration file.\u003c/p\u003e\n"],["\u003cp\u003eThe example provides a configuration for two node pools, each with three worker nodes, along with three control-plane nodes, and static IP address management using an IP block file and the configuration file, as well as the use of labels to differentiate failure domains.\u003c/p\u003e\n"]]],[],null,["# Configure VM-Host Group affinity\n\n\u003cbr /\u003e\n\n|\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nThis document gives an example of how to configure a GKE Enterprise cluster on\nVMware to use\n[VM-Host affinity](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-2FB90EF5-7733-4095-8B66-F10D6C57B820.html).\n\nVM-Host Group affinity is one of the mechanisms that Google Distributed Cloud provides\nto ensure high availability. With VM-Host Group affinity, you create groups of\nphysical ESXi hosts. Then you configure your cluster to associate VM groups with\nhost groups.\n\nFor example, you could configure all VMs in one node pool to run on a particular\nhost group. And you could configure all VMs in a second node pool to run on a\ndifferent host group. You could then treat each node pool as a failure domain.\nTo differentiate the failure domains, you could add labels to the VMs in\nthe various node pools.\n\nBefore you begin\n----------------\n\nFor this exercise, you need to have at least six ESXi hosts in your vSphere\nenvironment.\n\nCreate host groups\n------------------\n\nCreate two or more host DRS groups in your vSphere environment. For this\nexercise, two host groups with three hosts each would be appropriate. For\ninstructions, see\n[Create a Host DRS Group](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-D89CEBB5-5C43-4FAC-83D1-E7CB576FF86D.html).\n\nCreate a user cluster\n---------------------\n\nThis section gives an example of how to create a user cluster that uses VM-Host\nGroup affinity. The cluster in this example uses Controlplane V2. 
The cluster has a\nhigh-availability control plane, so there are three control-plane nodes. In\naddition to the control-plane nodes, there are six worker nodes: three in one\nnode pool and three in a second node pool. All nodes use static IP addresses.\n\nStart by following the instructions in\n[Create a user cluster (Controlplane V2)](/anthos/clusters/docs/on-prem/1.15/how-to/create-user-cluster-controlplane-v2).\n\nAs you fill in your user cluster configuration file:\n\n- Specify two node pools for worker nodes. For each node pool, set `replicas` to `3`, and provide the name of an existing host group.\n\nExample configuration file\n--------------------------\n\nHere is an example of an IP block file and a portion of a user cluster\nconfiguration file.\n\n#### user-ipblock.yaml\n\n```\nblocks:\n - netmask: 255.255.255.0\n gateway: 172.16.21.1\n ips:\n - ip: 172.16.21.2\n - ip: 172.16.21.3\n - ip: 172.16.21.4\n - ip: 172.16.21.5\n - ip: 172.16.21.6\n - ip: 172.16.21.7\n - ip: 172.16.21.8\n\n```\n\n#### user-cluster-yaml\n\n```\napiVersion: v1\nkind: UserCluster\n...\nnetwork:\n hostConfig:\n dnsServers:\n - \"203.0.113.2\"\n - \"198.51.100.2\"\n ntpServers:\n - \"216.239.35.4\"\n ipMode:\n type: \"static\"\n ipBlockFilePath: \"user-ipblock.yaml\"\n controlPlaneIPBlock:\n netmask: \"255.255.255.0\"\n gateway: \"172.16.21.1\"\n ips:\n - ip: \"172.16.21.9\"\n hostname: \"cp-vm-1\"\n - ip: \"172.16.21.10\"\n hostname: \"cp-vm-2\"\n - ip: \"172.16.21.11\"\n hostname: \"cp-vm-3\"\nloadBalancer:\n vips:\n controlPlaneVIP: \"172.16.21.40\"\n ingressVIP: \"172.16.21.30\"\n kind: MetalLB\n metalLB:\n addressPools:\n - name: \"address-pool-1\"\n addresses:\n - \"172.16.21.30-172.16.21.39\"\n...\nenableControlplaneV2: true\nmasterNode:\n cpus: 4\n memoryMB: 8192\n replicas: 3\nnodePools:\n- name: \"worker-pool-1\"\n enableLoadBalancer: true\n replicas: 3\n vsphere:\n hostgroups:\n - \"hostgroup-1\"\n labels:\n failuredomain: \"failuredomain-1\"\n- name: \"worker-pool-2\"\n replicas: 3\n vsphere:\n hostgroups:\n - \"hostgroup-2\"\n labels:\n failuredomain: \"failuredomain-2\"\n...\n```\n\nThese are the important points to understand in the preceding example:\n\n- The static IP addresses for the worker nodes are specified in an IP block\n file. The IP block file has seven addresses even though there are only six\n worker nodes. The extra IP address is needed during cluster upgrade, update,\n and auto repair.\n\n- The static IP addresses for the three control-plane nodes are specified in the\n `network.controlPlaneIPBlock` section of the user cluster configuration file.\n There is no need for an extra IP address in this block.\n\n- The `masterNode.replicas` field is set to `3`, so there will be three\n control-plane nodes.\n\n- A cluster controller will create a VM DRS group that has the three nodes in\n the `worker-pool-1` node pool. A controller will also create a\n [VM-Host affinity rule](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-2FB90EF5-7733-4095-8B66-F10D6C57B820.html)\n that ensures nodes in `worker-pool-1` will run on hosts that are in\n `hostgroup-1`. The nodes in `worker-pool-1` have the label\n `failuredomain: \"failuredomain-1\"`\n\n- A cluster controller will create a VM DRS group that has the three nodes in\n the `worker-pool-2` node pool. A controller will also create a\n VM-Host affinity rule that ensures nodes in `worker-pool-2` will run on hosts\n that are in `hostgroup-2`. 
The nodes in `worker-pool-2` have the label\n `failuredomain: \"failuredomain-2\"`\n\nContinue creating your user cluster as described in\n[Create a user cluster (Controlplane V2)](/anthos/clusters/docs/on-prem/1.15/how-to/create-user-cluster-controlplane-v2)."]]