Configure VM-Host Group affinity
================================

This document gives an example of how to configure a Google Distributed Cloud cluster
to use
[VM-Host affinity](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-2FB90EF5-7733-4095-8B66-F10D6C57B820.html).
VM-Host Group affinity is one of the mechanisms that Google Distributed Cloud provides
to ensure high availability. With VM-Host Group affinity, you create groups of
physical ESXi hosts. Then you configure your cluster to associate VM groups with
host groups.
For example, you could configure all VMs in one node pool to run on a particular
host group. And you could configure all VMs in a second node pool to run on a
different host group. You could then treat each node pool as a failure domain.
To differentiate the failure domains, you could add labels to the VMs in
the various node pools.
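For example, a workload could use such labels to spread its replicas across the
failure domains. The following Deployment is an illustrative sketch only, not part
of the product's documented configuration: it assumes the node-pool labels (here
the key `failuredomain`) are applied as Kubernetes node labels, and the workload
name, app label, and image are placeholders.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical workload
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: failuredomain      # node label key assumed to be set per node pool
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: app
        image: registry.example.com/my-app:1.0   # placeholder image
```

With this constraint, the scheduler keeps the number of replicas in each failure
domain within one of the others, so losing one host group affects at most about
half the replicas.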
Before you begin
----------------
For this exercise, you need to have at least six ESXi hosts in your vSphere
environment.
Create host groups
------------------
Create two or more host DRS groups in your vSphere environment. For this
exercise, two host groups with three hosts each would be appropriate. For
instructions, see
[Create a Host DRS Group](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-D89CEBB5-5C43-4FAC-83D1-E7CB576FF86D.html).
Create a user cluster
---------------------
This section gives an example of how to create a user cluster that uses VM-Host
Group affinity. The cluster in this example uses Controlplane V2. The cluster has a
high-availability control plane, so there are three control-plane nodes. In
addition to the control-plane nodes, there are six worker nodes: three in one
node pool and three in a second node pool. All nodes use static IP addresses.
Start by following the instructions in
[Create a user cluster (Controlplane V2)](/anthos/clusters/docs/on-prem/1.16/how-to/create-user-cluster-controlplane-v2).

As you fill in your user cluster configuration file, specify two node pools for
worker nodes. For each node pool, set `replicas` to `3`, and provide the name of
an existing host group.

Example configuration file
--------------------------

Here is an example of an IP block file and a portion of a user cluster
configuration file.

#### user-ipblock.yaml

```
blocks:
  - netmask: 255.255.255.0
    gateway: 172.16.21.1
    ips:
    - ip: 172.16.21.2
    - ip: 172.16.21.3
    - ip: 172.16.21.4
    - ip: 172.16.21.5
    - ip: 172.16.21.6
    - ip: 172.16.21.7
    - ip: 172.16.21.8
```

#### user-cluster.yaml

```
apiVersion: v1
kind: UserCluster
...
network:
  hostConfig:
    dnsServers:
    - "203.0.113.2"
    - "198.51.100.2"
    ntpServers:
    - "216.239.35.4"
  ipMode:
    type: "static"
    ipBlockFilePath: "user-ipblock.yaml"
  controlPlaneIPBlock:
    netmask: "255.255.255.0"
    gateway: "172.16.21.1"
    ips:
    - ip: "172.16.21.9"
      hostname: "cp-vm-1"
    - ip: "172.16.21.10"
      hostname: "cp-vm-2"
    - ip: "172.16.21.11"
      hostname: "cp-vm-3"
loadBalancer:
  vips:
    controlPlaneVIP: "172.16.21.40"
    ingressVIP: "172.16.21.30"
  kind: MetalLB
  metalLB:
    addressPools:
    - name: "address-pool-1"
      addresses:
      - "172.16.21.30-172.16.21.39"
...
enableControlplaneV2: true
masterNode:
  cpus: 4
  memoryMB: 8192
  replicas: 3
nodePools:
- name: "worker-pool-1"
  enableLoadBalancer: true
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-1"
  labels:
    failuredomain: "failuredomain-1"
- name: "worker-pool-2"
  replicas: 3
  vsphere:
    hostgroups:
    - "hostgroup-2"
  labels:
    failuredomain: "failuredomain-2"
...
```

These are the important points to understand in the preceding example:
- The static IP addresses for the worker nodes are specified in an IP block
  file. The IP block file has seven addresses even though there are only six
  worker nodes. The extra IP address is needed during cluster upgrade, update,
  and auto repair.

- The static IP addresses for the three control-plane nodes are specified in the
  `network.controlPlaneIPBlock` section of the user cluster configuration file.
  There is no need for an extra IP address in this block.

- The `masterNode.replicas` field is set to `3`, so there will be three
  control-plane nodes.

- A cluster controller will create a VM DRS group that has the three nodes in
  the `worker-pool-1` node pool. A controller will also create a
  [VM-Host affinity rule](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-2FB90EF5-7733-4095-8B66-F10D6C57B820.html)
  that ensures nodes in `worker-pool-1` will run on hosts that are in
  `hostgroup-1`. The nodes in `worker-pool-1` have the label
  `failuredomain: "failuredomain-1"`.

- A cluster controller will create a VM DRS group that has the three nodes in
  the `worker-pool-2` node pool. A controller will also create a
  VM-Host affinity rule that ensures nodes in `worker-pool-2` will run on hosts
  that are in `hostgroup-2`. The nodes in `worker-pool-2` have the label
  `failuredomain: "failuredomain-2"`.
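As an illustrative use of these node labels, a workload could be pinned to a
single failure domain with a node selector. This is a minimal sketch, not part
of the cluster configuration itself; the Pod name and image are placeholders.

```
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod        # hypothetical workload
spec:
  nodeSelector:
    failuredomain: "failuredomain-1"   # schedule only onto worker-pool-1 nodes
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # placeholder image
```

Because the nodes in `worker-pool-1` run only on hosts in `hostgroup-1`, this
pins the Pod to that host group as well.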
Continue creating your user cluster as described in
[Create a user cluster (Controlplane V2)](/anthos/clusters/docs/on-prem/1.16/how-to/create-user-cluster-controlplane-v2).

Last updated 2025-09-04 UTC.