Sole-tenant nodes

This page describes sole-tenant nodes. For information about how to provision VMs on sole-tenant nodes, see Provisioning VMs on sole-tenant nodes.

Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware. VMs running on sole-tenant nodes can use the same Compute Engine features as other VMs, including transparent scheduling and block storage, but with an added layer of hardware isolation. To give you full control over the VMs on the physical server, each sole-tenant node maintains a one-to-one mapping to the physical server that is backing the node.

Sole-tenancy is appropriate for specific types of workloads. For example:

  • Gaming workloads with performance requirements might benefit from isolation on their own hardware.
  • Finance or healthcare workloads might have security and compliance requirements.
  • Windows workloads might have requirements related to licensing.

Within a sole-tenant node, you can provision multiple VMs on machine types of various sizes, which lets you efficiently use the underlying resources of the dedicated host hardware. Also, because you aren't sharing the host hardware with other projects, you can meet security or compliance requirements with workloads that require physical isolation from other workloads or VMs. If your workload requires sole tenancy only temporarily, you can modify VM tenancy as necessary.
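For example, you can change a VM's tenancy with the gcloud CLI. The following is a minimal sketch, assuming a stopped VM named my-vm and a node group named my-node-group (all names and the zone are placeholders):

    # Stop the VM first; tenancy can only be changed while it is stopped.
    gcloud compute instances stop my-vm --zone=us-central1-a

    # Move the VM onto a sole-tenant node group.
    gcloud compute instances set-scheduling my-vm \
        --zone=us-central1-a \
        --node-group=my-node-group

    # Later, clear the node affinities to return the VM to multi-tenant
    # hardware, then restart it.
    gcloud compute instances set-scheduling my-vm \
        --zone=us-central1-a \
        --clear-node-affinities

    gcloud compute instances start my-vm --zone=us-central1-a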

Sole-tenant nodes can help you meet dedicated hardware requirements for Bring Your Own License (BYOL) scenarios that require per-core or per-processor licenses. When you use sole-tenant nodes, you have some visibility into the underlying hardware, which lets you track core and processor usage. To track this usage, Compute Engine reports the ID of the physical server on which a VM is scheduled. Then, by using Cloud Logging, you can view the historical server usage of a VM.

Through a configurable maintenance policy, sole-tenant nodes let you control the behavior of VMs during host maintenance events. The maintenance policy lets you specify whether VMs provisioned on sole-tenant nodes maintain affinity with a specific physical server, or whether VMs are moved within a fixed group of physical servers.

Node templates and node types

A node template is a regional resource that defines the properties for each node in a node group, and any properties that you define on the node template are immutably copied to each node in a node group created from the template. When you create a node template, specify a node type, and optionally specify node affinity labels. You can only specify node affinity labels on a node template; you can't specify node affinity labels on a node group.
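For example, the following sketch creates a node template with a custom node affinity label; the template name, region, and label are placeholders:

    # Create a regional node template that uses the n1-node-96-624 node
    # type and attaches a custom affinity label to every node created
    # from it.
    gcloud compute sole-tenancy node-templates create my-template \
        --region=us-central1 \
        --node-type=n1-node-96-624 \
        --node-affinity-labels=workload=sensitive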

The sole-tenant node type, which you must specify for each node template, defines the total number of vCPUs and amount of memory for each node of that type. For example, the n1-node-96-624 node type has 96 vCPUs and 624 GB of memory, so nodes of this type can accommodate VMs whose machine types total up to 96 vCPUs and 624 GB of memory. A node type applies to each individual node within a node group, not to the node group as a whole, so if you create a node group with two nodes of the n1-node-96-624 node type, each node is allocated 96 vCPUs and 624 GB of memory.

Depending on your workload requirements, you might also fill the node with multiple smaller VMs running on machine types of various sizes, including predefined machine types, custom machine types, and machine types with extended memory. When a node is full, you cannot schedule additional instances on that node.

To see a list of the node types available for your project, run the gcloud compute sole-tenancy node-types list command or the nodeTypes.list REST request. Occasionally, Compute Engine replaces an older node type with a newer one. When that happens, you can't create additional node groups from templates that specify the replaced node type, so review and update any existing node templates that reference the retired node type.
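For example:

    # List the sole-tenant node types available to your project.
    gcloud compute sole-tenancy node-types list

    # Optionally narrow the output to one zone with a standard filter.
    gcloud compute sole-tenancy node-types list --filter="zone:us-central1-a"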

Node groups and VM provisioning

Sole-tenant node templates define the properties of a node group, and you must create a node template before creating a node group in a Google Cloud zone. When you create a group, specify the number of nodes and the maintenance policy for the VMs on the node group. A node group can have zero or more nodes; for example, you can reduce the number of nodes in a node group to zero when you don't need to run any VMs on its nodes, or you can enable the node group autoscaler to manage the size of the group automatically.
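For example, the following sketch creates a two-node group from the template shown earlier; the group name, template name, and zone are placeholders:

    # Create a node group with two nodes from an existing node template.
    gcloud compute sole-tenancy node-groups create my-node-group \
        --zone=us-central1-a \
        --node-template=my-template \
        --target-size=2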

Before provisioning VMs on sole-tenant nodes, you must create a sole-tenant node group. A node group is a homogeneous set of sole-tenant nodes in a specific zone. Node groups can contain multiple VMs running on machine types of various sizes, as long as the machine type has 2 or more vCPUs.

When you create a node group, you can enable autoscaling so that the size of the group adjusts automatically to meet the requirements of your workload. If your workload requirements are static, you can instead specify the size of the node group manually.
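For example, the following sketch enables the autoscaler on an existing node group; the group name, zone, and size bounds are placeholders:

    # Let Compute Engine scale the group between 1 and 5 nodes.
    gcloud compute sole-tenancy node-groups update my-node-group \
        --zone=us-central1-a \
        --autoscaler-mode=on \
        --min-nodes=1 \
        --max-nodes=5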

After creating a node group, you can provision VMs on the group or on a specific node within the group. For further control, use node affinity labels to schedule VMs on any node with matching affinity labels.
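For example, the following sketch provisions VMs with placeholder names, first on any node in a group and then on one specific node:

    # Provision a VM on any node in the node group.
    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=n1-standard-8 \
        --node-group=my-node-group

    # Or pin a VM to one specific node in the group.
    gcloud compute instances create my-other-vm \
        --zone=us-central1-a \
        --machine-type=n1-standard-8 \
        --node=my-node-group-0001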

After you've provisioned VMs on node groups, and optionally assigned affinity labels to provision VMs on specific node groups or nodes, consider labeling your resources to help manage your VMs. Labels are key-value pairs that can help you categorize your VMs so that you can view them in aggregate for reasons such as billing. For example, you can use labels to mark the role of a VM, its tenancy, the license type, or its location.
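For example, the following sketch applies placeholder labels to an existing VM:

    # Label a VM so that you can view it in aggregate, for example in
    # billing exports; keys and values are placeholders.
    gcloud compute instances add-labels my-vm \
        --zone=us-central1-a \
        --labels=role=database,tenancy=sole-tenant,license-type=byol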

Maintenance policies

To handle different workload requirements, you can specify a maintenance policy for VMs when you create a node group. The sole-tenant node maintenance policy lets you configure how and whether VMs on node groups migrate during host maintenance events. The policy that you choose might depend on, for example, your licensing or compliance requirements, or you might want a configuration that limits your usage of physical servers. With all of these maintenance policies, your VMs remain on dedicated hardware. The following list describes the available policies, and the sketch after the list shows how to set one:

  • Default: VMs on node groups configured with this policy follow traditional maintenance behavior for non-sole-tenant VMs. That is, depending on the VM's on-host maintenance setting, VMs live migrate to a new sole-tenant node in the node group before a host maintenance event, and this new sole-tenant node runs only your project's VMs. This policy doesn't restrict migration to a fixed pool of physical servers, and is recommended for general workloads that have no physical server requirements and that don't require existing licenses.

  • Restart in place: VMs terminate and then restart on the same physical server after a maintenance event. Consider this policy if your workloads don't require live migration, can tolerate approximately one hour of downtime every 4 to 6 weeks for host maintenance, and must maintain affinity with a single physical server.

  • Migrate within node group: Depending on the VM's on-host maintenance setting, VMs live migrate to another node in the node group before a host maintenance event. Unlike the default policy, these migrations occur within a fixed set of physical servers, which helps limit the number of unique physical servers used by the VM. To ensure that there is sufficient capacity for migrating VMs, Compute Engine reserves 1 node for every 20 sole-tenant nodes. Consider this policy if you must live migrate your workloads because they can't tolerate downtime, if your workloads have core- or processor-based licensing requirements, and if you are willing to provision additional sole-tenant nodes.
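You choose the maintenance policy when you create the node group. The following is a sketch with placeholder names:

    # Create a node group whose VMs restart in place on the same physical
    # server after maintenance. The other accepted values for this flag
    # are "default" and "migrate-within-node-group".
    gcloud compute sole-tenancy node-groups create my-restart-group \
        --zone=us-central1-a \
        --node-template=my-template \
        --target-size=2 \
        --maintenance-policy=restart-in-place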

Hardware failure

Rarely, a server might experience a critical hardware failure. If this happens, Compute Engine retires the physical server and its unique identifier, revokes access to the physical server, assigns a replacement node with a new unique identifier, and moves your VMs onto the replacement node. Depending on the configuration of your sole-tenant nodes, Compute Engine might restart your VMs.

Node affinity and anti-affinity

Sole-tenant nodes ensure that your VMs do not share host hardware with VMs from other projects. However, you still might want to group several workloads together on the same sole-tenant node or isolate your workloads from one another on different nodes. For example, to help meet some compliance requirements, you might need to use affinity labels to separate sensitive workloads from non-sensitive workloads.

When you create a VM, you request sole-tenancy by specifying node affinity or anti-affinity, referencing one or more node affinity labels. You specify custom node affinity labels when you create a node template, and Compute Engine automatically includes some default affinity labels on each node. By specifying affinity when you create a VM, you can schedule VMs together on a specific node or nodes in a node group. By specifying anti-affinity when you create a VM, you can ensure that certain VMs are not scheduled together on the same node or nodes in a node group.
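For example, with the gcloud CLI you express affinity and anti-affinity in a JSON file of affinity entries. The following sketch, using placeholder label values and group names, saves this as node-affinity.json; it requests nodes labeled workload=sensitive while keeping the VM off a named node group:

    [
      {
        "key": "workload",
        "operator": "IN",
        "values": ["sensitive"]
      },
      {
        "key": "compute.googleapis.com/node-group-name",
        "operator": "NOT_IN",
        "values": ["general-purpose-group"]
      }
    ]

Then reference the file when you create the VM:

    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=n1-standard-8 \
        --node-affinity-file=node-affinity.json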

Node affinity labels are key-value pairs assigned to nodes, and are inherited from a node template. Affinity labels let you:

  • Control how individual VM instances are assigned to nodes.
  • Control how VM instances created from a template, such as those created by a managed instance group, are assigned to nodes.
  • Group sensitive VM instances on specific nodes or node groups, separate from other VMs.

Default affinity labels

Compute Engine assigns two default affinity labels to each node:

  • A label for the node group name:
    • Key: compute.googleapis.com/node-group-name
    • Value: Name of the node group.
  • A label for the node name:
    • Key: compute.googleapis.com/node-name
    • Value: Name of the individual node.
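These default labels let you target a specific node or node group without defining custom labels. For example, a hypothetical node-affinity.json entry that pins a VM to one node (with a placeholder node name) might look like this:

    [
      {
        "key": "compute.googleapis.com/node-name",
        "operator": "IN",
        "values": ["my-node-group-0001"]
      }
    ]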

Custom affinity labels

You can create custom node affinity labels when you create a node template. These affinity labels are assigned to all nodes in node groups created from the node template. You can't add more custom affinity labels to nodes in a node group after the node group has been created.

For information about how to use affinity labels, see Configuring node affinity.

Pricing

To help you minimize the cost of your sole-tenant nodes, Compute Engine provides committed use discounts and sustained use discounts. Also, because you are already billed for the vCPUs and memory of your sole-tenant nodes, you don't pay extra for the VMs that run on them.

Availability

Sole-tenant nodes are available in select zones. To ensure high availability, schedule VMs on sole-tenant nodes in different zones.

Restrictions

What's next