Provisioning VMs on sole-tenant nodes


This page describes how to provision VMs on sole-tenant nodes, which are physical servers that run VMs only from a single project. Before provisioning VMs on sole-tenant nodes, read the sole-tenant node overview.

Provisioning a VM on a sole-tenant node requires the following:

  1. Creating a sole-tenant node template. The sole-tenant node template specifies uniform properties for all of the sole-tenant nodes in a sole-tenant node group.

  2. Creating a sole-tenant node group using the previously created sole-tenant node template.

  3. Creating VMs and provisioning them on a sole-tenant node group.

Before you begin

Creating a sole-tenant node template

Sole-tenant node templates are regional resources that specify properties for sole-tenant node groups. You must create a node template before you create a node group.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Node templates.

  3. Click Create node template to begin creating a sole-tenant node template.

  4. Specify a Name for the node template.

  5. Specify a Region to create the node template in. You can use the node template to create node groups in any zone of this region.

  6. Specify the Node type for the sole-tenant nodes in node groups created based on this node template.

  7. Optionally add Node affinity labels. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  8. Click Create to finish creating your node template.

gcloud

Use the gcloud compute sole-tenancy node-templates create command to create a node template:

gcloud compute sole-tenancy node-templates create TEMPLATE_NAME \
  --node-type=NODE_TYPE \
  [--node-affinity-labels=AFFINITY_LABELS \]
  [--region=REGION]

Replace the following:

  • TEMPLATE_NAME: the name for the new node template.

  • NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the gcloud compute sole-tenancy node-types list command to get a list of the node types available in each zone.

  • AFFINITY_LABELS: the keys and values, [KEY=VALUE,...], for affinity labels. Affinity labels let you logically group nodes and node groups and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.
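For example, a command with hypothetical values (a template named prod-template, the n1-node-96-624 node type, and the us-central1 region) might look like the following:

```shell
# Hypothetical example: create a node template for n1-node-96-624 nodes
# in us-central1, labeled env=prod for later VM scheduling.
gcloud compute sole-tenancy node-templates create prod-template \
    --node-type=n1-node-96-624 \
    --node-affinity-labels=env=prod \
    --region=us-central1
```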

GPUs and local SSDs on sole-tenant nodes (Preview)

Sole-tenant node support for GPUs and local SSDs is in Preview.

  • To create a sole-tenant node that supports GPUs, specify support for GPUs in the sole-tenant node template by using the beta version of the gcloud tool, and include the --accelerator flag. Later, when you create a VM on a sole-tenant node, you will specify the number of GPUs to attach to the sole-tenant VM.

  • To create a sole-tenant node that supports local SSDs, specify support for local SSDs in the sole-tenant node template by using the beta version of the gcloud tool, and include the --disk flag and set the value of its type parameter equal to local-ssd. Later, when you create a VM on a sole-tenant node, you will specify the interface type for the local SSD.

gcloud beta compute sole-tenancy node-templates create TEMPLATE_NAME \
  --node-type=NODE_TYPE \
  [--node-affinity-labels=AFFINITY_LABELS \]
  [--region=REGION \]
  [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
  [--disk type=local-ssd,count=DISK_COUNT,size=DISK_SIZE]

Replace the following:

  • TEMPLATE_NAME: the name for the new node template.

  • NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the gcloud compute sole-tenancy node-types list command to get a list of the node types available in each zone.

  • AFFINITY_LABELS: the keys and values, [KEY=VALUE,...], for affinity labels. Affinity labels let you logically group nodes and node groups and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.

  • GPU_TYPE: the type of GPU for each sole-tenant node created based on this node template. For information on the zonal availability of GPUs, use the gcloud compute accelerator-types list command and choose a zone where the n1 sole-tenant node type is available. Depending on the zonal availability, set to one of:

    • nvidia-tesla-p100
    • nvidia-tesla-p4
    • nvidia-tesla-t4
    • nvidia-tesla-v100
  • GPU_COUNT: the number of GPUs for each sole-tenant node created based on this node template. Set to the value specified for the type of GPU as shown in the following table:

    GPU_TYPE GPU_COUNT
    nvidia-tesla-p100 4
    nvidia-tesla-p4 4
    nvidia-tesla-t4 4
    nvidia-tesla-v100 8
  • DISK_COUNT: the number of local SSD disks for each sole-tenant node. Set to 16 or 24.

  • DISK_SIZE: the optional partition size of each local SSD, in GB. The only supported partition size is 375, which is also the default.
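For example, a beta command with hypothetical values might request four T4 GPUs and sixteen 375 GB local SSD partitions for each node:

```shell
# Hypothetical example: a node template whose nodes each support four
# NVIDIA T4 GPUs and 16 local SSD partitions of 375 GB.
gcloud beta compute sole-tenancy node-templates create gpu-ssd-template \
    --node-type=n1-node-96-624 \
    --region=us-central1 \
    --accelerator type=nvidia-tesla-t4,count=4 \
    --disk type=local-ssd,count=16,size=375
```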

API

Use the nodeTemplates.insert method to create a node template:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/nodeTemplates

{
  "name": "TEMPLATE_NAME",
  "nodeType": "NODE_TYPE",
  "nodeAffinityLabels": {
    "KEY": "VALUE",
    ...
  }
}

Replace the following:

  • PROJECT_ID: the project ID.

  • REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.

  • TEMPLATE_NAME: the name for the new node template.

  • NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the nodeTypes.list method to get a list of the node types available in each zone.

  • KEY: the nodeAffinityLabels value that specifies the key portion of a node affinity label expressed as a key-value pair. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • VALUE: the nodeAffinityLabels value that specifies the value portion of a node affinity label key-value pair.
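One way to send this request is with curl, authenticating with your gcloud credentials. The project, region, and label values below are hypothetical:

```shell
# Hypothetical example: call nodeTemplates.insert with curl, using an
# access token from the gcloud tool.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "prod-template",
          "nodeType": "n1-node-96-624",
          "nodeAffinityLabels": {"env": "prod"}
        }' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/regions/us-central1/nodeTemplates"
```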

GPUs and local SSDs on sole-tenant nodes (Preview)

Sole-tenant node support for GPUs and local SSDs is in Preview.

  • To create a sole-tenant node that supports GPUs, specify support for GPUs in the sole-tenant node template by using the beta version of the API, and include the accelerators block in the request body. Later, when you create a VM on a sole-tenant node, you will specify the number of GPUs to attach to the sole-tenant VM.

  • To create a sole-tenant node that supports local SSDs, specify support for local SSDs in the sole-tenant node template by using the beta version of the API, including the disks block in the request body, and setting the value of diskType equal to local-ssd. Later, when you create a VM on a sole-tenant node, you will specify the interface type for the local SSD.

POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/nodeTemplates

{
  "name": "TEMPLATE_NAME",
  "nodeType": "NODE_TYPE",
  "nodeAffinityLabels": {
    "KEY": "VALUE",
    ...
  },
  "accelerators": [
    {
      "acceleratorType": "GPU_TYPE",
      "acceleratorCount": GPU_COUNT
    }
  ],
  "disks": [
    {
      "diskType": "local-ssd",
      "diskSizeGb": DISK_SIZE,
      "diskCount": DISK_COUNT
    }
  ]
}

Replace the following:

  • PROJECT_ID: the project ID.

  • REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.

  • TEMPLATE_NAME: the name for the new node template.

  • NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the nodeTypes.list method to get a list of the node types available in each zone.

  • KEY: the nodeAffinityLabels value that specifies the key portion of a node affinity label expressed as a key-value pair. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • VALUE: the nodeAffinityLabels value that specifies the value portion of a node affinity label key-value pair.

  • GPU_TYPE: the type of GPU for each sole-tenant node created based on this node template. For information on the zonal availability of GPUs, use the gcloud compute accelerator-types list command and choose a zone where the n1 sole-tenant node type is available. Depending on the zonal availability, set to one of:

    • nvidia-tesla-p100
    • nvidia-tesla-p4
    • nvidia-tesla-t4
    • nvidia-tesla-v100
  • GPU_COUNT: the number of GPUs for each sole-tenant node created based on this node template. Set to the value specified for the type of GPU as shown in the following table:

    GPU_TYPE GPU_COUNT
    nvidia-tesla-p100 4
    nvidia-tesla-p4 4
    nvidia-tesla-t4 4
    nvidia-tesla-v100 8
  • DISK_SIZE: the optional partition size of each local SSD, in GB. The only supported partition size is 375, which is also the default.

  • DISK_COUNT: the number of local SSD disks for each sole-tenant node. Set to 16 or 24.

Creating a sole-tenant node group

With the previously created sole-tenant node template, create a sole-tenant node group. A sole-tenant node group inherits properties specified by the sole-tenant node template and has additional values that you must specify.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Create node group to begin creating a node group.

  3. Specify a Name for the node group.

  4. Specify the Region for the node group. The console displays the node templates available in that region; you must have a node template in the selected region.

  5. Specify the Zone within the region to create the node group in.

  6. Specify the Node template to create the node group from. The selected node template is applied to the node group.

  7. Choose one of the following for the Autoscaling mode for the node group autoscaler:

    • Don't configure autoscale: Manually manage the size of the node group.

    • Autoscale: Have nodes automatically added to or removed from the node group.

    • Autoscale only out: Add nodes to the node group when extra capacity is required.

  8. Specify the Number of nodes for the group. If you enable the node group autoscaler, either specify a range for the size of the node group or specify the number of nodes for the group. You can manually change either value later.

  9. Set the Maintenance policy for the sole-tenant node group to one of the following values. The maintenance policy lets you configure the behavior of VMs on the node group during host maintenance events. For more information, see Maintenance policies.

    • Default
    • Restart in place
    • Migrate within node group
  10. Click Create to finish creating the node group.

gcloud

Run the gcloud compute sole-tenancy node-groups create command to create a node group based on a previously created node template:

gcloud compute sole-tenancy node-groups create GROUP_NAME \
  --zone=ZONE \
  --node-template=TEMPLATE_NAME \
  --target-size=TARGET_SIZE \
  --maintenance-policy=MAINTENANCE_POLICY \
  --autoscaler-mode=AUTOSCALER_MODE \
  --min-nodes=MIN_NODES \
  --max-nodes=MAX_NODES

Replace the following:

  • GROUP_NAME: the name for the new node group.

  • ZONE: the zone to create the node group in. This must be in the same region as the node template on which you are basing the node group.

  • TEMPLATE_NAME: the name of the node template to use to create this group.

  • TARGET_SIZE: the number of nodes to create in the group.

  • MAINTENANCE_POLICY: the maintenance policy for the node group. For more information, see Maintenance policies. This must be one of the following values:

    • default
    • restart-in-place
    • migrate-within-node-group
  • AUTOSCALER_MODE: the autoscaler policy for the node group. This must be one of:

    • off: manually manage the size of the node group.

    • on: have nodes automatically added to or removed from the node group.

    • only-scale-out: add nodes to the node group when extra capacity is required.

  • MIN_NODES: the minimum size of the node group. The default value is 0. This must be an integer less than or equal to MAX_NODES.

  • MAX_NODES: the maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to off.
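For example, the following command (with hypothetical names) creates an autoscaled node group that starts with two nodes and can scale between one and three:

```shell
# Hypothetical example: an autoscaled node group based on a template
# named prod-template, starting at 2 nodes, scaling between 1 and 3.
gcloud compute sole-tenancy node-groups create prod-group \
    --zone=us-central1-a \
    --node-template=prod-template \
    --target-size=2 \
    --maintenance-policy=default \
    --autoscaler-mode=on \
    --min-nodes=1 \
    --max-nodes=3
```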

API

Use the nodeGroups.insert method to create a node group based on a previously created node template:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups?initialNodeCount=TARGET_SIZE
{
  "nodeTemplate": "regions/REGION/nodeTemplates/TEMPLATE_NAME",
  "name": "GROUP_NAME",
  "maintenancePolicy": "MAINTENANCE_POLICY",
  "autoscalingPolicy": {
    "mode": "AUTOSCALER_MODE",
    "minNodes": MIN_NODES,
    "maxNodes": MAX_NODES
  }
}

Replace the following:

  • PROJECT_ID: the project ID.

  • ZONE: the zone to create the node group in. This must be in the same region as the node template on which you are basing the node group.

  • TARGET_SIZE: the number of nodes to create in the group.

  • REGION: the region to create the node group in. You must have a node template in the selected region.

  • TEMPLATE_NAME: the name of the node template to use to create this group.

  • GROUP_NAME: the name for the new node group.

  • MAINTENANCE_POLICY: the maintenance policy for the node group. This must be one of the following values:

    • DEFAULT
    • RESTART_IN_PLACE
    • MIGRATE_WITHIN_NODE_GROUP
  • AUTOSCALER_MODE: the autoscaler policy for the node group. This must be one of the following values:

    • OFF: manually manage the size of the node group.

    • ON: have nodes automatically added to or removed from the node group.

    • ONLY_SCALE_OUT: add nodes to the node group when extra capacity is required.

  • MIN_NODES: the minimum size of the node group. The default is 0. This must be an integer less than or equal to MAX_NODES.

  • MAX_NODES: the maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to OFF.

Provisioning a sole-tenant VM

After creating a node group based on a previously created node template, you can provision individual VMs on a sole-tenant node group.

To provision a VM on a specific node or node group that has affinity labels that match those you previously assigned to the node template, follow the standard procedure for creating a VM instance, and assign affinity labels to the VM.
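For example, assuming a hypothetical node template that carries the affinity label env=prod, you can pass a node affinity file when creating the VM:

```shell
# Hypothetical example: schedule a VM onto any sole-tenant node whose
# template carries the affinity label env=prod.
cat > node-affinity.json <<'EOF'
[
  {
    "key": "env",
    "operator": "IN",
    "values": ["prod"]
  }
]
EOF

gcloud compute instances create affinity-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --node-affinity-file=node-affinity.json
```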

Or, you can use the following procedure to provision a VM on a sole-tenant node from the node group details page. When you provision VMs this way, Compute Engine automatically assigns affinity labels that match the node group.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Node groups.

  3. Click the Name of the node group to provision a VM instance on, and then optionally, to provision a VM on a specific sole-tenant node, click the name of the specific sole-tenant node to provision the VM.

  4. Click Create instance to provision a VM instance on this node group. Note the values automatically applied for the Name, Region, and Zone, and modify them as necessary.

  5. Select a Machine configuration by specifying the Machine family, Series, and Machine type. Choose the Series that corresponds to the sole-tenant node type.

  6. Modify the Boot disk, Firewall, and other settings as necessary.

  7. Click Sole Tenancy, note the automatically assigned Node affinity labels, and use Browse to adjust as necessary.

  8. Click Management, and for On host maintenance, choose one of the following:

    • Migrate VM instance (recommended): VM migrated to another node in the node group during maintenance events.

    • Terminate: VM stopped during maintenance events.

  9. Choose one of the following for the Automatic restart:

    • On (recommended): Automatically restarts VMs if they are stopped for maintenance events.

    • Off: Does not automatically restart VMs after a maintenance event.

  10. Click Create to finish creating your sole-tenant VM.

gcloud

Use the gcloud compute instances create command to provision a VM on a sole-tenant node group:

gcloud compute instances create VM_NAME \
  [--zone=ZONE \]
  --image-family=IMAGE_FAMILY \
  --image-project=IMAGE_PROJECT \
  --node-group=GROUP_NAME \
  --machine-type=MACHINE_TYPE \
  [--maintenance-policy=MAINTENANCE_POLICY \]
  [--restart-on-failure]

The --restart-on-failure flag indicates whether sole-tenant VMs restart after stopping. This flag is enabled by default. Use --no-restart-on-failure to disable.

Replace the following:

  • VM_NAME: the name of the new sole-tenant VM.

  • ZONE: the zone to provision the sole-tenant VM in.

  • IMAGE_FAMILY: the image family of the image to use to create the VM.

  • IMAGE_PROJECT: the image project of the image family.

  • GROUP_NAME: the name of the node group to provision the VM on.

  • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.
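For example, the following command (with a hypothetical node group named prod-group) provisions a Debian VM that live migrates during maintenance events:

```shell
# Hypothetical example: provision a sole-tenant VM on node group
# prod-group, with live migration during host maintenance.
gcloud compute instances create st-vm-1 \
    --zone=us-central1-a \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --node-group=prod-group \
    --machine-type=n1-standard-8 \
    --maintenance-policy=MIGRATE \
    --restart-on-failure
```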

GPUs and local SSDs on sole-tenant nodes (Preview)

Sole-tenant node support for GPUs and local SSDs is in Preview.

  • To attach a GPU to a sole-tenant VM, after creating a node group that supports GPUs, use the beta version of the gcloud tool to specify the type of GPU and the number of GPUs to attach to the sole-tenant VM.

  • To attach a local SSD to a sole-tenant VM, after creating a node group that supports local SSD, use the beta version of the gcloud tool to specify the interface type of the local SSD.

gcloud beta compute instances create VM_NAME \
  [--zone=ZONE \]
  --image-family=IMAGE_FAMILY \
  --image-project=IMAGE_PROJECT \
  --node-group=GROUP_NAME \
  --machine-type=MACHINE_TYPE \
  [--maintenance-policy=MAINTENANCE_POLICY \]
  [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
  [--local-ssd interface=SSD_INTERFACE \]
  [--restart-on-failure]

The --restart-on-failure flag indicates whether sole-tenant VMs restart after stopping. This flag is enabled by default. Use --no-restart-on-failure to disable.

Replace the following:

  • VM_NAME: the name of the new sole-tenant VM.

  • ZONE: the zone to provision the sole-tenant VM in.

  • IMAGE_FAMILY: the image family of the image to use to create the VM.

  • IMAGE_PROJECT: the image project of the image family.

  • GROUP_NAME: the name of the node group to provision the VM on.

  • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.

  • GPU_TYPE: type of GPU. Set to one of the accelerator types specified when the node template was created.

  • GPU_COUNT: number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.

  • SSD_INTERFACE: type of local SSD interface. You can only set this for instances created from a node template with local SSD support. If you specify this while creating the instance, and the node template does not support local SSD, instance creation fails. Set to nvme if the boot disk image drivers are optimized for NVMe, otherwise set to scsi. Specify this flag and a corresponding value once for each local SSD partition.
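For example, assuming a hypothetical node group named gpu-ssd-group whose template supports T4 GPUs and local SSDs, a beta command might look like the following:

```shell
# Hypothetical example: a sole-tenant VM with one T4 GPU and one NVMe
# local SSD, set to stop (TERMINATE) during maintenance events.
gcloud beta compute instances create gpu-ssd-vm \
    --zone=us-central1-a \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --node-group=gpu-ssd-group \
    --machine-type=n1-standard-8 \
    --maintenance-policy=TERMINATE \
    --accelerator type=nvidia-tesla-t4,count=1 \
    --local-ssd interface=nvme
```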

API

Use the instances.insert method to provision a VM on a sole-tenant node group:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/VM_ZONE/instances
{
  "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE",
  "name": "VM_NAME",
  "scheduling": {
    "onHostMaintenance": "MAINTENANCE_POLICY",
    "automaticRestart": RESTART_ON_FAILURE,
    "nodeAffinities": [
      {
        "key": "compute.googleapis.com/node-group-name",
        "operator": "IN",
        "values": ["GROUP_NAME"]
      }
    ]
  },
  "networkInterfaces": [
    {
      "network": "global/networks/NETWORK",
      "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
    }
  ],
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      }
    }
  ]
}

Replace the following:

  • PROJECT_ID: the project ID.

  • VM_ZONE: the zone to provision the sole-tenant VM in.

  • MACHINE_TYPE_ZONE: the zone of the machine type.

  • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

  • VM_NAME: the name of the new sole-tenant VM.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.

  • RESTART_ON_FAILURE: indicates whether sole-tenant VMs restart after stopping. Default is true.

  • GROUP_NAME: the name of the node group to provision the VM on.

  • NETWORK: the URL of the network resource for this VM.

  • REGION: the region containing the subnetwork for this VM.

  • SUBNETWORK: the URL of the subnetwork resource for this VM.

  • IMAGE_PROJECT: image project of the image family.

  • IMAGE_FAMILY: image family of the image to use to create the VM.

GPUs and local SSDs on sole-tenant nodes (Preview)

Sole-tenant node support for GPUs and local SSDs is in Preview.

  • To attach a GPU to a sole-tenant VM, after creating a node group that supports GPUs, use the beta version of the API and add the guestAccelerators block.

  • To attach a local SSD to a sole-tenant VM, after creating a node group that supports local SSDs, use the beta version of the API to specify the parameters for each local SSD.

POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/VM_ZONE/instances
{
  "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE",
  "name": "VM_NAME",
  "scheduling": {
    "onHostMaintenance": "MAINTENANCE_POLICY",
    "automaticRestart": RESTART_ON_FAILURE,
    "nodeAffinities": [
      {
        "key": "compute.googleapis.com/node-group-name",
        "operator": "IN",
        "values": ["GROUP_NAME"]
      }
    ]
  },
  "networkInterfaces": [
    {
      "network": "global/networks/NETWORK",
      "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
    }
  ],
  "guestAccelerators": [
    {
      "acceleratorType": "GPU_TYPE",
      "acceleratorCount": GPU_COUNT
    }
  ],
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      }
    },
    {
      "type": "SCRATCH",
      "initializeParams": {
        "diskType": "zones/LOCAL_SSD_ZONE/diskTypes/local-ssd"
      },
      "autoDelete": true,
      "interface": "SSD_INTERFACE"
    }
  ]
}

Replace the following:

  • PROJECT_ID: the project ID.

  • VM_ZONE: the zone to provision the sole-tenant VM in.

  • MACHINE_TYPE_ZONE: the zone of the machine type.

  • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

  • VM_NAME: the name of the new sole-tenant VM.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.

  • RESTART_ON_FAILURE: indicates whether sole-tenant VMs restart after stopping. Default is true.

  • GROUP_NAME: the name of the node group to provision the VM on.

  • NETWORK: the URL of the network resource for this VM.

  • REGION: the region containing the subnetwork for this VM.

  • SUBNETWORK: the URL of the subnetwork resource for this VM.

  • GPU_TYPE: the type of GPU. Set to one of the accelerator types specified when the node template was created.

  • GPU_COUNT: the number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.

  • IMAGE_PROJECT: image project of the image family.

  • IMAGE_FAMILY: image family of the image to use to create the VM.

  • LOCAL_SSD_ZONE: the local SSD's zone.

  • SSD_INTERFACE: the type of local SSD interface. Set to NVME if the boot disk image drivers are optimized for NVMe, otherwise set to SCSI.

Provisioning a group of sole-tenant VMs

Managed instance groups (MIGs) let you provision a group of identical sole-tenant VMs. Affinity labels let you specify the sole-tenant node or node group on which to provision the group of sole-tenant VMs.

gcloud

  1. Use the gcloud compute instance-templates create command to create a managed instance group template for a group of instances to create on a sole-tenant node group:

    gcloud compute instance-templates create INSTANCE_TEMPLATE \
      --machine-type=MACHINE_TYPE \
      --image-project=IMAGE_PROJECT \
      --image-family=IMAGE_FAMILY \
      --node-group=GROUP_NAME
    

    Replace the following:

    • INSTANCE_TEMPLATE: the name for the new instance template.

    • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

    • IMAGE_PROJECT: the image project of the image family.

    • IMAGE_FAMILY: the image family of the image to use to create the VM.

    • GROUP_NAME: the name of the node group to provision the VM on.

    GPUs and local SSDs on sole-tenant nodes (Preview)

    Sole-tenant node support for GPUs and local SSDs is in Preview.

    • To create a managed instance group on a sole-tenant node group from a sole-tenant node template that specifies GPU support for the node group, use the beta version of the gcloud tool to specify the type of GPU for the instances specified by the instance template.

    • To create a managed instance group on a sole-tenant node group from a sole-tenant node template that specifies local SSD support for the node group, use the beta version of the gcloud tool to specify the type of local SSD interface for the instances specified by the instance template.

    gcloud beta compute instance-templates create INSTANCE_TEMPLATE \
      --machine-type=MACHINE_TYPE \
      --image-project=IMAGE_PROJECT \
      --image-family=IMAGE_FAMILY \
      --node-group=GROUP_NAME \
      [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
      [--local-ssd interface=SSD_INTERFACE]
    

    Replace the following:

    • INSTANCE_TEMPLATE: the name for the new instance template.

    • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

    • IMAGE_PROJECT: the image project of the image family.

    • IMAGE_FAMILY: the image family of the image to use to create the VM.

    • GROUP_NAME: the name of the node group to provision the VM on.

    • GPU_TYPE: type of GPU. Set to one of the accelerator types specified when the node template was created.

    • GPU_COUNT: number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.

    • SSD_INTERFACE: type of local SSD interface. Set to nvme if the boot disk image drivers are optimized for NVMe, otherwise set to scsi. Specify this flag and a corresponding value once for each local SSD partition.

  2. Use the gcloud compute instance-groups managed create command to create a managed instance group within your sole-tenant node group:

    gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
      --size=SIZE \
      --template=INSTANCE_TEMPLATE \
      --zone=ZONE
    

    Replace the following:

    • INSTANCE_GROUP_NAME: the name for this instance group.

    • SIZE: the number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group. Use the managed instance group autoscaler to automatically manage the size of managed instance groups.

    • INSTANCE_TEMPLATE: the name of the instance template to use to create this group. The template must have a node affinity label pointing to the appropriate node group.

    • ZONE: the zone to create the managed instance group in.
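The two steps above, with hypothetical names, might look like the following:

```shell
# Hypothetical example: an instance template for VMs on node group
# prod-group, then a managed instance group of four such VMs.
gcloud compute instance-templates create st-template \
    --machine-type=n1-standard-8 \
    --image-project=debian-cloud \
    --image-family=debian-11 \
    --node-group=prod-group

gcloud compute instance-groups managed create st-mig \
    --size=4 \
    --template=st-template \
    --zone=us-central1-a
```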

API

  1. Use the instanceTemplates.insert method to create a managed instance group template within your sole-tenant node group:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates
    {
      "name": "INSTANCE_TEMPLATE",
      "properties": {
        "machineType": "MACHINE_TYPE",
        "scheduling": {
          "nodeAffinities": [
            {
              "key": "compute.googleapis.com/node-group-name",
              "operator": "IN",
              "values": ["GROUP_NAME"]
            }
          ]
        },
        "networkInterfaces": [
          {
            "network": "global/networks/NETWORK",
            "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
          }
        ],
        "disks": [
          {
            "boot": true,
            "initializeParams": {
              "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
            }
          }
        ]
      }
    }

    Replace the following:

    • PROJECT_ID: the project ID.

    • INSTANCE_TEMPLATE: the name of the new instance template.

    • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

    • GROUP_NAME: name of the node group to provision the VM on.

    • NETWORK: the URL of the network resource for this instance template.

    • REGION: the region containing the subnetwork for this instance template.

    • SUBNETWORK: the URL of the subnetwork resource for this instance template.

    • IMAGE_PROJECT: the image project of the image family.

    • IMAGE_FAMILY: the image family of the image to use to create the VM.

    GPUs and local SSDs on sole-tenant nodes (Preview)

    Sole-tenant node support for GPUs and local SSDs is in Preview.

    • To create a managed instance group with GPUs on a sole-tenant node group, use the beta version of the API and add the guestAccelerators block to the request body.

    • To create a managed instance group with local SSDs on a sole-tenant node group, use the beta version of the API to specify the type of local SSD interface for the instances specified by the instance template.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/global/instanceTemplates
    {
      "name": "INSTANCE_TEMPLATE",
      "properties": {
        "machineType": "MACHINE_TYPE",
        "scheduling": {
          "nodeAffinities": [
            {
              "key": "compute.googleapis.com/node-group-name",
              "operator": "IN",
              "values": ["GROUP_NAME"]
            }
          ]
        },
        "networkInterfaces": [
          {
            "network": "global/networks/NETWORK",
            "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
          }
        ],
        "guestAccelerators": [
          {
            "acceleratorType": "GPU_TYPE",
            "acceleratorCount": GPU_COUNT
          }
        ],
        "disks": [
          {
            "boot": true,
            "initializeParams": {
              "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
            }
          },
          {
            "type": "SCRATCH",
            "initializeParams": {
              "diskType": "zones/LOCAL_SSD_ZONE/diskTypes/local-ssd"
            },
            "autoDelete": true,
            "interface": "SSD_INTERFACE"
          }
        ]
      }
    }

    Replace the following:

    • PROJECT_ID: the project ID.

    • INSTANCE_TEMPLATE: the name of the new instance template.

    • MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

    • GROUP_NAME: name of the node group to provision the VM on.

    • NETWORK: the URL of the network resource for this instance template.

    • REGION: the region containing the subnetwork for this instance template.

    • SUBNETWORK: the URL of the subnetwork resource for this instance template.

    • GPU_TYPE: the type of GPU. Set to one of the accelerator types specified when the node template was created.

    • GPU_COUNT: the number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.

    • IMAGE_PROJECT: the image project of the image family.

    • IMAGE_FAMILY: the image family of the image to use to create the VM.

    • LOCAL_SSD_ZONE: the local SSD's zone.

    • SSD_INTERFACE: the type of local SSD interface. Set to NVME if the boot disk image drivers are optimized for NVMe, otherwise set to SCSI.

  2. Use the instanceGroupManagers.insert method to create a managed instance group within your sole-tenant node group based on the previously created managed instance group template:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers
    {
      "baseInstanceName": "NAME_PREFIX",
      "name": "INSTANCE_GROUP_NAME",
      "targetSize": SIZE,
      "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE"
    }
    

    Replace the following:

    • PROJECT_ID: the project ID.

    • ZONE: the zone to create the managed instance group in.

    • NAME_PREFIX: the name prefix for each instance in the managed instance group.

    • INSTANCE_GROUP_NAME: the name for the instance group.

    • SIZE: the number of VMs to include in this instance group. Your node group must have enough capacity to accommodate the instances in this managed instance group. You can use the managed instance group autoscaler to manage the size of managed instance groups automatically.

    • INSTANCE_TEMPLATE: the URL of the instance template to use to create this group. The template must have a node affinity label pointing to the appropriate node group.
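To make the two request bodies above concrete, the following Python sketch assembles them with hypothetical example values (`my-project`, `my-node-group`, `st-template`, and the zone and image names are assumptions for illustration, not values from this page). It only builds and prints the JSON payloads; authenticating and sending the requests to the API is out of scope here.

```python
import json

# Hypothetical example values; substitute your own.
group_name = "my-node-group"

# Request body for instanceTemplates.insert, with a node affinity
# pinning instances to the sole-tenant node group.
instance_template = {
    "name": "st-template",
    "properties": {
        "machineType": "zones/us-central1-a/machineTypes/n1-standard-2",
        "scheduling": {
            "nodeAffinities": [{
                "key": "compute.googleapis.com/node-group-name",
                "operator": "IN",
                "values": [group_name],
            }]
        },
        "networkInterfaces": [{
            "network": "global/networks/default",
            "subnetwork": "regions/us-central1/subnetworks/default",
        }],
        "disks": [{
            "boot": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
            },
        }],
    },
}

# Request body for instanceGroupManagers.insert, pointing at the template.
mig_body = {
    "baseInstanceName": "st-vm",
    "name": "st-mig",
    "targetSize": 3,
    "instanceTemplate": "global/instanceTemplates/" + instance_template["name"],
}

print(json.dumps(mig_body, indent=2))
```

Because the managed instance group references the template by name, creating the template first (as in step 1) is what lets step 2's request resolve `instanceTemplate`.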

Configuring node affinity labels

Node affinity labels let you logically group node groups and schedule VMs on a specific set of node groups. You can also use node affinity labels to schedule VMs on node groups across different zones, and still keep the node groups in a logical group. The following procedure is an example of using affinity labels to associate VMs with a specific node group that is used for production workloads. This example shows how to schedule a single VM, but you could also use managed instance groups to schedule a group of VMs.

gcloud

  1. Use the gcloud compute sole-tenancy node-templates create command to create a node template with a set of affinity labels for a production workload:

    gcloud compute sole-tenancy node-templates create prod-template \
      --node-type=n1-node-96-624 \
      --node-affinity-labels=workload=frontend,environment=prod
    
  2. Use the gcloud compute sole-tenancy node-templates describe command to view the node affinity labels assigned to the node template.

  3. Use the gcloud compute sole-tenancy node-groups create command to create a node group that uses the production template:

    gcloud compute sole-tenancy node-groups create prod-group \
      --node-template=prod-template \
      --target-size=1
    
  4. For your production VMs, create a node-affinity-prod.json file to specify the affinity of your production VMs. For example, you might create a file that specifies that VMs run only on nodes with both the workload=frontend and environment=prod affinities. Create the node affinity file by using Cloud Shell or create it in a location of your choice.

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "IN",
        "values" : ["prod"]
      }
    ]
    
  5. Use the node-affinity-prod.json file with the gcloud compute instances create command to schedule a VM on the node group with matching affinity labels.

    gcloud compute instances create prod-vm \
      --node-affinity-file=node-affinity-prod.json \
      --machine-type=n1-standard-2
    
  6. Use the gcloud compute instances describe command and check the scheduling field to view the node affinities assigned to the VM.
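If you generate affinity files from automation rather than writing them by hand, a short Python sketch like the following can produce the same `node-affinity-prod.json` shown in step 4 and confirm that it round-trips as valid JSON. The temporary directory is used only so the example is self-contained; in practice you would write the file wherever your tooling expects it.

```python
import json
import os
import tempfile

# The affinity rules from step 4, expressed as Python data.
prod_affinities = [
    {"key": "workload", "operator": "IN", "values": ["frontend"]},
    {"key": "environment", "operator": "IN", "values": ["prod"]},
]

# Write the file in the format that --node-affinity-file expects.
path = os.path.join(tempfile.mkdtemp(), "node-affinity-prod.json")
with open(path, "w") as f:
    json.dump(prod_affinities, f, indent=2)

# Read it back to confirm the file parses and round-trips.
with open(path) as f:
    loaded = json.load(f)
print(loaded == prod_affinities)  # True
```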

Configuring node anti-affinity labels

Node affinity labels can be configured as anti-affinity labels to prevent VMs from running on specific nodes. For example, you can use anti-affinity labels to prevent VMs that you are using for development from being scheduled on the same nodes as your production VMs. The following example shows how to use anti-affinity labels to prevent VMs from running on specific node groups. This example schedules a single VM, but you could also use managed instance groups to schedule a group of VMs.

gcloud

  1. For your development VMs, specify their node affinity by creating a node-affinity-dev.json file, either with Cloud Shell or in a location of your choice. For example, create a file that configures VMs to run on any node group with the workload=frontend affinity label, as long as the node group does not also have environment=prod:

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "NOT_IN",
        "values" : ["prod"]
      }
    ]
    
  2. Use the node-affinity-dev.json file with the gcloud compute instances create command to create the development VM:

    gcloud compute instances create dev-vm \
      --node-affinity-file=node-affinity-dev.json \
      --machine-type=n1-standard-2
    
  3. Use the gcloud compute instances describe command and check the scheduling field to view the node anti-affinities assigned to the VM.
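To make the `IN` and `NOT_IN` semantics concrete, here is a minimal sketch (illustrative only, not Compute Engine's actual scheduler code) of how the rules in node-affinity-dev.json evaluate against a node group's affinity labels:

```python
def labels_match(node_labels, affinities):
    """Return True if a node's affinity labels satisfy every rule.

    IN: the node's value for the key must be one of the rule's values.
    NOT_IN: the node's value for the key must not be one of the rule's values.
    Illustrative sketch only; not the actual scheduling implementation.
    """
    for rule in affinities:
        value = node_labels.get(rule["key"])
        in_values = value in rule["values"]
        if rule["operator"] == "IN" and not in_values:
            return False
        if rule["operator"] == "NOT_IN" and in_values:
            return False
    return True

# The anti-affinity rules from step 1.
dev_affinities = [
    {"key": "workload", "operator": "IN", "values": ["frontend"]},
    {"key": "environment", "operator": "NOT_IN", "values": ["prod"]},
]

prod_node = {"workload": "frontend", "environment": "prod"}
dev_node = {"workload": "frontend", "environment": "dev"}

print(labels_match(prod_node, dev_affinities))  # False: node is environment=prod
print(labels_match(dev_node, dev_affinities))   # True
```

The dev VM is therefore eligible for the frontend node groups, but is excluded from any node group labeled environment=prod.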

Deleting a node group

If you need to delete a sole-tenant node group, first remove any VMs from the node group.

Console

  1. Go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click the Name of the node group to delete.

  3. For each node in the node group, click the node's name and delete individual VM instances on the node details page, or follow the standard procedure to delete an individual VM. To delete instances in a managed instance group, delete the managed instance group.

  4. After deleting all VM instances running on all nodes of the node group, return to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  5. Click Node groups.

  6. Select the name of the node group you need to delete.

  7. Click Delete.

gcloud

  1. List running VM instances on nodes in the node group by using the gcloud compute sole-tenancy node-groups list-nodes command:

    gcloud compute sole-tenancy node-groups list-nodes GROUP_NAME \
      --zone=ZONE
    

    Replace the following:

    • GROUP_NAME: name of the node group

    • ZONE: zone of the node group

  2. If there are any VMs running on the node group, follow the procedure to delete an individual VM or the procedure to delete a managed instance group.

  3. After deleting all VMs running on all nodes of the node group, delete the node group by using the gcloud compute sole-tenancy node-groups delete command:

    gcloud compute sole-tenancy node-groups delete GROUP_NAME \
      --zone=ZONE
    

    Replace the following:

    • GROUP_NAME: the name of the node group

    • ZONE: the zone of the node group

API

  1. List running VM instances on nodes in the node group by using the nodeGroups.listNodes method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups/GROUP_NAME/listNodes
    

    Replace the following:

    • PROJECT_ID: the project ID

    • ZONE: the zone of the node group

    • GROUP_NAME: the name of the node group whose nodes you want to list

  2. If there are any VMs running on the node group, follow the procedure to delete an individual VM or the procedure to delete a managed instance group.

  3. After deleting all VMs running on all nodes of the node group, delete the node group by using the nodeGroups.delete method:

    DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups/GROUP_NAME
    

    Replace the following:

    • PROJECT_ID: the project ID

    • ZONE: the zone of the node group

    • GROUP_NAME: the name of the node group to delete
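If you script this workflow, the two REST URLs above differ only in their path suffix and HTTP method. The following Python sketch builds both from the same placeholders (the `my-project`, `us-central1-a`, and `my-node-group` values are hypothetical); it only constructs the URLs, since issuing the authenticated requests is beyond this example's scope.

```python
BASE = "https://compute.googleapis.com/compute/v1"

def node_group_urls(project_id, zone, group_name):
    """Build the (method, URL) pairs for listing a sole-tenant node
    group's nodes and for deleting the group itself."""
    resource = f"{BASE}/projects/{project_id}/zones/{zone}/nodeGroups/{group_name}"
    return {
        "list_nodes": ("POST", resource + "/listNodes"),
        "delete": ("DELETE", resource),
    }

urls = node_group_urls("my-project", "us-central1-a", "my-node-group")
print(urls["delete"][1])
```

As the procedure notes, call the `listNodes` URL first and delete any remaining VMs before issuing the `DELETE` request, or the deletion fails.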

Deleting a node template

You can delete a node template after you've deleted all of the node groups that are using the template.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click Node templates.

  3. Select the name of an unused node template.

  4. Click Delete.

gcloud

Use the gcloud compute sole-tenancy node-templates delete command to delete an unused node template:

gcloud compute sole-tenancy node-templates delete TEMPLATE_NAME \
  --region=REGION

Replace the following:

  • TEMPLATE_NAME: the name of the node template to delete

  • REGION: the region of the node template

API

Use the compute.nodeTemplates.delete method to delete an unused node template:

DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/nodeTemplates/TEMPLATE_NAME
 

Replace the following:

  • PROJECT_ID: your project ID

  • REGION: the Google Cloud region that contains the node template

  • TEMPLATE_NAME: the name of the node template to delete

What's next