Provisioning VMs on sole-tenant nodes

This page describes how to provision VMs on sole-tenant nodes, which are physical servers that run VMs only from a single project. Before provisioning VMs on sole-tenant nodes, read the sole-tenant node overview.

Provisioning a VM on a sole-tenant node requires the following:

  1. Creating a sole-tenant node template. The sole-tenant node template specifies uniform properties for all of the sole-tenant nodes in a sole-tenant node group.

  2. Creating a sole-tenant node group using the previously created sole-tenant node template.

  3. Creating VMs and provisioning them on a sole-tenant node group.

Before you begin

Creating a sole-tenant node template

A sole-tenant node template is a regional resource that specifies properties for each sole-tenant node in a sole-tenant node group that is created from the sole-tenant node template. You must create a sole-tenant node template before you can create a sole-tenant node group.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Node templates.

  3. Click Create node template to begin creating a sole-tenant node template.

  4. Specify a Name for the node template.

  5. Specify a Region to create the node template in. You can use the node template to create node groups in any zone of this region.

  6. Specify the Node type for each sole-tenant node in the node group to create based on this node template.

  7. Optionally add Node affinity labels. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  8. Click Create to finish creating your node template.

gcloud

Use the gcloud compute sole-tenancy node-templates create command to create a node template:

gcloud compute sole-tenancy node-templates create TEMPLATE_NAME \
  --node-type=NODE_TYPE \
  --node-affinity-labels=AFFINITY_LABELS \
  --region=REGION

Replace the following:

  • TEMPLATE_NAME: name for the new node template.

  • NODE_TYPE: node type for sole-tenant nodes created based on this template. Use the gcloud compute sole-tenancy node-types list command to get a list of the node types available in each zone.

  • AFFINITY_LABELS: keys and values, [KEY=VALUE,...], for affinity labels. Affinity labels let you logically group nodes and node groups and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • REGION: region to create the node template in. You can use this template to create node groups in any zone of this region.
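
For example, with placeholder values chosen for this sketch (an n1-node-96-624 node type, two affinity labels, and the us-central1 region), the command might look like the following:

gcloud compute sole-tenancy node-templates create example-template \
  --node-type=n1-node-96-624 \
  --node-affinity-labels=environment=prod,workload=frontend \
  --region=us-central1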

API

Use the nodeTemplates.insert method to create a node template:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/nodeTemplates

{
  "name": "TEMPLATE_NAME",
  "nodeType": "NODE_TYPE",
  "nodeAffinityLabels": {
    "KEY": "VALUE",
    ...
  }
}

Replace the following:

  • PROJECT_ID: your project ID.

  • REGION: region to create the node template in. You can use this template to create node groups in any zone of this region.

  • TEMPLATE_NAME: name for the new node template.

  • NODE_TYPE: node type for sole-tenant nodes created based on this template. Use the nodeTypes.list method to get a list of the node types available in each zone.

  • KEY: nodeAffinityLabels value that specifies the key portion of a node affinity label expressed as a key-value pair. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.

  • VALUE: a nodeAffinityLabels value that specifies the value portion of a node affinity label key-value pair.
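
For example, you can send this request with curl, using an access token from the gcloud CLI for authentication. The project ID, region, and field values below are placeholders for this sketch:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "example-template",
        "nodeType": "n1-node-96-624",
        "nodeAffinityLabels": {
          "environment": "prod",
          "workload": "frontend"
        }
      }' \
  "https://compute.googleapis.com/compute/v1/projects/example-project/regions/us-central1/nodeTemplates"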

Creating a sole-tenant node group

With the previously created sole-tenant node template, create a sole-tenant node group. A sole-tenant node group inherits properties specified by the sole-tenant node template and has additional values that you must specify.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Create node group to begin creating a node group.

  3. Specify a Name for the node group.

  4. Specify the Region for the node group to display the available node templates in that region. You must have a node template in the selected region.

  5. Specify the Zone within the region to create the node group in.

  6. Specify the Node template to create the node group from. The selected node template is applied to the node group.

  7. Choose one of the following for the Autoscaling mode for the node group autoscaler:

    • Don't configure autoscale: Manually manage the size of the node group.

    • Autoscale: Have nodes automatically added to or removed from the node group.

    • Autoscale only out: Add nodes to the node group when extra capacity is required.

  8. Specify the Number of nodes for the group. If you enable the node group autoscaler, specify either a range for the size of the node group or the number of nodes for the group. You can manually change either value later.

  9. Set the Maintenance policy for the sole-tenant node group to one of the following values. The maintenance policy lets you configure the behavior of VMs on the node group during host maintenance events. For more information, see Maintenance policies.

    • Default
    • Restart in place
    • Migrate within node group
  10. Click Create to finish creating the node group.

gcloud

Run the gcloud compute sole-tenancy node-groups create command to create a node group based on a previously created node template:

gcloud compute sole-tenancy node-groups create GROUP_NAME \
  --zone=ZONE \
  --node-template=TEMPLATE_NAME \
  --target-size=TARGET_SIZE \
  --maintenance-policy=MAINTENANCE_POLICY \
  --autoscaler-mode=AUTOSCALER_MODE \
  --min-nodes=MIN_NODES \
  --max-nodes=MAX_NODES

Replace the following:

  • GROUP_NAME: name for the new node group.

  • ZONE: zone to create the node group in. This zone must be in the same region as the node template that you are basing the node group on.

  • TEMPLATE_NAME: name of the node template to use to create this group.

  • TARGET_SIZE: number of nodes to create in the group.

  • MAINTENANCE_POLICY: maintenance policy for the node group. For more information, see Maintenance policies. This must be one of the following values:

    • default
    • restart-in-place
    • migrate-within-node-group
  • AUTOSCALER_MODE: autoscaler policy for the node group. This must be one of:

    • off: manually manage the size of the node group.

    • on: have nodes automatically added to or removed from the node group.

    • only-scale-out: add nodes to the node group when extra capacity is required.

  • MIN_NODES: minimum size of the node group. The default value is 0, and the value must be an integer less than or equal to MAX_NODES.

  • MAX_NODES: maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to off.
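
For example, with placeholder values for this sketch, the following command creates a node group of three nodes in us-central1-a from the example-template node template, with autoscaling enabled between three and ten nodes:

gcloud compute sole-tenancy node-groups create example-group \
  --zone=us-central1-a \
  --node-template=example-template \
  --target-size=3 \
  --maintenance-policy=migrate-within-node-group \
  --autoscaler-mode=on \
  --min-nodes=3 \
  --max-nodes=10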

API

Use the nodeGroups.insert method to create a node group based on a previously created node template:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups?initialNodeCount=TARGET_SIZE
{ "nodeTemplate": "regions/REGION/nodeTemplates/TEMPLATE_NAME", "name": "GROUP_NAME", "maintenancePolicy": MAINTENANCE_POLICY, "autoscalingPolicy": { "mode": AUTOSCALER_MODE, "minNodes": MIN_NODES, "maxNodes": MAX_NODES }, }

Replace the following:

  • PROJECT_ID: ID of the project.

  • ZONE: zone to create the node group in. This zone must be in the same region as the node template that you are basing the node group on.

  • TARGET_SIZE: number of nodes to create in the group.

  • REGION: region to create the node group in. You must have a node template in the selected region.

  • TEMPLATE_NAME: name of the node template to use to create this group.

  • GROUP_NAME: name for the new node group.

  • MAINTENANCE_POLICY: maintenance policy for the node group. For more information, see Maintenance policies. This must be one of the following values:

    • DEFAULT
    • RESTART_IN_PLACE
    • MIGRATE_WITHIN_NODE_GROUP
  • AUTOSCALER_MODE: autoscaler policy for the node group. This must be one of the following values:

    • OFF: manually manage the size of the node group.

    • ON: have nodes automatically added to or removed from the node group.

    • ONLY_SCALE_OUT: add nodes to the node group when extra capacity is required.

  • MIN_NODES: minimum size of the node group. The default is 0, and the value must be an integer less than or equal to MAX_NODES.

  • MAX_NODES: maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to OFF.
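
For example, the equivalent request sent with curl might look like the following. The project ID, zone, and field values are placeholders for this sketch, and the enum values are quoted because the request body must be valid JSON:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "example-group",
        "nodeTemplate": "regions/us-central1/nodeTemplates/example-template",
        "maintenancePolicy": "DEFAULT",
        "autoscalingPolicy": {
          "mode": "ON",
          "minNodes": 3,
          "maxNodes": 10
        }
      }' \
  "https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/nodeGroups?initialNodeCount=3"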

Provisioning a sole-tenant VM

After creating a node group based on a previously created node template, you can provision individual VMs on a sole-tenant node group.

To provision a VM on a specific node or node group that has affinity labels that match those you previously assigned to the node template, follow the standard procedure for creating a VM instance, and assign affinity labels to the VM.

Alternatively, you can use the following procedure to provision a VM on a sole-tenant node from the node group details page. Compute Engine assigns affinity labels automatically, based on the node group that you provision the VMs on.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Node groups.

  3. Click the Name of the node group to provision a VM instance on. Optionally, to provision the VM on a specific sole-tenant node, click the name of that sole-tenant node.

  4. Click Create instance to provision a VM instance on this node group, note the values automatically applied for the Name, Region, and Zone, and modify those values as necessary.

  5. Select a Machine configuration by specifying the Machine family, Series, and Machine type. Choose the Series that corresponds to the sole-tenant node type.

  6. Modify the Boot disk, Firewall, and other settings as necessary.

  7. Click Sole Tenancy, note the automatically assigned Node affinity labels, and use Browse to adjust as necessary.

  8. Click Management, and for On host maintenance, choose one of the following:

    • Migrate VM instance (recommended): VM migrated to another node in the node group during maintenance events.

    • Terminate: VM stopped during maintenance events.

  9. Choose one of the following for the Automatic restart:

    • On (recommended): Automatically restarts VMs if they are stopped for maintenance events.

    • Off: Does not automatically restart VMs after a maintenance event.

  10. Click Create to finish creating your sole-tenant VM.

gcloud

Use the gcloud compute instances create command to provision a VM on a sole-tenant node group:

gcloud compute instances create VM_NAME \
  --zone=ZONE \
  --image-family=IMAGE_FAMILY \
  --image-project=IMAGE_PROJECT \
  --node-group=GROUP_NAME \
  --machine-type=MACHINE_TYPE \
  --maintenance-policy=MAINTENANCE_POLICY \
  --restart-on-failure

The --restart-on-failure flag indicates whether sole-tenant VMs restart after stopping. This flag is enabled by default. Use --no-restart-on-failure to disable.

Replace the following:

  • VM_NAME: name of the new sole-tenant VM.

  • ZONE: zone to provision the sole-tenant VM in.

  • IMAGE_FAMILY: image family of the image to use to create the VM.

  • IMAGE_PROJECT: image project of the image family.

  • GROUP_NAME: name of the node group to provision the VM on.

  • MACHINE_TYPE: machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.
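
For example, the following command provisions a VM by using placeholder values chosen for this sketch: a Debian 12 public image, an n1-standard-8 machine type, and the hypothetical example-group node group from the earlier sketches:

gcloud compute instances create example-vm \
  --zone=us-central1-a \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --node-group=example-group \
  --machine-type=n1-standard-8 \
  --maintenance-policy=MIGRATE \
  --restart-on-failure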

API

Use the instances.insert method to provision a VM on a sole-tenant node group:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/VM_ZONE/instances
{ "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE", "name": "VM_NAME", "scheduling": { "onHostMaintenance": MAINTENANCE_POLICY, "automaticRestart": RESTART_ON_FAILURE, "nodeAffinities": [ { "key": "compute.googleapis.com/node-group-name", "operator": "IN", "values": [ "GROUP_NAME" ] } ] }, "disks": [ { "boot": true, "initializeParams": { "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY" } } ], "networkInterfaces": [ { "network": "global/networks/NETWORK", "subnetwork": "regions/REGION/subnetworks/SUBNETWORK" } ] }

Replace the following:

  • PROJECT_ID: ID of the project.

  • VM_ZONE: zone to provision the sole-tenant VM in.

  • MACHINE_TYPE_ZONE: zone of the machine type.

  • MACHINE_TYPE: machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

  • VM_NAME: name of the new sole-tenant VM.

  • MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:

    • MIGRATE: VM migrated to another node in the node group during maintenance events.

    • TERMINATE: VM stopped during maintenance events.

  • RESTART_ON_FAILURE: indicates whether sole-tenant VMs restart after stopping. Default is true.

  • GROUP_NAME: name of the node group to provision the VM on.

  • IMAGE_PROJECT: image project of the image family.

  • IMAGE_FAMILY: image family of the image to use to create the VM.

  • NETWORK: URL of the network resource for this VM.

  • REGION: region containing the subnetwork for this VM.

  • SUBNETWORK: URL of the subnetwork resource for this VM.

Provisioning a group of sole-tenant VMs

Managed instance groups (MIGs) let you provision a group of identical sole-tenant VMs. Node affinity labels let you specify the sole-tenant node or node group on which to provision the group of sole-tenant VMs.

gcloud

  1. Use the gcloud compute instance-templates create command to create a managed instance group template within your sole-tenant node group:

    gcloud compute instance-templates create INSTANCE_TEMPLATE \
      --machine-type=MACHINE_TYPE \
      --image-project=IMAGE_PROJECT \
      --image-family=IMAGE_FAMILY \
      --node-group=GROUP_NAME
    

    Replace the following:

    • INSTANCE_TEMPLATE: name for the new instance template.

    • MACHINE_TYPE: machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.

    • IMAGE_PROJECT: image project of the image family.

    • IMAGE_FAMILY: image family of the image to use to create the VM.

    • GROUP_NAME: name of the node group to provision the VM on.

  2. Use the gcloud compute instance-groups managed create command to create a managed instance group within your sole-tenant node group (a combined example of both commands follows these steps):

    gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
      --size=SIZE \
      --template=INSTANCE_TEMPLATE \
      --zone=ZONE
    

    Replace the following:

    • INSTANCE_GROUP_NAME: name for this instance group.

    • SIZE: number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group. Use the managed instance group autoscaler to automatically manage the size of managed instance groups.

    • INSTANCE_TEMPLATE: name of the instance template to use to create this group. The template must have a node affinity label pointing to the appropriate node group.

    • ZONE: zone to create the managed instance group in.
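
For example, the two commands together might look like the following. The template name, MIG name, image, machine type, and node group are placeholders for this sketch:

# Instance template that targets the hypothetical node group example-group.
gcloud compute instance-templates create example-sole-tenant-template \
  --machine-type=n1-standard-8 \
  --image-project=debian-cloud \
  --image-family=debian-12 \
  --node-group=example-group

# Managed instance group of four VMs based on that template.
gcloud compute instance-groups managed create example-sole-tenant-mig \
  --size=4 \
  --template=example-sole-tenant-template \
  --zone=us-central1-a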

API

  1. Use the instanceTemplates.insert method to create a managed instance group template within your sole-tenant node group:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates
    {
      "name": "INSTANCE_TEMPLATE",
      "properties": {
        "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE",
        "scheduling": {
          "nodeAffinities": [
            {
              "key": "compute.googleapis.com/node-group-name",
              "operator": "IN",
              "values": ["GROUP_NAME"]
            }
          ]
        },
        "disks": [
          {
            "boot": true,
            "initializeParams": {
              "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
            }
          }
        ],
        "networkInterfaces": [
          {
            "network": "global/networks/NETWORK",
            "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
          }
        ]
      }
    }

    Replace the following:

    • PROJECT_ID: ID of the project.

    • INSTANCE_TEMPLATE: name of the new instance template.

    • MACHINE_TYPE_ZONE: zone of the machine type.

    • MACHINE_TYPE: machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.

    • GROUP_NAME: name of the node group to provision the VM on.

    • IMAGE_PROJECT: image project of the image family.

    • IMAGE_FAMILY: image family of the image to use to create the VM.

    • NETWORK: URL of the network resource for this instance template.

    • REGION: region containing the subnetwork for this instance template.

    • SUBNETWORK: URL of the subnetwork resource for this instance template.

  2. Use the instanceGroupManagers.insert method to create a managed instance group within your sole-tenant node group based on the previously created managed instance group template:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers
    {
      "baseInstanceName": "NAME_PREFIX",
      "name": "INSTANCE_GROUP_NAME",
      "targetSize": SIZE,
      "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE"
    }
    

    Replace the following:

    • PROJECT_ID: ID of the project.

    • ZONE: zone to create the managed instance group in.

    • NAME_PREFIX: prefix name for each of the instances in the managed instance group.

    • INSTANCE_GROUP_NAME: name for the instance group.

    • SIZE: number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group. Use the managed instance group autoscaler to automatically manage the size of managed instance groups.

    • INSTANCE_TEMPLATE: URL of the instance template to use to create this group. The template must have a node affinity label pointing to the appropriate node group.

Configuring node affinity labels

Node affinity labels let you logically group node groups and schedule VMs on a specific set of node groups. You can also use node affinity labels to schedule VMs on node groups across different zones, and still keep the node groups in a logical group. The following procedure is an example of using affinity labels to associate VMs with a specific node group that is used for production workloads. This example shows how to schedule a single VM, but you could also use managed instance groups to schedule a group of VMs.

gcloud

  1. Use the gcloud compute sole-tenancy node-templates create command to create a node template with a set of affinity labels for a production workload:

    gcloud compute sole-tenancy node-templates create prod-template \
      --node-type=n1-node-96-624 \
      --node-affinity-labels workload=frontend,environment=prod
    
  2. Use the gcloud compute sole-tenancy node-templates describe command to view the node affinity labels assigned to the node template.

  3. Use the gcloud compute sole-tenancy node-groups create command to create a node group that uses the production template:

    gcloud compute sole-tenancy node-groups create prod-group \
      --node-template=prod-template \
      --target-size=1
    
  4. For your production VMs, create a node-affinity-prod.json file that specifies the node affinities those VMs require. For example, you might create a file that specifies that VMs run only on nodes with both the workload=frontend and environment=prod affinities. Create the node affinity file by using Cloud Shell, or create it in a location of your choice.

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "IN",
        "values" : ["prod"]
      }
    ]
    
  5. Use the node-affinity-prod.json file with the gcloud compute instances create command to schedule a VM on the node group with matching affinity labels.

    gcloud compute instances create prod-vm \
      --node-affinity-file node-affinity-prod.json \
      --machine-type=n1-standard-2
    
  6. Use the gcloud compute instances describe command and check the scheduling field to view the node affinities assigned to the VM.
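
For example, the following command prints only the scheduling field, including the nodeAffinities list. The zone is an assumption for this sketch; adjust it to match the zone that prod-vm was created in:

gcloud compute instances describe prod-vm \
  --zone=us-central1-a \
  --format="yaml(scheduling)"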

Configuring node anti-affinity labels

Node affinity labels can be configured as anti-affinity labels to prevent VMs from running on specific nodes. For example, you can use anti-affinity labels to prevent VMs that you are using for development purposes from being scheduled on the same nodes as your production VM. The following example shows how to use affinity labels to prevent VMs from running on specific node groups. This example shows how to schedule a single VM, but you could also use managed instance groups to schedule a group of VMs.

gcloud

  1. For your development VMs, specify their node affinities by creating a node-affinity-dev.json file with Cloud Shell, or by creating it in a location of your choice. For example, create a file that configures VMs to run on any node group with the workload=frontend affinity as long as it is not environment=prod:

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "NOT_IN",
        "values" : ["prod"]
      }
    ]
    
  2. Use the node-affinity-dev.json file with the gcloud compute instances create command to create the development VM:

    gcloud compute instances create dev-vm \
      --node-affinity-file=node-affinity-dev.json \
      --machine-type=n1-standard-2
    
  3. Use the gcloud compute instances describe command and check the scheduling field to view the node anti-affinities assigned to the VM.

Deleting a node group

If you need to delete a sole-tenant node group, first remove any VMs from the node group.

Console

  1. Go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click the Name of the node group to delete.

  3. For each node in the node group, click the node's name and delete individual VM instances on the node details page, or follow the standard procedure to delete an individual VM. To delete instances in a managed instance group, delete the managed instance group.

  4. After deleting all VM instances running on all nodes of the node group, return to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  5. Click Node groups.

  6. Select the name of the node group you need to delete.

  7. Click Delete.

gcloud

  1. List running VM instances on nodes in the node group by using the gcloud compute sole-tenancy node-groups list-nodes command:

    gcloud compute sole-tenancy node-groups list-nodes GROUP_NAME \
      --zone=ZONE
    

    Replace the following:

    • GROUP_NAME: name of the node group

    • ZONE: zone of the node group

  2. If there are any VMs running on the node group, follow the procedure to delete an individual VM or the procedure to delete a managed instance group.

  3. After deleting all VMs running on all nodes of the node group, delete the node group by using the gcloud compute sole-tenancy node-groups delete command:

    gcloud compute sole-tenancy node-groups delete GROUP_NAME \
        --zone=ZONE
    

    Replace the following:

    • GROUP_NAME: name of the node group

    • ZONE: zone of the node group
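
For example, the full cleanup sequence might look like the following. The group name, zone, and VM name are placeholders for this sketch; delete whichever VMs the list-nodes command reports:

# List the nodes in the group and the VMs running on them.
gcloud compute sole-tenancy node-groups list-nodes example-group \
  --zone=us-central1-a

# Delete any remaining VMs reported by the previous command.
gcloud compute instances delete example-vm \
  --zone=us-central1-a

# Delete the now-empty node group.
gcloud compute sole-tenancy node-groups delete example-group \
  --zone=us-central1-a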

API

  1. List running VM instances on nodes in the node group by using the nodeGroups.listNodes method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups/GROUP_NAME/listNodes
    

    Replace the following:

    • PROJECT_ID: ID of the project

    • ZONE: zone of the node group

    • GROUP_NAME: group for which to list the VMs

  2. If there are any VMs running on the node group, follow the procedure to delete an individual VM or the procedure to delete a managed instance group.

  3. After deleting all VMs running on all nodes of the node group, delete the node group by using the nodeGroups.delete method:

    DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups/GROUP_NAME
    

    Replace the following:

    • PROJECT_ID: ID of the project

    • ZONE: zone of the node group

    • GROUP_NAME: name of the node group to delete

Deleting a node template

You can delete a node template after you've deleted all of the node groups that are using the template.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click Node templates.

  3. Select the name of an unused node template.

  4. Click Delete.

gcloud

Use the gcloud compute sole-tenancy node-templates delete command to delete an unused node template:

gcloud compute sole-tenancy node-templates delete TEMPLATE_NAME \
    --region=REGION

Replace the following:

  • TEMPLATE_NAME: name of the node template to delete

  • REGION: region of the node template

API

Use the compute.nodeTemplates.delete method to delete an unused node template:

DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/nodeTemplates/TEMPLATE_NAME

Replace the following:

  • PROJECT_ID: your project ID

  • REGION: the Google Cloud region that contains the node template

  • TEMPLATE_NAME: the name of the node template to delete

What's next