Provisioning VMs on sole-tenant nodes

This page describes how to provision VM instances on sole-tenant nodes. Sole-tenant nodes are physical servers that are dedicated to running your specific project's VMs. Before provisioning VMs on sole-tenant nodes, read the overview for sole-tenant nodes.

The process for provisioning VMs on sole-tenant nodes involves:

  1. Creating a sole-tenant node template to specify node properties
  2. Creating a sole-tenant node group from the template
  3. Creating VMs to use sole-tenant nodes

Creating a sole-tenant node template

Before you can create node groups and provision VMs on those groups, create a sole-tenant node template. A node template is a regional resource that defines node properties for nodes in node groups created from the template.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Node templates.

  3. Click Create node template to begin creating a node template.

  4. Specify a Name for the node template.

  5. Specify a Region. You can use the node template to create node groups in any zone of this region.

  6. Specify the Node type for each sole-tenant node in the node group that you will create based on this node template.

  7. Add Node affinity labels. Affinity labels let you schedule VMs using sole tenancy. Nodes in node groups created from this template use only these affinity labels. You cannot add affinity labels to a node group separately. For more information, see Node affinity and anti-affinity.

  8. Click Create to finish creating your node template.

gcloud

  1. Use the compute sole-tenancy node-types list command to get a list of sole-tenant node types available in each Google Cloud zone.

    gcloud compute sole-tenancy node-types list
    
  2. Use the compute sole-tenancy node-templates create command to create a new node template by specifying node type:

    gcloud compute sole-tenancy node-templates create template-name \
      --region=region \
      --node-type=node-type \
      --node-affinity-labels=affinity-labels \
      --server-binding=server-binding
    

    Alternatively, use the following command to create a node template based on node requirements instead of a specific node type:

    gcloud compute sole-tenancy node-templates create template-name \
      --region=region \
      --node-requirements=node-requirements \
      --server-binding=server-binding
    

    Replace the following:

    • template-name: name for the new node template.
    • region: region where you will use this template.

    • node-type: node type to use for this template. For example, specify the n1-node-96-624 node type to create a node with 96 vCPUs and 624 GB of memory.

    • node-requirements: a list of vCPU, memory, and local SSD requirements using a string like --node-requirements="vCPU=any,memory=any,localSSD=0". For more information, see --node-requirements.

    • affinity-labels: keys and values for affinity labels. Affinity labels let you schedule VM instances using sole tenancy. Nodes in node groups created from this template use only these affinity labels. You cannot add affinity labels to a node group separately. For more information, see node affinity labels.

    • server-binding: lets you control how Google Cloud maps nodes to physical servers. Specify restart-node-on-any-server, or omit the server-binding flag, to let Google Cloud provision any physical server to act as a node following a node maintenance event. Specify restart-node-on-minimal-servers to force Google Cloud to use the same physical server for the node. For more information, see --server-binding.

API

  1. Use the compute.nodeTypes.list method to get a list of node types:

    GET https://compute.googleapis.com/compute/v1/projects/project-id/zones/zone/nodeTypes
    

    Replace the following:

    • project-id: your project ID
    • zone: the Google Cloud zone from which to retrieve the available node types.
  2. Use the compute.nodeTemplates.insert method to create a node template that uses a node type:

    POST https://compute.googleapis.com/compute/v1/projects/project-id/regions/region/nodeTemplates
    
        {
         "name": "template-name",
         "nodeType": "node-type",
         "nodeAffinityLabels": {
             "key": "value",
             ....
         },
         "serverBinding":
         {
             "type": "server-binding"
         }
        }
    

    Alternatively, you can use the compute.nodeTemplates.insert method to create a node template using node requirements instead of a node type:

    POST https://compute.googleapis.com/compute/v1/projects/project-id/regions/region/nodeTemplates
    
        {
         "name": "template-name",
         "nodeTypeFlexibility": {
          "cpus": "any",
          "memory": "any"
         },
         "nodeAffinityLabels": {
             "key": "value",
             ....
         },
         "serverBinding":
         {
             "type": "server-binding"
         }
        }
    

    Replace the following:

    • project-id: your project ID.
    • region: the Google Cloud region in which to create the node template. You can use this template to create node groups in any zone of this region.
    • template-name: the name of the node template to create.
    • node-type: the node type to use for this template. For example, specify the n1-node-96-624 node type to create a node with 96 vCPUs and 624 GB of memory.
    • In the second example, the nodeTypeFlexibility property lets you specify node requirements, including CPU and memory requirements, instead of a node type. Use the value any for a property if you do not have specific requirements for it.
    • Use nodeAffinityLabels to specify key and value pairs that define node affinity labels. Affinity labels let you schedule VM instances using sole tenancy. Nodes in node groups created from this template use only these affinity labels. You cannot add affinity labels to a node group separately. For more information, see node affinity labels.
    • server-binding: lets you control how Google Cloud maps nodes to physical servers. Specify RESTART_NODE_ON_ANY_SERVER, or omit the serverBinding parameter, to let Google Cloud provision any physical server to act as a node following a node maintenance event. Specify RESTART_NODE_ON_MINIMAL_SERVERS to force Google Cloud to use the same physical server for the node. For more information, see --server-binding.

Creating a sole-tenant node group

After creating a node template, create a node group based on that template. When creating a node group, you must specify the following values, which apply to each node in the group:

  • Region: Choose the region that contains the node template to apply to the node group.

  • Zone: Choose the zone within the region where you want to create the node group.

  • Node template: Choose the node template you want to create the node group from.

  • Number of nodes: Specify the number of sole-tenant nodes to create within the node group. Depending on the number of nodes that you create, you might need to request additional quota. For example, if you create a node group of size 2 with the n1-node-96-624 node type, you need a vCPU quota of at least 192 (2 nodes × 96 vCPUs).

  • Maintenance policy: Specify the maintenance policy for VMs during host maintenance events. Choose whether VMs on the host are live migrated to a new physical server, live migrated within the pool of physical servers used by the node group, or are terminated and restarted on the same physical server. Use the default maintenance policy unless you have physical server affinity requirements, for example, if you have software licenses that are assessed per physical core.

  • Autoscaled node groups: Enable the node group autoscaler so that the number of nodes in your node group is automatically increased or decreased to meet your needs.
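The quota arithmetic in the Number of nodes bullet can be sketched as follows. This is an illustrative helper, not part of any Google Cloud SDK; it assumes the node type name encodes its vCPU count, as in n1-node-96-624 (96 vCPUs, 624 GB of memory):

```python
# Illustrative sketch: estimate the vCPU quota needed for a node group.
# Not part of any Google Cloud SDK. Assumes node type names follow the
# pattern <family>-node-<vcpus>-<memory_gb>, e.g. n1-node-96-624.

def required_vcpu_quota(node_type: str, node_count: int) -> int:
    """Return the minimum vCPU quota for `node_count` nodes of `node_type`."""
    vcpus = int(node_type.split("-")[2])  # third field is the vCPU count
    return vcpus * node_count

# A node group of size 2 with n1-node-96-624 needs at least 192 vCPUs of quota.
print(required_vcpu_quota("n1-node-96-624", 2))  # -> 192
```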

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to Sole-tenant nodes

  2. Click Create node group to begin creating a node group.

  3. Specify a Name for the node group.

  4. Specify the Region for the node group. The page then displays the available node templates in that region.

  5. Specify the Zone within the region where you want to run your node group.

  6. Specify the Node template that you want to use. You must have a node template in the selected Region.

  7. Specify the number of nodes you want to run in the group. You can manually change this number later, or you can enable the node group autoscaler. The node group autoscaler is currently in beta.

  8. Specify the Maintenance policy:

    • Default: Live migrates VMs to a new physical server.
    • Restart in place: Restarts VMs on the same physical server.
    • Migrate within node group: Migrates the VM within the node group.
  9. Click Create to finish creating your node group.

gcloud

Run the following command to create a node group based on a previously created node template:

gcloud compute sole-tenancy node-groups create group-name \
  --zone=zone \
  --node-template=template-name \
  --target-size=target-size \
  --maintenance-policy=maintenance-policy \
  --autoscaler-mode=autoscaler-mode \
  --min-nodes=min-nodes \
  --max-nodes=max-nodes

Replace the following:

  • group-name: Name for the new node group.
  • zone: Zone in which to create the node group. The zone must be in the same region as the node template on which you are basing the node group.
  • template-name: Name of the node template to use to create this group.
  • target-size: Number of nodes to create in the group.
  • maintenance-policy: Maintenance policy for the node group. This must be one of:
    • default: VMs are live migrated to a new physical server.
    • migrate-within-node-group: VMs are live migrated to another node in the node group. This policy is in beta, and must be specified by using the gcloud beta compute sole-tenancy node-groups create command.
    • restart-in-place: VMs are terminated and restarted on the same physical server after the maintenance event.
  • autoscaler-mode: Autoscaler policy for the node group. This must be one of:
    • off: Turns off autoscaling on the node group.
    • on: Enables autoscaling in and out on the node group.
    • only-scale-out: Enables scale out only; the autoscaler adds nodes to the node group but never removes them.
  • min-nodes: Minimum size of the node group. The default is 0, and the value must be an integer less than or equal to max-nodes.
  • max-nodes: Maximum size of the node group. This must be less than or equal to 100 and greater than or equal to min-nodes. Required if autoscaler-mode is not set to off.
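The sizing constraints on min-nodes and max-nodes described above can be checked with a small pre-flight sketch. This is a hypothetical helper, not part of any Google Cloud SDK:

```python
# Illustrative validation of node group autoscaler bounds, mirroring the
# documented constraints: 0 <= min_nodes <= max_nodes <= 100.
# Hypothetical helper, not part of any Google Cloud SDK.

def validate_autoscaler_bounds(min_nodes: int = 0, max_nodes: int = 100) -> None:
    if not 0 <= min_nodes <= max_nodes:
        raise ValueError("min-nodes must be an integer between 0 and max-nodes")
    if max_nodes > 100:
        raise ValueError("max-nodes must be less than or equal to 100")

validate_autoscaler_bounds(min_nodes=1, max_nodes=5)   # valid: no exception
try:
    validate_autoscaler_bounds(min_nodes=10, max_nodes=5)
except ValueError as e:
    print(e)  # min-nodes must be an integer between 0 and max-nodes
```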

API

After you create the node template, create a node group. Use the nodeGroups.insert method to create a node group:

POST https://www.googleapis.com/compute/v1/projects/project-id/zones/zone/nodeGroups?initialNodeCount=target-size
{
 "nodeTemplate": "regions/region/nodeTemplates/template-name",
 "name": "group-name",
 "maintenancePolicy": maintenance-policy,
 "autoscalingPolicy": {
  "mode": autoscaler-mode,
  "minNodes": min-nodes,
  "maxNodes": max-nodes
 }
}

Replace the following:

  • project-id: ID of the project.
  • zone: Zone in which to create the node group. Must be in the same region as the node template on which you are basing the node group.
  • target-size: Number of nodes to create in the group.
  • region: Region where the node template to base this node group on is located.
  • template-name: Name of the node template to use to create this group.
  • group-name: Name for the new node group.
  • maintenance-policy: Maintenance policy for the node group. This must be one of:
    • DEFAULT: VMs follow traditional maintenance behavior during host maintenance events, and can live migrate to a new host if the VM is configured to migrate during host maintenance events. VMs not set to migrate during host maintenance events are terminated.
    • MIGRATE_WITHIN_NODE_GROUP: VMs are live migrated to another node in the node group. This policy is in beta, and must be specified by using the beta API.
    • RESTART_IN_PLACE: VMs are terminated and restarted on the same physical server after the maintenance event.
  • autoscaler-mode: Autoscaler policy for the node group. This must be one of:
    • OFF: Turns off autoscaling on the node group.
    • ON: Enables autoscaling in and out on the node group.
    • ONLY_SCALE_OUT: Enables scale out only; the autoscaler adds nodes to the node group but never removes them.
  • max-nodes: Maximum size of the node group. Must be less than or equal to 100 and greater than or equal to min-nodes. Required if autoscaler-mode is not set to OFF.
  • min-nodes: Minimum size of the node group. The default is 0, and the value must be an integer less than or equal to max-nodes.

Provisioning individual VMs on a sole-tenant node

After creating a node group based on a node template, provision individual VMs directly on a sole-tenant node or node group by using any predefined machine type, or a custom machine type with 2 or more vCPUs. When you provision a VM in this way, Compute Engine automatically assigns affinity labels based on the node or node group on which you provision the VM.

The procedure below describes how to provision an individual VM on a node group.

Console

  • Using node affinity labels: Follow the standard procedure for creating a VM instance, and use affinity labels in the Cloud Console to associate the new VM instance with a sole-tenant node.

  • Alternatively, create a VM instance from the node group details page:

    1. In the Google Cloud Console, go to the Sole-tenant nodes page.

      Go to Sole-tenant nodes

    2. Click Node groups.

    3. Click the Name of the node group on which to provision a VM instance.

    4. Click Create instance to provision a VM instance on this node group. Note the values that Compute Engine automatically applies for Region, Zone, Machine configuration, Node affinity labels, On host maintenance, and Automatic restart, and modify them according to the requirements of your workload.

    5. Click Create to finish creating your instance.

gcloud

Provision a VM with a custom machine type on a node group with the following command:

gcloud compute instances create vm-name \
  --zone=zone \
  --image-family=image-family \
  --image-project=image-project \
  --node-group=group-name \
  --custom-cpu=vcpus \
  --custom-memory=memory \
  --maintenance-policy=maintenance-policy \
  --restart-on-failure

Replace the following:

  • vm-name: Name of the new VM.
  • zone: Zone in which to create the new VM.
  • image-family: Image family containing the image to use to create the VM.
  • image-project: Image project to which the image family belongs.
  • group-name: Name of the node group to provision the VM on.
  • vcpus: Number of vCPUs to use with this VM.
  • memory: Amount of memory in 256 MB increments. For example, 5.25GB or 5376MB.
  • maintenance-policy: Specifies behavior of VMs when undergoing maintenance. Set to one of the following:
    • MIGRATE: VMs migrated to new host.
    • TERMINATE: VMs terminated.
  • --restart-on-failure: Parameter indicating whether to restart VMs after they are terminated. Use --no-restart-on-failure to disable this behavior.

API

Provision a VM with a custom machine type on a node group with the following instances.insert REST request:

POST https://compute.googleapis.com/compute/v1/projects/project-id/zones/zone/instances
{
 "machineType": "zones/zone/machineTypes/custom-vcpus-memory",
 "name": "vm-name",
 "scheduling": {
  "onHostMaintenance": maintenance-policy,
  "automaticRestart": restart-on-failure,
  "nodeAffinities": [
   {
    "key": "compute.googleapis.com/node-group-name",
    "operator": "IN",
    "values": ["group-name"]
   }
  ]
 },
 "networkInterfaces": [
  {
   "network": "global/networks/network",
   "subnetwork": "regions/region/subnetworks/subnetwork"
  }
 ],
 "disks": [
  {
   "boot": true,
   "initializeParams": {
    "sourceImage": "projects/image-project/global/images/family/image-family"
   }
  }
 ]
}

Replace the following:

  • project-id: ID of the project.
  • zone: Zone in which to create the new VM.
  • vcpus: Number of vCPUs to use with this VM.
  • memory: Amount of memory in 256 MB increments. For example, 5.25GB or 5376MB.
  • vm-name: Name of the new VM.
  • maintenance-policy: Specifies behavior of VMs when undergoing maintenance. Set to one of the following:
    • MIGRATE: VMs migrated to new host.
    • TERMINATE: VMs terminated.
  • restart-on-failure: Parameter indicating whether to restart VMs after they are terminated. Default is true.
  • group-name: Name of the node group to provision the VM on.
  • network: URL of the network for this VM.
  • region: Region containing the subnetwork for this VM.
  • subnetwork: URL of the subnetwork for this VM.
  • image-project: Image project to which the image family belongs.
  • image-family: Image family containing the image to use to create the VM.
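The machine type constraints mentioned above, 2 or more vCPUs for sole tenancy and memory specified in 256 MB increments, can be sketched as a quick pre-flight check. This is a hypothetical helper, not part of any Google Cloud SDK:

```python
# Illustrative pre-flight check for a custom machine type used with sole
# tenancy: 2 or more vCPUs, and memory in 256 MB increments.
# Hypothetical helper, not part of any Google Cloud SDK.

def custom_machine_type(vcpus: int, memory_mb: int) -> str:
    """Return a machine type string like 'custom-4-5376' after validation."""
    if vcpus < 2:
        raise ValueError("sole-tenant VMs need a machine type with 2 or more vCPUs")
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    return f"custom-{vcpus}-{memory_mb}"

# 5.25 GB = 5376 MB, one of the documented example values.
print(custom_machine_type(4, 5376))  # -> custom-4-5376
```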

Provisioning multiple VMs on a sole-tenant node

To provision multiple identical VMs simultaneously on a node or node group, use managed instance groups (MIGs), and use an affinity label to specify which sole-tenant node or node group the instance group runs on.

To automatically manage the size of a managed instance group, use the MIG autoscaler. To automatically manage the size of node groups, use the node group autoscaler.

The following commands show how to create a managed instance group with a custom machine type.

gcloud

  1. Create an instance template for the managed instance group by using the gcloud compute instance-templates create command. Use the --node-group flag to target your node group:

    gcloud compute instance-templates create instance-template \
    --image-family=image-family \
    --image-project=image-project \
    --node-group=group-name \
    --custom-cpu=vcpus \
    --custom-memory=memory
    

    Replace the following:

    • instance-template: Name for the new instance template.
    • image-family: Image family containing the image to use to create the VM.
    • image-project: Image project to which the image family belongs.
    • group-name: Name of the node group to provision the VM on.
    • vcpus: Number of vCPUs to use with this VM.
    • memory: Amount of memory in 256 MB increments. For example, 5.25GB or 5376MB.
  2. Create an instance group by using the gcloud compute instance-groups managed create command:

    gcloud compute instance-groups managed create instance-group-name \
      --zone zone \
      --size size \
      --template instance-template
    

    Replace the following:

    • instance-group-name: Name for this instance group.
    • zone: Zone in which to create the managed instance group.
    • size: Number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group.
    • instance-template: Name of the instance template to use to create this group. The template must have a node affinity pointing to the appropriate node group.

API

  1. Create an instance template for the managed instance group by using the instanceTemplates.insert REST request. The nodeAffinities property targets your node group:

    POST https://www.googleapis.com/compute/v1/projects/project-id/global/instanceTemplates
    
        {
         "name": "template-name",
         "properties": {
          "machineType": "custom-vcpus-memory",
          "scheduling": {
           "nodeAffinities": [
            {
             "key": "compute.googleapis.com/node-group-name",
             "operator": "IN",
             "values": ["group-name"]
            }
           ]
          },
          "networkInterfaces": [
           {
            "network": "global/networks/network",
            "subnetwork": "regions/region/subnetworks/subnetwork"
           }
          ],
          "disks": [
           {
            "boot": true,
            "initializeParams": {
             "sourceImage": "projects/image-project/global/images/family/image-family"
            }
           }
          ]
         }
        }
    

    Replace the following:

    • project-id: ID of the project.
    • template-name: Name for the new instance template.
    • vcpus: Number of vCPUs to use for each VM in the instance group.
    • memory: Amount of memory in 256 MB increments for each VM in the instance group. For example, 5.25GB or 5376MB.
    • group-name: Name of the node group to provision the VM on.
    • network: URL of the network for this VM.
    • region: Region containing the subnetwork for this VM.
    • subnetwork: URL of the subnetwork for this VM.
    • image-project: Image project to which the image family belongs.
    • image-family: Image family containing the image to use to create the VM.
  2. Create an instance group by using the instanceGroupManagers.insert method:

    POST https://www.googleapis.com/compute/v1/projects/project-id/zones/zone/instanceGroupManagers
    {
     "baseInstanceName": "name-prefix",
     "name": "instance-group-name",
     "targetSize": size,
     "instanceTemplate": "global/instanceTemplates/instance-template"
    }
    

    Replace the following:

    • project-id: ID of the project.
    • zone: Zone in which to create the managed instance group.
    • name-prefix: Prefix name for each of the instances in your managed instance group.
    • instance-group-name: Name for this instance group.
    • size: Number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group.
    • instance-template: Name of the instance template that you want to use to create this group. The template must have a node affinity pointing to the appropriate node group.

Instance affinity example

Node affinity labels let you logically group node groups and schedule VMs on a specific set of node groups. You can also use node affinity labels to schedule VMs on node groups across different zones, and still keep the node groups in a logical group. The following procedure is an example of how to use affinity labels to associate VMs with a specific node group:

gcloud

  1. Create a node template with a set of affinity labels for a production workload:

    gcloud compute sole-tenancy node-templates create production-template \
      --node-requirements vCPU=any,memory=any,localSSD=0 \
      --node-affinity-labels workload=frontend,environment=prod
    
  2. Create another node template with a set of affinity labels for the development workload:

    gcloud compute sole-tenancy node-templates create development-template \
      --node-requirements vCPU=any,memory=any,localSSD=0 \
      --node-affinity-labels workload=frontend,environment=dev
    
  3. Create several node groups using the production and development templates. For example, you might have one large production node group and multiple smaller development node groups. Optionally, create these groups in different zones and with different target sizes to accommodate the scale of the workloads:

    gcloud compute sole-tenancy node-groups create production-group \
      --node-template production-template \
      --target-size 5 \
      --zone us-west1-b
    
    gcloud compute sole-tenancy node-groups create development-group-east1 \
      --node-template development-template \
      --target-size 1 \
      --zone us-east1-d
    
    gcloud compute sole-tenancy node-groups create development-group-east2 \
      --node-template development-template \
      --target-size 1 \
      --zone us-east1-d
    
  4. For your production VMs, create a node-affinity-prod.json file to specify the affinity of your production VMs. For example, you might create a file that instructs VMs to run only on nodes with both the workload=frontend and environment=prod affinities:

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "IN",
        "values" : ["prod"]
      }
    ]
    
  5. Use the node-affinity-prod.json file to create a MIG template with the properties that you want for your production VMs:

    gcloud compute instance-templates create production-template \
      --image-family production-images \
      --image-project my-project \
      --node-affinity-file node-affinity-prod.json \
      --custom-cpu 3 \
      --custom-memory 4096MB
    
  6. Start an instance group using the MIG production template that runs on your production node:

    gcloud compute instance-groups managed create production-group \
      --zone us-west1-b \
      --size 4 \
      --template production-template
    

    At this point, the VMs in the group start and run only on the node groups that have both the workload=frontend and environment=prod affinities.

Instance anti-affinity example

Use anti-affinity labels to ensure that your VMs are not provisioned on specific nodes. For example, you might create VMs for development purposes but prevent Compute Engine from scheduling them on the same nodes that host your production VMs. The following procedure is an example of how to use affinity labels to ensure that VMs are not associated with specific node groups:

gcloud

  1. For development instances, create a node-affinity-dev.json file to specify the affinity of your development VMs. For example, create a file that configures VMs to run on any node group with the workload=frontend affinity as long as it is not environment=prod:

    [
      {
        "key" : "workload",
        "operator" : "IN",
        "values" : ["frontend"]
      },
      {
        "key" : "environment",
        "operator" : "NOT",
        "values" : ["prod"]
      }
    ]
    
  2. For development, you might create an individual VM for testing purposes, rather than an entire instance group. Use the node-affinity-dev.json file to create that VM. For example, if you want to test a specific development image named development-image-1, create the VM and configure its affinities with the following command:

    gcloud compute instances create dev-1 \
      --image development-image-1 \
      --image-project my-project \
      --node-affinity-file node-affinity-dev.json \
      --custom-cpu 3 \
      --custom-memory 4096MB \
      --zone us-east1-d
    

    This instance starts and runs only on node groups that have the workload=frontend affinity. However, the VM does not run on any node group configured with the environment=prod affinity.
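The affinity files in the two examples above are plain JSON lists in which each rule has a key, an operator of IN or NOT, and a list of values. A small sketch (hypothetical, not an official tool) can generate such a file and illustrate the shape that --node-affinity-file expects:

```python
import json

# Illustrative generator for node affinity files such as node-affinity-dev.json.
# Hypothetical helper, not an official tool. The operators mirror the examples
# above: IN schedules VMs onto matching nodes, NOT keeps VMs off them.

def affinity_rule(key: str, values: list, anti: bool = False) -> dict:
    """Build one affinity rule; anti=True produces a NOT (anti-affinity) rule."""
    return {"key": key, "operator": "NOT" if anti else "IN", "values": values}

rules = [
    affinity_rule("workload", ["frontend"]),
    affinity_rule("environment", ["prod"], anti=True),
]

with open("node-affinity-dev.json", "w") as f:
    json.dump(rules, f, indent=2)

print(rules[1]["operator"])  # -> NOT
```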

Deleting a node template

You can delete a node template after you've deleted all node groups using the template.

Console

  1. In the Google Cloud Console, go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click Node templates.

  3. Select the name of an unused node template.

  4. Click Delete.

gcloud

Use the compute sole-tenancy node-templates delete command to delete an unused node template:

gcloud compute sole-tenancy node-templates delete template-name \
    --region=region

Replace the following:

  • template-name: name of the node template to delete
  • region: region of the node template

API

Use the compute.nodeTemplates.delete method to delete an unused node template:

 DELETE https://compute.googleapis.com/compute/v1/projects/project-id/regions/region/nodeTemplates/template-name
 

Replace the following:

  • project-id: your project ID
  • region: the Google Cloud region that contains the node template
  • template-name: the name of the node template to delete

Deleting a node group

You can delete a node group as long as no VM instances are running on its nodes.

Console

  1. Go to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  2. Click the name of the node group to delete.

  3. For each node in the node group, click the node's name. Ensure that no VM instances are running on the node.

  4. You can delete individual VM instances on the node details page, or you can follow the standard procedure to delete an individual VM instance. To delete instances in a managed instance group, you must delete the managed instance group.

  5. After deleting all VM instances running on all nodes of the node group, return to the Sole-tenant nodes page.

    Go to the Sole-tenant nodes page

  6. Click Node groups.

  7. Select the name of the node group you need to delete.

  8. Click Delete.

gcloud

  1. List running VM instances on nodes in the node group by using the compute sole-tenancy node-groups list-nodes command:

    gcloud compute sole-tenancy node-groups list-nodes group-name \
     --zone=zone
    

    Replace the following:

    • group-name: name of the node group
    • zone: zone of the node group
  2. If there are any VM instances running on the node group, follow the standard procedure to delete an individual VM instance or the standard procedure to delete a managed instance group, as required.

  3. After deleting all VM instances running on all nodes of the node group, delete the node group by using the compute sole-tenancy node-groups delete command:

    gcloud compute sole-tenancy node-groups delete group-name \
     --zone=zone
    

    Replace the following:

    • group-name: name of the node group
    • zone: zone of the node group
