Creating sole-tenant nodes

Create sole-tenant nodes to physically separate your instances from the instances of other projects. Each node is associated with one physical server and is the only node running on that server. Within your nodes, you can run multiple instances of various sizes without sharing the host hardware with other projects. Nodes can live migrate to a completely new host system without stopping the VM instances running on them.

Read sole-tenant node pricing to learn how to calculate prices and discounts for sole-tenant nodes.

Read the sole-tenant nodes overview to learn about the benefits, use cases, and features of sole-tenant nodes.

Creating and using sole-tenant nodes

In general, creating instances on sole-tenant nodes requires the following process:

  1. Create a node template, which specifies the node type or vCPU and memory requirements. Also, specify a region and optional node affinity labels.
  2. Use the template to create a node group with one or more sole-tenant nodes. You can reduce the number of nodes to zero when you no longer require them.
  3. Create instances on your node groups:
    • Create individual VM instances on the node group using any predefined machine type or custom machine type. The machine type must have two or more vCPUs.
    • Create managed instance groups on the node group using an instance template. Although an autoscaler can control the size of a managed instance group on your nodes, the autoscaler cannot control the size of the node group.
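The overall flow can be sketched as a short sequence of gcloud commands. This is only a sketch with hypothetical names (my-template, my-group, my-instance) and an example region; the sections below walk through each command in detail.

```shell
# 1. Create a node template in a region (names and region are examples).
gcloud compute sole-tenancy node-templates create my-template \
    --region us-central1 --node-requirements vCPU=any,memory=any,localSSD=0

# 2. Create a node group of two nodes from that template.
gcloud compute sole-tenancy node-groups create my-group \
    --zone us-central1-a --node-template my-template --target-size 2

# 3. Create a VM instance that is scheduled onto the node group.
gcloud compute instances create my-instance \
    --zone us-central1-a --node-group my-group \
    --custom-cpu 4 --custom-memory 4096MB
```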

Creating node groups and instances

Create a node template to define the properties for the nodes in a node group. After you create the template, you can use the template to create one or more node groups. Then, create new instances on those node groups.

Console

Create a node and its instances using the Google Cloud Platform Console:

  1. Go to the sole-tenant nodes page.


  2. Click Create node template to begin creating a node template.
  3. Specify the region where you plan to run your node group.
  4. Specify the node type that you want your node groups to use. Alternatively, you can specify smallest available node, which allows node groups to run on any available node type.
  5. Optionally, add node affinity labels to define which instances automatically schedule themselves on your node groups. If you leave the affinity labels blank, you can still schedule instances on your node groups by the node group name or by the individual node name later.
  6. Click Create to finish creating your node template.

Use your node template to create a node group.

  1. Go to the sole-tenant nodes page.


  2. Click Create node group to begin creating a node group.
  3. Specify the zone where you want to run your node group. You must have a node template in the same region.
  4. Specify the node template that you want to use.
  5. Specify the number of nodes you want to run in the group. You can change this number later.
  6. Click Create to finish creating your node group.

Create an instance that runs within your node group or on specific nodes. If you used specific node affinity labels, you can create an instance using the normal process and specify node affinity under the Sole Tenancy settings. For this example, create the instances directly from your node group details page.

  1. Go to the sole-tenant nodes page.


  2. Click the name of the node group where you want to create an instance.
  3. Click Create instance to create an instance anywhere within this node group. If you want your instance to run on a specific node within the group, click the name of an individual node in this group to view the details for that individual node. Then, click Create instance to create the instance on that individual node.
  4. Configure the settings for your instance. Because you already selected your node group or a specific node, the region, zone, and default node affinity labels are already specified for you.
  5. Click Create to finish creating your instance.

gcloud

Create a node and its instances using the gcloud command-line tool:

  1. Use the compute sole-tenancy node-types list command to identify which node types are available to you:

    gcloud compute sole-tenancy node-types list
    

    Periodically, Compute Engine replaces older node types with newer node types. When a node type is replaced, you can no longer create node groups that use the old node type, and you must update your node templates to use the new node types.

  2. Use the compute sole-tenancy node-templates create command to create a new node template. Because the list of available node types will change over time, configure your node templates to use flexible node type requirements. For example, specify --node-requirements vCPU=any,memory=any,localSSD=0, which allows the node to run on any available node type with no local SSD capacity.

    gcloud compute sole-tenancy node-templates create [TEMPLATE_NAME] \
        --region [REGION] --node-requirements vCPU=any,memory=any,localSSD=0
    

    where:

    • [TEMPLATE_NAME] is a name for the new node template.
    • [REGION] is the region where you will use this template.

    Alternatively, you can select a specific node type to use in your template. This template is less flexible, but ensures that you create nodes only if they meet your exact vCPU and memory requirements.

    gcloud compute sole-tenancy node-templates create [TEMPLATE_NAME] \
        --node-type [NODE_TYPE] --region [REGION]
    

    where:

    • [TEMPLATE_NAME] is a name for the new node template.
    • [NODE_TYPE] is the node type that you want to use for this template. For example, you can specify the n1-node-96-624 node type to create a node with 96 vCPUs and 624 GB of memory.
    • [REGION] is the region where you will use this template.
  3. After you create the node template, create a node group. Use the compute sole-tenancy node-groups create command:

    gcloud compute sole-tenancy node-groups create [GROUP_NAME] --zone [ZONE] \
        --node-template [TEMPLATE_NAME] --target-size [TARGET_SIZE]
    

    where:

    • [GROUP_NAME] is a name for the new node group.
    • [ZONE] is the zone where this node group is located. This zone must be in the same region as the node template that you are using.
    • [TEMPLATE_NAME] is the name of the node template that you want to use to create this group.
    • [TARGET_SIZE] is the number of nodes that you want to create in the group.
  4. After you create your node group, you can create instances within the node groups using the compute instances create command. Specify a --node-group flag that points to your node group name. For example, you might create an instance with a custom machine type:

    gcloud compute instances create [INSTANCE_NAME] --zone [ZONE] \
        --image-family [IMAGE_FAMILY] --image-project [IMAGE_PROJECT] \
        --node-group [GROUP_NAME] --custom-cpu [VCPUS] --custom-memory [MEMORY]
    

    where:

    • [INSTANCE_NAME] is the name for the new instance.
    • [ZONE] is the zone where your node group is located.
    • [IMAGE_FAMILY] is one of the available image families.
    • [IMAGE_PROJECT] is the image project to which that image family belongs.
    • [GROUP_NAME] is the name of the node group where you want to locate this instance.
    • [VCPUS] is the number of vCPUs that you want to use with this instance.
    • [MEMORY] is the amount of memory for the instance in 256 MB increments. For example, you might specify 5.25GB or 5376MB.

    Optionally, you can also create managed instance groups within your node group. Create an instance template using the instance-templates create command, and include a --node-group flag that points to your node group name:

    gcloud compute instance-templates create [INSTANCE_TEMPLATE] \
        --image-family [IMAGE_FAMILY] --image-project [IMAGE_PROJECT] \
        --node-group [GROUP_NAME] \
        --custom-cpu [VCPUS] --custom-memory [MEMORY]
    

    where:

    • [INSTANCE_TEMPLATE] is the name for the new instance template.
    • [IMAGE_FAMILY] is one of the available image families.
    • [IMAGE_PROJECT] is the image project to which that image family belongs.
    • [GROUP_NAME] is the name of the node group where you want to locate this instance.
    • [VCPUS] is the number of vCPUs that you want to use with this instance.
    • [MEMORY] is the amount of memory for the instance in 256 MB increments. For example, you might specify 5.25GB or 5376MB.

    Create an instance group using the compute instance-groups managed create command:

    gcloud compute instance-groups managed create [INSTANCE_GROUP_NAME] \
        --zone [ZONE] --size [SIZE] --template [INSTANCE_TEMPLATE]
    

    where:

    • [INSTANCE_GROUP_NAME] is the name for this instance group.
    • [SIZE] is the number of VM instances that you want to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group.
    • [INSTANCE_TEMPLATE] is the name of the instance template that you want to use to create this group. The template must have a node affinity pointing to your desired node group.
    • [ZONE] is the zone where your node group is located.

API

Create a node and its instances using the methods in the Compute Engine API:

  1. In the API, construct a GET request to retrieve a list of available node types using the compute.nodeTypes.list method:

    GET https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/nodeTypes
    

    where:

    • [PROJECT_ID] is your project ID.
    • [ZONE] is the zone from which you want to retrieve the available node types.
  2. Construct a POST request to the compute.nodeTemplates.insert method to create a new node template. For the most flexibility, specify a nodeTypeFlexibility property with the cpus and memory values set to any, which allows the node group to use any available node type.

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/nodeTemplates
    
    {
     "name": "[TEMPLATE_NAME]",
     "nodeTypeFlexibility": {
      "cpus": "any",
      "memory": "any"
     }
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [REGION] is the region where you will use this template.
    • [TEMPLATE_NAME] is a name for the new node template.

    Alternatively, you can select a specific node type to use in your template. This template is less flexible, but ensures that you create nodes only if they meet your exact vCPU and memory requirements.

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/regions/[REGION]/nodeTemplates
    
    {
     "name": "[TEMPLATE_NAME]",
     "nodeType": "[NODE_TYPE]"
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [TEMPLATE_NAME] is a name for the new node template.
    • [NODE_TYPE] is the node type that you want to use for this template. For example, you can specify the n1-node-96-624 node type to create a node with 96 vCPUs and 624 GB of memory.
    • [REGION] is the region where you will use this template.
  3. After you create the node template, create a node group. Use the compute.nodeGroups.insert method:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/nodeGroups?initialNodeCount=[TARGET_SIZE]
    
    {
     "nodeTemplate": "/regions/[REGION]/nodeTemplates/[TEMPLATE_NAME]",
     "name": "[GROUP_NAME]"
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [ZONE] is the zone where this node group is located. This zone must be in the same region as the node template that you are using.
    • [TARGET_SIZE] is the number of nodes that you want to create in the group.
    • [REGION] is the region where the node template is located.
    • [GROUP_NAME] is a name for the new node group.
    • [TEMPLATE_NAME] is the name of the node template that you want to use to create this group.
  4. After you create your node group, you can create instances within the node groups using the compute.instances.insert method. Specify a nodeAffinities entry that points to your node group name. For example, you might create an instance with a custom machine type:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances

    {
     "machineType": "/zones/[ZONE]/machineTypes/custom-[VCPUS]-[MEMORY]",
     "name": "[INSTANCE_NAME]",
     "scheduling": {
      "nodeAffinities": [
       {
        "key": "node-group-name",
        "operator": "IN",
        "values": [
         "[GROUP_NAME]"
        ]
       }
      ]
     },
     "networkInterfaces": [
      {
       "network": "/global/networks/[NETWORK]",
       "subnetwork": "/regions/[REGION]/subnetworks/[SUBNETWORK]"
      }
     ],
     "disks": [
      {
       "boot": true,
       "initializeParams": {
        "sourceImage": "/projects/[IMAGE_PROJECT]/global/images/family/[IMAGE_FAMILY]"
       }
      }
     ]
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [INSTANCE_NAME] is the name for the new instance.
    • [ZONE] is the zone where your node group is located.
    • [REGION] is the region where the node template and your subnetwork are located.
    • [IMAGE_FAMILY] is one of the available image families.
    • [IMAGE_PROJECT] is the image project to which that image family belongs.
    • [GROUP_NAME] is the name of the node group where you want to locate this instance.
    • [VCPUS] is the number of vCPUs that you want to use with this instance.
    • [MEMORY] is the amount of memory for the instance in MB. For example, you might specify 5376MB.
    • [NETWORK] is the name of the network to which you want to connect your instance.
    • [SUBNETWORK] is the name of the subnetwork to which you want to connect your instance.

    Optionally, you can also create managed instance groups within your node group. Create an instance template using the compute.instanceTemplates.insert method and specify a nodeAffinities entry that points to your node group name:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/instanceTemplates
    
    {
    "name": "[TEMPLATE_NAME]",
    "properties": {
       "machineType": "custom-[VCPUS]-[MEMORY]",
       "name": "[INSTANCE_NAME]",
       "scheduling": {
        "nodeAffinities": [
         {
          "key": "node-group-name",
          "operator": "IN",
          "values": [
           "[GROUP_NAME]"
          ]
         }
        ]
       },
       "networkInterfaces": [
        {
         "network": "/global/networks/[NETWORK]",
         "subnetwork": "/regions/[REGION]/subnetworks/[SUBNETWORK]"
        }
       ],
       "disks": [
        {
         "boot": true,
         "initializeParams": {
          "sourceImage": "/projects/[IMAGE_PROJECT]/global/images/family/[IMAGE_FAMILY]"
         }
        }
       ]
      }
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [INSTANCE_NAME] is the name for the new instance.
    • [ZONE] is the zone where your node group is located.
    • [REGION] is the region where the node template and your subnetwork are located.
    • [TEMPLATE_NAME] is the name for the new instance template.
    • [IMAGE_FAMILY] is one of the available image families.
    • [IMAGE_PROJECT] is the image project to which that image family belongs.
    • [GROUP_NAME] is the name of the node group where you want to locate this instance.
    • [VCPUS] is the number of vCPUs that you want to use with this instance.
    • [MEMORY] is the amount of memory for the instance in MB. For example, you might specify 5376MB.
    • [NETWORK] is the name of the network to which you want to connect your instance.
    • [SUBNETWORK] is the name of the subnetwork to which you want to connect your instance.

    Create an instance group using the compute.instanceGroupManagers.insert method:

    POST https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instanceGroupManagers
    
    {
     "baseInstanceName": "default",
     "name": "[INSTANCE_GROUP_NAME]",
     "targetSize": [SIZE],
     "instanceTemplate": "/global/instanceTemplates/[INSTANCE_TEMPLATE]"
    }
    

    where:

    • [PROJECT_ID] is your project ID.
    • [ZONE] is the zone where your node group is located.
    • [INSTANCE_GROUP_NAME] is the name for this instance group.
    • [BASE_INSTANCE_NAME] is the prefix name for each of the instances in your managed instance group.
    • [SIZE] is the number of VM instances that you want to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group.
    • [INSTANCE_TEMPLATE] is the name of the instance template that you want to use to create this group. The template must have a node affinity pointing to your desired node group.

Configuring node affinity

Node affinity determines which nodes your instances and managed instance groups use as host systems. By default, each node has the following affinity labels:

  • Each node in a group has an affinity label that matches the name of the node group:
    • Key: compute.googleapis.com/node-group-name
    • Value: The name of the node group.
  • Each node has an affinity label that matches the name of the node. Node names are generated automatically:
    • Key: compute.googleapis.com/node-name
    • Value: The name of the individual node.

You can configure additional affinity or anti-affinity labels so that your instances run only on the node groups that you want, or share nodes only with instances of the same affinity type. This lets you keep sensitive workloads together on specific node groups, separated from your other node groups and from other VM instances running on Compute Engine.
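For example, a minimal node affinity file that pins instances to a single node group using only the default group label might look like the following (the group name my-node-group is a hypothetical example):

```json
[
 {
  "key": "compute.googleapis.com/node-group-name",
  "operator": "IN",
  "values": ["my-node-group"]
 }
]
```

Passing a file like this to --node-affinity-file when creating an instance restricts that instance to nodes in my-node-group.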

For example, if you want to create one node group for development and a separate node group for production workloads, you could use the following process:

  1. Create two node templates with two different labels, workload=frontend,environment=prod and workload=frontend,environment=dev. The workload label indicates that these node groups are intended for frontend workloads, while the environment label distinguishes between prod environments and dev environments:

    gcloud compute sole-tenancy node-templates create production-template \
        --region us-west1 --node-requirements vCPU=any,memory=any,localSSD=0 \
        --node-affinity-labels workload=frontend,environment=prod
    

    gcloud compute sole-tenancy node-templates create development-template \
        --region us-east1 --node-requirements vCPU=any,memory=any,localSSD=0 \
        --node-affinity-labels workload=frontend,environment=dev
    

  2. Create several node groups using the production and development templates. For example, you might have one large production node group and multiple smaller development node groups. Optionally, you could create these groups in different zones and with different target sizes to accommodate the scale of the workloads:

    gcloud compute sole-tenancy node-groups create production-group \
        --node-template production-template --target-size 5 --zone us-west1-b
    

    gcloud compute sole-tenancy node-groups create development-group-east1 \
        --node-template development-template --target-size 1 --zone us-east1-d
    

    gcloud compute sole-tenancy node-groups create development-group-east2 \
        --node-template development-template --target-size 1 --zone us-east1-d
    

  3. For your production instances, create a node-affinity-prod.json file that defines the node affinity for your production instances. For example, you might create a file that restricts instances to run only on nodes that have both the workload=frontend and environment=prod affinities:

    [
     {
      "key": "workload",
      "operator": "IN",
      "values": ["frontend"]
     },
     {
      "key": "environment",
      "operator": "IN",
      "values": ["prod"]
     }
    ]
    
  4. Use the node-affinity-prod.json file to create an instance template with the properties that you want for your production VM instances:

    gcloud compute instance-templates create production-template \
        --image-family production-images --image-project my-project \
        --node-affinity-file node-affinity-prod.json \
        --custom-cpu 4 --custom-memory 4096MB
    

  5. Create a managed instance group using the production instance template so that its instances run on your production nodes:

    gcloud compute instance-groups managed create production-group \
        --zone us-west1-b --size 4 --template production-template
    

    The instances in the group start and run only on node groups that have both the workload=frontend and environment=prod affinity labels.

  6. For development instances, create a node-affinity-dev.json file that defines the node affinity for your development instances. For example, you might create a file that configures instances to run on any node group that has the workload=frontend affinity but not the environment=prod affinity:

    [
     {
      "key": "workload",
      "operator": "IN",
      "values": ["frontend"]
     },
     {
      "key": "environment",
      "operator": "NOT_IN",
      "values": ["prod"]
     }
    ]
    
  7. For development, you might create an individual instance to do testing rather than an entire instance group. Use the node-affinity-dev.json file to create that instance. For example, if you want to test a specific development image named development-image-1, you would create the instance and configure its affinities with the following command:

    gcloud compute instances create dev-1 \
        --image development-image-1 --image-project my-project \
        --node-affinity-file node-affinity-dev.json \
        --custom-cpu 4 --custom-memory 4096MB --zone us-east1-d
    

    This instance starts and runs only on node groups that have the workload=frontend affinity. It does not run on any node group configured with the environment=prod affinity.

To create your own affinity configurations, create node templates with your own affinity keys and values, and use them to run node groups. Then, configure instances with your own affinity.json files that determine which nodes your instances can run on.
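As a sketch of a custom configuration (all names, labels, and the region here are hypothetical), you might isolate a payments workload on its own nodes:

```shell
# Node template with a custom affinity label.
gcloud compute sole-tenancy node-templates create payments-template \
    --region us-central1 --node-requirements vCPU=any,memory=any,localSSD=0 \
    --node-affinity-labels tier=payments

# Node group built from that template.
gcloud compute sole-tenancy node-groups create payments-group \
    --zone us-central1-a --node-template payments-template --target-size 1

# node-affinity-payments.json contains:
#   [{"key": "tier", "operator": "IN", "values": ["payments"]}]

# Instance restricted to nodes that carry the tier=payments label.
gcloud compute instances create payments-vm-1 \
    --zone us-central1-a --image-family my-images --image-project my-project \
    --node-affinity-file node-affinity-payments.json \
    --custom-cpu 4 --custom-memory 4096MB
```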
