Create groups of GPU VMs in bulk


You can create a group of virtual machines (VMs) that have attached graphics processing units (GPUs) by using the bulk creation process. With bulk creation, the request is validated upfront and fails fast if it isn't feasible. Also, if you use the region flag, the bulk creation API automatically chooses a zone that has enough capacity to fulfill the request. For more information about bulk creation, see About bulk creation of VMs.

Before you begin

  • For additional prerequisite steps, such as selecting an OS image and checking GPU quota, see the overview document.
  • If you haven't already, set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine as follows.

    Select the tab for how you plan to use the samples on this page:

    gcloud

    1. Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init
    2. Set a default region and zone.
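
       For example, you can set these defaults with the gcloud config command. The us-central1 and us-central1-a values here are placeholders; choose a region and zone that support your GPU model:

       gcloud config set compute/region us-central1
       gcloud config set compute/zone us-central1-a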

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI, then initialize it by running the following command:

      gcloud init

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
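
    For example, you can generate an OAuth access token for your gcloud CLI credentials with the following command and then pass the token in the Authorization: Bearer header of your REST calls. This is a general pattern, not specific to the samples on this page:

      gcloud auth print-access-token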

Required roles

To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
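
For example, an administrator can grant this role by using the gcloud CLI. This is a sketch; PROJECT_ID and USER_EMAIL are placeholders for your project ID and the user's email address:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"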

This predefined role contains the permissions required to create VMs. The exact permissions that are required are listed in the following Required permissions section:

Required permissions

The following permissions are required to create VMs:

  • compute.instances.create on the project
  • To use a custom image to create the VM: compute.images.useReadOnly on the image
  • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
  • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
  • To assign a legacy network to the VM: compute.networks.use on the project
  • To specify a static IP address for the VM: compute.addresses.use on the project
  • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
  • To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
  • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
  • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
  • To set tags for the VM: compute.instances.setTags on the VM
  • To set labels for the VM: compute.instances.setLabels on the VM
  • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
  • To create a new disk for the VM: compute.disks.create on the project
  • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
  • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk

You might also be able to get these permissions with custom roles or other predefined roles.

Overview

When creating VMs with attached GPUs using the bulk creation method, you can choose to create VMs in a region (such as us-central1) or in a specific zone (such as us-central1-a).

If you choose to specify a region, Compute Engine places the VMs in any zone within the region that supports GPUs.
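
In gcloud CLI terms, the difference is only in which location flag you pass to the bulk create command. The following is a flag-level sketch; the remaining flags (machine type, image, boot disk, and maintenance settings) are the same as in the full examples later on this page:

# Regional request: Compute Engine picks any zone in us-central1 that has capacity.
gcloud compute instances bulk create --region=us-central1 [...other flags...]

# Zonal request: every VM is created in us-central1-a.
gcloud compute instances bulk create --zone=us-central1-a [...other flags...]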

Create groups of accelerator-optimized VMs

The accelerator-optimized machine family includes the A3 High, A3 Mega, A2 Standard, A2 Ultra, and G2 machine types.

Each accelerator-optimized machine type has a specific model of NVIDIA GPUs attached.

  • For A3 accelerator-optimized machine types, NVIDIA H100 80GB GPUs are attached. These are available in the following options:
    • A3 High (a3-highgpu-8g): this machine type has H100 80GB GPUs attached
    • A3 Mega (a3-megagpu-8g): this machine type has H100 80GB Mega GPUs attached
  • For A2 accelerator-optimized machine types, NVIDIA A100 GPUs are attached. These are available in the following options:
    • A2 Standard (a2-highgpu-*, a2-megagpu-*): these machine types have A100 40GB GPUs attached
    • A2 Ultra (a2-ultragpu-*): these machine types have A100 80GB GPUs attached
  • For G2 accelerator-optimized machine types (g2-standard-*), NVIDIA L4 GPUs are attached.

You create a group of accelerator-optimized VMs by using either the Google Cloud CLI or REST.

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

The following optional flags are shown in the example command:

  • The --provisioning-model=SPOT flag is optional and configures your VMs as Spot VMs. If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. For Spot VMs, the automatic restart and host maintenance options are disabled.

  • The --accelerator flag is optional and specifies a virtual workstation. NVIDIA RTX Virtual Workstations (vWS) are supported only for G2 VMs.

Example

The following example command creates two VMs that have attached GPUs:

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --region=REGION \
    --count=2 \
    --machine-type=MACHINE_TYPE \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --on-host-maintenance=TERMINATE \
    [--provisioning-model=SPOT] \
    [--accelerator=type=nvidia-l4-vws,count=VWS_ACCELERATOR_COUNT]

Replace the following:

  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the machine type that you selected. Choose from one of the following:

    • An A3 machine type.
    • An A2 machine type.
    • A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify --machine-type=g2-custom-4-19456.
  • IMAGE: an operating system image that supports GPUs.

    If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs. For example: --image-family=rocky-linux-8-optimized-gcp.

    You can also specify a custom image or Deep Learning VM Images.

  • IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.

  • VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
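
To confirm which zone Compute Engine selected, you can list the new VMs after the request completes. This optional check assumes the my-test-vm-# name pattern from the example:

gcloud compute instances list --filter="name~my-test-vm"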

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this method, see Create VMs in bulk.

Example

This example creates two VMs that have attached GPUs by using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • Each VM has two GPUs attached, specified by using the appropriate accelerator-optimized machine type

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert
    {
    "namePattern":"my-test-vm-#",
    "count":"2",
    "instanceProperties": {
      "machineType":MACHINE_TYPE,
      "disks":[
        {
          "type":"PERSISTENT",
          "initializeParams":{
            "diskSizeGb":"200",
            "sourceImage":SOURCE_IMAGE_URI
          },
          "boot":true
        }
      ],
      "name": "default",
      "networkInterfaces":
      [
        {
          "network": "projects/PROJECT_ID/global/networks/default"
        }
      ],
      "scheduling":{
        "onHostMaintenance":"TERMINATE",
        ["automaticRestart":true]
      }
    }
    }
    

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the machine type that you selected. Choose from one of the following:

    • An A2 machine type.
    • A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify g2-custom-4-19456 as the machine type.
  • SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use.

    For example:

    • Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
    • Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp".

    When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image family best practices.

Additional settings:

  • If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. To use a Spot VM, add the "provisioningModel": "SPOT" option to your request. For Spot VMs, the automatic restart and host maintenance options are disabled.

    "scheduling":
      {
        "provisioningModel": "SPOT"
      }
    
  • For G2 VMs, NVIDIA RTX Virtual Workstations (vWS) are supported. To specify a virtual workstation, add the guestAccelerators option to your request. Replace VWS_ACCELERATOR_COUNT with the number of virtual GPUs that you need.

    "guestAccelerators":
     [
       {
         "acceleratorCount": VWS_ACCELERATOR_COUNT,
         "acceleratorType": "projects/PROJECT_ID/zones/ZONEacceleratorTypes/nvidia-l4-vws"
       }
     ]
    

Limitations

A3 VMs

The following limitations apply to VMs that use A3 High and A3 Mega machine types:

  • You don't receive sustained use discounts and flexible committed use discounts for VMs that use A3 machine types.
  • You can only use A3 machine types in certain regions and zones.
  • You can't use regional persistent disks on VMs that use A3 machine types.
  • The A3 machine type is only available on the Sapphire Rapids platform.
  • If your VM uses an A3 machine type, you can't change the machine type. If you need to change the machine type, you must create a new VM.
  • You can't change the machine type of a VM to an A3 machine type. If you need a VM that uses an A3 machine type, you must create a new VM.
  • A3 machine types don't support sole-tenancy.
  • You can't run Windows operating systems on A3 machine types.
  • You can reserve A3 machine types only through certain reservations.

A2 Standard VMs

  • You don't receive sustained use discounts and flexible committed use discounts for VMs that use A2 Standard machine types.
  • You can only use A2 Standard machine types in certain regions and zones.
  • You can't use regional persistent disks on VMs that use A2 Standard machine types.
  • The A2 Standard machine type is only available on the Cascade Lake platform.
  • If your VM uses an A2 Standard machine type, you can only switch from one A2 Standard machine type to another A2 Standard machine type. You can't change to any other machine type. For more information, see Modify accelerator-optimized VMs.
  • You can't use Windows operating systems with a2-megagpu-16g machine types. When using Windows operating systems, choose a different A2 Standard machine type.
  • You can't do a quick format of the attached Local SSDs on Windows VMs that use A2 Standard machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.
  • A2 Standard machine types don't support sole-tenancy.

A2 Ultra VMs

  • You don't receive sustained use discounts and flexible committed use discounts for VMs that use A2 Ultra machine types.
  • You can only use A2 Ultra machine types in certain regions and zones.
  • You can't use regional persistent disks on VMs that use A2 Ultra machine types.
  • The A2 Ultra machine type is only available on the Cascade Lake platform.
  • If your VM uses an A2 Ultra machine type, you can't change the machine type. If you need to use a different A2 Ultra machine type, or any other machine type, you must create a new VM.
  • You can't change any other machine type to an A2 Ultra machine type. If you need a VM that uses an A2 Ultra machine type, you must create a new VM.
  • You can't do a quick format of the attached Local SSDs on Windows VMs that use A2 Ultra machine types. To format these Local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.

G2 VMs

  • You don't receive sustained use discounts and flexible committed use discounts for VMs that use G2 machine types.
  • You can only use G2 machine types in certain regions and zones.
  • You can't use regional persistent disks on VMs that use G2 machine types.
  • The G2 machine type is only available on the Cascade Lake platform.
  • Standard persistent disks (pd-standard) are not supported on VMs that use G2 standard machine types. For supported disk types, see Supported disk types for G2.
  • You can't create Multi-Instance GPUs on G2 machine types.
  • If you need to change the machine type of a G2 VM, review Modify accelerator-optimized VMs.
  • You can't use Deep Learning VM Images as boot disks for your VMs that use G2 machine types.
  • The current default driver for Container-Optimized OS doesn't support L4 GPUs running on G2 machine types. Container-Optimized OS also supports only a select set of drivers. If you want to use Container-Optimized OS on G2 machine types, review the following notes:
    • Use a Container-Optimized OS version that supports the minimum recommended NVIDIA driver version 525.60.13 or later. For more information, review the Container-Optimized OS release notes.
    • When you install the driver, specify the latest available version that works for the L4 GPUs. For example, sudo cos-extensions install gpu -- -version=525.60.13.
  • You must use the Google Cloud CLI or REST to create G2 VMs for the following scenarios:
    • You want to specify custom memory values.
    • You want to customize the number of visible CPU cores.
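
For example, the following gcloud CLI sketch creates a single G2 VM with custom memory and a reduced number of visible CPU cores. The VM name, machine type, core count, zone, and image values are illustrative assumptions; adjust them for your workload:

gcloud compute instances create my-g2-custom-vm \
    --zone=us-central1-a \
    --machine-type=g2-custom-4-19456 \
    --visible-core-count=1 \
    --boot-disk-size=200 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --on-host-maintenance=TERMINATE \
    --restart-on-failure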

Create groups of N1 general-purpose VMs

You create a group of VMs with attached GPUs by using either the Google Cloud CLI or REST.

This section describes how to create multiple VMs using the following GPU types:

NVIDIA GPUs:

  • NVIDIA T4: nvidia-tesla-t4
  • NVIDIA P4: nvidia-tesla-p4
  • NVIDIA P100: nvidia-tesla-p100
  • NVIDIA V100: nvidia-tesla-v100

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

  • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

    For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
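
The gcloud CLI example later in this section uses plain nvidia-tesla-t4 GPUs. To request virtual workstations instead, specify a vWS accelerator type. The following is a sketch; the name pattern, machine type, region, and image values are illustrative assumptions:

gcloud compute instances bulk create \
    --name-pattern="my-vws-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4-vws,count=1 \
    --boot-disk-size=200 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --on-host-maintenance=TERMINATE \
    --restart-on-failure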

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count flags
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=n1-standard-2 \
    --accelerator type=nvidia-tesla-t4,count=2 \
    --boot-disk-size=200 \
    --metadata="install-nvidia-driver=True" \
    --scopes="https://www.googleapis.com/auth/cloud-platform" \
    --image=pytorch-latest-gpu-v20211028-debian-10 \
    --image-project=deeplearning-platform-release \
    --on-host-maintenance=TERMINATE --restart-on-failure

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this method, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count flags
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-central1/instances/bulkInsert

{
    "namePattern":"my-test-vm-#",
    "count":"2",
    "instanceProperties": {
      "machineType":"n1-standard-2",
      "disks":[
        {
          "type":"PERSISTENT",
          "initializeParams":{
            "diskSizeGb":"200",
            "sourceImage":"projects/deeplearning-platform-release/global/images/pytorch-latest-gpu-v20211028-debian-10"
          },
          "boot":true
        }
      ],
      "name": "default",
      "networkInterfaces":
      [
        {
          "network": "projects/PROJECT_ID/global/networks/default"
        }
      ],
      "guestAccelerators":
      [
        {
          "acceleratorCount": 2,
          "acceleratorType": "nvidia-tesla-t4"
        }
      ],
      "scheduling":{
        "onHostMaintenance":"TERMINATE",
        "automaticRestart":true
      },
      "metadata":{
        "items":[
          {
            "key":"install-nvidia-driver",
            "value":"True"
          }
        ]
      }
    }
}

What's next?