Create an N1 VM that has attached GPUs


This document explains how to create a VM that has attached GPUs and uses the N1 machine family. You can use most N1 machine types except N1 shared-core machine types.

Before you begin

  • If you want to use the command-line examples in this guide, do the following:
    1. Install or update to the latest version of the Google Cloud CLI.
    2. Set a default region and zone. An example of setting these defaults follows this list.
  • If you want to use the API examples in this guide, set up API access.
  • To review additional prerequisite steps such as selecting an OS image and checking GPU quota, review the overview document.
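
If you use the command-line examples, you can set gcloud CLI defaults similar to the following. The us-east1 region and us-east1-d zone shown here are only placeholder values; substitute a region and zone that work for your project:

gcloud config set compute/region us-east1
gcloud config set compute/zone us-east1-d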

Overview

The following GPU models can be attached to VMs that use the N1 machine family.

NVIDIA GPUs:

  • NVIDIA T4: nvidia-tesla-t4
  • NVIDIA P4: nvidia-tesla-p4
  • NVIDIA P100: nvidia-tesla-p100
  • NVIDIA V100: nvidia-tesla-v100
  • NVIDIA K80: nvidia-tesla-k80. See NVIDIA K80 EOL.

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

  • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

    For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
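
To check which of these GPU models are offered in a particular zone, you can list the accelerator types that are visible to your project. The following gcloud command is one way to do this; us-east1-d is a placeholder zone:

gcloud compute accelerator-types list --filter="zone:us-east1-d"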

Create a VM that has attached GPUs

You can create an N1 VM that has attached GPUs by using the Google Cloud console, the Google Cloud CLI, or the Compute Engine API.

Console

  1. In the Google Cloud console, go to the Create an instance page.

    Go to Create an instance

  2. Specify a Name for your VM. See Resource naming convention.

  3. Select a region and zone where GPUs are available. See the list of available GPU zones.

  4. In the Machine configuration section, select the GPUs machine family, and then do the following:

    1. In the GPU type list, select one of the GPU models supported on N1 machines.
    2. In the Number of GPUs list, select the number of GPUs.
    3. If your GPU model supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads, and you plan on running graphics-intensive workloads on this VM, select Enable Virtual Workstation (NVIDIA GRID).

    4. In the Machine type list, select one of the preset N1 machine types. Alternatively, you can also specify custom machine type settings.

  5. In the Boot disk section, click Change. This opens the Boot disk configuration page.

  6. On the Boot disk configuration page, do the following:

    1. On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
    2. Specify a boot disk size of at least 40 GB.
    3. To confirm your boot disk options, click Select.
  7. Expand the Advanced options section.

    1. Expand the Management section.
    2. In the On host maintenance list, select Terminate VM instance. VMs with attached GPUs can't live migrate. See Handle GPU host events.
    3. In the Automatic restart list, select On.
  8. Configure any other VM settings that you require. For example, you can change the Preemptibility settings to configure your VM as a preemptible instance. This reduces the cost of your VM and the attached GPUs. For more information, see GPUs on preemptible instances.

  9. To create and start the VM, click Create.

gcloud

To create and start a VM, use the gcloud compute instances create command with the following flags.

The --preemptible flag is optional. It configures your VM as a preemptible instance, which reduces the cost of your VM and the attached GPUs. For more information, see GPUs on preemptible instances.

gcloud compute instances create VM_NAME \
    --machine-type MACHINE_TYPE \
    --zone ZONE \
    --boot-disk-size DISK_SIZE \
    --accelerator type=ACCELERATOR_TYPE,count=ACCELERATOR_COUNT \
    [--image IMAGE | --image-family IMAGE_FAMILY] \
    --image-project IMAGE_PROJECT \
    --maintenance-policy TERMINATE --restart-on-failure \
    [--preemptible]

Replace the following:

  • VM_NAME: the name for the new VM.
  • MACHINE_TYPE: the machine type that you selected for your VM. To use a custom N1 machine type instead, see the sketch after this list.
  • ZONE: the zone for the VM. This zone must support the GPU type.
  • DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
  • IMAGE or IMAGE_FAMILY that supports GPUs. Specify one of the following:

    • IMAGE: the required version of a public image. For example, --image debian-10-buster-v20200309.
    • IMAGE_FAMILY: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify --image-family debian-10, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.

    You can also specify a custom image or Deep Learning VM Images.

  • IMAGE_PROJECT: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.

  • ACCELERATOR_COUNT: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.

  • ACCELERATOR_TYPE: the GPU model that you want to use. If you plan on running graphics-intensive workloads on this VM, use one of the virtual workstation models.

    Choose one of the following values:

    • NVIDIA GPUs:

      • NVIDIA T4: nvidia-tesla-t4
      • NVIDIA P4: nvidia-tesla-p4
      • NVIDIA P100: nvidia-tesla-p100
      • NVIDIA V100: nvidia-tesla-v100
      • NVIDIA K80: nvidia-tesla-k80. See NVIDIA K80 EOL.
    • NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

      • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
      • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
      • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

        For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
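
If none of the preset N1 machine types fit your workload, you can instead request a custom N1 machine type by replacing the --machine-type flag with the --custom-cpu and --custom-memory flags. The following is a minimal sketch; the vCPU count and memory size are placeholder values that you should adjust to your workload:

gcloud compute instances create VM_NAME \
    --custom-cpu 4 \
    --custom-memory 16GB \
    --zone ZONE \
    --boot-disk-size DISK_SIZE \
    --accelerator type=ACCELERATOR_TYPE,count=ACCELERATOR_COUNT \
    --image-family IMAGE_FAMILY \
    --image-project IMAGE_PROJECT \
    --maintenance-policy TERMINATE --restart-on-failure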

Example

For example, you can use the following gcloud command to start an Ubuntu 22.04 VM with 1 NVIDIA T4 GPU and 2 vCPUs in the us-east1-d zone.

gcloud compute instances create gpu-instance-1 \
    --machine-type n1-standard-2 \
    --zone us-east1-d \
    --boot-disk-size 40GB \
    --accelerator type=nvidia-tesla-t4,count=1 \
    --image-family ubuntu-2204-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE --restart-on-failure
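
After the command completes, you can optionally confirm that the GPU is attached by describing the instance. For example, the following command prints the guest accelerators configured for the VM created above:

gcloud compute instances describe gpu-instance-1 \
    --zone us-east1-d \
    --format="value(guestAccelerators)"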

API

Identify the GPU type that you want to add to your VM. Submit a GET request to list the GPU types that are available to your project in a specific zone.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/acceleratorTypes

Replace the following:

  • PROJECT_ID: project ID.
  • ZONE: zone from which you want to list the available GPU types.
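
For example, you can send this request with curl and use the gcloud CLI to supply an access token. In the following sketch, my-project and us-east1-d are placeholder values:

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-east1-d/acceleratorTypes"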

Next, create a POST request to the instances.insert method. Include the acceleratorType parameter to specify which GPU model you want to use, and include the acceleratorCount parameter to specify how many GPUs you want to add. Also set the onHostMaintenance parameter to TERMINATE.

The "preemptible": true setting is an optional parameter that configures your VM as a preemptible instance. This reduces the cost of your VM and the attached GPUs. For more information, see GPUs on preemptible instances.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks":
  [
    {
      "type": "PERSISTENT",
      "initializeParams":
      {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces":
  [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "guestAccelerators":
  [
    {
      "acceleratorCount": ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/ACCELERATOR_TYPE"
    }
  ],
  "scheduling":
  {
    "onHostMaintenance": "terminate",
    "automaticRestart": true,
    ["preemptible": true]
  }
}

Replace the following:

  • VM_NAME: the name of the VM.
  • PROJECT_ID: your project ID.
  • ZONE: the zone for the VM. This zone must support the GPU type.
  • MACHINE_TYPE: the machine type that you selected for the VM. See GPUs on Compute Engine to see what machine types are available based on your desired GPU count.
  • IMAGE or IMAGE_FAMILY: specify one of the following:

    • IMAGE: the required version of a public image. For example, "sourceImage": "projects/debian-cloud/global/images/debian-10-buster-v20200309"
    • IMAGE_FAMILY: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify "sourceImage": "projects/debian-cloud/global/images/family/debian-10", Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.

    You can also specify a custom image or Deep Learning VM Images.

  • IMAGE_PROJECT: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.

  • DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.

  • NETWORK: the VPC network that you want to use for the VM. You can specify default to use your default network.

  • ACCELERATOR_COUNT: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.

  • ACCELERATOR_TYPE: the GPU model that you want to use. If you plan on running graphics-intensive workloads on this VM, use one of the virtual workstation models.

    Choose one of the following values:

    • NVIDIA GPUs:

      • NVIDIA T4: nvidia-tesla-t4
      • NVIDIA P4: nvidia-tesla-p4
      • NVIDIA P100: nvidia-tesla-p100
      • NVIDIA V100: nvidia-tesla-v100
      • NVIDIA K80: nvidia-tesla-k80. See NVIDIA K80 EOL.
    • NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

      • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
      • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
      • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

        For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your VM.
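
As with the GET request, you can send this instances.insert request with curl. In the following sketch, the request body shown earlier is assumed to be saved in a local file named request.json, and my-project and us-east1-d are placeholder values:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-east1-d/instances"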

Install drivers

To install the drivers, choose one of the following options:

  • Install GPU drivers. Use this option for most compute workloads.
  • Install drivers for NVIDIA RTX Virtual Workstations (vWS). Use this option if you created the VM with one of the virtual workstation GPU models.
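
After the drivers are installed, you can typically verify that they load correctly by running nvidia-smi on the VM. For example, the following command connects to the VM from the earlier gcloud example and runs the check in one step:

gcloud compute ssh gpu-instance-1 --zone us-east1-d --command "nvidia-smi"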

What's next?