This document explains how to create a VM that uses an accelerator-optimized machine family. The accelerator-optimized machine family is available in A2 standard and ultra machine types, and the G2 standard machine type.
Each accelerator-optimized machine type has a fixed number of a specific NVIDIA GPU model attached.
- For A2 accelerator-optimized machine types, NVIDIA A100 GPUs are attached. These are available in both A100 40GB and A100 80GB options.
- For G2 accelerator-optimized machine types, NVIDIA L4 GPUs are attached.
Before you begin
- If you want to use the command-line examples in this guide, do the following:
- Install or update to the latest version of the Google Cloud CLI.
- Set a default region and zone.
- If you want to use the API examples in this guide, set up API access.
- To review additional prerequisite steps such as selecting an OS image and checking GPU quota, review the overview document.
Create a VM that has attached GPUs
You can create an A2 or G2 accelerator-optimized VM by using the Google Cloud console, the Google Cloud CLI, or the Compute Engine API.
To make some customizations to your G2 VMs, you might need to use the Google Cloud CLI or the Compute Engine API. See G2 standard limitations.
Console
In the Google Cloud console, go to the Create an instance page.
Specify a Name for your VM. See Resource naming convention.
Select a region and zone where GPUs are available. See the list of available GPU regions and zones.
In the Machine configuration section, select the GPUs machine family, and then do the following:
In the GPU type list, select your GPU type.
- For A2 accelerator-optimized VMs, select either NVIDIA A100 40GB or NVIDIA A100 80GB.
- For G2 accelerator-optimized VMs, select NVIDIA L4.
In the Number of GPUs list, select the number of GPUs.
If your GPU model supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads, and you plan on running graphics-intensive workloads on this VM, select Enable Virtual Workstation (NVIDIA GRID).
In the Boot disk section, click Change. This opens the Boot disk configuration page.
On the Boot disk configuration page, do the following:
- On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
- Specify a boot disk size of at least 40 GB.
- To confirm your boot disk options, click Select.
Configure any other VM settings that you require. For example, you can change the Preemptibility settings to configure your VM as a preemptible instance. This reduces the cost of your VM and the attached GPUs. For more information, see GPUs on preemptible instances.
To create and start the VM, click Create.
gcloud
To create and start a VM, use the gcloud compute instances create command with the following flags. VMs with GPUs can't live migrate, so make sure that you set the --maintenance-policy TERMINATE flag.

The following optional flags are shown in the sample command:

- The --preemptible flag, which configures your VM as a preemptible instance. This reduces the cost of your VM and the attached GPUs. For more information, see GPUs on preemptible instances.
- The --accelerator flag to specify a virtual workstation. NVIDIA RTX Virtual Workstations (vWS) are supported for only G2 VMs.

```shell
gcloud compute instances create VM_NAME \
    --machine-type=MACHINE_TYPE \
    --zone=ZONE \
    --boot-disk-size=DISK_SIZE \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --maintenance-policy=TERMINATE --restart-on-failure \
    [--preemptible] \
    [--accelerator=type=nvidia-l4-vws,count=VWS_ACCELERATOR_COUNT]
```
Replace the following:

- VM_NAME: the name for the new VM.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify --machine-type=g2-custom-4-19456.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- IMAGE: an operating system image that supports GPUs. If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs. For example: --image-family=rocky-linux-8-optimized-gcp. You can also specify a custom image or Deep Learning VM Images.
- IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If you use a custom image or Deep Learning VM Images, specify the project that those images belong to.
- VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.
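The custom-memory rule for G2 machine types (a multiple of 1024 MB, written in MB in the machine type name) is easy to get wrong by hand. The following is a minimal shell sketch; g2_custom_type is an illustrative helper name, not a gcloud command.

```shell
# Sketch: build a G2 custom machine type name from a vCPU count and memory
# in whole GB. Whole-GB values are always a multiple of 1024 MB, so they
# satisfy the custom-memory rule described above.
g2_custom_type() {
  local vcpus="$1" mem_gb="$2"
  local mem_mb=$(( mem_gb * 1024 ))   # convert GB to MB
  echo "g2-custom-${vcpus}-${mem_mb}"
}

g2_custom_type 4 19   # prints g2-custom-4-19456, matching the example above
```

The printed value can then be passed to the --machine-type flag.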
API
In the API, create a POST request to the instances.insert method. VMs with GPUs can't live migrate, so make sure that you set the onHostMaintenance parameter to TERMINATE.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "SOURCE_IMAGE_URI"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "scheduling": {
    "onHostMaintenance": "terminate",
    "automaticRestart": true
  }
}
```
Replace the following:

- VM_NAME: the name of the VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify g2-custom-4-19456 as the machine type.
- SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use. For example:
  - Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
  - Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"

  When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image family best practices.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- NETWORK: the VPC network that you want to use for the VM. You can specify default to use your default network.
Additional settings:

- You can reduce the cost of your VM and the attached GPUs by using preemptible VMs. For more information, see GPUs on preemptible instances. To set the VM to be preemptible, add the "preemptible": true option to your request:

  ```
  "scheduling": {
    "onHostMaintenance": "terminate",
    "automaticRestart": true,
    "preemptible": true
  }
  ```

- For G2 VMs, NVIDIA RTX Virtual Workstations (vWS) are supported. To specify a virtual workstation, add the guestAccelerators option to your request. Replace VWS_ACCELERATOR_COUNT with the number of virtual GPUs that you need:

  ```
  "guestAccelerators": [
    {
      "acceleratorCount": VWS_ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/nvidia-l4-vws"
    }
  ]
  ```
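Because the request body is JSON, a stray trailing comma or mismatched brace makes the API call fail with a parse error. One way to catch that before sending the request, sketched here with placeholder project, zone, and image values, is to assemble the body in a file and validate it with a JSON parser:

```shell
# Sketch: write the instances.insert request body to a file and confirm it
# is valid JSON before sending it. All resource names below are
# placeholders for illustration.
cat > /tmp/instance-request.json <<'EOF'
{
  "name": "my-g2-vm",
  "machineType": "projects/my-project/zones/us-central1-a/machineTypes/g2-standard-8",
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "initializeParams": {
        "diskSizeGb": "40",
        "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"
      }
    }
  ],
  "networkInterfaces": [
    { "network": "projects/my-project/global/networks/default" }
  ],
  "scheduling": { "onHostMaintenance": "terminate", "automaticRestart": true }
}
EOF

# Fails with a parse error if the body is malformed (for example, a
# trailing comma left over from editing).
python3 -m json.tool /tmp/instance-request.json > /dev/null && echo "request body OK"
```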
Limitations
A2 standard
- You don't receive sustained use discounts for VMs that use A2 standard machine types.
- You can only use A2 standard machine types in certain regions and zones.
- A2 standard machine types are available only on the Cascade Lake platform.
- You can't use regional persistent disks on VMs that use A2 standard machine types.
- You can't use the a2-megagpu-16g A2 standard machine type on Windows operating systems. When using Windows operating systems, choose a different A2 standard machine type.
- You can't do a quick format of the attached local SSDs on Windows VMs that use A2 standard machine types. To format these local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.
A2 ultra
- You don't receive sustained use discounts for VMs that use A2 ultra machine types.
- You can only use A2 ultra machine types in certain regions and zones.
- A2 ultra machine types are available only on the Cascade Lake platform.
- You can't use regional persistent disks on VMs that use A2 ultra machine types.
- You can't do a quick format of the attached local SSDs on Windows VMs that use A2 ultra machine types. To format these local SSDs, you must do a full format by using the diskpart utility and specifying format fs=ntfs label=tmpfs.
- You can't change the machine type for a VM that uses an A2 ultra machine type. If you need to use a different A2 ultra machine type, or any other machine family, you must create a new VM.
- You can't change any other machine type to an A2 ultra machine type. If you need a VM that uses an A2 ultra machine type, you must create a new VM.
G2 standard
- You don't receive sustained use discounts for VMs that use G2 standard machine types.
- You can only use G2 standard machine types in certain regions and zones.
- G2 standard machine types are available only on the Cascade Lake platform.
- You can't use regional persistent disks on VMs that use G2 standard machine types.
- Standard persistent disks (pd-standard) are not supported on VMs that use G2 standard machine types. For supported disk types, see G2 standard VMs.
- You can't create Multi-Instance GPUs on G2 standard machine types.
- You can't use Deep Learning VM Images as boot disks for your VMs that use G2 standard machine types.
- The current default driver for Container-Optimized OS doesn't support L4 GPUs running on G2 machine types. Container-Optimized OS also supports only a select set of drivers. If you want to use Container-Optimized OS on G2 machine types, review the following notes:
  - Use a Container-Optimized OS version that supports the minimum recommended NVIDIA driver version 525.60.13 or later. For more information, review the Container-Optimized OS release notes.
  - When you install the driver, specify the latest available version that works for the L4 GPUs. For example, sudo cos-extensions install gpu -- -version=525.60.13.
- You need to use the Google Cloud CLI or the Compute Engine API to create G2 VMs for the following scenarios:
  - You want to specify custom memory values.
  - You want to customize the number of visible CPU cores.
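The minimum-driver requirement above can be checked with an ordinary version-sort comparison. The following is a small sketch; the installed version string is hard-coded for illustration, standing in for the value you would read from nvidia-smi on the VM.

```shell
# Sketch: check an NVIDIA driver version against the minimum recommended
# for L4 GPUs (525.60.13). On a real VM you would read the installed
# version from nvidia-smi instead of hard-coding it.
MIN_DRIVER="525.60.13"
installed="525.105.17"

# sort -V orders version strings numerically; if the minimum sorts first
# (or the two are equal), the installed driver satisfies the requirement.
if [ "$(printf '%s\n%s\n' "$MIN_DRIVER" "$installed" | sort -V | head -n1)" = "$MIN_DRIVER" ]; then
  echo "driver $installed meets the minimum $MIN_DRIVER"
else
  echo "driver $installed is older than $MIN_DRIVER" >&2
fi
```

Note that a plain string comparison would get this wrong: "525.105.17" sorts before "525.60.13" lexicographically, but is the newer driver.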
Install drivers
For the VM to use the GPU, you need to install the GPU driver on your VM.
Examples
In these examples, VMs are created by using the Google Cloud CLI. However, you can also use either the Google Cloud console or the Compute Engine API to create these VMs.
The following examples show how to create VMs using the following images:
- Public or custom image. This example uses a G2 VM.
- Deep Learning VM Images. This example uses an A2 VM.
- Container-Optimized OS (COS) image. This example uses an A2 VM.
Public OS image (G2)
You can create VMs that have attached GPUs that use either a public image that is available on Compute Engine or a custom image.
To create a VM using the most recent, non-deprecated image from the Rocky Linux 8 optimized for Google Cloud image family that uses the g2-standard-8 machine type and has an NVIDIA RTX Virtual Workstation, complete the following steps:

Create the VM. In this example, optional flags such as boot disk type and size are also specified.

```shell
gcloud compute instances create VM_NAME \
    --project=PROJECT_ID \
    --zone=us-central1-a \
    --machine-type=g2-standard-8 \
    --maintenance-policy=TERMINATE --restart-on-failure \
    --network-interface=nic-type=GVNIC \
    --accelerator=type=nvidia-l4-vws,count=1 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --boot-disk-size=200GB \
    --boot-disk-type=pd-ssd
```
Replace the following:

- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
Install the NVIDIA driver and CUDA. For NVIDIA L4 GPUs, CUDA version XX or higher is required.
DLVM image (A2)
Using DLVM images is the easiest way to get started because these images already have the NVIDIA drivers and CUDA libraries pre-installed.
These images also provide performance optimizations.
The following DLVM images are supported for NVIDIA A100:

- common-cu110: NVIDIA driver and CUDA pre-installed
- tf-ent-1-15-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 1.15.3 pre-installed
- tf2-ent-2-1-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.1.1 pre-installed
- tf2-ent-2-3-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.3.1 pre-installed
- pytorch-1-6-cu110: NVIDIA driver, CUDA, PyTorch 1.6 pre-installed
For more information about the DLVM images that are available, and the packages installed on the images, see the Deep Learning VM documentation.
Create a VM using the tf2-ent-2-3-cu110 image and the a2-highgpu-1g machine type. In this example, optional flags such as boot disk size and scope are specified.

```shell
gcloud compute instances create VM_NAME \
    --project PROJECT_ID \
    --zone us-central1-c \
    --machine-type a2-highgpu-1g \
    --maintenance-policy TERMINATE --restart-on-failure \
    --image-family tf2-ent-2-3-cu110 \
    --image-project deeplearning-platform-release \
    --boot-disk-size 200GB \
    --metadata "install-nvidia-driver=True,proxy-mode=project_editors" \
    --scopes https://www.googleapis.com/auth/cloud-platform
```
Replace the following:

- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
The preceding example command also generates a Vertex AI Workbench user-managed notebooks instance for the VM. To access the notebook, in the Google Cloud console, go to the Vertex AI Workbench > User-managed notebooks page.
COS (A2)
You can create VMs that have attached GPUs by using Container-Optimized OS (COS) images.
To create a VM using the cos-85-lts image and the a2-highgpu-1g machine type, complete the following steps from your local client. The following examples can be run on a Mac or Linux client:

If one doesn't already exist, create a /tmp directory.

```shell
mkdir /tmp
```

Add a configuration file cloud-init.yaml to the /tmp directory. This information is required to set up your Container-Optimized OS VM and also installs the NVIDIA driver and CUDA when the VM boots up.
```shell
cat <<'EOF' > /tmp/cloud-init.yaml
#cloud-config

write_files:
  - path: /etc/systemd/system/cos-gpu-installer.service
    permissions: 0755
    owner: root
    content: |
      [Unit]
      Description=Run the GPU driver installer container
      Requires=network-online.target gcr-online.target
      After=network-online.target gcr-online.target

      [Service]
      User=root
      Type=oneshot
      RemainAfterExit=true
      Environment=INSTALL_DIR=/var/lib/nvidia
      ExecStartPre=/bin/mkdir -p ${INSTALL_DIR}
      ExecStartPre=/bin/mount --bind ${INSTALL_DIR} ${INSTALL_DIR}
      ExecStartPre=/bin/mount -o remount,exec ${INSTALL_DIR}
      ExecStart=/usr/bin/docker run --privileged \
          --net=host \
          --pid=host \
          --volume ${INSTALL_DIR}:/usr/local/nvidia \
          --volume /dev:/dev \
          --volume /:/root \
          --env NVIDIA_DRIVER_VERSION=450.80.02 \
          gcr.io/cos-cloud/cos-gpu-installer:v20200701
      StandardOutput=journal+console
      StandardError=journal+console

runcmd:
  - systemctl daemon-reload
  - systemctl enable cos-gpu-installer.service
  - systemctl start cos-gpu-installer.service
EOF
```
Create a Container-Optimized OS VM using the cos-85-lts image family and the a2-highgpu-1g machine type. You need to provide the configuration file by using the --metadata-from-file user-data flag. In this example, the optional boot disk size flag is also specified.

```shell
gcloud compute instances create VM_NAME \
    --project PROJECT_ID \
    --zone us-central1-a \
    --machine-type a2-highgpu-1g \
    --maintenance-policy TERMINATE --restart-on-failure \
    --image-family cos-85-lts \
    --image-project cos-cloud \
    --boot-disk-size 200GB \
    --metadata-from-file user-data=/tmp/cloud-init.yaml
```
Replace the following:

- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
After the VM is created, log in to the VM and run the following command to verify that the NVIDIA driver is installed. It takes approximately 5 minutes for the driver to be installed.

```shell
/var/lib/nvidia/bin/nvidia-smi
```
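Rather than waiting a fixed five minutes, you can poll until the driver responds. The wait_for_cmd helper below is an illustrative sketch, not part of Container-Optimized OS or the gcloud CLI.

```shell
# Sketch: retry a command every 5 seconds until it succeeds or a timeout
# (in seconds) is reached. Returns 0 on success, 1 on timeout.
wait_for_cmd() {
  local timeout_s="$1"; shift
  local elapsed=0
  until "$@" > /dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout_s" ]; then
      echo "timed out after ${timeout_s}s waiting for: $*" >&2
      return 1
    fi
    sleep 5
    elapsed=$(( elapsed + 5 ))
  done
}

# On the VM: wait up to 10 minutes for the driver installer to finish.
# wait_for_cmd 600 /var/lib/nvidia/bin/nvidia-smi
```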
Multi-Instance GPUs (A2 VMs only)
A Multi-Instance GPU partitions a single NVIDIA A100 GPU within the same VM into as many as seven independent GPU instances. They run simultaneously, each with its own memory, cache, and streaming multiprocessors. This setup enables the A100 GPU to deliver guaranteed quality-of-service (QoS) at up to 7x higher utilization compared to earlier GPU models. With A100 40GB GPUs, each Multi-Instance GPU instance can be allocated up to 5 GB of memory, and with the A100 80GB GPU's increased memory capacity, that size can be doubled to 10 GB.
For more information about using Multi-Instance GPUs, see NVIDIA Multi-Instance GPU User Guide.
To create Multi-Instance GPUs, complete the following steps:
Create a VM that has attached A100 GPUs.
Enable NVIDIA GPU drivers.
Enable Multi-Instance GPUs and reboot the VM.
```shell
sudo nvidia-smi -mig 1
sudo reboot
```
Review the Multi-Instance GPU shapes that are available.
```shell
sudo nvidia-smi mig --list-gpu-instance-profiles
```
The output is similar to the following:
```
+--------------------------------------------------------------------------+
| GPU instance profiles:                                                   |
| GPU   Name          ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                           Free/Total   GiB              CE    JPEG  OFA  |
|==========================================================================|
|   0  MIG 1g.5gb     19     7/7        4.75       No     14     0     0   |
|                                                          1     0     0   |
+--------------------------------------------------------------------------+
|   0  MIG 2g.10gb    14     3/3        9.75       No     28     1     0   |
|                                                          2     0     0   |
+--------------------------------------------------------------------------+
|   0  MIG 3g.20gb     9     2/2       19.62       No     42     2     0   |
|                                                          3     0     0   |
+--------------------------------------------------------------------------+
|   0  MIG 4g.20gb     5     1/1       19.62       No     56     2     0   |
|                                                          4     0     0   |
+--------------------------------------------------------------------------+
|   0  MIG 7g.40gb     0     1/1       39.50       No     98     5     0   |
|                                                          7     1     1   |
+--------------------------------------------------------------------------+
```
Create the Multi-Instance GPU (GI) and associated compute instances (CI) that you want. You can create these instances by specifying either the full or shortened profile name, profile ID, or a combination of both. For more information, see Creating GPU Instances.
The following example creates two MIG 3g.20gb GPU instances by using a combination of the shortened profile name (3g.20gb) and the profile ID (9). The -C flag is also specified, which creates the associated compute instances for the required profile.

```shell
sudo nvidia-smi mig -cgi 9,3g.20gb -C
```
The output is similar to the following:
```
Successfully created GPU instance ID  2 on GPU  0 using profile MIG 3g.20gb (ID  9)
Successfully created compute instance ID  0 on GPU  0 GPU instance ID  2 using profile MIG 3g.20gb (ID  2)
Successfully created GPU instance ID  1 on GPU  0 using profile MIG 3g.20gb (ID  9)
Successfully created compute instance ID  0 on GPU  0 GPU instance ID  1 using profile MIG 3g.20gb (ID  2)
```
Check that the two Multi-Instance GPUs are created:
```shell
sudo nvidia-smi mig -lgi
```
The output is similar to the following:
```
+----------------------------------------------------+
| GPU instances:                                     |
| GPU   Name          Profile  Instance   Placement  |
|                     ID       ID         Start:Size |
|====================================================|
|   0  MIG 3g.20gb       9        1          0:4     |
+----------------------------------------------------+
|   0  MIG 3g.20gb       9        2          4:4     |
+----------------------------------------------------+
```
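This check can also be scripted. In the sketch below, a captured sample of the -lgi output is embedded in a function so the parsing can be shown without a GPU; on the VM you would pipe sudo nvidia-smi mig -lgi instead.

```shell
# Sketch: count created MIG GPU instances by parsing nvidia-smi mig -lgi
# output. sample_output is an illustrative stand-in for the real command.
sample_output() {
cat <<'EOF'
+----------------------------------------------------+
| GPU instances:                                     |
| GPU   Name          Profile  Instance   Placement  |
|                     ID       ID         Start:Size |
|====================================================|
|   0  MIG 3g.20gb       9        1          0:4     |
+----------------------------------------------------+
|   0  MIG 3g.20gb       9        2          4:4     |
+----------------------------------------------------+
EOF
}

# Count the rows that list a 3g.20gb GPU instance.
count=$(sample_output | grep -c 'MIG 3g.20gb')
echo "created $count MIG 3g.20gb instances"   # created 2 MIG 3g.20gb instances
```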
Check that both the GIs and corresponding CIs are created.
```shell
sudo nvidia-smi
```
The output is similar to the following:
```
Tue May 18 18:32:22 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  A100-SXM4-40GB      Off  | 00000000:00:04.0 Off |                   On |
| N/A   43C    P0    52W / 350W |     22MiB / 40537MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| MIG devices:                                                                |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|  0    1   0   0  |     11MiB / 20096MiB | 42      0 |  3   0    2    0    0 |
|                  |      0MiB / 32767MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|  0    2   0   1  |     11MiB / 20096MiB | 42      0 |  3   0    2    0    0 |
|                  |      0MiB / 32767MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
What's next?
- Learn more about GPU platforms.
- Add Local SSDs to your instances. Local SSD devices pair well with GPUs when your apps require high-performance storage.
- Install the GPU drivers.
- If you enabled an NVIDIA RTX virtual workstation, install a driver for the virtual workstation.
- To handle GPU host maintenance, see Handling GPU host maintenance events.