GPUs on Compute Engine

Google Compute Engine provides graphics processing units (GPUs) that you can add to your virtual machine instances. You can use these GPUs to accelerate specific workloads on your instances, such as machine learning and data processing.

To learn how to add GPUs to your instances, read Adding GPUs to Instances.


Compute Engine provides NVIDIA® Tesla® P100 and K80 GPUs for your instances in passthrough mode so that your virtual machine instances have direct control over the GPUs and their associated memory.

GPU models are available in the following stages:

  • NVIDIA® Tesla® P100: Beta
  • NVIDIA® Tesla® K80: Generally Available

GPU models released in Beta are not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes. Additionally, GPU models released in Beta might not always be available in combination with other Compute Engine features, such as Local SSDs.

You can add GPUs to any non-shared-core predefined machine type or custom machine type that you are able to create in a zone. However, the number of GPUs attached to an instance limits the maximum number of vCPUs and the amount of system memory available to that instance. In general, higher numbers of GPUs let you create instances with higher numbers of vCPUs and more system memory. In the asia-east1-a and us-east1-d zones, you can create GPU instances with up to 416 GB of memory. In all other zones, GPU instances are limited to a maximum of 208 GB of system memory.

GPU model: NVIDIA® Tesla® P100
Available zones: us-west1-b, us-central1-c, us-central1-f, us-east1-c, europe-west1-b, europe-west1-d, asia-east1-a

  • 1 GPU (1 board, 16 GB HBM2): 1 - 16 vCPUs, 1 - 104 GB of memory
  • 2 GPUs (2 boards, 32 GB HBM2): 1 - 32 vCPUs, 1 - 208 GB of memory
  • 4 GPUs (4 boards, 64 GB HBM2): 1 - 64 vCPUs, 1 - 208 GB of memory

GPU model: NVIDIA® Tesla® K80
Available zones: us-west1-b, us-central1-c, us-east1-c, us-east1-d, europe-west1-b, europe-west1-d, asia-east1-a, asia-east1-b

  • 1 GPU (1/2 board, 12 GB GDDR5): 1 - 8 vCPUs, 1 - 52 GB of memory
  • 2 GPUs (1 board, 24 GB GDDR5): 1 - 16 vCPUs, 1 - 104 GB of memory
  • 4 GPUs (2 boards, 48 GB GDDR5): 1 - 32 vCPUs, 1 - 208 GB of memory
  • 8 GPUs (4 boards, 96 GB GDDR5): 1 - 64 vCPUs, 1 - 416 GB of memory in asia-east1-a and us-east1-d, 1 - 208 GB in all other zones
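The vCPU and memory limits above scale with the number of attached GPUs. As a sanity check when planning a machine type, the table can be encoded as a simple lookup; this is an illustrative sketch only, and the `GPU_LIMITS` dictionary and `max_resources` function are hypothetical names, not part of any Google API.

```python
# Limits taken directly from the table above.
# Keys are (GPU model, GPU count); values are (max vCPUs, max memory in GB).
GPU_LIMITS = {
    ("nvidia-tesla-p100", 1): (16, 104),
    ("nvidia-tesla-p100", 2): (32, 208),
    ("nvidia-tesla-p100", 4): (64, 208),
    ("nvidia-tesla-k80", 1): (8, 52),
    ("nvidia-tesla-k80", 2): (16, 104),
    ("nvidia-tesla-k80", 4): (32, 208),
    ("nvidia-tesla-k80", 8): (64, 208),  # 416 GB in two zones; see below
}

# Zones where 8 x K80 instances support up to 416 GB of memory.
HIGH_MEMORY_ZONES = {"asia-east1-a", "us-east1-d"}

def max_resources(model, gpu_count, zone):
    """Return (max_vcpus, max_memory_gb) for a GPU configuration and zone."""
    vcpus, memory = GPU_LIMITS[(model, gpu_count)]
    if model == "nvidia-tesla-k80" and gpu_count == 8 and zone in HIGH_MEMORY_ZONES:
        memory = 416
    return vcpus, memory
```

For example, `max_resources("nvidia-tesla-k80", 8, "asia-east1-a")` reflects the one configuration whose memory ceiling depends on the zone.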

GPU devices receive sustained use discounts similar to vCPUs. Read the Compute Engine pricing page to see hourly and monthly pricing for GPU devices.

GPUs on preemptible instances

You can add GPUs to your preemptible VM instances at lower preemptible prices for the GPUs. GPUs attached to preemptible instances work like normal GPUs but persist only for the life of the instance. When you add a GPU to a preemptible instance, you use your regular GPU quota. If you need quota specifically for preemptible GPUs, request a separate Preemptible GPU quota.

During maintenance events, preemptible instances with GPUs are preempted by default and cannot be automatically restarted. If you want to recreate your instances after they have been preempted, use a managed instance group. Managed instance groups recreate your instances if the vCPU, memory, and GPU resources are available.

If you want a warning before your instance is preempted, or if you want your instance to restart automatically after a maintenance event, use a non-preemptible instance with a GPU. For non-preemptible instances with GPUs, Google provides one hour of advance notice before the maintenance event.
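One way to act on that advance notice is to watch the Compute Engine metadata server from inside the instance. The sketch below assumes the `instance/maintenance-event` metadata endpoint and its `TERMINATE_ON_HOST_MAINTENANCE` value as described in the Compute Engine metadata documentation; the helper names are illustrative, and the decision logic is separated from the HTTP call so it can be exercised without a live instance.

```python
import urllib.request

# Metadata server endpoint that reports upcoming maintenance events.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/maintenance-event")

def fetch_maintenance_event(url=METADATA_URL):
    """Read the current maintenance-event value from the metadata server.

    Only works from inside a Compute Engine instance; the Metadata-Flavor
    header is required by the metadata server.
    """
    req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

def should_checkpoint(event_value):
    """Return True when the host is scheduled to terminate for maintenance."""
    return event_value == "TERMINATE_ON_HOST_MAINTENANCE"
```

A long-running workload could poll `should_checkpoint(fetch_maintenance_event())` periodically and save its state as soon as the value changes from `NONE`.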

For steps to automatically restart a non-preemptible instance, see Updating options for an instance.

To learn how to create preemptible instances with GPUs attached, read Creating an instance with a GPU.


Restrictions

Instances with GPUs have specific restrictions that make them behave differently from other instance types.

  • You can attach GPU devices only to instances with the Broadwell CPU platform and a maximum of 64 vCPUs.

  • In the asia-east1-a and us-east1-d zones, GPU instances are limited to a maximum of 416 GB of memory. In all other zones, GPU instances are limited to a maximum of 208 GB of system memory.

  • GPU instances must terminate for host maintenance events, but can automatically restart. These maintenance events typically occur once per week, but can occur more frequently when necessary. You must configure your workloads to handle these maintenance events cleanly. Specifically, long-running workloads like machine learning and high-performance computing (HPC) must handle the interruption of host maintenance events. Learn how to handle host maintenance events on instances with GPUs.

  • You must have GPU quota before you can create instances with GPUs. Check the quotas page to ensure that you have enough GPUs available in your project and, if necessary, request a quota increase. New projects and Free Trial accounts do not receive GPU quota by default.

  • Instances with one or more GPUs have a maximum number of vCPUs for each GPU that you add to the instance. For example, each NVIDIA® Tesla® K80 GPU allows you to have up to eight vCPUs and up to 52 GB of system memory in your instance machine type. To see the available vCPU and memory ranges for different GPU configurations, see the GPUs list.

  • You cannot attach GPUs to instances with shared-core machine types.

  • GPUs require device drivers to function properly. NVIDIA GPUs running on Google Compute Engine must use the following minimum driver versions:

    • Linux instances:
      • R384 branch: NVIDIA 384.111 driver or greater
      • R390 branch: Not yet available
    • Windows Server instances:
      • R384 branch: NVIDIA 386.07 driver or greater
      • R390 branch: Not yet available
  • Instances with a specific attached GPU model are covered by the Google Compute Engine SLA only if that attached GPU model is available in more than one zone in the same region where the instance is located. The Google Compute Engine SLA does not cover specific GPU models in the following zones:

    • NVIDIA® Tesla® P100:
      • Released in Beta with no SLA in any zones.
    • NVIDIA® Tesla® K80:
      • us-west1-b
      • us-central1-c
  • Instances with NVIDIA® Tesla® P100 GPUs in europe-west1-d cannot use Local SSD devices.
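The driver requirement above can be verified from inside an instance by comparing the version that `nvidia-smi` reports against the documented minimums (384.111 on Linux, 386.07 on Windows Server). The sketch below is illustrative: the function names are hypothetical, and it assumes driver versions are dotted numeric strings.

```python
# Minimum R384-branch driver versions from the list above, as (major, minor).
MINIMUM_DRIVER = {"linux": (384, 111), "windows": (386, 7)}

def parse_driver_version(text):
    """Turn a version string like '384.111' into a comparable tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def driver_is_supported(os_family, version_text):
    """Check an installed driver version against the documented minimum."""
    return parse_driver_version(version_text) >= MINIMUM_DRIVER[os_family]
```

Tuple comparison handles the version ordering correctly here: `384.90` is older than `384.111` even though `0.90 > 0.111` as decimals, which is why the parts are compared as integers rather than as a floating-point number.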
