GPUs on Compute Engine

Google Compute Engine provides graphics processing units (GPUs) that you can add to your virtual machine instances. You can use these GPUs to accelerate specific workloads on your instances such as machine learning and data processing.

To learn how to add GPUs to your instances, read Adding GPUs to Instances.


Compute Engine provides NVIDIA® Tesla® P100 and K80 GPUs for your instances in passthrough mode so that your virtual machine instances have direct control over the GPUs and their associated memory.

GPU models are available in the following stages:

  • NVIDIA® Tesla® P100: Beta
  • NVIDIA® Tesla® K80: Generally Available

GPU models released in Beta are not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes. Additionally, GPU models released in Beta might not always be available in combination with other Compute Engine features, such as Local SSDs.

You can add GPUs to any non-shared-core predefined machine type or custom machine type that you are able to create in a zone. However, the number of vCPUs and the amount of system memory you can configure are capped by the number of GPUs you attach: configurations with fewer GPUs allow fewer vCPUs and less memory, while higher GPU counts allow more of both. All GPU instances are limited to a maximum of 208 GB of system memory.

| GPU model           | GPUs   | GPU boards | GPU memory  | Available vCPUs | Available memory |
| NVIDIA® Tesla® P100 | 1 GPU  | 1 board    | 16 GB HBM2  | 1 - 16 vCPUs    | 1 - 104 GB       |
|                     | 2 GPUs | 2 boards   | 32 GB HBM2  | 1 - 32 vCPUs    | 1 - 208 GB       |
|                     | 4 GPUs | 4 boards   | 64 GB HBM2  | 1 - 64 vCPUs    | 1 - 208 GB       |
| NVIDIA® Tesla® K80  | 1 GPU  | 1/2 board  | 12 GB GDDR5 | 1 - 8 vCPUs     | 1 - 52 GB        |
|                     | 2 GPUs | 1 board    | 24 GB GDDR5 | 1 - 16 vCPUs    | 1 - 104 GB       |
|                     | 4 GPUs | 2 boards   | 48 GB GDDR5 | 1 - 32 vCPUs    | 1 - 208 GB       |
|                     | 8 GPUs | 4 boards   | 96 GB GDDR5 | 1 - 64 vCPUs    | 1 - 208 GB       |

Available zones for NVIDIA® Tesla® P100:

  • us-west1-b
  • us-east1-c
  • europe-west1-b
  • europe-west1-d
  • asia-east1-a

Available zones for NVIDIA® Tesla® K80:

  • us-west1-b
  • us-east1-c
  • us-east1-d
  • europe-west1-b
  • europe-west1-d
  • asia-east1-a
  • asia-east1-b
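The per-configuration limits in the table above can be encoded as a small lookup. The following sketch is illustrative only (the model keys and function name are assumptions, not part of any Compute Engine API); it checks whether a requested instance shape fits the vCPU and memory ranges for a given GPU model and count:

```python
# Per-configuration limits transcribed from the table above.
# model -> {gpu_count: (max_vcpus, max_memory_gb)}
GPU_LIMITS = {
    "nvidia-tesla-p100": {1: (16, 104), 2: (32, 208), 4: (64, 208)},
    "nvidia-tesla-k80": {1: (8, 52), 2: (16, 104), 4: (32, 208), 8: (64, 208)},
}

def validate_config(model, gpu_count, vcpus, memory_gb):
    """Return True if the requested shape fits the limits for this GPU count."""
    limits = GPU_LIMITS.get(model)
    if limits is None or gpu_count not in limits:
        return False
    max_vcpus, max_memory = limits[gpu_count]
    return 1 <= vcpus <= max_vcpus and 1 <= memory_gb <= max_memory

print(validate_config("nvidia-tesla-k80", 1, 8, 52))   # fits the 1-GPU K80 row
print(validate_config("nvidia-tesla-k80", 1, 16, 52))  # too many vCPUs for one K80
```

For example, a single K80 allows at most 8 vCPUs and 52 GB of memory, so requesting 16 vCPUs with one K80 fails the check.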

GPU devices receive sustained use discounts similar to vCPUs. Read the Compute Engine pricing page to see hourly and monthly pricing for GPU devices.


Instances with GPUs have specific restrictions that make them behave differently from other instance types.

  • GPU devices are available to attach only to instances with the Broadwell CPU platform and a maximum of 64 vCPUs.

  • GPU instances are limited to a maximum 208 GB of system memory.

  • GPU instances must terminate for host maintenance events, but can automatically restart. These maintenance events typically occur once per week, but can occur more frequently when necessary. You must configure your workloads to handle these maintenance events cleanly. Specifically, long-running workloads like machine learning and high-performance computing (HPC) must handle the interruption of host maintenance events. Learn how to handle host maintenance events on instances with GPUs.

  • You must have GPU quota before you can create instances with GPUs. Check the quotas page to ensure that you have enough GPU quota available in your project, and request a quota increase if necessary. New projects and Free Trial accounts do not receive GPU quota by default.

  • Instances with one or more GPUs have a maximum number of vCPUs for each GPU that you add to the instance. For example, each NVIDIA® Tesla® K80 GPU allows you to have up to eight vCPUs and up to 52 GB of system memory in your instance machine type. To see the available vCPU and memory ranges for different GPU configurations, see the GPUs list.

  • You cannot attach GPUs to instances with shared-core machine types.

  • You cannot add GPUs to preemptible instances.

  • GPUs require device drivers to function properly. NVIDIA GPUs running on Google Compute Engine must use NVIDIA driver version 375.51 or 384.66, or a later release. In most cases, you can obtain these drivers by installing the latest version of the NVIDIA CUDA Toolkit. For instructions on installing drivers on instances with GPUs, see Installing GPU drivers.

  • Instances with a specific attached GPU model are covered by the Google Compute Engine SLA only if that attached GPU model is available in more than one zone in the same region where the instance is located. For example, instances with NVIDIA® Tesla® K80 GPUs in the us-west1-b zone are not covered by the Google Compute Engine SLA because no other zone in the us-west1 region provides that specific GPU model.

  • Instances with NVIDIA® Tesla® P100 GPUs in europe-west1-d cannot use Local SSD devices.
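The driver requirement above (version 375.51 or 384.66, or later) can be checked with a simple version comparison. This sketch is illustrative: in practice the version string would come from a tool such as `nvidia-smi`, and treating any release at or above 375.51 as acceptable is an assumption about how "or greater" applies:

```python
# Minimum driver versions named in the restriction above.
MINIMUM_DRIVERS = [(375, 51), (384, 66)]

def parse_version(version_string):
    """Turn a dotted version string like '384.66' into a comparable tuple."""
    return tuple(int(part) for part in version_string.split("."))

def driver_supported(version_string):
    """True if the driver meets at least one of the stated minimums."""
    version = parse_version(version_string)
    return any(version >= minimum for minimum in MINIMUM_DRIVERS)

print(driver_supported("384.66"))  # meets the 384.66 minimum
print(driver_supported("375.26"))  # older than both minimums
```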
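The SLA rule above (coverage only when the attached GPU model is offered in more than one zone of the instance's region) can be worked through with the zone lists from this page. The zone data is transcribed from the availability table; the function names are illustrative:

```python
# Zone availability per GPU model, transcribed from the table on this page.
GPU_ZONES = {
    "nvidia-tesla-p100": ["us-west1-b", "us-east1-c", "europe-west1-b",
                          "europe-west1-d", "asia-east1-a"],
    "nvidia-tesla-k80": ["us-west1-b", "us-east1-c", "us-east1-d",
                         "europe-west1-b", "europe-west1-d",
                         "asia-east1-a", "asia-east1-b"],
}

def region_of(zone):
    # A zone name is its region plus a one-letter suffix, e.g. us-west1-b.
    return zone.rsplit("-", 1)[0]

def sla_covered(model, zone):
    """True if the model is available in at least two zones of the region."""
    region = region_of(zone)
    zones_in_region = [z for z in GPU_ZONES[model] if region_of(z) == region]
    return len(zones_in_region) > 1

print(sla_covered("nvidia-tesla-k80", "us-west1-b"))  # only K80 zone in us-west1
print(sla_covered("nvidia-tesla-k80", "us-east1-c"))  # us-east1-c and -d both offer K80s
```

This reproduces the example in the restriction: a K80 instance in us-west1-b is not covered because us-west1-b is the only zone in us-west1 offering K80s, whereas us-east1 offers them in two zones.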
