About GPUs in GKE


This page describes GPUs in Google Kubernetes Engine (GKE), including use cases, supported features and GPU types, and the differences between Autopilot and Standard modes. For instructions on how to attach and use GPUs in your workloads, refer to Deploy GPU workloads on Autopilot or Run GPUs on Standard node pools.

GPU availability in GKE

In GKE Autopilot and Standard, you can attach GPU hardware to nodes in your clusters, and then allocate GPU resources to containerized workloads running on those nodes. You can use these accelerators to perform resource-intensive tasks, such as the following:

  • Machine learning (ML) inference and training
  • Large-scale data processing
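
For example, a workload requests GPUs by setting a resource limit on its containers. The following Pod specification is a minimal sketch; the names and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example  # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-container
    # Placeholder image; use any image that can drive the GPU.
    image: nvidia/cuda:11.0.3-base-ubuntu20.04
    command: ["nvidia-smi"]
    resources:
      limits:
        # GPUs are requested as whole units through this resource limit.
        nvidia.com/gpu: 1
```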

The GPU hardware that's available for use in GKE is a subset of the Compute Engine GPUs for compute workloads. GKE offers some GPU-specific features, such as time-sharing and multi-instance GPUs, that can improve the efficiency with which your workloads use the GPU resources on your nodes.

The specific hardware that's available depends on the Compute Engine region or zone of your cluster. For specific availability, refer to GPU regions and zones.

GPU quota

Your GPU quota is the maximum number of GPUs that can run in your Google Cloud project. To use GPUs in your GKE clusters, your project must have enough GPU quota.

Your GPU quota should be at least equal to the total number of GPUs you intend to run in your cluster. If you enable cluster autoscaling, you should request GPU quota at least equivalent to your cluster's maximum number of nodes multiplied by the number of GPUs per node.

For example, if you create a cluster with three nodes that each run two GPUs, your project requires a GPU quota of at least six.

To request additional GPU quota, follow the instructions in Requesting a higher quota limit, using gpus as the metric.

GPU support in Autopilot and Standard

GPUs are available in Autopilot and Standard clusters. The following list describes the differences between Autopilot and Standard GPU support:

GPU hardware availability
  • Autopilot: NVIDIA T4 and NVIDIA A100
  • Standard: All GPU types that are supported by Compute Engine

Selecting a GPU
  • Autopilot: You request a GPU quantity and type in your Pod specification (see the sketch after this comparison). Autopilot installs stable drivers and manages the nodes.
  • Standard: You do the following:
    1. Create a node pool with the specific GPU type and corresponding Compute Engine machine type, and choose a driver to install.
    2. Manually install GPU drivers on the nodes if you didn't use automatic installation.
    3. Request GPU quantities in your Pod specification.
    For instructions, refer to Run GPUs on Standard node pools.

Additional GPU features
  • Autopilot: N/A
  • Standard: Time-sharing GPUs and multi-instance GPUs

Pricing
  • Autopilot: Autopilot GPU Pod pricing
  • Standard: Compute Engine GPU pricing
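
For example, an Autopilot Pod specification selects the GPU type with a node selector and the quantity with a resource limit. The following sketch assumes the cloud.google.com/gke-accelerator node label with an NVIDIA T4 value; verify the exact label values in the Autopilot GPU documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: autopilot-gpu-example  # placeholder name
spec:
  nodeSelector:
    # Selects the GPU type; the value shown assumes NVIDIA T4.
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
  containers:
  - name: training-container
    image: nvidia/cuda:11.0.3-base-ubuntu20.04  # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1
```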

In Autopilot, GKE manages driver installation, node scaling, Pod isolation, and node provisioning. We recommend choosing a cluster mode for your GPUs based on the flexibility and level of control you want over your nodes, as follows:

  • If you want to focus on deploying your GPU-based workloads without needing to manage the nodes, and if the available GPU types suit your needs, use Autopilot.
  • If you prefer to manage your nodes, driver versions, scaling, isolation, and underlying machines yourself, or if you require features such as time-sharing and multi-instance GPUs, use Standard.

GPU features in GKE

GKE provides additional features that you can use to optimize the resource usage of your GPU workloads, so that GPU resources on your nodes aren't wasted. By default, Kubernetes assigns GPUs to containers only as whole units, even if a container needs only a fraction of the available GPU or doesn't use the GPU continuously.

The following features are available in GKE to reduce the amount of underutilized GPU resources:

Time-sharing GPUs

Available on: Standard

Presents a single GPU as multiple units to multiple containers on a node. The GPU driver context-switches and allocates the full GPU resources to each assigned container as needed over time.
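
As an illustration, a Pod can target time-shared GPUs through node selectors. The following sketch assumes a Standard node pool that was created with time-sharing enabled and configured to allow two clients per GPU; check the time-sharing documentation for the exact label keys and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: timeshared-gpu-example  # placeholder name
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
    # These labels assume a node pool created with time-sharing enabled.
    cloud.google.com/gke-gpu-sharing-strategy: time-sharing
    cloud.google.com/gke-max-shared-clients-per-gpu: "2"
  containers:
  - name: shared-gpu-container
    image: nvidia/cuda:11.0.3-base-ubuntu20.04  # placeholder image
    resources:
      limits:
        # The container still requests one unit; the unit is a time share.
        nvidia.com/gpu: 1
```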

Multi-instance GPUs

Available on: Standard

Splits a single GPU into up to seven hardware-isolated instances that can be assigned as individual GPUs to containers on a node. Each assigned container gets the resources available to that instance.
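
For example, assuming an NVIDIA A100 node pool whose GPUs are partitioned into 1g.5gb instances, a Pod can request one instance as follows. This is a sketch; the partition-size label value must match the node pool configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-gpu-example  # placeholder name
spec:
  nodeSelector:
    # Assumes GPUs on the node are partitioned into 1g.5gb instances.
    cloud.google.com/gke-gpu-partition-size: 1g.5gb
  containers:
  - name: mig-container
    image: nvidia/cuda:11.0.3-base-ubuntu20.04  # placeholder image
    resources:
      limits:
        # Each requested unit is one hardware-isolated GPU instance.
        nvidia.com/gpu: 1
```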

About the NVIDIA CUDA-X libraries

CUDA is NVIDIA's parallel computing platform and programming model for GPUs. To run CUDA applications, the container image that you use must include the NVIDIA CUDA-X libraries. To add the NVIDIA CUDA-X libraries, use one of the following methods:

  • Recommended: Use an image with the NVIDIA CUDA-X libraries pre-installed. For example, you can use Deep Learning Containers. These containers pre-install the key data science frameworks, the NVIDIA CUDA-X libraries, and tools. Alternatively, the NVIDIA CUDA image contains only the NVIDIA CUDA-X libraries.
  • Build and use your own image. In this case, include the following values in the LD_LIBRARY_PATH environment variable in your container specification:
    1. /usr/local/cuda-CUDA_VERSION/lib64: the location of the NVIDIA CUDA-X libraries on the node. Replace CUDA_VERSION with the CUDA-X image version that you used. Some versions also contain debug utilities in /usr/local/nvidia/bin. For details, see the NVIDIA CUDA image on DockerHub.
    2. /usr/local/nvidia/lib64: the location of the NVIDIA device drivers.
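
For example, if you build your own image, your container specification can set LD_LIBRARY_PATH as follows. This is a sketch; the image path is a placeholder, and CUDA version 12.2 is used only for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-cuda-example  # placeholder name
spec:
  containers:
  - name: custom-cuda-container
    # Placeholder path for your own image with the CUDA-X libraries built in.
    image: us-docker.pkg.dev/my-project/my-repo/my-cuda-app:latest
    env:
    - name: LD_LIBRARY_PATH
      # 12.2 is illustrative; match the CUDA version that your image uses.
      value: /usr/local/cuda-12.2/lib64:/usr/local/nvidia/lib64
    resources:
      limits:
        nvidia.com/gpu: 1
```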

To check the minimum GPU driver version required for your version of CUDA, see CUDA Toolkit and Compatible Driver Versions. Ensure that the GKE patch version running on your nodes includes a GPU driver version that's compatible with your chosen CUDA version. For a list of the GPU driver versions associated with each GKE version, refer to the corresponding Container-Optimized OS page linked in the GKE current versions table.

In Autopilot clusters, GKE manages the driver version selection and installation.

Monitor GPU nodes

If your GKE cluster has system metrics enabled, then the following metrics are available in Cloud Monitoring to monitor your GPU workload performance:

  • Duty Cycle (container/accelerator/duty_cycle): Percentage of time over the past sample period (10 seconds) during which the accelerator was actively processing. The value is between 1 and 100.
  • Memory Usage (container/accelerator/memory_used): Amount of accelerator memory allocated in bytes.
  • Memory Capacity (container/accelerator/memory_total): Total accelerator memory in bytes.

You can use predefined dashboards to monitor your clusters with GPU nodes. For more information, see View observability metrics. For general information about monitoring your clusters and their resources, refer to Observability for GKE.

View usage metrics for workloads

You can view your workloads' GPU usage metrics from the Workloads dashboard in the Google Cloud console.

To view your workload GPU usage, perform the following steps:

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads
  2. Select a workload.

The Workloads dashboard displays charts for GPU memory usage and capacity, and GPU duty cycle.

What's next