**Note:** The following considerations apply to this GA offering:

- Jobs that use GPUs incur charges as specified in the Dataflow
  [pricing page](/dataflow/pricing).
- To use GPUs, your Dataflow job must use
  [Dataflow Runner v2](/dataflow/docs/runner-v2).

This page provides background information on how GPUs work with
Dataflow, including prerequisites and supported GPU types.
Using GPUs in Dataflow jobs lets you accelerate
some data processing tasks. GPUs can perform certain computations faster
than CPUs. These computations are usually numeric or linear algebra
operations, which are common in image processing and machine learning
use cases. The
extent of performance improvement varies by the use case, type of computation,
and amount of data processed.
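
To make this concrete, the following is a small, hypothetical sketch (not taken from the Dataflow documentation) of the kind of linear algebra step that benefits from a GPU: a batched matrix multiplication that runs on the GPU when one is available and falls back to the CPU otherwise. It assumes PyTorch is installed on the worker, and the array shapes are arbitrary.

```python
# Sketch only: batched matrix multiplication, the kind of numeric work that
# GPUs accelerate in image-processing and model-inference steps.
# Assumes PyTorch is installed in the worker environment.
import numpy as np
import torch

# Use the GPU when one is visible to the process, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def batched_matmul(batch: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Copy the inputs to the selected device, multiply there, and copy the
    # result back to host memory as a NumPy array.
    x = torch.from_numpy(batch).to(device)
    w = torch.from_numpy(weights).to(device)
    return (x @ w).cpu().numpy()


if __name__ == "__main__":
    batch = np.random.rand(1024, 512).astype(np.float32)
    weights = np.random.rand(512, 256).astype(np.float32)
    print(batched_matmul(batch, weights).shape)  # (1024, 256)
```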
Prerequisites for using GPUs in Dataflow
- To use GPUs with your Dataflow job, you must use Runner v2.
- Dataflow runs user code in worker VMs inside a Docker container. These
  worker VMs run [Container-Optimized OS](/container-optimized-os/docs). For
  Dataflow jobs to use GPUs, you need the following prerequisites:
  - GPU drivers are installed on worker VMs and accessible to the Docker
    container. For more information, see
    [Install GPU drivers](/dataflow/docs/gpu/use-gpus#drivers).
  - GPU libraries required by your pipeline, such as
    [NVIDIA CUDA-X libraries](https://developer.nvidia.com/gpu-accelerated-libraries)
    or the [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit),
    are installed in the custom container image. For more information, see
    [Configure your container image](/dataflow/docs/gpu/use-gpus#container-image).
- Because GPU containers are typically large, to avoid
  [running out of disk space](/dataflow/docs/guides/common-errors#no-space-left),
  increase the default
  [boot disk size](/dataflow/docs/reference/pipeline-options#worker-level_options)
  to 50 gigabytes or more. A sketch of pipeline options that reflect these
  requirements follows this list.
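
The following is a minimal sketch of pipeline options that satisfy these prerequisites, not a definitive configuration: it requests one NVIDIA T4 per worker through the `worker_accelerator` Dataflow service option, enables Runner v2, points at a custom container image, and raises the boot disk size. The project, region, bucket, and image URI are placeholders, and the T4 type and count are only examples; the single pipeline step runs `nvidia-smi` on a worker to confirm that the driver and device are visible from the container.

```python
# Sketch: submit a Dataflow job that requests one NVIDIA T4 GPU per worker.
# Project, region, bucket, and the container image URI are placeholders;
# substitute values from your own environment.
import subprocess

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",                # placeholder project ID
    "--region=us-central1",                # choose a region with GPU capacity
    "--temp_location=gs://my-bucket/tmp",  # placeholder bucket
    # Runner v2 is required for GPU jobs.
    "--experiments=use_runner_v2",
    # Request one T4 per worker and have Dataflow install the NVIDIA driver.
    "--dataflow_service_options="
    "worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver",
    # Custom container image that bundles the GPU libraries your pipeline needs.
    "--sdk_container_image=us-central1-docker.pkg.dev/my-project/repo/beam-gpu:latest",
    # GPU containers are large, so use a larger worker boot disk.
    "--disk_size_gb=50",
])


def check_gpu(_):
    # Run nvidia-smi inside the worker container; raises if the command fails.
    return subprocess.run(
        ["nvidia-smi"], capture_output=True, text=True, check=True
    ).stdout


with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Start" >> beam.Create([None])
        | "CheckGPU" >> beam.Map(check_gpu)
        | "Log" >> beam.Map(print)  # output appears in worker logs
    )
```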
Pricing

Jobs using GPUs incur charges as specified in the Dataflow
[pricing page](/dataflow/pricing).

Availability

The following GPU types are supported with Dataflow:

| GPU type            | `worker_accelerator` string |
|---------------------|-----------------------------|
| NVIDIA® L4          | `nvidia-l4`                 |
| NVIDIA® A100 40 GB  | `nvidia-tesla-a100`         |
| NVIDIA® A100 80 GB  | `nvidia-a100-80gb`          |
| NVIDIA® Tesla® T4   | `nvidia-tesla-t4`           |
| NVIDIA® Tesla® P4   | `nvidia-tesla-p4`           |
| NVIDIA® Tesla® V100 | `nvidia-tesla-v100`         |
| NVIDIA® Tesla® P100 | `nvidia-tesla-p100`         |

For more information about each GPU type, including performance data, see
[Compute Engine GPU platforms](/compute/docs/gpus).

For information about available regions and zones for GPUs, see
[GPU regions and zones availability](/compute/docs/gpus/gpu-regions-zones)
in the Compute Engine documentation.

Recommended workloads

The following table provides recommendations for which type of GPU to use for
different workloads. The examples in the table are suggestions only, and you
need to test in your own environment to determine the appropriate GPU type for
your workload.
For more detailed information about GPU memory size, feature availability, and
ideal workload types for different GPU models, see the
[General comparison chart](/compute/docs/gpus#general_comparison_chart)
on the GPU platforms page.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-26 UTC."],[[["\u003cp\u003eDataflow jobs using GPUs can accelerate data processing, especially for numeric or linear algebra computations like those in image processing and machine learning.\u003c/p\u003e\n"],["\u003cp\u003eUsing GPUs in Dataflow requires Dataflow Runner v2 and incurs charges detailed on the Dataflow pricing page.\u003c/p\u003e\n"],["\u003cp\u003ePrerequisites for GPU usage include having GPU drivers installed on worker VMs and GPU libraries installed in the custom container image.\u003c/p\u003e\n"],["\u003cp\u003eDataflow supports several NVIDIA GPU types, including L4, A100 (40 GB and 80 GB), Tesla T4, P4, V100, and P100, each suited for different workload sizes and types.\u003c/p\u003e\n"],["\u003cp\u003eThe boot disk size for GPU containers should be increased to at least 50 gigabytes to prevent running out of disk space, due to the large nature of these containers.\u003c/p\u003e\n"]]],[],null,["\u003cbr /\u003e\n\n| **Note:** The following considerations apply to this GA offering:\n|\n| - Jobs that use GPUs incur charges as specified in the Dataflow [pricing page](/dataflow/pricing).\n| - To use GPUs, your Dataflow job must use [Dataflow Runner v2](/dataflow/docs/runner-v2).\n\n\u003cbr /\u003e\n\nThis page provides background information on how GPUs work with\nDataflow, including information about prerequisites and supported\nGPU types.\n\nUsing GPUs in Dataflow jobs lets you accelerate\nsome data processing tasks. GPUs can perform certain computations faster\nthan CPUs. These computations are usually numeric or linear algebra,\noften used in image processing and machine learning use cases. The\nextent of performance improvement varies by the use case, type of computation,\nand amount of data processed.\n\nPrerequisites for using GPUs in Dataflow\n\n\n- To use GPUs with your Dataflow job, you must use Runner v2.\n- Dataflow runs user code in worker VMs inside a Docker container. These worker VMs run [Container-Optimized OS](/container-optimized-os/docs). For Dataflow jobs to use GPUs, you need the following prerequisites:\n - GPU drivers are installed on worker VMs and accessible to the Docker container. For more information, see [Install GPU drivers](/dataflow/docs/gpu/use-gpus#drivers).\n - GPU libraries required by your pipeline, such as [NVIDIA CUDA-X libraries](https://developer.nvidia.com/gpu-accelerated-libraries) or the [NVIDIA CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit), are installed in the custom container image. 
For more information, see [Configure your container image](/dataflow/docs/gpu/use-gpus#container-image).\n- Because GPU containers are typically large, to avoid [running out of disk space](/dataflow/docs/guides/common-errors#no-space-left), increase the default [boot disk size](/dataflow/docs/reference/pipeline-options#worker-level_options) to 50 gigabytes or more.\n\n\u003cbr /\u003e\n\nPricing\n\nJobs using GPUs incur charges as specified in the Dataflow\n[pricing page](/dataflow/pricing).\n\nAvailability\n\nThe following GPU types are supported with Dataflow:\n\n| GPU type | `worker_accelerator` string |\n|---------------------|-----------------------------|\n| NVIDIA® L4 | `nvidia-l4` |\n| NVIDIA® A100 40 GB | `nvidia-tesla-a100` |\n| NVIDIA® A100 80 GB | `nvidia-a100-80gb` |\n| NVIDIA® Tesla® T4 | `nvidia-tesla-t4` |\n| NVIDIA® Tesla® P4 | `nvidia-tesla-p4` |\n| NVIDIA® Tesla® V100 | `nvidia-tesla-v100` |\n| NVIDIA® Tesla® P100 | `nvidia-tesla-p100` |\n\nFor more information about each GPU type, including performance data, see\n[Compute Engine GPU platforms](/compute/docs/gpus).\n\nFor information about available regions and zones for GPUs, see\n[GPU regions and zones availability](/compute/docs/gpus/gpu-regions-zones)\nin the Compute Engine documentation.\n\nRecommended workloads\n\nThe following table provides recommendations for which type of GPU to use for\ndifferent workloads. The examples in the table are suggestions only, and you\nneed to test in your own environment to determine the appropriate GPU type for\nyour workload.\n\nFor more detailed information about GPU memory size, feature availability, and\nideal workload types for different GPU models, see the\n[General comparison chart](/compute/docs/gpus#general_comparison_chart)\non the GPU platforms page.\n\n| Workload | A100 | L4 | T4 |\n|------------------------|-------------|-------------|-------------|\n| Model fine tuning | Recommended | | |\n| Large model inference | Recommended | Recommended | |\n| Medium model inference | | Recommended | Recommended |\n| Small model inference | | Recommended | Recommended |\n\nWhat's next\n\n- See an example of a [developer workflow for building pipelines that use GPUs](/dataflow/docs/gpu/develop-with-gpus).\n- Learn how to [run an Apache Beam pipeline on Dataflow with GPUs](/dataflow/docs/gpu/use-gpus).\n- Work through [Processing Landsat satellite images with GPUs](/dataflow/docs/samples/satellite-images-gpus)."]]