This page explains how to run your Dataflow pipeline with the NVIDIA® L4 GPU type.
The L4 GPU type is useful for running machine learning inference pipelines.
Requirements
Use the Apache Beam SDK version 2.46 or later. Apache Beam 2.50 or later is recommended.
You need L4 GPU quota (NVIDIA_L4_GPUS) in the region that your job runs in.
For more information, see GPU quotas.
The L4 GPU type is available only with the G2 accelerator-optimized machine type.
For more information, see the G2 machine series.
Pipelines that use the L4 GPU type are subject to the G2 standard limitations.
The NVIDIA L4 GPU type uses NVIDIA driver version 525.0 or later and the CUDA toolkit version 12.0 or later. Any code that you use in your pipeline must be compatible with the NVIDIA driver version and the CUDA toolkit version. For example, if you use PyTorch, you need to use PyTorch version 23.01 or later.
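Run pipelines with the NVIDIA L4 GPU type
Jobs that use GPUs incur the charges listed on the Dataflow pricing page, and the job must use Dataflow Runner v2.
To use the NVIDIA L4 GPU type, include the following pipeline options and service options in your pipeline code (the Python SDK flags are shown; the Java and Go SDKs use equivalent options):

--machine_type=G2_MACHINE_TYPE
--dataflow_service_options="worker_accelerator=type:nvidia-l4;count:GPU_COUNT;install-nvidia-driver"

Replace the following values:
G2_MACHINE_TYPE: the G2 machine type to use.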
GPU_COUNT: the number of GPUs to use. Each G2 machine type has a fixed number of NVIDIA L4 GPUs. To find the correct number of GPUs for your machine type, see the GPU count column in the G2 standard machine types table.
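For illustration only, a launch command for a Python pipeline on g2-standard-4 workers (which have one L4 GPU each) might look like the following sketch; the script name, project, region, bucket, and container image are hypothetical placeholders, not values defined on this page:

python my_pipeline.py \
  --runner=DataflowRunner \
  --project=PROJECT_ID \
  --region=REGION \
  --temp_location=gs://BUCKET/tmp \
  --experiments=use_runner_v2 \
  --sdk_container_image=REGION-docker.pkg.dev/PROJECT_ID/REPO/IMAGE:TAG \
  --machine_type=g2-standard-4 \
  --dataflow_service_options="worker_accelerator=type:nvidia-l4;count:1;install-nvidia-driver"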
To manage dependencies, use a custom container. The following Dockerfile example contains compatible dependencies for a pipeline that uses the NVIDIA L4 GPU type.
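# Note: a base image (FROM line, not shown in this example) that is compatible
# with CUDA 12 and NVIDIA driver version 525 or later is assumed.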
RUN apt-get -y update
RUN apt-get install -y [system packages]
# Install the SDK.
RUN pip install --no-cache-dir apache-beam[gcp]==2.51.0
# Install the machine learning dependencies.
RUN pip install --no-cache-dir tensorflow[and-cuda]
RUN pip install xgboost
RUN pip install transformers accelerate
# ... install any other dependencies that your pipeline needs.
# Verify that the image doesn't have conflicting dependencies.
RUN pip check
# Copy files from official SDK image, including the script and dependencies.
COPY --from=apache/beam_python3.10_sdk:2.51.0 /opt/apache/beam /opt/apache/beam
# Set the entrypoint to Apache Beam SDK launcher.
ENTRYPOINT ["/opt/apache/beam/boot"]
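After you finish the Dockerfile, you typically build the image and push it to a registry that the Dataflow workers can pull from, and then pass it to the pipeline with the --sdk_container_image option (Python SDK). The following commands are a sketch; the Artifact Registry path is a hypothetical placeholder:

docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPO/IMAGE:TAG .
docker push REGION-docker.pkg.dev/PROJECT_ID/REPO/IMAGE:TAG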
[[["Facile da capire","easyToUnderstand","thumb-up"],["Il problema è stato risolto","solvedMyProblem","thumb-up"],["Altra","otherUp","thumb-up"]],[["Difficile da capire","hardToUnderstand","thumb-down"],["Informazioni o codice di esempio errati","incorrectInformationOrSampleCode","thumb-down"],["Mancano le informazioni o gli esempi di cui ho bisogno","missingTheInformationSamplesINeed","thumb-down"],["Problema di traduzione","translationIssue","thumb-down"],["Altra","otherDown","thumb-down"]],["Ultimo aggiornamento 2025-09-04 UTC."],[[["\u003cp\u003eDataflow jobs using GPUs, like the NVIDIA L4, will incur charges as specified on the Dataflow pricing page and must use Dataflow Runner v2.\u003c/p\u003e\n"],["\u003cp\u003eTo utilize the NVIDIA L4 GPU type, Dataflow pipelines must use Apache Beam SDK version 2.46 or later, with 2.50 or later being recommended, and have the necessary L4 GPU quota.\u003c/p\u003e\n"],["\u003cp\u003eThe L4 GPU type is exclusive to the G2 accelerator-optimized machine type, and pipelines using it are subject to G2 standard limitations, which requires specification of the machine type and GPU count in pipeline options.\u003c/p\u003e\n"],["\u003cp\u003ePipelines using the NVIDIA L4 GPU type must be compatible with NVIDIA driver version 525.0 or later and CUDA toolkit version 12.0 or later, which can be ensured by using a custom container to manage dependencies.\u003c/p\u003e\n"],["\u003cp\u003eThe NVIDIA L4 GPU type is ideal for running machine learning inference pipelines on Dataflow, and a dockerfile is provided for users to implement it.\u003c/p\u003e\n"]]],[],null,["# Use the NVIDIA® L4 GPU type\n\n\u003cbr /\u003e\n\n| **Note:** The following considerations apply to this GA offering:\n|\n| - Jobs that use GPUs incur charges as specified in the Dataflow [pricing page](/dataflow/pricing).\n| - To use GPUs, your Dataflow job must use [Dataflow Runner v2](/dataflow/docs/runner-v2).\n\n\u003cbr /\u003e\n\nThe page explains how to run your Dataflow pipeline with the NVIDIA® L4 GPU type.\nThe L4 GPU type is useful for running machine learning inference pipelines.\n\nRequirements\n------------\n\n- Use the Apache Beam SDK version 2.46 or later. Apache Beam 2.50 or later is recommended.\n- You need L4 GPU quota (`NVIDIA_L4_GPUS`) in the region that your job runs in. For more information, see [GPU quotas](/compute/resource-usage#gpu_quota).\n- The L4 GPU type is available only with the G2 accelerator-optimized machine type. For more information, see [The G2 machine series](/compute/docs/accelerator-optimized-machines#g2-vms). Pipelines that use the L4 GPU type are subject to the [G2 standard limitations](/compute/docs/accelerator-optimized-machines#g2_standard_limitations).\n- The NVIDIA L4 GPU type uses the NVIDIA driver version 525.0 or later and the [CUDA toolkit](https://developer.nvidia.com/cuda-toolkit) version 12.0 or later. Any code that you use in your pipeline must be compatible with the NVIDIA driver version and CUDA toolkit version. For example, if you use PyTorch, you need to use PyTorch version 23.01 or later.\n\nRun pipelines with the NVIDIA® L4 GPU type\n------------------------------------------\n\nTo use the NVIDIA L4 GPU type, you need to include the following\n[pipeline options](/dataflow/docs/reference/pipeline-options) and\n[service options](/dataflow/docs/reference/service-options)\nin your pipeline code. 
\n\n### Java\n\n --workerMachineType=\u003cvar translate=\"no\"\u003eG2_MACHINE_TYPE\u003c/var\u003e\n --dataflowServiceOptions=\"worker_accelerator=type:nvidia-l4;count:\u003cvar translate=\"no\"\u003eGPU_COUNT\u003c/var\u003e;install-nvidia-driver\"\n\n### Python\n\n --machine_type=\u003cvar translate=\"no\"\u003eG2_MACHINE_TYPE\u003c/var\u003e\n --dataflow_service_options=\"worker_accelerator=type:nvidia-l4;count:\u003cvar translate=\"no\"\u003eGPU_COUNT\u003c/var\u003e;install-nvidia-driver\"\n\n### Go\n\n --worker_machine_type=\u003cvar translate=\"no\"\u003eG2_MACHINE_TYPE\u003c/var\u003e\n --dataflow_service_options=\"worker_accelerator=type:nvidia-l4;count:\u003cvar translate=\"no\"\u003eGPU_COUNT\u003c/var\u003e;install-nvidia-driver\"\n\nReplace the following values:\n\n- \u003cvar translate=\"no\"\u003eG2_MACHINE_TYPE\u003c/var\u003e: the [G2 machine type](/compute/docs/accelerator-optimized-machines#g2-standard-vms) to use\n- \u003cvar translate=\"no\"\u003eGPU_COUNT\u003c/var\u003e: The number of GPUs to use. Each G2 machine type has a fixed number of NVIDIA L4 GPUs. To find the correct number of GPUs for your machine type, see the **GPU count** column in the [G2 standard machine types](/compute/docs/accelerator-optimized-machines#g2-standard-vms) table.\n\nFor more information about running pipelines with\nGPUs, see [Run a pipeline with GPUs](/dataflow/docs/gpu/use-gpus).\n\nManage dependencies\n-------------------\n\nTo manage dependencies, use a custom container.\nFor more information, see\n[Use custom containers in Dataflow](/dataflow/docs/guides/using-custom-containers).\n\nThe following Dockerfile example contains compatible\ndependencies for a pipeline that uses the NVIDIA L4 GPU type. \n\n RUN apt-get -y update\n RUN apt-get install [system packages]\n\n # Install the SDK.\n RUN pip install --no-cache-dir apache-beam[gcp]==2.51.0\n # Install the machine learning dependencies.\n RUN pip install --no-cache-dir tensorflow[and-cuda]\n RUN pip install xgboost\n RUN pip install transformers accelerate\n (etc.....)\n # Verify that the image doesn't have conflicting dependencies.\n RUN pip check\n\n # Copy files from official SDK image, including the script and dependencies.\n COPY --from=apache/beam_python3.10_sdk:2.51.0 /opt/apache/beam /opt/apache/beam\n\n # Set the entrypoint to Apache Beam SDK launcher.\n ENTRYPOINT [\"/opt/apache/beam/boot\"]\n\nWhat's next\n-----------\n\n- Read about [best practices for working with Dataflow GPUs](/dataflow/docs/gpu/develop-with-gpus).\n- [Run a pipeline with GPUs](/dataflow/docs/gpu/use-gpus).\n- Learn more about [Dataflow ML](/dataflow/docs/machine-learning)."]]