Use Arm VMs on Dataflow
This page explains how to use Arm VMs as workers for batch and streaming Dataflow jobs.
You can use the Tau T2A machine series and the C4A machine series (Preview) of Arm processors to run Dataflow jobs. Because the Arm architecture is optimized for power efficiency, using these VMs yields better price-performance for some workloads. For more information about Arm VMs, see Arm VMs on Compute.
Requirements
The following Apache Beam SDKs support Arm VMs:
Apache Beam Java SDK versions 2.50.0 or later
Apache Beam Python SDK versions 2.50.0 or later
Apache Beam Go SDK versions 2.50.0 or later
Select a region where Tau T2A or C4A machines are available. For more information, see Available regions and zones.
Use Runner v2 to run the job.
Streaming jobs must use Streaming Engine.

Limitations

All Tau T2A limitations and C4A limitations apply.
GPUs are not supported.
Cloud Profiler is not supported.
Dataflow Prime is not supported.
Receiving worker VM metrics from Cloud Monitoring is not supported.
Container image pre-building is not supported.

Run a job using Arm VMs

To use Arm VMs, set the following pipeline option.

Java

Set the workerMachineType pipeline option and specify an Arm machine type.

Python

Set the machine_type pipeline option and specify an Arm machine type.

Go

Set the worker_machine_type pipeline option and specify an Arm machine type.

For more information about setting pipeline options, see Set Dataflow pipeline options.
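To make the option concrete, here is a minimal sketch, assuming the Python SDK, of launch options for a batch job on Tau T2A workers. The project ID, region, and bucket below are placeholders, not recommendations.

```python
# Sketch: pipeline options for a Dataflow job on Arm (Tau T2A) workers.
# The project, region, and bucket values are placeholders.
options = [
    "--runner=DataflowRunner",
    "--project=my-project",            # placeholder project ID
    "--region=us-central1",            # pick a region where T2A or C4A is available
    "--machine_type=t2a-standard-4",   # an Arm machine type selects Arm workers
    "--experiments=use_runner_v2",     # Arm VMs require Runner v2
    "--temp_location=gs://my-bucket/tmp",
]

# Passed to a Beam pipeline (for example, `python my_pipeline.py` followed by
# these flags), this would request t2a-standard-4 workers instead of the
# default x86 machines.
print(" ".join(options))
```

The machine-type flag is what opts the job into Arm workers; the other flags are the usual Dataflow launch options.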
Use multi-architecture container images

If you use a custom container in Dataflow, the container must match the architecture of the worker VMs. If you plan to use a custom container on Arm VMs, we recommend building a multi-architecture image. For more information, see Build a multi-architecture container image.
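As an illustration, a multi-architecture image is commonly built with `docker buildx`, targeting both `linux/amd64` and `linux/arm64` so the same tag works on x86 and Arm workers. The sketch below assembles such a command; the registry path is a placeholder, and the command is printed rather than executed, since an actual build needs Docker and registry credentials.

```python
# Sketch: assemble a `docker buildx` command that builds one image for both
# x86 and Arm workers. The image path is a placeholder, not a real repository.
image = "us-central1-docker.pkg.dev/my-project/my-repo/beam-worker:latest"

build_cmd = [
    "docker", "buildx", "build",
    "--platform", "linux/amd64,linux/arm64",  # build for both architectures
    "--tag", image,
    "--push",                                 # push the manifest to the registry
    ".",
]

# Printed instead of executed; run the printed command in the directory
# that contains your Dockerfile.
print(" ".join(build_cmd))
```

Publishing both platforms under one tag lets Dataflow workers pull the variant matching their architecture automatically.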
Pricing

You are billed for Dataflow compute resources. Dataflow pricing is independent of the machine type family. For more information, see Dataflow pricing.
What's next

Set Dataflow pipeline options
Use custom containers in Dataflow

Last updated: 2025-09-04 (UTC)