This page explains how to use Arm VMs as workers for batch and streaming Dataflow jobs.
You can use the Tau T2A machine series and the C4A machine series (Preview) of Arm processors to run Dataflow jobs. Because the Arm architecture is optimized for power efficiency, using these VMs yields better price-performance for some workloads. For more information about Arm VMs, see Arm VMs on Compute.
Requirements
- The following Apache Beam SDKs support Arm VMs:
  - Apache Beam Java SDK versions 2.50.0 or later
  - Apache Beam Python SDK versions 2.50.0 or later
  - Apache Beam Go SDK versions 2.50.0 or later
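The 2.50.0 version floor above can be checked programmatically before launching a job. A minimal stdlib sketch, assuming you obtain the version string yourself (for example from `apache_beam.__version__`); pre-release suffixes such as `rc` tags are not handled:

```python
def supports_arm_vms(beam_version: str) -> bool:
    """Return True if a Beam SDK version string is 2.50.0 or later."""
    # Compare numerically, not lexically ("2.9.0" sorts after "2.50.0" as text).
    parts = tuple(int(p) for p in beam_version.split(".")[:3])
    return parts >= (2, 50, 0)

print(supports_arm_vms("2.50.0"))  # True
print(supports_arm_vms("2.49.0"))  # False
```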
- Select a region where Tau T2A or C4A machines are available. For more information, see Available regions and zones.
- Use Runner v2 to run the job.
- Streaming jobs must use Streaming Engine.
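To target Arm workers, a job specifies an Arm machine type through the machine-type pipeline option (`workerMachineType` for Java, `machine_type` for Python, `worker_machine_type` for Go). A minimal Python sketch of the launch flags; the project, bucket, and region below are placeholder assumptions, not values from this page:

```python
# Hypothetical launch arguments for a Python pipeline on Tau T2A workers.
# Project, region, and bucket names are placeholders.
pipeline_args = [
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/temp",
    "--machine_type=t2a-standard-2",  # Arm (Tau T2A) machine type
    "--experiments=use_runner_v2",    # Arm VMs require Runner v2
]

# With the Apache Beam Python SDK (2.50.0+), these flags would be passed as:
#   beam.Pipeline(options=PipelineOptions(pipeline_args))
print(pipeline_args)
```

Streaming jobs would additionally need Streaming Engine enabled, per the requirements above.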
If you use a custom container in Dataflow, the container must match the architecture of the worker VMs. If you plan to use a custom container on Arm VMs, we recommend building a multi-architecture image. For more information, see Build a multi-architecture container image.
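One common way to produce such an image is Docker Buildx with multiple target platforms. A sketch that assembles the build-and-push command; the image path is a hypothetical placeholder, and this assumes Docker with the buildx plugin and push access to a registry:

```python
def buildx_command(image: str, platforms=("linux/amd64", "linux/arm64")) -> list:
    """Assemble a `docker buildx build` command that builds and pushes a
    multi-architecture image covering both x86 and Arm workers."""
    return [
        "docker", "buildx", "build",
        "--platform", ",".join(platforms),  # one image manifest, both architectures
        "--tag", image,
        "--push",  # push the multi-arch manifest to the registry
        ".",
    ]

# Placeholder image path; run with subprocess.run(cmd, check=True).
cmd = buildx_command("us-docker.pkg.dev/my-project/repo/beam-worker:latest")
print(" ".join(cmd))
```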
Pricing
You are billed for Dataflow compute resources. Dataflow pricing is independent of the machine type family. For more information, see Dataflow pricing.
Limitations

- All Tau T2A limitations and C4A limitations apply.
- GPUs are not supported.
- Cloud Profiler is not supported.
- Dataflow Prime is not supported.
- Receiving worker VM metrics from Cloud Monitoring is not supported.
- Container image pre-building is not supported.

Run a job using Arm VMs

To use Arm VMs, set the following pipeline option.

Java: set the `workerMachineType` pipeline option and specify an Arm machine type.

Python: set the `machine_type` pipeline option and specify an Arm machine type.

Go: set the `worker_machine_type` pipeline option and specify an Arm machine type.

For more information about setting pipeline options, see Set Dataflow pipeline options.

What's next

- Set Dataflow pipeline options
- Use custom containers in Dataflow

Last updated 2025-09-04 UTC.