> **Note:** The Dataflow TPU offering is generally available with an allowlist. To get access to this feature, reach out to your account team.

Google Cloud TPUs are custom-designed AI accelerators created by Google that are optimized for training and using large AI models. They are designed to scale cost-efficiently for a wide range of AI workloads and provide versatility to accelerate inference workloads on AI frameworks, including PyTorch, JAX, and TensorFlow. For more details about TPUs, see [Introduction to Google Cloud TPU](/tpu/docs/intro-to-tpu).

## Prerequisites for using TPUs in Dataflow

- Your Google Cloud project must be approved to use this GA offering.

## Limitations

This offering is subject to the following limitations:

- **Only single-host TPU accelerators are supported**: The Dataflow TPU offering supports only single-host TPU configurations, where each Dataflow worker manages one or more TPU devices that are not interconnected with TPUs managed by other workers.
- **Only homogeneous TPU worker pools are supported**: Features like Dataflow right fitting and Dataflow Prime don't support TPU workloads.

## Pricing

Dataflow jobs that use TPUs are billed for worker TPU chip-hours consumed and are not billed for worker CPU and memory. For example, a job that runs two ct5lp-hightpu-4t workers (4 TPU chips each) for three hours consumes 2 × 4 × 3 = 24 TPU chip-hours. For more information, see the Dataflow [pricing page](/dataflow/pricing).

## Availability

The following TPU accelerators and processing regions are available.

### Supported TPU accelerators

The supported TPU accelerator combinations are identified by the tuple (TPU type, TPU topology):

- **TPU type** refers to the model of the TPU device.
- **TPU topology** refers to the number and physical arrangement of the TPU chips in a slice.

To configure the type and topology of TPUs for Dataflow workers, use the [`worker_accelerator` pipeline option](/dataflow/docs/reference/service-options), formatted as `type:TPU_TYPE;topology:TPU_TOPOLOGY`.

The following TPU configurations are supported with Dataflow:

| TPU type | Topology | Required `worker_machine_type` |
|----------------------|----------|--------------------------------|
| tpu-v5-lite-podslice | 1x1 | ct5lp-hightpu-1t |
| tpu-v5-lite-podslice | 2x2 | ct5lp-hightpu-4t |
| tpu-v5-lite-podslice | 2x4 | ct5lp-hightpu-8t |
| tpu-v6e-slice | 1x1 | ct6e-standard-1t |
| tpu-v6e-slice | 2x2 | ct6e-standard-4t |
| tpu-v6e-slice | 2x4 | ct6e-standard-8t |
| tpu-v5p-slice | 2x2x1 | ct5p-hightpu-4t |
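For illustration, here is a minimal sketch of passing these options through the Beam Python SDK. The project, region, and bucket names are placeholder assumptions, and the pipeline body is a stand-in; the accelerator and machine type come from the tpu-v5-lite-podslice 2x2 row of the table above.

```python
# Minimal sketch: launching a Dataflow job with single-host TPU workers.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",                 # assumed placeholder
    "--region=us-central1",                 # assumed placeholder
    "--temp_location=gs://my-bucket/temp",  # assumed placeholder
    # The machine type must match the accelerator configuration below.
    "--worker_machine_type=ct5lp-hightpu-4t",
    # Formatted as type:TPU_TYPE;topology:TPU_TOPOLOGY.
    "--dataflow_service_options="
    "worker_accelerator=type:tpu-v5-lite-podslice;topology:2x2",
])

# A trivial pipeline, only to make the sketch self-contained.
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["hello", "tpu"])
        | "Print" >> beam.Map(print)
    )
```

Because the table pairs each accelerator configuration with a required `worker_machine_type`, the two options must always come from the same row.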
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["| **Note:** The Dataflow TPU offering is generally available with an allowlist. To get access to this feature, reach out to your account team.\n\nGoogle Cloud TPUs are custom-designed AI accelerators created by Google that are\noptimized for training and using of large AI models. They are designed to\nscale cost-efficiently for a wide range of AI workloads and provide versatility\nto accelerate inference workloads on AI frameworks, including PyTorch, JAX, and\nTensorFlow. For more details about TPUs, see [Introduction to\nGoogle Cloud TPU](/tpu/docs/intro-to-tpu).\n\nPrerequisites for using TPUs in Dataflow\n\n- Your Google Cloud projects must be approved to use this GA offering.\n\nLimitations\n\nThis offering is subject to the following limitations:\n\n- **Only single-host TPU accelerators are supported**: The Dataflow TPU offering supports only single-host TPU configurations where each Dataflow worker manages one or many TPU devices that are not interconnected with TPUs managed by other workers.\n- **Only homogenous TPU worker pools are supported**: Features like Dataflow right fitting and Dataflow Prime don't support TPU workloads.\n\nPricing\n\nDataflow jobs that use TPUs are billed for worker TPU chip-hours\nconsumed and are not billed for worker CPU and memory. For more information, see\nthe Dataflow [pricing page](/dataflow/pricing).\n\nAvailability\n\nThe following TPU accelerators and processing regions are available.\n\nSupported TPU accelerators\n\nThe supported TPU accelerator combinations are identified by the tuple (TPU\ntype, TPU topology).\n\n- **TPU type** refers to the model of the TPU device.\n- **TPU topology** refers to the number and physical arrangement of the TPU chips in a slice.\n\nTo configure the type and topology of TPUs for Dataflow workers,\nuse the [`worker_accelerator` pipeline\noption](/dataflow/docs/reference/service-options) formatted as\n`type:TPU_TYPE;topology:TPU_TOPOLOGY`.\n\nThe following TPU configurations are supported with Dataflow:\n\n| TPU type | Topology | Required `worker_machine_type` |\n|----------------------|----------|--------------------------------|\n| tpu-v5-lite-podslice | 1x1 | ct5lp-hightpu-1t |\n| tpu-v5-lite-podslice | 2x2 | ct5lp-hightpu-4t |\n| tpu-v5-lite-podslice | 2x4 | ct5lp-hightpu-8t |\n| tpu-v6e-slice | 1x1 | ct6e-standard-1t |\n| tpu-v6e-slice | 2x2 | ct6e-standard-4t |\n| tpu-v6e-slice | 2x4 | ct6e-standard-8t |\n| tpu-v5p-slice | 2x2x1 | ct5p-hightpu-4t |\n\nRegions\n\nFor information about available regions and zones for TPUs, see [TPU regions and\nzones](/tpu/docs/regions-zones) in the Cloud TPU documentation.\n\nWhat's next\n\n- Learn how to [run an Apache Beam pipeline on Dataflow with\n TPUs](/dataflow/docs/tpu/use-tpus).\n- Learn how to [troubleshoot your Dataflow TPU\n job](/dataflow/docs/tpu/troubleshoot-tpus)."]]