[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[],[],null,["# TPU v2\n======\n\nThis document describes the architecture and supported configurations of\nCloud TPU v2.\n\nSystem architecture\n-------------------\n\nArchitectural details and performance characteristics of TPU v2 are available in\n[A Domain Specific Supercomputer for Training Deep Neural\nNetworks](https://dl.acm.org/doi/pdf/10.1145/3360307).\n\nConfigurations\n--------------\n\nA TPU v2 slice is composed of 512 chips interconnected with reconfigurable\nhigh-speed links. To create a TPU v2 slice, use the `--accelerator-type`\nflag in the TPU creation command (`gcloud compute tpus tpu-vm`). You specify the\naccelerator type by specifying the TPU version and the number of TPU cores. For\nexample, for a single v2 TPU, use `--accelerator-type=v2-8`. For a v2 slice\nwith 128 TensorCores, use `--accelerator-type=v2-128`.\n\nThe following command shows how to create a v2 TPU slice with 128\nTensorCores: \n\n```bash\n $ gcloud compute tpus tpu-vm create tpu-name \\\n --zone=us-central1-a \\\n --accelerator-type=v2-128 \\\n --version=tpu-ubuntu2204-base\n```\n\nFor more information about managing TPUs, see [Manage TPUs](/tpu/docs/managing-tpus-tpu-vm).\nFor more information about the TPU system architecture Cloud TPU, see\n[System architecture](/tpu/docs/system-architecture).\n\nThe following table lists the supported v2 TPU types:"]]