Frequently Asked Questions - Cloud TPU
======================================

This document contains a list of frequently asked questions about Cloud TPUs. It
is broken up into the following sections:

1. Framework-independent FAQs - questions about using Cloud TPUs regardless of which ML framework you are using.
2. JAX FAQs - questions about using Cloud TPUs with JAX.
3. PyTorch FAQs - questions about using Cloud TPUs with PyTorch.

Framework-independent FAQs
--------------------------

### How do I check which process is using the TPU on a Cloud TPU VM?

Run `tpu-info` on the Cloud TPU VM to print the process ID and
other information about the process using the TPU. See [supported metrics](/tpu/docs/tpu-monitoring-library#list-all-supported-metric-names) for the metrics
and their corresponding definitions.

    tpu-info

The output from `tpu-info` is similar to the following:

    TPU Chips
    ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
    ┃ Chip        ┃ Type        ┃ Devices ┃ PID    ┃
    ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
    │ /dev/accel0 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel1 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel2 │ TPU v4 chip │ 1       │ 130007 │
    │ /dev/accel3 │ TPU v4 chip │ 1       │ 130007 │
    └─────────────┴─────────────┴─────────┴────────┘

    TPU Runtime Utilization
    ┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
    ┃ Device ┃ Memory usage         ┃ Duty cycle ┃
    ┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
    │ 0      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 1      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 2      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    │ 3      │ 0.00 GiB / 31.75 GiB │ 0.00%      │
    └────────┴──────────────────────┴────────────┘

    TensorCore Utilization
    ┏━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ Chip ID ┃ TensorCore Utilization ┃
    ┡━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
    │ 0       │ 0.00%                  │
    │ 1       │ 0.00%                  │
    │ 3       │ 0.00%                  │
    │ 2       │ 0.00%                  │
    └─────────┴────────────────────────┘

    Buffer Transfer Latency
    ┏━━━━━━━━━━━━━┳━━━━━┳━━━━━┳━━━━━┳━━━━━━┓
    ┃ Buffer Size ┃ P50 ┃ P90 ┃ P95 ┃ P999 ┃
    ┡━━━━━━━━━━━━━╇━━━━━╇━━━━━╇━━━━━╇━━━━━━┩
    │ 8MB+        │ 0us │ 0us │ 0us │ 0us  │
    └─────────────┴─────┴─────┴─────┴──────┘

### How do I add a persistent disk volume to a Cloud TPU VM?

For more information, see [Add a persistent disk to a TPU VM](/tpu/docs/attach-durable-block-storage).

### What storage options are supported or recommended for training with TPU VM?

For more information, see [Cloud TPU storage options](/tpu/docs/storage-options).

JAX FAQs
--------

### How do I know if the TPU is being used by my program?

There are a few ways to verify that JAX is using the TPU:

1. Use the `jax.devices()` function. For example:

       assert jax.devices()[0].platform == 'tpu'

2. Profile your program and verify the profile contains TPU operations. For more
   information, see [Profiling JAX programs](https://github.com/google/jax/blob/main/docs/profiling.md).

For more information, see the [JAX FAQ](https://jax.readthedocs.io/en/latest/faq.html).

PyTorch FAQs
------------

### How do I know if the TPU is being used by my program?

You can run the following Python commands:

    >>> import torch_xla.core.xla_model as xm
    >>> xm.get_xla_supported_devices(devkind="TPU")

and verify that any TPU devices are listed.

Last updated: 2025-09-04 (UTC)
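The device checks above can be sketched as a small helper. This is a minimal, hypothetical sketch: the `has_tpu` function and the stand-in `Device` class are not part of any Cloud TPU or JAX API; the stand-in exists only so the snippet runs without TPU hardware. On a real Cloud TPU VM you would pass the actual `jax.devices()` result instead.

```python
from dataclasses import dataclass
from typing import Sequence


# Stand-in for the device objects returned by jax.devices(), which expose
# a `platform` attribute such as 'tpu', 'cpu', or 'gpu'. Hypothetical; used
# here only so the sketch is runnable without TPU hardware.
@dataclass
class Device:
    platform: str


def has_tpu(devices: Sequence[Device]) -> bool:
    """Return True if any visible device reports the 'tpu' platform."""
    return any(d.platform == "tpu" for d in devices)


print(has_tpu([Device("tpu"), Device("tpu")]))  # True
print(has_tpu([Device("cpu")]))                 # False
```

On a TPU VM, the equivalent check would be `has_tpu(jax.devices())`, which mirrors the `assert jax.devices()[0].platform == 'tpu'` one-liner shown in the JAX section.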