Last updated (UTC): 2025-09-04.

# Configure networking and access to your Cloud TPU

This page describes how to set up custom network and access configurations for
your Cloud TPU, including:

- Specifying a custom network and subnetwork
- Specifying external and internal IP addresses
- Enabling SSH access to TPUs
- Attaching a custom service account to your TPU
- Enabling custom SSH methods
- Using VPC Service Controls

Prerequisites
-------------

Before you run these procedures, you must install the Google Cloud CLI, create
a Google Cloud project, and enable the Cloud TPU API. For instructions,
see [Set up the Cloud TPU environment](/tpu/docs/setup-gcp-account).

Specify a custom network and subnetwork
---------------------------------------

You can optionally specify the network and subnetwork to use for the TPU. If the
network is not specified, the TPU is placed in the `default` network. The
subnetwork must be in the same region as the zone where the TPU runs.

1. Create a network that matches one of the following valid formats:

   - `compute/{version}/projects/{proj-id}/global/networks/{network}`
   - `compute/{version}/projects/{proj-##}/global/networks/{network}`
   - `projects/{proj-id}/global/networks/{network}`
   - `projects/{proj-##}/global/networks/{network}`
   - `global/networks/{network}`
   - `{network}`

   For more information, see [Create and manage VPC
   networks](/vpc/docs/create-modify-vpc-networks).
2. 
Create a subnetwork that matches one of the following valid formats:

   - `compute/{version}/projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}`
   - `compute/{version}/projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}`
   - `projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}`
   - `projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}`
   - `regions/{region}/subnetworks/{subnetwork}`
   - `{subnetwork}`

   For more information, see [Create and manage VPC
   networks](/vpc/docs/create-modify-vpc-networks).
3. Create a TPU VM, specifying the custom network and subnetwork:

   ### gcloud

   To specify the network and subnetwork using the `gcloud` CLI, add the
   `--network` and `--subnetwork` flags to your create request:

   ```bash
   $ gcloud compute tpus tpu-vm create TPU_NAME \
       --zone=us-central2-b \
       --accelerator-type=v4-8 \
       --version=TPU_SOFTWARE_VERSION \
       --network=NETWORK \
       --subnetwork=SUBNETWORK
   ```

   ### curl

   To specify the network and subnetwork in a `curl` call, add the
   `network` and `subnetwork` fields to the request body:

   ```bash
   $ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
       -H "Content-Type: application/json" \
       -d "{accelerator_type: 'v4-8', \
       runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
       network_config: {network: 'NETWORK', subnetwork: 'SUBNETWORK', enable_external_ips: true}, \
       shielded_instance_config: {enable_secure_boot: true}}" \
       https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
   ```

Understand external and internal IP addresses
---------------------------------------------

When you create TPU VMs, they always come with internal IP addresses
automatically. 
If you create TPU VMs with the
gcloud CLI, external IP addresses are assigned by default. If you create them
through the Cloud TPU REST API (`tpu.googleapis.com`), no external IP address
is assigned by default. You can change the default behavior in both cases.

### External IP addresses

When you [create a TPU](/tpu/docs/managing-tpus-tpu-vm#provision-tpu) using
`gcloud`, external IP addresses are created by default for each TPU VM.
If you want to create a TPU VM without an external IP address, use the
`--internal-ips` flag shown in the following examples when you create the TPU VM.

### gcloud

If you are using queued resources:

```bash
gcloud compute tpus queued-resources create QUEUED_RESOURCE_ID \
    --node-id NODE_ID \
    --project PROJECT_ID \
    --zone us-central2-b \
    --accelerator-type v4-8 \
    --runtime-version TPU_SOFTWARE_VERSION \
    --internal-ips
```

If you are using the Create Node API:

```bash
$ gcloud compute tpus tpu-vm create TPU_NAME \
    --zone=us-central2-b \
    --accelerator-type=v4-8 \
    --version=TPU_SOFTWARE_VERSION \
    --internal-ips
```

### curl

Set the `enable_external_ips` field to `false` in the request body:

```bash
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{accelerator_type: 'v4-8', \
    runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
    network_config: {enable_external_ips: false}, \
    shielded_instance_config: {enable_secure_boot: true}}" \
    https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
```

To create a TPU VM with an external IP address when using the REST API
(`tpu.googleapis.com`), set the `network_config.enable_external_ips` field in
the request to `true`.

### Internal IP addresses

TPU VMs always have internal IP addresses. 
Cloud TPU users might want
to restrict their TPU VMs to internal IP addresses only for a few key reasons:

- **Enhanced security**: Internal IP addresses are only reachable from
  resources within the same VPC network, which improves security by limiting
  external access to the TPU VMs. This is especially important when you work
  with sensitive data or want to restrict access to your TPUs to specific
  users or systems within your network.
- **Cost savings**: By using internal IP addresses, you avoid the costs
  associated with external IP addresses, which can be significant for a large
  number of TPU VMs.
- **Improved network performance**: Internal IP addresses can lead to better
  network performance because the traffic stays within Google's network,
  avoiding the overhead of routing through the public internet. This is
  particularly relevant for large-scale machine learning workloads that need
  high-bandwidth communication between TPU VMs.

Enable SSH access to TPUs
-------------------------

To connect to TPUs using SSH, you need to either enable external IP addresses
for the TPUs, or enable
[Private Google Access](/vpc/docs/configure-private-google-access) for the
subnetwork to which the TPU VMs are connected.

Enable Private Google Access
----------------------------

TPUs that don't have external IP addresses can use Private Google Access to
access Google APIs and services. For more information about enabling
Private Google Access, see [Configure
Private Google Access](/vpc/docs/configure-private-google-access).

After you have configured Private Google Access, [connect to the VM using
SSH](/tpu/docs/managing-tpus-tpu-vm#tpu-connect).

Attach a custom service account
-------------------------------

Each TPU VM has an associated service account that it uses to make API requests
on your behalf. TPU VMs use this service account to call Cloud TPU
APIs and access Cloud Storage and other services. 
By default, your TPU VM
uses the [default Compute Engine service
account](/compute/docs/access/service-accounts#default_service_account).

The service account must be defined in the same Google Cloud project where you
create your TPU VM. Custom service accounts used for TPU VMs must have the [TPU
Viewer](/iam/docs/understanding-roles#tpu.viewer) role to call the Cloud TPU
API. If the code running in your TPU VM calls other Google Cloud services, it
must have the roles necessary to access those services.

For more information about service accounts, see
[Service accounts](/compute/docs/access/service-accounts#serviceaccount).

Use the following commands to specify a custom service account.

### gcloud

Use the `--service-account` flag when creating a TPU:

```bash
$ gcloud compute tpus tpu-vm create TPU_NAME \
    --zone=us-central2-b \
    --accelerator-type=TPU_TYPE \
    --version=tpu-vm-tf-2.17.1-pjrt \
    --service-account=SERVICE_ACCOUNT
```

### curl

Set the `service_account` field in the request body:

```bash
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d "{accelerator_type: 'v4-8', \
    runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
    network_config: {enable_external_ips: true}, \
    shielded_instance_config: {enable_secure_boot: true}, \
    service_account: {email: 'SERVICE_ACCOUNT'}}" \
    https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
```

Enable custom SSH methods
-------------------------

The default network allows SSH access to all TPU VMs. 
If you
use a network other than the default or you change the default network settings,
you need to explicitly enable SSH access by adding a firewall rule:

```bash
$ gcloud compute firewall-rules create allow-ssh \
    --network=NETWORK \
    --allow=tcp:22
```

Integrate with VPC Service Controls
-----------------------------------

Cloud TPU VPC Service Controls lets you define security perimeters around
your Cloud TPU resources and control the movement of data across the perimeter
boundary. To learn more about VPC Service Controls, see
[VPC Service Controls overview](/vpc-service-controls/docs/overview).
To learn about the limitations of using Cloud TPU with VPC Service Controls, see
[Supported products and limitations](/vpc-service-controls/docs/supported-products).
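The `curl` examples on this page all target the same node-creation endpoint and
differ only in the project, zone, and node ID baked into the URL. As a minimal
sketch (the helper name `tpu_create_url` is hypothetical, not part of `gcloud`
or the Cloud TPU API), a small shell function can assemble that URL so the
same pattern isn't repeated by hand in each request:

```shell
#!/usr/bin/env sh
# Hypothetical helper: build the Cloud TPU v2 node-creation URL used in
# the curl examples on this page. Arguments: project ID, zone, node ID.
tpu_create_url() {
  project="$1"
  zone="$2"
  node_id="$3"
  echo "https://tpu.googleapis.com/v2/projects/${project}/locations/${zone}/nodes?node_id=${node_id}"
}

# Example: URL for a node named my-tpu in us-central2-b.
# Prints:
# https://tpu.googleapis.com/v2/projects/my-project/locations/us-central2-b/nodes?node_id=my-tpu
tpu_create_url my-project us-central2-b my-tpu
```

You would then pass the function's output as the final argument to `curl`,
keeping the request body (`accelerator_type`, `runtime_version`,
`network_config`, and so on) as shown in the examples above.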