Configure networking and access to your Cloud TPU
This page describes how to set up custom network and access configurations for your Cloud TPU, including:
- Specifying a custom network and subnetwork
- Specifying external and internal IP addresses
- Enabling SSH access to TPUs
- Attaching a custom service account to your TPU
- Enabling custom SSH methods
- Using VPC Service Controls
Prerequisites
Before you run these procedures, you must install the Google Cloud CLI, create a Google Cloud project, and enable the Cloud TPU API. For instructions, see Set up the Cloud TPU environment.
Specify a custom network and subnetwork
You can optionally specify the network and subnetwork to use for the TPU. If the
network is not specified, the TPU is placed in the default network. The
subnetwork must be in the same region as the zone where the TPU runs.
Specify a network in one of the following valid formats:
compute/{version}/projects/{proj-id}/global/networks/{network}
compute/{version}/projects/{proj-##}/global/networks/{network}
projects/{proj-id}/global/networks/{network}
projects/{proj-##}/global/networks/{network}
global/networks/{network}
{network}
For more information, see Create and manage VPC networks.
Specify a subnetwork in one of the following valid formats:
compute/{version}/projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}
compute/{version}/projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}
projects/{proj-id}/regions/{region}/subnetworks/{subnetwork}
projects/{proj-##}/regions/{region}/subnetworks/{subnetwork}
regions/{region}/subnetworks/{subnetwork}
{subnetwork}
For more information, see Create and manage VPC networks.
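If you don't already have a custom network and subnetwork, one way to create them with the gcloud CLI is sketched below. The names my-tpu-network and my-tpu-subnet and the IP range are placeholders, not values from this guide; see Create and manage VPC networks for full details.

```shell
# Create a custom-mode VPC network (name is a placeholder).
gcloud compute networks create my-tpu-network --subnet-mode=custom

# Create a subnetwork in the same region as the zone where the TPU will run.
# The IP range shown here is only an example.
gcloud compute networks subnets create my-tpu-subnet \
  --network=my-tpu-network \
  --region=us-central2 \
  --range=10.0.0.0/24
```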
Create a TPU VM, specifying the custom network and subnetwork:
gcloud
To specify the network and subnetwork using the gcloud CLI, add the --network and --subnetwork flags to your create request:
$ gcloud compute tpus tpu-vm create TPU_NAME \
  --zone=us-central2-b \
  --accelerator-type=v4-8 \
  --version=TPU_SOFTWARE_VERSION \
  --network=NETWORK \
  --subnetwork=SUBNETWORK
curl
To specify the network and subnetwork in a curl call, add the network and subnetwork fields to the request body:
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{accelerator_type: 'v4-8', \
  runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
  network_config: {network: 'NETWORK', subnetwork: 'SUBNETWORK', enable_external_ips: true}, \
  shielded_instance_config: {enable_secure_boot: true}}" \
  https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
Understand external and internal IP addresses
TPU VMs always receive internal IP addresses automatically. If you create TPU VMs with the gcloud CLI, external IP addresses are assigned by default. If you create them with the Cloud TPU REST API (tpu.googleapis.com), no external IP address is assigned by default. You can change the default behavior in both cases.
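To check which addresses a TPU VM received, you can describe the node; the networkEndpoints field lists the internal IP address and, if one was assigned, the external one. TPU_NAME and the zone below are placeholders:

```shell
# Inspect a TPU VM's network endpoints (internal and external IPs).
gcloud compute tpus tpu-vm describe TPU_NAME \
  --zone=us-central2-b \
  --format="value(networkEndpoints)"
```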
External IP addresses
When you create a TPU using gcloud, an external IP address is created by default for each TPU VM. To create a TPU VM without an external IP address, add the --internal-ips flag, as shown in the following examples.
gcloud
If you are using queued resources:
gcloud compute tpus queued-resources create your-queued-resource-id \
  --node-id your-node-id \
  --project your-project \
  --zone us-central2-b \
  --accelerator-type v4-8 \
  --runtime-version tpu_software_version \
  --internal-ips
If you are using the Create Node API:
$ gcloud compute tpus tpu-vm create TPU_NAME \
  --zone=us-central2-b \
  --accelerator-type=v4-8 \
  --version=tpu_software_version \
  --internal-ips
curl
Set the enable_external_ips field to false in the request body:
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{accelerator_type: 'v4-8', \
  runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
  network_config: {enable_external_ips: false}, \
  shielded_instance_config: {enable_secure_boot: true}}" \
  https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
To create a TPU VM with an external IP address when using the REST API (tpu.googleapis.com), set the network_config.enable_external_ips field in the request body to true.
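Mirroring the earlier curl example (same placeholder values), a request that does assign an external IP address sets that field to true:

```shell
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{accelerator_type: 'v4-8', \
  runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
  network_config: {enable_external_ips: true}, \
  shielded_instance_config: {enable_secure_boot: true}}" \
  https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
```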
Internal IP addresses
TPU VMs always have internal IP addresses. Cloud TPU users might want to restrict their TPU VMs to internal IP addresses only for a few key reasons:
Enhanced Security: Internal IPs are only accessible to resources within the same VPC network, which can improve security by limiting external access to the TPU VMs. This is especially important when working with sensitive data or when you want to restrict access to your TPUs to specific users or systems within your network.
Cost Savings: By using internal IP addresses, you can avoid the costs associated with external IP addresses, which can be significant for a large number of TPU VMs.
Improved Network Performance: Internal IPs can lead to better network performance because the traffic stays within Google's network, avoiding the overhead of routing through the public internet. This is particularly relevant for large-scale machine learning workloads that need high-bandwidth communication between TPU VMs.
Enable SSH access to TPUs
To connect to TPUs using SSH, you need to either enable external IP addresses for the TPUs, or enable Private Google Access for the subnetwork to which the TPU VMs are connected.
Enable Private Google Access
TPUs that don't have external IP addresses can use Private Google Access to access Google APIs and services. For more information about enabling Private Google Access, see Configure Private Google Access.
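As a sketch, enabling Private Google Access on an existing subnetwork with the gcloud CLI looks like the following; SUBNETWORK and the region are placeholders, and the linked guide covers the full procedure:

```shell
# Turn on Private Google Access for the subnetwork the TPU VMs use.
gcloud compute networks subnets update SUBNETWORK \
  --region=us-central2 \
  --enable-private-ip-google-access
```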
After you have configured Private Google Access, connect to the VM using SSH.
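For example, assuming a TPU VM named TPU_NAME in us-central2-b, the connection looks like:

```shell
# Open an SSH session to the TPU VM.
gcloud compute tpus tpu-vm ssh TPU_NAME \
  --zone=us-central2-b
```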
Attach a custom service account
Each TPU VM has an associated service account it uses to make API requests on your behalf. TPU VMs use this service account to call Cloud TPU APIs and access Cloud Storage and other services. By default, your TPU VM uses the default Compute Engine service account.
The service account must be defined in the same Google Cloud project where you create your TPU VM. Custom service accounts used for TPU VMs must have the TPU Viewer role to call the Cloud TPU API. If the code running in your TPU VM calls other Google Cloud services, it must have the roles necessary to access those services.
For more information about service accounts, see Service accounts.
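As a sketch, granting the TPU Viewer role to a custom service account looks like the following; PROJECT_ID and SERVICE_ACCOUNT are placeholders for your project ID and the service account's email address:

```shell
# Grant the TPU Viewer role so the service account can call the Cloud TPU API.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SERVICE_ACCOUNT" \
  --role="roles/tpu.viewer"
```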
Use the following commands to specify a custom service account.
gcloud
Use the --service-account flag when creating a TPU:
$ gcloud compute tpus tpu-vm create TPU_NAME \
  --zone=us-central2-b \
  --accelerator-type=TPU_TYPE \
  --version=tpu-vm-tf-2.17.1-pjrt \
  --service-account=SERVICE_ACCOUNT
curl
Set the service_account field in the request body:
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{accelerator_type: 'v4-8', \
  runtime_version: 'tpu-vm-tf-2.17.1-pjrt', \
  network_config: {enable_external_ips: true}, \
  shielded_instance_config: {enable_secure_boot: true}, \
  service_account: {email: 'SERVICE_ACCOUNT'}}" \
  https://tpu.googleapis.com/v2/projects/PROJECT_ID/locations/us-central2-b/nodes?node_id=TPU_NAME
Enable custom SSH methods
The default network allows SSH access to all TPU VMs. If you use a network other than the default or you change the default network settings, you need to explicitly enable SSH access by adding a firewall rule:
$ gcloud compute firewall-rules create allow-ssh \
  --network=NETWORK \
  --allow=tcp:22
Integrate with VPC Service Controls
Cloud TPU VPC Service Controls lets you define security perimeters around your Cloud TPU resources and control the movement of data across the perimeter boundary. To learn more about VPC Service Controls, see VPC Service Controls overview. To learn about the limitations in using Cloud TPU with VPC Service Controls, see supported products and limitations.