Higher network bandwidths can improve the performance of your distributed workloads running on Compute Engine virtual machine (VM) instances.
Overview
The maximum network bandwidth that is available for VMs with attached GPUs on Compute Engine is as follows:
- For N1 general-purpose VMs that have P100, P4, and K80 GPUs attached, a maximum network bandwidth of 32 Gbps is available. This is the same maximum rate that is available to N1 VMs that don't have GPUs attached. For more information about network bandwidths, see maximum egress data rate.
- For N1 general-purpose VMs that have T4 and V100 GPUs attached, you can get a maximum network bandwidth of up to 100 Gbps, based on the combination of GPU and vCPU count.
- For A2 and G2 accelerator-optimized VMs, you can get a maximum network bandwidth of up to 100 Gbps, based on the machine type.
- For A3 accelerator-optimized VMs, you can get a maximum network bandwidth of up to 1,000 Gbps (1 Tbps).
Network bandwidth and Google Virtual NIC (gVNIC)
To get the higher network bandwidth rates (50 Gbps or higher) on your GPU VMs, we recommend that you use Google Virtual NIC (gVNIC). For more information about creating GPU VMs that use gVNIC, see Creating GPU VMs that use higher bandwidths.
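As a rough illustration, a gVNIC-enabled GPU VM can be created with the `gcloud` CLI by setting the network interface's NIC type. This is a hedged sketch, not the canonical procedure from the linked guide: the VM name, zone, machine type, and image family below are placeholder assumptions that you should adjust for your project.

```shell
# Hypothetical example: create an N1 VM with one T4 GPU and gVNIC enabled.
# VM_NAME, the zone, and the image family/project are illustrative placeholders.
gcloud compute instances create VM_NAME \
    --zone=us-central1-a \
    --machine-type=n1-standard-32 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --network-interface=nic-type=GVNIC \
    --image-family=common-cu121 \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE
```

The `--network-interface=nic-type=GVNIC` flag is what opts the VM into gVNIC; the OS image must also include the gVNIC driver.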
Accelerator-optimized VMs
This section outlines the maximum network bandwidth available for A3, A2, and G2 accelerator-optimized VMs.
A3 VMs
Each A3 machine type has a fixed number of NVIDIA H100 80GB GPUs attached. Each machine type also has a fixed vCPU count and memory size.
Each a3-highgpu-8g VM has five physical network interface cards (NICs). Four of these NICs share the same Peripheral Component Interconnect Express (PCIe) bus and have a non-uniform memory access (NUMA) alignment of one NIC per two NVIDIA H100 80GB GPUs. These four NICs are ideal for dedicated, high-bandwidth GPU-to-GPU communication. The fifth NIC resides on a separate PCIe bus and is ideal for other networking needs. Each NIC has a maximum bandwidth of 200 Gbps, for a total maximum bandwidth of 1,000 Gbps (1 Tbps) per VM.
Machine type | GPU count | vCPUs | Memory | Maximum network bandwidth
---|---|---|---|---
a3-highgpu-8g | 8 | 208 | 1,872 GB | 1,000 Gbps
A2 VMs
Each A2 machine type has a fixed number of NVIDIA A100 40GB or NVIDIA A100 80GB GPUs attached. Each machine type also has a fixed vCPU count and memory size.
A100 40GB
Machine type | GPU count | vCPUs | Memory | Maximum network bandwidth
---|---|---|---|---
a2-highgpu-1g | 1 | 12 | 85 GB | 24 Gbps
a2-highgpu-2g | 2 | 24 | 170 GB | 32 Gbps
a2-highgpu-4g | 4 | 48 | 340 GB | 50 Gbps
a2-highgpu-8g | 8 | 96 | 680 GB | 100 Gbps
a2-megagpu-16g | 16 | 96 | 1,360 GB | 100 Gbps
A100 80GB
Machine type | GPU count | vCPUs | Memory | Maximum network bandwidth
---|---|---|---|---
a2-ultragpu-1g | 1 | 12 | 170 GB | 24 Gbps
a2-ultragpu-2g | 2 | 24 | 340 GB | 32 Gbps
a2-ultragpu-4g | 4 | 48 | 680 GB | 50 Gbps
a2-ultragpu-8g | 8 | 96 | 1,360 GB | 100 Gbps
G2 VMs
Each G2 machine type has a fixed number of NVIDIA L4 GPUs and vCPUs attached. Each G2 machine type also has a default memory and a custom memory range. The custom memory range defines the amount of memory that you can allocate to your VM for each machine type. You can specify your custom memory during VM creation.
Machine type | GPU count | vCPUs | Default memory | Custom memory range | Maximum network bandwidth
---|---|---|---|---|---
g2-standard-4 | 1 | 4 vCPUs | 16 GB | 16 - 32 GB | 10 Gbps
g2-standard-8 | 1 | 8 vCPUs | 32 GB | 32 - 54 GB | 16 Gbps
g2-standard-12 | 1 | 12 vCPUs | 48 GB | 48 - 54 GB | 16 Gbps
g2-standard-16 | 1 | 16 vCPUs | 64 GB | 54 - 64 GB | 32 Gbps
g2-standard-24 | 2 | 24 vCPUs | 96 GB | 96 - 108 GB | 32 Gbps
g2-standard-32 | 1 | 32 vCPUs | 128 GB | 96 - 128 GB | 32 Gbps
g2-standard-48 | 4 | 48 vCPUs | 192 GB | 192 - 216 GB | 50 Gbps
g2-standard-96 | 8 | 96 vCPUs | 384 GB | 384 - 432 GB | 100 Gbps
N1 GPU VMs
For N1 general-purpose VMs that have T4 and V100 GPUs attached, you can get a maximum network bandwidth of up to 100 Gbps, based on the combination of GPU and vCPU count. For all other N1 GPU VMs, see Overview.
Review the following sections to calculate the maximum network bandwidth that is available for your T4 and V100 VMs based on the GPU model, GPU count, and vCPU count.
5 or fewer vCPUs
For T4 and V100 VMs that have 5 or fewer vCPUs, a maximum network bandwidth of 10 Gbps is available.
More than 5 vCPUs
For T4 and V100 VMs that have more than 5 vCPUs, maximum network bandwidth is calculated based on the number of vCPUs and GPUs for that VM.
GPU model | Number of GPUs | Maximum network bandwidth calculation (Gbps)
---|---|---
NVIDIA V100 | 1 | min(vcpu_count * 2, 32)
NVIDIA V100 | 2 | min(vcpu_count * 2, 32)
NVIDIA V100 | 4 | min(vcpu_count * 2, 50)
NVIDIA V100 | 8 | min(vcpu_count * 2, 100)
NVIDIA T4 | 1 | min(vcpu_count * 2, 32)
NVIDIA T4 | 2 | min(vcpu_count * 2, 50)
NVIDIA T4 | 4 | min(vcpu_count * 2, 100)
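The calculation above can be sketched as a small helper function. This is an illustrative sketch, not a Google API; the function name and the cap table layout are assumptions made for this example, with values taken from the table above.

```python
# Illustrative helper (not a Google API): computes the maximum network
# bandwidth in Gbps for N1 VMs with T4 or V100 GPUs attached.

# Per-model bandwidth caps in Gbps, keyed by GPU count (from the table above).
BANDWIDTH_CAPS_GBPS = {
    "NVIDIA V100": {1: 32, 2: 32, 4: 50, 8: 100},
    "NVIDIA T4": {1: 32, 2: 50, 4: 100},
}

def max_network_bandwidth_gbps(gpu_model: str, gpu_count: int, vcpu_count: int) -> int:
    """Return the maximum network bandwidth (Gbps) for a T4 or V100 N1 VM."""
    if vcpu_count <= 5:
        # VMs with 5 or fewer vCPUs get a flat 10 Gbps.
        return 10
    cap = BANDWIDTH_CAPS_GBPS[gpu_model][gpu_count]
    return min(vcpu_count * 2, cap)
```

For example, a 32-vCPU N1 VM with 8 V100 GPUs gets min(32 * 2, 100) = 64 Gbps, while a 96-vCPU VM with 4 T4 GPUs reaches the 100 Gbps cap.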
Create high bandwidth VMs
To create VMs that use higher network bandwidths, see Use higher network bandwidth.
To test or verify the bandwidth speed for any configuration, you can run a benchmarking test. For more information, see Checking network bandwidth.
What's next?
- Learn more about GPU platforms.
- Learn how to create VMs with attached GPUs.
- Learn how to use higher network bandwidth.
- Learn about GPU pricing.