Google Cloud accounts for bandwidth per virtual machine (VM) instance, not per network interface (vNIC) or IP address. A VM's machine type defines its maximum possible egress rate; however, you can only achieve that maximum possible egress rate in specific situations.
This page outlines expectations, which are useful when planning your deployments. It categorizes bandwidth using two dimensions:
- Egress or ingress: As used on this page, egress and ingress are always from the perspective of a Google Cloud VM:
  - Packets sent from a Google Cloud VM compose its egress (outbound) traffic.
  - Packets sent to a Google Cloud VM compose its ingress (inbound) traffic.
- How the packet is routed: A packet can be routed from a sending VM or to a receiving VM using routes whose next hops are within a VPC network or routes outside of a VPC network.
Neither additional virtual network interfaces (vNICs) nor additional IP addresses per vNIC increase ingress or egress bandwidth for a VM. For example, an n1-standard-8 VM with two NICs is limited to 16 Gbps total egress bandwidth, not 16 Gbps egress bandwidth per vNIC.
All of the information on this page is applicable to Compute Engine VMs, as well as products that depend on Compute Engine VMs. For example, a Google Kubernetes Engine node is a Compute Engine VM.
Bandwidth summary
The following table illustrates bandwidth expectations based on whether a packet is sent from (egress) or received by (ingress) a VM and the packet routing method.
Egress

| Routing | Bandwidth expectations |
|---|---|
| Routing within a VPC network | Limited by the per-VM maximum egress bandwidth, which is based on the sending VM's machine type. Per-project inter-regional egress bandwidth limits and Cloud VPN or Cloud Interconnect limits can also apply. |
| Routing outside a VPC network | Limited by whichever is reached first: the per-VM maximum egress bandwidth, a fixed per-VM external rate (25 Gbps with Tier_1 networking, 7 Gbps without it, 1 Gbps for H3 VMs), a 3 Gbps per-flow maximum egress rate (1 Gbps on H3), or the project's internet egress bandwidth quota. |

Ingress

| Routing | Bandwidth expectations |
|---|---|
| Routing within a VPC network | No purposeful bandwidth restriction. The receiving VM can handle as many incoming packets as its machine type, operating system, and other network conditions permit. |
| Routing outside a VPC network | Limited per receiving VM to whichever is reached first: 1,800,000 packets per second or 20 Gbps. |
Egress bandwidth
Google Cloud limits outbound (egress) bandwidth using per-VM maximum egress rates based on the machine type of the VM sending the packet and whether the packet's destination is accessible using routes within a VPC network or routes outside of a VPC network. Outbound bandwidth includes packets emitted by all of the VM's NICs and data transferred to all persistent disks connected to the VM.
Per-VM maximum egress bandwidth
Per-VM maximum egress bandwidth is generally 2 Gbps per vCPU, but there are some differences and exceptions, depending on the machine series. The following table shows the range of per-VM maximum egress bandwidth limits for traffic routed within a VPC network with standard networking only, not with per VM Tier_1 networking performance.
Machine series | Lowest per-VM maximum egress limit without Tier_1 networking | Highest per-VM maximum egress limit without Tier_1 networking |
---|---|---|
E2 | 1 Gbps | 16 Gbps |
C3 | 23 Gbps | 100 Gbps |
C3D | 20 Gbps | 100 Gbps |
T2D | 10 Gbps | 32 Gbps |
N2, C2, N2D, and C2D | 10 Gbps | 32 Gbps |
H3 | N/A | 200 Gbps |
N1 (excluding VMs with 1 vCPU) | 10 Gbps | 32 Gbps on the Intel Skylake CPU platform; 16 Gbps on CPU platforms older than Intel Skylake |
N1 machine types with 1 vCPU, f1-micro, and g1-small | 2 Gbps | 2 Gbps |
A2 and G2 | Based on GPUs | Based on GPUs |
You can find the per-VM maximum egress bandwidth for every machine type listed on its specific machine family page:
- General-purpose machine family
- Compute-optimized machine family
- Memory-optimized machine family
- Accelerator-optimized machine family
Per-VM maximum egress bandwidth is not a guarantee. Actual egress bandwidth can be reduced by factors such as the following (this list is not exhaustive):
- Guest Ethernet driver—gVNIC offers better performance than the VirtIO network interface
- Packet size
- Protocol overhead
- The number of flows
- Ethernet driver settings of the VM's guest OS, such as checksum offload and TCP segmentation offload (TSO)
- Network congestion
- In a situation where persistent disks compete with other network egress traffic, 60% of the maximum network bandwidth is given to Persistent Disk writes, leaving 40% for other network egress traffic. See Factors that affect disk performance in the Persistent Disk documentation for more details.
To get the largest possible per-VM maximum egress bandwidth:
- Enable per VM Tier_1 networking performance with larger general-purpose and compute-optimized machine types.
- Use the largest VPC network maximum transmission unit (MTU) supported by your network topology. Larger MTUs can reduce packet-header overhead and increase payload data throughput.
Egress to destinations routable within a VPC network
From the perspective of a sending VM and for destination IP addresses accessible by means of routes within a VPC network, Google Cloud limits outbound traffic using these rules:
- Per-VM maximum egress bandwidth: The per-VM maximum egress bandwidth described in the Per-VM maximum egress bandwidth section.
- Per-project inter-regional egress bandwidth: If a sending VM and an internal destination or its next hop are in different regions, Google Cloud enforces a maximum inter-regional egress bandwidth limit. Most customers are unlikely to reach this limit. For questions about this limit, file a support case.
- Cloud VPN and Cloud Interconnect limits: When sending
traffic from a VM to an internal IP address destination routable by a next
hop Cloud VPN tunnel or Cloud Interconnect VLAN attachment,
egress bandwidth is limited by:
- Maximum packet rate and bandwidth per Cloud VPN tunnel
- Maximum packet rate and bandwidth per VLAN attachment
- To fully use the bandwidth of multiple next hop Cloud VPN tunnels or Cloud Interconnect VLAN attachments using ECMP routing, you must use multiple TCP connections (unique 5-tuples), as illustrated in the sketch after this list.
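The following minimal Python sketch models why unique 5-tuples matter for ECMP: each flow's 5-tuple is hashed to pick one next hop, so a single TCP connection always lands on the same tunnel. The hash function, addresses, and tunnel names here are illustrative assumptions, not Google Cloud's actual hashing algorithm.

```python
# Illustrative model only: the real ECMP hash used by Google Cloud is not public.
# A single 5-tuple always maps to the same next hop, so one TCP connection can
# never use more than one tunnel's bandwidth.
import hashlib

def pick_next_hop(five_tuple, next_hops):
    """Deterministically map a flow's 5-tuple to one next hop (hypothetical hash)."""
    key = "|".join(str(field) for field in five_tuple).encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(next_hops)
    return next_hops[index]

tunnels = ["vpn-tunnel-1", "vpn-tunnel-2"]  # two equal-cost next hops (assumed names)

# One connection: every packet of this flow takes the same tunnel.
print(pick_next_hop(("10.0.0.2", 40000, "192.168.0.5", 443, "TCP"), tunnels))

# Several connections (unique source ports, therefore unique 5-tuples) spread across tunnels.
for src_port in range(40000, 40008):
    flow = ("10.0.0.2", src_port, "192.168.0.5", 443, "TCP")
    print(src_port, pick_next_hop(flow, tunnels))
```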
Destinations routable within a VPC network include all of the following destinations, each of which is accessible from the perspective of the sending VM by a route whose next hop is not the default internet gateway:
- Regional internal IPv4 addresses in subnet primary IPv4 and subnet secondary IPv4 address ranges, including private IPv4 address ranges and privately used public IPv4 address ranges, used by these destination resources:
  - The primary internal IPv4 address of a receiving VM's network interface (vNIC). (When a sending VM connects to another VM's vNIC external IPv4 address, packets are routed using a next hop default internet gateway, so Egress to destinations outside of a VPC network applies instead.)
  - An internal IPv4 address in an alias IP range of a receiving VM's vNIC.
  - An internal IPv4 address of an internal forwarding rule for either protocol forwarding or for an internal passthrough Network Load Balancer.
- Global internal IPv4 addresses for these destination resources:
- Internal IPv6 subnet address ranges used by these destination resources:
  - An IPv6 address from the /96 IPv6 address range assigned to a dual-stack receiving VM's vNIC.
  - An IPv6 address from the /96 IPv6 address range of an internal forwarding rule for either protocol forwarding or for an internal passthrough Network Load Balancer.
- External IPv6 subnet address ranges used by these destination resources when packets are routed using subnet routes or peering subnet routes within the VPC network or by custom routes within the VPC network that do not use the default internet gateway next hop:
  - An IPv6 address from the /96 IPv6 address range assigned to a dual-stack receiving VM's vNIC.
  - An IPv6 address from the /96 IPv6 address range of an external forwarding rule for either protocol forwarding or for an external passthrough Network Load Balancer.
- Other destinations accessible using the following VPC network routes:
  - Dynamic routes
  - Static routes except those that use a default internet gateway next hop
  - Peering custom routes
The following list ranks traffic from sending VMs to internal destinations, from highest possible bandwidth to lowest:
- Between VMs in the same zone
- Between VMs in different zones of the same region
- Between VMs in different regions
- From a VM to Google Cloud APIs and services using Private Google Access or accessing Google APIs from a VM's external IP address. This includes Private Service Connect endpoints for Google APIs.
Egress to destinations outside of a VPC network
From the perspective of a sending VM and for destination IP addresses outside of a VPC network, Google Cloud limits outbound traffic to whichever of the following rates is reached first:
- Per-VM egress bandwidth: The maximum bandwidth for all connections from a VM to destinations outside of a VPC network is the smaller of the Per-VM maximum egress bandwidth and one of these rates:
  - 25 Gbps, if Tier_1 networking is enabled
  - 7 Gbps, if Tier_1 networking is not enabled
  - 1 Gbps for H3 VMs

  As an example, even though an n2-standard-16 instance has a per-VM maximum egress bandwidth of 32 Gbps, the per-VM egress bandwidth from an n2-standard-16 instance to external destinations is either 25 Gbps or 7 Gbps, depending on whether Tier_1 networking is enabled.

- Per-flow maximum egress rate: The maximum bandwidth for each unique 5-tuple connection from a VM to a destination outside of a VPC network is 3 Gbps, except on H3, where it is 1 Gbps.

- Per-project internet egress bandwidth: The maximum bandwidth for all connections from VMs in each region of a project to destinations outside of a VPC network is defined by the project's Internet egress bandwidth quotas.
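As a quick way to reason about the first of these limits, the following sketch computes the per-VM cap for external egress as the smaller of the machine type's per-VM maximum egress bandwidth and the fixed external rate. The function is illustrative; the 32 Gbps figure for n2-standard-16 comes from the example above.

```python
def external_egress_cap_gbps(per_vm_max_gbps, tier_1_enabled, is_h3=False):
    """Model of the per-VM cap for traffic leaving the VPC network.

    Returns the smaller of the machine type's per-VM maximum egress bandwidth
    and the fixed external rate described on this page (1 Gbps for H3 VMs,
    25 Gbps with Tier_1 networking, 7 Gbps without it).
    """
    if is_h3:
        external_rate = 1
    elif tier_1_enabled:
        external_rate = 25
    else:
        external_rate = 7
    return min(per_vm_max_gbps, external_rate)

# n2-standard-16 has a 32 Gbps per-VM maximum egress bandwidth.
print(external_egress_cap_gbps(32, tier_1_enabled=False))  # 7
print(external_egress_cap_gbps(32, tier_1_enabled=True))   # 25
```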
Destinations outside of a VPC network include all of the following destinations, each of which is accessible by a route in the sending VM's VPC network whose next hop is the default internet gateway:
- Global external IPv4 and IPv6 addresses for external proxy Network Load Balancers and external Application Load Balancers
- Regional external IPv4 addresses for Google Cloud resources, including VM vNIC external IPv4 addresses, external IPv4 addresses for external protocol forwarding, external passthrough Network Load Balancers, and response packets to Cloud NAT gateways.
- Regional external IPv6 addresses in dual-stack subnets with external IPv6 address ranges used by dual-stack VM external IPv6 addresses, external protocol forwarding, and external passthrough Network Load Balancers, provided that the subnet is located in a separate, non-peered VPC network and the destination IPv6 address range is accessible using a route in the sending VM's VPC network whose next hop is the default internet gateway. If a dual-stack subnet with an external IPv6 address range is located in the same VPC network or in a peered VPC network, see Egress to destinations routable within a VPC network instead.
- Other external destinations accessible using a static route in the sending VM's VPC network provided that the next hop for the route is the default internet gateway.
For details about which Google Cloud resources use what types of external IP addresses, see External IP addresses.
Ingress bandwidth
Google Cloud handles inbound (ingress) bandwidth depending on how the incoming packet is routed to a receiving VM.
Ingress to destinations routable within a VPC network
A receiving VM can handle as many incoming packets as its machine type, operating system, and other network conditions permit. Google Cloud does not implement any purposeful bandwidth restriction on incoming packets delivered to a VM if the incoming packet is delivered using routes within a VPC network:
- Subnet routes in the receiving VM's VPC network
- Peering subnet routes in a peered VPC network
- Routes in another network whose next hops are Cloud VPN tunnels, Cloud Interconnect (VLAN) attachments, or Router appliance VMs located in the receiving VM's VPC network
Destinations for packets that are routed within a VPC network include:
- The primary internal IPv4 address of the receiving VM's network interface (vNIC). Primary internal IPv4 addresses are regional internal IPv4 addresses that come from a subnet's primary IPv4 address range.
- An internal IPv4 address from an alias IP range of the receiving VM's vNIC. Alias IP ranges can come from either a subnet's primary IPv4 address range or one of its secondary IPv4 address ranges.
- An IPv6 address from the /96 IPv6 address range assigned to a dual-stack receiving VM's vNIC. VM IPv6 ranges can come from these subnet IPv6 ranges:
  - An internal IPv6 address range.
  - An external IPv6 address range when the incoming packet is routed internally to the receiving VM using one of the VPC network routes listed previously in this section.
- An internal IPv4 address of a forwarding rule used by internal protocol forwarding to the receiving VM or internal passthrough Network Load Balancer where the receiving VM is a backend of the load balancer. Internal forwarding rule IPv4 addresses come from a subnet's primary IPv4 address range.
- An internal IPv6 address from the /96 IPv6 range of a forwarding rule used by internal protocol forwarding to the receiving VM or internal passthrough Network Load Balancer where the receiving VM is a backend of the load balancer. Internal forwarding rule IPv6 addresses come from a subnet's internal IPv6 address range.
- An external IPv6 address from the /96 IPv6 range of a forwarding rule used by external protocol forwarding to the receiving VM or external passthrough Network Load Balancer where the receiving VM is a backend of the load balancer, when the incoming packet is routed within the VPC network using one of the routes listed previously in this section. External forwarding rule IPv6 addresses come from a subnet's external IPv6 address range.
- An IP address within the destination range of a custom static route that uses the receiving VM as a next hop VM (next-hop-instance or next-hop-address).
- An IP address within the destination range of a custom static route using an internal passthrough Network Load Balancer (next-hop-ilb) next hop, if the receiving VM is a backend for that load balancer.
Ingress to destinations outside of a VPC network
Google Cloud implements the following bandwidth restrictions for incoming packets delivered to a receiving VM using routes outside a VPC network. These bandwidth restrictions are applied individually to each receiving VM. The inbound bandwidth restriction that is applied is whichever of the following is reached first (a sketch after this list shows which limit is likely to bind for a given packet size):
- 1,800,000 packets per second
- 20 Gbps
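Which of these two limits is reached first depends on the average packet size. The following back-of-the-envelope sketch is illustrative only; the example packet sizes are assumptions.

```python
def binding_ingress_limit(avg_packet_bytes, pps_limit=1_800_000, gbps_limit=20):
    """Return which externally routed ingress limit is reached first for a given
    average packet size. 1,800,000 pps and 20 Gbps are the limits from this page."""
    gbps_at_pps_limit = pps_limit * avg_packet_bytes * 8 / 1e9
    if gbps_at_pps_limit < gbps_limit:
        return f"packet-rate limit ({gbps_at_pps_limit:.1f} Gbps at {pps_limit:,} pps)"
    return f"bandwidth limit ({gbps_limit} Gbps)"

print(binding_ingress_limit(200))   # small packets: the packet-rate limit binds first
print(binding_ingress_limit(1400))  # large packets: the 20 Gbps limit binds first
```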
Destinations for packets that are routed using routes outside of a VPC network include:
- An external IPv4 address assigned in a one-to-one NAT access configuration on one of the receiving VM's network interfaces (NICs).
- An external IPv6 address from the /96 IPv6 address range assigned to a vNIC of a dual-stack receiving VM when the incoming packet is routed using a route outside of the receiving VM's VPC network.
- An external IPv4 address of a forwarding rule used by external protocol forwarding to the receiving VM or external passthrough Network Load Balancer where the receiving VM is a backend of the load balancer.
- An external IPv6 address from the /96 IPv6 range of a forwarding rule used by external protocol forwarding to the receiving VM or external passthrough Network Load Balancer where the receiving VM is a backend of the load balancer, when the incoming packet is routed using a route outside of a VPC network.
- Established inbound responses processed by Cloud NAT.
Jumbo frames
To receive and send jumbo frames, configure the VPC network used by your VMs; set the maximum transmission unit (MTU) to a larger value, up to 8896.
Higher MTU values increase the packet size and reduce the packet-header overhead, which increases payload data throughput.
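As a rough illustration of that overhead reduction, the following sketch compares TCP payload efficiency at the default 1460-byte MTU and an 8896-byte MTU, assuming 20-byte IPv4 and TCP headers with no options. The header sizes are standard protocol values, not Google Cloud specifics.

```python
def tcp_payload_efficiency(mtu_bytes, ip_header=20, tcp_header=20):
    """Fraction of each packet that carries TCP payload (assumes IPv4, no options)."""
    return (mtu_bytes - ip_header - tcp_header) / mtu_bytes

# 1460 bytes is the default VPC MTU; 8896 bytes is the largest value this page mentions.
for mtu in (1460, 8896):
    print(mtu, f"{tcp_payload_efficiency(mtu):.1%}")
# Roughly 97.3% at 1460 bytes versus 99.6% at 8896 bytes, plus fewer packets per byte sent.
```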
To use jumbo frames with the gVNIC driver, you must use version 1.3 or later of the driver. Not all Google Cloud public images include this driver version. For more information about operating system support for jumbo frames, see the Networking features tab on the Operating system details page. The image versions that indicate full support for jumbo frames include the updated gVNIC driver, even if the guest OS shows the gve driver version as 1.0.0.
If you are using an OS image that doesn't have full support for jumbo frames,
you can manually install gVNIC driver version v1.3.0 or later. Google
recommends installing the gVNIC driver version marked Latest
to benefit from
additional features and bug fixes. You can download the gVNIC drivers from
GitHub.
To manually update the gVNIC driver version in your guest OS, see Use on non-supported operating systems.
Receive and transmit queues
Each VM vNIC is assigned a number of receive and transmit queues for processing packets from the network.
- Receive Queue (RX): Queue to receive packets. When the vNIC receives a packet from the network, the vNIC selects the descriptor for an incoming packet from the queue, processes it and hands the packet to the guest OS over a packet queue attached to a vCPU core using an interrupt. If the RX queue is full and there is no buffer available to place a packet, then the packet is dropped. This can typically happen if an application is over-utilizing a vCPU core that is also attached to the selected packet queue.
- Transmit Queue (TX): Queue to transmit packets. When the guest OS sends a packet, a descriptor is allocated and placed in the TX queue. The vNIC then processes the descriptor and transmits the packet.
Default queue allocation
Unless you explicitly assign queue counts for NICs, you can model the algorithm Google Cloud uses to assign a fixed number of RX and TX queues per vNIC in this way:
Use one of the following methods, depending on your network interface type:
- VirtIO: Divide the number of vCPUs by the number of NICs, and discard any remainder: [number of vCPUs/number of NICs].
- gVNIC: Divide the number of vCPUs by the number of NICs, then divide the result by 2 and discard any remainder: [number of vCPUs/number of NICs/2].

This calculation always results in a whole number (not a fraction).
If the calculated number is less than 1, assign each vNIC one queue instead.
Determine if the calculated number is greater than the maximum number of queues per vNIC. The maximum number of queues per vNIC depends on the driver type:
- Using virtIO or a custom driver, the maximum number of queues per vNIC is 32. If the calculated number is greater than 32, ignore the calculated number and assign each vNIC 32 queues instead.
- Using gVNIC, the maximum number of queues per vNIC is 16. If the calculated number is greater than 16, ignore the calculated number and assign each vNIC 16 queues instead.
The following examples show how to calculate the default number of queues:
- If a VM uses VirtIO and has 16 vCPUs and 4 NICs, the calculated number is [16/4] = 4. Google Cloud assigns each vNIC four queues.
- If a VM uses gVNIC and has 128 vCPUs and two NICs, the calculated number is [128/2/2] = 32. Because 32 is greater than the gVNIC maximum, Google Cloud assigns each vNIC the maximum of 16 queues.
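The default allocation described above can be modeled with a short function. This is a sketch of the documented calculation, not the actual implementation.

```python
def default_queue_count(vcpus, nic_count, nic_type):
    """Model of the default per-vNIC queue count described above."""
    if nic_type == "VIRTIO":
        queues, max_queues = vcpus // nic_count, 32
    elif nic_type == "GVNIC":
        queues, max_queues = vcpus // nic_count // 2, 16
    else:
        raise ValueError("unknown NIC type")
    # At least one queue per vNIC, and never more than the driver maximum.
    return min(max(queues, 1), max_queues)

print(default_queue_count(16, 4, "VIRTIO"))  # 4 queues per vNIC, matching the first example
print(default_queue_count(128, 2, "GVNIC"))  # capped at 16 queues per vNIC
```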
On Linux systems, you can use ethtool
to configure a vNIC with fewer queues
than the number of queues Google Cloud assigns per vNIC.
Custom queue allocation
Instead of the Default queue allocation, you can assign a custom queue count (total of both RX and TX) to each vNIC when you create a new VM by using the Compute Engine API.
The number of custom queues you specify must adhere to the following rules:
The minimum queue count you can assign per vNIC is one.
The maximum queue count you can assign to each vNIC is the lower of the vCPU count or the per vNIC maximum queue count, based on the driver type:
- Using virtIO or a custom driver, the maximum queue count is 32 per vNIC.
- Using gVNIC, the maximum queue count is 16 per vNIC.
If you assign custom queue counts to all NICs of the VM, the sum of your queue count assignments must be less than or equal to the number of vCPUs assigned to the VM instance.
You can oversubscribe the custom queue count for your NICs. In other words, you can have a sum of the queue counts assigned to all NICs for your VM that is greater than the number of vCPUs for your VM. To oversubscribe the custom queue count, you must satisfy the following conditions:
- You use gVNIC as the vNIC type for all NICs configured for the VM.
- Your VM uses a machine type from the N2, N2D, C2, and C2D machine series.
- You enabled Tier_1 networking for the VM.
- You specified a custom queue count for all NICs configured for the VM.
With queue oversubscription, the maximum queue count for the VM is 16 times the number of NICs. So, if you have 6 NICs configured for a VM with 30 vCPUs, you can configure a maximum of (16 * 6), or 96 custom queues for your VM.
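A minimal sketch of these validation rules follows, assuming the per-vNIC maximums and oversubscription conditions described above; the function and its parameter names are illustrative.

```python
def validate_custom_queue_counts(queue_counts, vcpus, nic_type, tier_1=False,
                                 oversubscription_series=False):
    """Model of the custom queue-count rules described above.

    queue_counts: one entry per vNIC, all explicitly assigned.
    oversubscription_series: True for N2, N2D, C2, and C2D machine types.
    """
    per_nic_max = min(vcpus, 16 if nic_type == "GVNIC" else 32)
    if any(q < 1 or q > per_nic_max for q in queue_counts):
        return False
    if sum(queue_counts) <= vcpus:
        return True
    # Oversubscription: allowed only with gVNIC, Tier_1 networking, and an
    # eligible machine series, up to 16 queues times the number of NICs.
    return (nic_type == "GVNIC" and tier_1 and oversubscription_series
            and sum(queue_counts) <= 16 * len(queue_counts))

# 30 vCPUs, 6 gVNIC NICs with Tier_1 on an N2 VM: up to 16 * 6 = 96 queues total.
print(validate_custom_queue_counts([16] * 6, vcpus=30, nic_type="GVNIC",
                                   tier_1=True, oversubscription_series=True))  # True
```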
Examples
- If a VM has 8 vCPUs and 3 NICs, the maximum queue count for the VM is the number of vCPUs, or 8. You can assign 1 queue to nic0, 4 queues to nic1, and 3 queues to nic2. In this example, you cannot subsequently assign 4 queues to nic2 while keeping the other two vNIC queue assignments, because the sum of assigned queues cannot exceed the number of vCPUs (8).
- If a VM has 96 vCPUs and 2 NICs, you can assign both NICs up to 32 queues each when using the virtIO driver, or up to 16 queues each when using the gVNIC driver. In this example, the sum of assigned queues is always less than the number of vCPUs.
It's also possible to assign a custom queue count for only some NICs, letting Google Cloud assign queues to the remaining NICs. The number of queues you can assign per vNIC is still subject to rules mentioned previously. You can model the feasibility of your configuration, and, if your configuration is possible, the number of queues that Google Cloud assigns to the remaining NICs with this process:
- Calculate the sum of queues for the NICs using custom queue assignment. For an example VM with 20 vCPUs and 6 NICs, suppose you assign nic0 5 queues, nic1 6 queues, and nic2 4 queues, and let Google Cloud assign queues for nic3, nic4, and nic5. In this example, the sum of custom-assigned queues is 5+6+4 = 15.
- Subtract the sum of custom-assigned queues from the number of vCPUs. If the difference is not at least equal to the number of remaining NICs for which Google Cloud must assign queues, Google Cloud returns an error. Continuing with the example VM of 20 vCPUs and a sum of 15 custom-assigned queues, Google Cloud has 20-15 = 5 queues left to assign to the remaining NICs (nic3, nic4, nic5).
- Divide the difference from the previous step by the number of remaining NICs and discard any remainder: ⌊(number of vCPUs - sum of assigned queues)/(number of remaining NICs)⌋. This calculation always results in a whole number (not a fraction) that is at least one because of the constraint explained in the previous step. Google Cloud assigns each remaining NIC a queue count matching the calculated number, as long as the calculated number is not greater than the maximum number of queues per vNIC. The maximum number of queues per vNIC depends on the driver type:
  - Using virtIO or a custom driver, if the calculated number of queues for each remaining vNIC is greater than 32, Google Cloud assigns each remaining vNIC 32 queues.
  - Using gVNIC, if the calculated number of queues for each remaining vNIC is greater than 16, Google Cloud assigns each remaining vNIC 16 queues.
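The partial-assignment process above can be modeled the same way; this sketch is illustrative and follows the steps just described.

```python
def remaining_nic_queues(vcpus, custom_counts, remaining_nics, nic_type):
    """Model of how queues are assigned to NICs without a custom queue count."""
    leftover = vcpus - sum(custom_counts)
    if leftover < remaining_nics:
        raise ValueError("not enough vCPUs left for the remaining NICs")
    per_nic = leftover // remaining_nics          # discard any remainder
    per_nic_max = 16 if nic_type == "GVNIC" else 32
    return min(per_nic, per_nic_max)

# 20 vCPUs; nic0-nic2 get 5, 6, and 4 queues; nic3-nic5 are left to Google Cloud.
print(remaining_nic_queues(20, [5, 6, 4], remaining_nics=3, nic_type="GVNIC"))  # 1
```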
Configure custom queue counts
To create a VM that uses a custom queue count for one or more vNICs, complete the following steps.
gcloud
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
- Use the gcloud compute instances create command to create the VM. Repeat the --network-interface flag for each vNIC you want to configure for the VM, and include the queue-count option.
gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --machine-type=MACHINE_TYPE \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
    --network-interface=network=NETWORK_NAME_1,subnet=SUBNET_1,nic-type=GVNIC,queue-count=QUEUE_SIZE_1 \
    --network-interface=network=NETWORK_NAME_2,subnet=SUBNET_2,nic-type=GVNIC,queue-count=QUEUE_SIZE_2
Replace the following:
- VM_NAME: a name for the new VM
- ZONE: the zone to create the VM in
- MACHINE_TYPE: the machine type of the VM. The machine type you specify must support gVNIC and Tier_1 networking.
- NETWORK_NAME: the name of the network created previously
- SUBNET_*: the name of one of the subnets created previously
- QUEUE_SIZE: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
Terraform
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
- Create a VM with specific queue counts for vNICs using the google_compute_instance resource. Repeat the network_interface block for each vNIC you want to configure for the VM, and include the queue_count argument.

# Queue oversubscription instance
resource "google_compute_instance" "VM_NAME" {
  project      = "PROJECT_ID"
  machine_type = "MACHINE_TYPE"
  name         = "VM_NAME"
  zone         = "ZONE"

  boot_disk {
    auto_delete = true
    device_name = "DEVICE_NAME"
    initialize_params {
      image = "IMAGE_NAME"
      size  = DISK_SIZE
      type  = "DISK_TYPE"
    }
  }

  network_performance_config {
    total_egress_bandwidth_tier = "TIER_1"
  }

  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_1
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_1"
  }
  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_2
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_2"
  }
  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_3
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_3"
  }
  network_interface {
    nic_type           = "GVNIC"
    queue_count        = QUEUE_COUNT_4
    subnetwork_project = "PROJECT_ID"
    subnetwork         = "SUBNET_4"
  }
}
Replace the following:
- VM_NAME: a name for the new VM
- PROJECT_ID: the ID of the project to create the VM in. Unless you are using a Shared VPC network, the project you specify must be the same one in which all of the subnets and networks were created.
- DEVICE_NAME: the name to associate with the boot disk in the guest OS
- IMAGE_NAME: the name of an image, for example, "projects/debian-cloud/global/images/debian-11-bullseye-v20231010"
- DISK_SIZE: the size of the boot disk, in GiB
- DISK_TYPE: the type of disk to use for the boot disk, for example, pd-standard
- MACHINE_TYPE: the machine type of the VM. The machine type you specify must support gVNIC and Tier_1 networking.
- ZONE: the zone to create the VM in
- QUEUE_COUNT: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
- SUBNET_*: the name of the subnet that the network interface connects to
REST
- If you don't already have a VPC network with a subnet for each vNIC interface you plan to configure, create them.
- Create a VM with specific queue counts for NICs using the instances.insert method. Add an entry to the networkInterfaces array for each network interface you want to configure.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "machineType": "machineTypes/MACHINE_TYPE",
  "networkPerformanceConfig": {
    "totalEgressBandwidthTier": "TIER_1"
  },
  "networkInterfaces": [
    {
      "nicType": "GVNIC",
      "subnetwork": "regions/REGION/subnetworks/SUBNET_1",
      "queueCount": QUEUE_COUNT_1
    },
    {
      "nicType": "GVNIC",
      "subnetwork": "regions/REGION/subnetworks/SUBNET_2",
      "queueCount": QUEUE_COUNT_2
    }
  ]
}
Replace the following:
- PROJECT_ID: the ID of the project to create the VM in
- ZONE: the zone to create the VM in
- REGION: the region that contains the subnet
- VM_NAME: the name of the new VM
- MACHINE_TYPE: the machine type, predefined or custom, for the new VM
- SUBNET_*: the name of the subnet that the network interface connects to
- QUEUE_COUNT: the number of queues for the vNIC, subject to the rules discussed in Custom queue allocation
Queue allocations and changing the machine type
VMs are created with a default queue allocation, or you can assign a custom queue count to each virtual network interface card (vNIC) when you create a new VM by using the Compute Engine API. The default or custom vNIC queue assignments are only set when creating a VM. If your VM has vNICs which use default queue counts, you can change its machine type. If the machine type that you are changing to has a different number of vCPUs, the default queue counts for your VM are recalculated based on the new machine type.
If your VM has vNICs which use custom, non-default queue counts, you can change the machine type by using the Google Cloud CLI or Compute Engine API to update the instance properties. The conversion succeeds if the resulting VM supports the same queue count per vNIC as the original VM. For VMs that use the VirtIO-Net interface and have a custom queue count that is higher than 16 per vNIC, you can't change the machine type to a third generation machine type, which uses only gVNIC. Instead, you can migrate your VM to a third generation machine type by following the instructions in Migrate your workload from an existing VM to a new VM.
What's next
- Machine types
- Virtual machine instances
- Creating and starting a VM instance
- Configuring per VM Tier_1 networking performance
- Quickstart using a Linux VM
- Quickstart using a Windows VM