A Virtual Private Cloud (VPC) network is a virtual version of a physical network, implemented inside Google's production network using Andromeda. A VPC network does the following:
- Provides connectivity for your Compute Engine virtual machine (VM) instances, including Google Kubernetes Engine (GKE) clusters, App Engine flexible environment instances, and other Google Cloud products built on Compute Engine VMs.
- Offers native Internal TCP/UDP Load Balancing and proxy systems for Internal HTTP(S) Load Balancing.
- Connects to on-premises networks using Cloud VPN tunnels and Cloud Interconnect attachments.
- Distributes traffic from Google Cloud external load balancers to backends.
Projects can contain multiple VPC networks. Unless you create an organizational policy that prohibits it, new projects start with a default network (an auto mode VPC network) that has one subnetwork (subnet) in each region.
VPC networks have the following properties:
- VPC networks, including their associated routes and firewall rules, are global resources. They are not associated with any particular region or zone.
- Subnets are regional resources. Each subnet defines a range of IPv4 addresses.
- Traffic to and from instances can be controlled with network firewall rules. Rules are implemented on the VMs themselves, so traffic can only be controlled and logged as it leaves or arrives at a VM.
- Resources within a VPC network can communicate with one another by using internal IPv4 addresses, subject to applicable network firewall rules. For more information, see communication within the network.
- Network administration can be secured by using Identity and Access Management (IAM) roles.
- An organization can use Shared VPC to keep a VPC network in a common host project. Authorized IAM principals from other projects in the same organization can create resources that use subnets of the Shared VPC network.
- VPC networks can be connected to other VPC networks in different projects or organizations by using VPC Network Peering.
- VPC networks support GRE traffic, including traffic on Cloud VPN and Cloud Interconnect. VPC networks do not support GRE for Cloud NAT or for forwarding rules for load balancing and protocol forwarding. Support for GRE allows you to terminate GRE traffic on a VM from the internet (external IP address) and Cloud VPN or Cloud Interconnect (internal IP address). The decapsulated traffic can then be forwarded to a reachable destination. GRE enables you to use services such as Secure Access Service Edge (SASE) and SD-WAN.
- VPC networks support IPv4 unicast addresses. VPC networks also support external IPv6 unicast addresses in some regions. For more information about IPv6 support, see IPv6 addresses. VPC networks do not support broadcast or multicast addresses within the network.
Network and subnet terminology
The terms subnet and subnetwork are synonymous. They are used interchangeably in the Google Cloud Console, gcloud commands, and API documentation.
A subnet is not the same thing as a VPC network. Networks and subnets are different types of objects in Google Cloud.
Networks and subnets
Each VPC network consists of one or more IP address range partitions called subnets. Each subnet is associated with a region. VPC networks do not have any IP address ranges associated with them; IP ranges are defined for the subnets.
A network must have at least one subnet before you can use it. Auto mode VPC networks create subnets in each region automatically. Custom mode VPC networks start with no subnets, giving you full control over subnet creation. You can create more than one subnet per region. For information about the differences between auto mode and custom mode VPC networks, see types of VPC networks.
When you create a resource in Google Cloud, you choose a network and subnet. For resources other than instance templates, you also select a zone or a region. Selecting a zone implicitly selects its parent region. Because subnets are regional objects, the region that you select for a resource determines the subnets that it can use:
- The process of creating an instance involves selecting a zone, a network, and a subnet. The subnets available for selection are restricted to those in the selected region. Google Cloud assigns the instance an IPv4 address from the range of available addresses in the subnet.
- The process of creating a managed instance group involves selecting a zone or region, depending on the group type, and an instance template. The instance templates available for selection are restricted to those whose defined subnets are in the same region selected for the managed instance group.
- Instance templates are global resources. The process of creating an instance template involves selecting a network and a subnet. If you select an auto mode VPC network, you can choose to use auto subnets to defer subnet selection to one that is available in the selected region of any managed instance group that would use the template. Auto mode VPC networks have a subnet in every region by definition.
- The process of creating a Kubernetes container cluster involves selecting a zone or region (depending on the cluster type), a network, and a subnet. The subnets available for selection are restricted to those in the selected region.
Subnet creation mode
Google Cloud offers two types of VPC networks, determined by their subnet creation mode:
When an auto mode VPC network is created, one subnet from each region is automatically created within it. These automatically created subnets use a set of predefined IPv4 ranges that fit within the 10.128.0.0/9 CIDR block. As new Google Cloud regions become available, new subnets in those regions are automatically added to auto mode VPC networks by using an IP range from that block. In addition to the automatically created subnets, you can add more subnets manually to auto mode VPC networks in regions that you choose by using IP ranges outside of 10.128.0.0/9.
When a custom mode VPC network is created, no subnets are automatically created. This type of network provides you with complete control over its subnets and IP ranges. You decide which subnets to create in regions that you choose by using IP ranges that you specify.
You can switch a VPC network from auto mode to custom mode. This is a one-way conversion; custom mode VPC networks cannot be changed to auto mode VPC networks. To help you decide which type of network meets your needs, see the considerations for auto mode VPC networks.
Unless you choose to disable it, each new project starts with a default network. The default network is an auto mode VPC network with pre-populated IPv4 firewall rules. The default network does not have pre-populated IPv6 firewall rules.
You can disable the creation of default networks by creating an organization policy with the constraints/compute.skipDefaultNetworkCreation constraint. Projects that inherit this policy won't have a default network.
Considerations for auto mode VPC networks
Auto mode VPC networks are easy to set up and use, and they are well suited for use cases with these attributes:
- Having subnets automatically created in each region is useful.
- The predefined IP ranges of the subnets do not overlap with IP ranges that you would use for different purposes (for example, Cloud VPN connections to on-premises resources).
However, custom mode VPC networks are more flexible and are better suited to production. The following attributes highlight use cases where custom mode VPC networks are recommended or required:
- Having one subnet automatically created in each region isn't necessary.
- Having new subnets automatically created as new regions become available could overlap with IP addresses used by manually created subnets or static routes, or could interfere with your overall network planning.
- You need complete control over the subnets created in your VPC network, including the regions and IP address ranges used.
- You plan to connect VPC networks by using VPC Network Peering or Cloud VPN. Because the subnets of every auto mode VPC network use the same predefined range of IP addresses, you cannot connect auto mode VPC networks to one another.
When you create a subnet, you must define its primary IP address range. The primary internal addresses for the following resources come from the subnet's primary range: VM instances, internal load balancers, and internal protocol forwarding. You can optionally add secondary IP address ranges to a subnet, which are only used by alias IP ranges. However, you can configure alias IP ranges for instances from the primary or secondary range of a subnet.
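The primary/secondary layout of a subnet can be sanity-checked during planning. The sketch below uses Python's standard ipaddress module; `validate_subnet_plan` is a hypothetical planning helper for illustration, not a Google Cloud API:

```python
import ipaddress

def validate_subnet_plan(primary, secondaries):
    """Hypothetical planning check: a subnet's secondary ranges must not
    overlap its primary range or each other."""
    nets = [ipaddress.ip_network(r) for r in [primary] + list(secondaries)]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                return False
    return True

# A primary range for VM addresses plus a secondary range for alias IPs.
print(validate_subnet_plan("10.1.2.0/24", ["172.20.0.0/20"]))  # True
print(validate_subnet_plan("10.1.2.0/24", ["10.1.0.0/16"]))    # False: overlaps primary
```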
Your subnets don't need to form a predefined contiguous CIDR block, but you can do that if desired. For example, auto mode VPC networks do create subnets that fit within a predefined auto mode IP range.
For more information, see working with subnets.
A subnet's primary and secondary IP address ranges are regional internal IP addresses. The following table describes valid ranges.
Private IP address ranges:
|Range||Description|
|10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16||Private IP addresses (RFC 1918)|
|100.64.0.0/10||Shared address space (RFC 6598)|
|192.0.0.0/24||IETF protocol assignments (RFC 6890)|
|192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24||Documentation (RFC 5737)|
|192.88.99.0/24||IPv6 to IPv4 relay (deprecated) (RFC 7526)|
|198.18.0.0/15||Benchmark testing (RFC 2544). Some operating systems do not support the use of this range, so verify that your OS supports it before creating subnets that use it.|
Privately used public IP addresses are all other IPv4 addresses outside of the private and prohibited ranges. When you use these addresses as subnet ranges, Google Cloud does not announce these routes to the internet and does not route traffic from the internet to them.
If you have imported public IP addresses to Google using Bring your own IP (BYOIP), your BYOIP ranges and privately used public IP address ranges in the same VPC network must not overlap.
For VPC Network Peering, subnet routes for public IP addresses are not automatically exchanged. The subnet routes are automatically exported by default, but peer networks must be explicitly configured to import them in order to use them.
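Whether a candidate CIDR falls entirely within one of the private ranges named above can be checked mechanically. The range list below is transcribed from the table; `is_private_subnet_range` is an illustrative helper, not part of any Google Cloud tooling:

```python
import ipaddress

# Private ranges from the table above.
VALID_PRIVATE = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",      # RFC 1918
    "100.64.0.0/10",                                       # RFC 6598 shared space
    "192.0.0.0/24",                                        # RFC 6890
    "192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24",   # RFC 5737 documentation
    "192.88.99.0/24",                                      # RFC 7526 (deprecated)
    "198.18.0.0/15",                                       # RFC 2544 benchmarking
)]

def is_private_subnet_range(cidr):
    """True if the candidate range fits entirely in one private range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(parent) for parent in VALID_PRIVATE)

print(is_private_subnet_range("10.128.0.0/20"))  # True (inside 10.0.0.0/8)
print(is_private_subnet_range("8.8.8.0/24"))     # False (privately used public)
```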
Subnet ranges have the following constraints:
Subnet ranges cannot match, be narrower than, or be broader than a restricted range. For example, 169.254.1.0/24 is not a valid subnet range because it fits within the link-local range 169.254.0.0/16 (RFC 3927), which is a restricted range.
Subnet ranges cannot span an RFC range (described in the previous table) and a privately used public IP address range. For example, 172.16.0.0/10 is not a valid subnet range because it overlaps both RFC 1918 addresses and privately used public IP addresses.
Subnet ranges cannot span multiple RFC ranges. For example, 192.0.0.0/8 isn't a valid subnet range because it includes both 192.168.0.0/16 (from RFC 1918) and 192.0.0.0/24 (from RFC 6890). However, you can create two subnets with different primary ranges, one with 192.168.0.0/16 and one with 192.0.0.0/24. Or, you could use both of these ranges on the same subnet if you make one of them a secondary range.
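The "cannot span multiple RFC ranges" rule can be illustrated with a rough check: a candidate range is invalid if it overlaps blocks from more than one RFC without being contained in any single block. This is a simplified sketch over two of the RFCs, not an exhaustive validator:

```python
import ipaddress

RFC_RANGES = {
    "RFC 1918": ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"],
    "RFC 6890": ["192.0.0.0/24"],
}

def spans_check(cidr):
    """Return the RFC blocks a candidate range overlaps without being
    contained in any single one (empty list means no spanning problem)."""
    net = ipaddress.ip_network(cidr)
    overlapped = [name for name, blocks in RFC_RANGES.items()
                  if any(net.overlaps(ipaddress.ip_network(b)) for b in blocks)]
    contained = any(net.subnet_of(ipaddress.ip_network(b))
                    for blocks in RFC_RANGES.values() for b in blocks)
    return [] if contained else overlapped

print(spans_check("192.0.0.0/8"))     # ['RFC 1918', 'RFC 6890']: spans two ranges
print(spans_check("192.168.0.0/16"))  # []: fits entirely within RFC 1918
```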
Prohibited subnet ranges
Prohibited subnet ranges include Google public IP addresses and commonly reserved RFC ranges, as described in the following table. These ranges cannot be used for subnet ranges.
|Range||Description|
|Public IP addresses for Google APIs and services, including Google Cloud netblocks||You can find these IP addresses at https://gstatic.com/ipranges/goog.txt.|
|199.36.153.4/30 and 199.36.153.8/30||Private Google Access-specific virtual IP addresses|
|0.0.0.0/8||Current (local) network (RFC 1122)|
|127.0.0.0/8||Local host (RFC 1122)|
|169.254.0.0/16||Link-local (RFC 3927)|
|224.0.0.0/4||Multicast (Class D) (RFC 5771)|
|255.255.255.255/32||Limited broadcast destination address (RFC 8190 and RFC 919)|
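A quick way to screen a candidate range against the commonly reserved blocks above. The Google netblocks change over time and are omitted from this sketch; `overlaps_prohibited` is an illustrative helper only:

```python
import ipaddress

# Commonly reserved ranges from the prohibited table (Google netblocks omitted).
PROHIBITED = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8",            # current network, RFC 1122
    "127.0.0.0/8",          # localhost, RFC 1122
    "169.254.0.0/16",       # link-local, RFC 3927
    "224.0.0.0/4",          # multicast, RFC 5771
    "255.255.255.255/32",   # limited broadcast
)]

def overlaps_prohibited(cidr):
    """True if the candidate range overlaps any prohibited block."""
    net = ipaddress.ip_network(cidr)
    return any(net.overlaps(p) for p in PROHIBITED)

print(overlaps_prohibited("169.254.1.0/24"))  # True: inside link-local
print(overlaps_prohibited("10.0.0.0/24"))     # False
```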
Reserved IP addresses in a subnet
Every subnet has four reserved IP addresses in its primary IP range. There are no reserved IP addresses in the secondary IP ranges.
|Reserved IP address||Description||Example (10.1.2.0/24)|
|Network||First address in the primary IP range for the subnet||10.1.2.0|
|Default gateway||Second address in the primary IP range for the subnet||10.1.2.1|
|Second-to-last address||Second-to-last address in the primary IP range for the subnet, reserved by Google Cloud for potential future use||10.1.2.254|
|Broadcast||Last address in the primary IP range for the subnet||10.1.2.255|
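Given a primary range, the four reserved addresses can be derived directly. A minimal sketch with Python's ipaddress module:

```python
import ipaddress

def reserved_addresses(primary_cidr):
    """The four addresses Google Cloud reserves in a subnet's primary range."""
    net = ipaddress.ip_network(primary_cidr)
    return {
        "network": net.network_address,              # first address
        "default_gateway": net.network_address + 1,  # second address
        "second_to_last": net.broadcast_address - 1, # reserved for future use
        "broadcast": net.broadcast_address,          # last address
    }

for name, addr in reserved_addresses("10.1.2.0/24").items():
    print(name, addr)
```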
Auto mode IPv4 ranges
This table lists the IPv4 ranges for the automatically created subnets in an auto mode VPC network. IP ranges for these subnets fit inside the 10.128.0.0/9 CIDR block. Auto mode VPC networks are built with one subnet per region at creation time and automatically receive new subnets in new regions. Unused portions of 10.128.0.0/9 are reserved for future Google Cloud use.
|Region||IP range (CIDR)||Default gateway||Usable addresses (inclusive)|
|asia-east1||10.140.0.0/20||10.140.0.1||10.140.0.2 to 10.140.15.253|
|asia-east2||10.170.0.0/20||10.170.0.1||10.170.0.2 to 10.170.15.253|
|asia-northeast1||10.146.0.0/20||10.146.0.1||10.146.0.2 to 10.146.15.253|
|asia-northeast2||10.174.0.0/20||10.174.0.1||10.174.0.2 to 10.174.15.253|
|asia-northeast3||10.178.0.0/20||10.178.0.1||10.178.0.2 to 10.178.15.253|
|asia-south1||10.160.0.0/20||10.160.0.1||10.160.0.2 to 10.160.15.253|
|asia-south2||10.190.0.0/20||10.190.0.1||10.190.0.2 to 10.190.15.253|
|asia-southeast1||10.148.0.0/20||10.148.0.1||10.148.0.2 to 10.148.15.253|
|asia-southeast2||10.184.0.0/20||10.184.0.1||10.184.0.2 to 10.184.15.253|
|australia-southeast1||10.152.0.0/20||10.152.0.1||10.152.0.2 to 10.152.15.253|
|australia-southeast2||10.192.0.0/20||10.192.0.1||10.192.0.2 to 10.192.15.253|
|europe-central2||10.186.0.0/20||10.186.0.1||10.186.0.2 to 10.186.15.253|
|europe-north1||10.166.0.0/20||10.166.0.1||10.166.0.2 to 10.166.15.253|
|europe-west1||10.132.0.0/20||10.132.0.1||10.132.0.2 to 10.132.15.253|
|europe-west2||10.154.0.0/20||10.154.0.1||10.154.0.2 to 10.154.15.253|
|europe-west3||10.156.0.0/20||10.156.0.1||10.156.0.2 to 10.156.15.253|
|europe-west4||10.164.0.0/20||10.164.0.1||10.164.0.2 to 10.164.15.253|
|europe-west6||10.172.0.0/20||10.172.0.1||10.172.0.2 to 10.172.15.253|
|northamerica-northeast1||10.162.0.0/20||10.162.0.1||10.162.0.2 to 10.162.15.253|
|northamerica-northeast2||10.188.0.0/20||10.188.0.1||10.188.0.2 to 10.188.15.253|
|southamerica-east1||10.158.0.0/20||10.158.0.1||10.158.0.2 to 10.158.15.253|
|southamerica-west1||10.194.0.0/20||10.194.0.1||10.194.0.2 to 10.194.15.253|
|us-central1||10.128.0.0/20||10.128.0.1||10.128.0.2 to 10.128.15.253|
|us-east1||10.142.0.0/20||10.142.0.1||10.142.0.2 to 10.142.15.253|
|us-east4||10.150.0.0/20||10.150.0.1||10.150.0.2 to 10.150.15.253|
|us-west1||10.138.0.0/20||10.138.0.1||10.138.0.2 to 10.138.15.253|
|us-west2||10.168.0.0/20||10.168.0.1||10.168.0.2 to 10.168.15.253|
|us-west3||10.180.0.0/20||10.180.0.1||10.180.0.2 to 10.180.15.253|
|us-west4||10.182.0.0/20||10.182.0.1||10.182.0.2 to 10.182.15.253|
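Each of these ranges can be verified to fit within 10.128.0.0/9, and the usable-address column is simply the subnet size minus the four reserved addresses. A small sketch:

```python
import ipaddress

AUTO_BLOCK = ipaddress.ip_network("10.128.0.0/9")

def describe_auto_subnet(cidr):
    """Check that an auto mode subnet range fits in 10.128.0.0/9 and count
    the usable addresses after the four reserved ones are removed."""
    net = ipaddress.ip_network(cidr)
    return net.subnet_of(AUTO_BLOCK), net.num_addresses - 4

fits, usable = describe_auto_subnet("10.140.0.0/20")  # asia-east1's range
print(fits, usable)  # True 4092, matching 10.140.0.2 through 10.140.15.253
```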
When you enable IPv6 for a VM, the VM is allocated a /96 IPv6 address range. The first IP address in that range is assigned to the primary interface by using DHCPv6.
The IPv6 addresses assigned to subnets and VMs are external addresses. They can be used for VM to VM communication, and are also routable on the internet. To control egress to and ingress from the internet, configure firewall rules or hierarchical firewall policies. To disable IPv6 routing to the internet completely, delete the IPv6 default route in the VPC network.
The following resources support IPv6 addresses if they are connected to a subnet with IPv6 enabled:
Also, external HTTP(S) Load Balancing supports global external IPv6 addresses when configured in Premium Tier. You can't use an IPv6 address that is assigned to a subnet for HTTP(S) Load Balancing. Instead, you reserve an IP address.
Subnets that support IPv6
You can enable IPv6 on the following types of subnets in a VPC network. You cannot enable IPv6 on a legacy network.
- You can enable IPv6 on new or existing subnets in a custom mode VPC network.
- You can enable IPv6 on subnets that you have added to an auto mode VPC network (including the default network).
- You cannot enable IPv6 on subnets that are automatically created in an auto mode VPC network (including the default network).
- If you convert an auto mode network to custom mode, you can enable IPv6 on any of the subnets in that network.
Regions that support IPv6
IPv6 support for subnets and VM instances is available in the following regions:
Routes and firewall rules
Routes define paths for packets leaving instances (egress traffic). For details about Google Cloud route types, see the routes overview.
Dynamic routing mode
For a description of dynamic routing mode options, see Effects of dynamic routing mode in the Cloud Router documentation.
Route advertisements and internal IP addresses
Regional internal IP addresses and global internal IP addresses are advertised within a VPC network. Primary and secondary subnet IPv4 address ranges use regional internal addresses, and Private Service Connect endpoints for Google APIs use global internal IP addresses.
If you connect VPC networks using VPC Network Peering, subnet ranges using private IP addresses are always exchanged. You can control whether subnet ranges using privately used public IP addresses are exchanged. Global internal IP addresses are never exchanged using peering. For additional details, see the VPC Network Peering documentation.
When you connect a VPC network to another network, such as an on-premises network, using a Google Cloud connectivity product like Cloud VPN, Cloud Interconnect, or Router appliance:
- You can advertise the VPC network's internal IP addresses to another network (such as an on-premises network).
- Though connectivity between a VPC network and another network (such as an on-premises network) can use private routing provided by a Google Cloud connectivity product, the other network's IP addresses might also be publicly routable. Keep this in mind if an on-premises network uses publicly routable IP addresses.
- VM instances in a VPC network containing subnet ranges with privately used public IP addresses are not able to connect to external resources which use those same public IP addresses.
- Take extra care when advertising privately used public IP addresses to another network (such as an on-premises network), especially when the other network can advertise those public IP addresses to the internet.
Both hierarchical firewall policies and VPC firewall rules apply to packets sent to and from VM instances (and resources that depend on VMs, such as Google Kubernetes Engine nodes). Both types of firewalls control traffic even if it is between VMs in the same VPC network.
To monitor which firewall rule allowed or denied a particular connection, see Firewall Rules Logging.
Communications and access
Communication within the network
The system-generated subnet routes define the paths for sending traffic among instances within the network by using internal IP addresses. For one instance to be able to communicate with another, appropriate firewall rules must also be configured because every network has an implied deny firewall rule for ingress traffic.
Except for the default network, you must explicitly create higher priority ingress firewall rules to allow instances to communicate with one another. The default network includes several firewall rules in addition to the implied ones, including the default-allow-internal rule, which permits instance-to-instance communication within the network. The default network also comes with ingress rules allowing protocols such as RDP and SSH.
Rules that come with the default network are also presented as options for you to apply to new auto mode VPC networks that you create by using the Cloud Console.
Internet access requirements
The following criteria must be satisfied for an instance to have outgoing internet access:
The network must have a valid default internet gateway route or custom route whose destination IP range is the most general (0.0.0.0/0). This route defines the path to the internet. For more information, see Routes.
Firewall rules must allow egress traffic from the instance. Unless overridden by a higher priority rule, the implied allow rule for egress traffic permits outbound traffic from all instances.
One of the following must be true:
Communications and access for App Engine
VPC firewall rules apply to resources running in the VPC network, such as Compute Engine VMs. For App Engine instances, firewall rules work as follows:
- App Engine standard environment: Only App Engine firewall rules apply to ingress traffic. Because App Engine standard environment instances do not run inside your VPC network, VPC firewall rules do not apply to them.
- App Engine flexible environment: Both App Engine and VPC firewall rules apply to ingress traffic. Inbound traffic is only permitted if it is allowed by both types of firewall rules. For outbound traffic, VPC firewall rules apply.
For more information about how to control access to App Engine instances, see App security.
Traceroute to external IP addresses
For internal reasons, Google Cloud increases the TTL counter of packets that traverse next hops in Google's network. Tools like traceroute and mtr might provide incomplete results because the TTL doesn't expire on some of the hops. Hops that are inside and outside of Google's network might be hidden in the following cases:
- When you send packets from Compute Engine instances to external IP addresses, including external IP addresses of other Google Cloud resources or destinations on the internet.
- When you send packets to the external IP address associated with a Compute Engine instance or other Google Cloud resource.
The number of hidden hops varies based on the instance's Network Service Tiers configuration, region, and other factors. If there are only a few hops, it's possible for all of them to be hidden. Missing hops from a traceroute or mtr result don't mean that outbound traffic is dropped.
There is no workaround for this behavior. You must take it into account if you configure third-party monitoring that connects to an external IP address associated with a VM.
Egress throughput limits
Network throughput information is available on the Network bandwidth page in the Compute Engine documentation.
Information about packet size is in the maximum transmission unit section.
VPC network example
The following example illustrates a custom mode VPC network with three subnets in two regions:
- Subnet1 is defined as 10.240.0.0/24 in the us-west1 region.
  - Two VM instances in the us-west1-a zone are in this subnet. Their IP addresses both come from the available range of addresses in subnet1.
- Subnet2 is defined as 192.168.1.0/24 in the us-east1 region.
  - Two VM instances in the us-east1-a zone are in this subnet. Their IP addresses both come from the available range of addresses in subnet2.
- Subnet3 is defined as 10.2.0.0/16, also in the us-east1 region.
  - One VM instance in the us-east1-a zone and a second instance in the us-east1-b zone are in subnet3, each receiving an IP address from its available range. Because subnets are regional resources, instances can have their network interfaces associated with any subnet in the same region that contains their zones.
Maximum transmission unit
VPC networks have a default maximum transmission unit (MTU) of 1460 bytes. However, you can configure your VPC networks to have an MTU of 1500 bytes.
The MTU is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. In Google Cloud, you set the MTU for each VPC network, and VM instances that use that network must also be configured to use that MTU for their interfaces. The network's MTU setting is communicated to a VM when that VM requests an IP address using DHCP. DHCP Option 26 contains the network's MTU.
The MTU impacts both UDP and TCP traffic:
- If a UDP packet is sent that is larger than the destination can receive, or that exceeds the MTU on some network link on the path to the destination, then the packet is dropped if the Don't-Fragment flag is set. When it is dropped, an ICMP packet of the type Fragmentation-Needed is sent back to the sender. For more information about path discovery, see PMTUD.
- If a UDP packet is sent that is larger than the destination can receive, or that exceeds the MTU on some network link towards the destination, then it is (generally) fragmented if the Don't-Fragment flag is not set. This fragmentation is done where a mismatch is detected: this could be at an intermediate router, or even at the sender itself if the packet is larger than the sender's own MTU.
- TCP negotiates the maximum segment size (MSS) during connection setup. Packets are then segmented according to the smaller MTU of the two endpoints of the connection.
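The MSS negotiation reduces to simple arithmetic for IPv4 TCP without options: each segment's payload is the smaller MTU minus the 20-byte IP header and 20-byte TCP header. A sketch of this standard calculation (not a Google Cloud API):

```python
def negotiated_mss(mtu_a, mtu_b, ip_header=20, tcp_header=20):
    """Payload bytes per TCP segment between endpoints with these MTUs
    (IPv4, no IP or TCP options)."""
    return min(mtu_a, mtu_b) - ip_header - tcp_header

# A 1500-MTU VM talking to a 1460-MTU VM settles on 1420-byte segments.
print(negotiated_mss(1500, 1460))  # 1420
```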
VMs and MTU settings
Linux VMs based on Google-provided OS images automatically have their interface MTU set to the MTU of the VPC network when they are created. If a VM has multiple network interfaces, each interface is set to the MTU of the attached network. If you change the MTU of a VPC that has running VMs, you must stop and then start those VMs to pick up the new MTU. When the VMs start up again, the changed network MTU is communicated to them from DHCP.
Windows VMs do not automatically configure their interfaces to use the VPC network's MTU when they start. Instead, Windows VMs based on Google-provided OS images are configured with a fixed MTU of 1460. To set Windows VMs based on Google-provided OS images to an MTU of 1500, do the following on each Windows VM:
- Open Command Prompt (cmd.exe) as Administrator.
- Run the following command to determine the index of the interface that you want to update:
  netsh interface ipv4 show interface
- Update the interface:
  netsh interface ipv4 set interface INTERFACE_INDEX mtu=1500 store=persistent
- Restart the server for the changes to take effect:
  shutdown /r /t 0
Alternatively, use PowerShell:
- Open PowerShell as Administrator.
- Run the following command:
  Set-NetIPInterface -InterfaceAlias INTERFACE_NAME -AddressFamily IPv4 -NlMtu 1500
- Restart the server for the changes to take effect:
  Restart-Computer
You can also use this procedure to set the MTU of custom Windows VMs, either to 1460 or to 1500, as appropriate to the network.
Verify MTU settings on any VMs that use custom images. It is possible that they might honor the VPC network's MTU, but it is also possible that their MTUs might be set to a fixed value.
For instructions, see Changing the MTU of a network.
Migrating services to a different MTU network
You might decide to migrate your services to new VMs in a new network rather than changing the MTU of your existing network. In such a case, you might have a server, such as a database server, that needs to be accessible to all VMs during the migration. If so, the following general approach might help you migrate cleanly:
- Create the new network with the new MTU.
- Create any necessary firewall rules and routes in the new network.
- Create a VM with multiple network interfaces in the old network. One interface connects to the new network using the new MTU and the other connects to the old network using the old MTU.
- Configure this new VM as a secondary server for the existing one.
- Fail the primary server over to the secondary one.
- Either migrate VMs to the new network or create new VMs in the new network. If you create new VMs, you can create them from scratch, from an existing image, or by creating a snapshot of the existing VMs and using that to populate the new persistent disks.
- Configure these VMs to use the operational server in that network.
- Migrate traffic to the new VMs.
- If you intend to delete the old network, create a new server in the new network, get it in sync with the existing server, and fail over to it.
- Delete the old server and old network.
Consequences of mismatched MTUs
An MTU mismatch occurs when two communicating VM instances have different MTU settings. In a limited number of cases, this can cause connectivity problems. Specific cases involve the use of instances as routers and the use of Kubernetes inside VMs.
In most common scenarios, TCP connections established between instances with different MTUs are successful due to the MSS negotiation, where both ends of a connection will agree to use the lower of the two MTUs.
This applies whether the two VMs are in the same network or peered networks.
MTU differences with Cloud VPN
For information about Cloud VPN and MTU, see Tunnel MTU.
MTU differences with Cloud Interconnect
Cloud Interconnect can have an MTU of 1440. If the communicating VMs have an MTU of 1500 and the Interconnect connection has an MTU of 1440, MSS clamping reduces the effective MTU of TCP connections to 1440, and TCP traffic proceeds.
MSS clamping does not affect UDP packets, so if the VPC network has an MTU of 1500 and the Interconnect connection has an MTU of 1440, UDP datagrams with more than 1412 bytes of data (1412 bytes of UDP data + 8-byte UDP header + 20-byte IPv4 header = 1440 bytes) are dropped. In such a case, you can do one of the following:
- Lower the MTU of the attached VPC network to 1460.
- Adjust your application to send smaller UDP packets.
- Create a new Interconnect connection with an MTU of 1500 bytes.
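The 1412-byte limit above is just standard IPv4/UDP header arithmetic, which generalizes to any MTU:

```python
def max_udp_payload(mtu, ip_header=20, udp_header=8):
    """Largest UDP payload that fits in a single packet at the given MTU (IPv4)."""
    return mtu - ip_header - udp_header

print(max_udp_payload(1440))  # 1412: larger datagrams are dropped over a 1440 MTU link
print(max_udp_payload(1460))  # 1432
```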
For more information about Cloud Interconnect and MTU, see Cloud Interconnect MTU.
The measured inter-region latency for Google Cloud networks can be found in our live dashboard. The dashboard shows Google Cloud's median inter-region latency and throughput performance metrics, as well as the methodology to reproduce these results using PerfKit Benchmarker.
Google Cloud typically measures round-trip latencies less than 55 μs at the 50th percentile and tail latencies less than 80 μs at the 99th percentile between c2-standard-4 VM instances in the same zone.
Google Cloud typically measures round-trip latencies less than 45 μs at the 50th percentile and tail latencies less than 60 μs at the 99th percentile between c2-standard-4 VM instances in the same low-latency network ("compact" placement policy). Compact placement policy lowers the network latency by ensuring that the VMs are located physically within the same low-latency network.
Methodology: Intra-zone latency is monitored by a blackbox prober that constantly runs the netperf TCP_RR benchmark between a pair of c2-type VMs in every zone where c2 instances are available. It collects P50 and P99 results for setups with and without compact placement policy. The TCP_RR benchmark measures request/response performance by measuring the transaction rate. If your applications require the best possible latency, c2 instances are recommended.
Google Cloud tracks cross-region packet loss by regularly measuring round-trip loss between all regions. We target the global average of those measurements to be lower than 0.01%.
Methodology: A blackbox VM-to-VM prober monitors the packet loss for every zone pair using pings and aggregates the results into one global loss metric. This metric is tracked with a one-day window.
Try it for yourself
If you're new to Google Cloud, create an account to evaluate how VPC performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.