Google Cloud VMware Engine creates a network per region in which your VMware Engine service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets created in this region can communicate with each other without any additional configuration. You can create network segments (subnets) using NSX-T for your workload virtual machines (VMs).
Google creates a VLAN (Layer 2 network) for each private cloud. The Layer 2 traffic stays within the boundary of a private cloud, letting you isolate the local traffic within the private cloud. These VLANs are used for the management network. For workload VMs, you must create network segments on NSX-T Manager for your private cloud.
A single private Layer 3 address space is assigned per customer and region. You can configure any RFC 1918 IP address range that doesn't overlap with other networks in your private cloud, your on-premises network, your private cloud management network, or subnet IP address ranges in your Virtual Private Cloud (VPC) network.
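The non-overlap rules above can be checked mechanically before you create a segment. The following is a minimal sketch using Python's standard `ipaddress` module; all of the ranges shown are made-up examples, not values prescribed by VMware Engine:

```python
import ipaddress

# Hypothetical ranges for illustration only.
proposed = ipaddress.ip_network("10.20.0.0/24")   # candidate workload segment
in_use = [
    ipaddress.ip_network("10.0.0.0/22"),     # on-premises network (assumed)
    ipaddress.ip_network("192.168.0.0/21"),  # management network (assumed)
    ipaddress.ip_network("10.128.0.0/20"),   # VPC subnet range (assumed)
]

# The candidate must come from RFC 1918 private address space ...
rfc1918 = [ipaddress.ip_network(block) for block in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
assert any(proposed.subnet_of(block) for block in rfc1918)

# ... and must not overlap any range already in use.
conflicts = [n for n in in_use if proposed.overlaps(n)]
assert not conflicts, f"overlapping ranges: {conflicts}"
```

Running the same check against every network you manage (on-premises, management, VPC subnets) before creating a segment avoids hard-to-debug routing conflicts later.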
All subnets can communicate with each other by default, reducing the configuration overhead for routing between private clouds. East-west traffic across private clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region. No egress is required for communication between private clouds in a region. This approach eliminates any WAN/egress performance penalty when you deploy different workloads in different private clouds of the same project.
vSphere/vSAN subnets CIDR range
VMware Engine deploys management components of a private cloud in the vSphere/vSAN subnets CIDR range provided during private cloud creation. Each private cloud requires a vSphere/vSAN subnets CIDR range, and the CIDR range is divided into different subnets during private cloud deployment. The CIDR range can't be changed after private cloud creation without deleting and recreating the private cloud.
The CIDR range prefix has the following requirements:
- Minimum vSphere/vSAN subnets CIDR range prefix: /24
- Maximum vSphere/vSAN subnets CIDR range prefix: /21
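The prefix bounds above are straightforward to validate up front. A sketch of such a check (the function name is ours, not part of any VMware Engine API):

```python
import ipaddress

def validate_vsphere_vsan_cidr(cidr: str) -> ipaddress.IPv4Network:
    """Check that a vSphere/vSAN subnets CIDR range meets the /21-/24 bounds.

    The smallest allowed range is a /24, the largest a /21, per the
    requirements stated above.
    """
    net = ipaddress.ip_network(cidr)
    if not 21 <= net.prefixlen <= 24:
        raise ValueError(
            f"prefix /{net.prefixlen} is outside the allowed /21-/24 range")
    return net

validate_vsphere_vsan_cidr("192.168.0.0/22")  # accepted
```

Passing a /25 or a /20 raises `ValueError`, catching the mistake before a private cloud creation request is ever made.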
vSphere/vSAN subnets CIDR range limits
The vSphere/vSAN subnets CIDR range size affects the size of your private cloud. The following table shows the maximum number of nodes you can have, based on the size of the vSphere/vSAN subnets CIDR range.
| Specified vSphere/vSAN subnets CIDR prefix length | Maximum number of nodes |
| --- | --- |
When selecting your CIDR range prefix, consider the node limits on resources in a private cloud. For example, CIDR range prefixes of /24 and /23 don't support the maximum number of nodes available to a private cloud.
Management subnets created on a private cloud
When you create a private cloud, the following management subnets are created:
- System management: VLAN and subnet for ESXi hosts' management network, DNS server, vCenter Server
- VMotion: VLAN and subnet for ESXi hosts' vMotion network
- VSAN: VLAN and subnet for ESXi hosts' vSAN network
- NsxtEdgeUplink1: VLAN and subnet for VLAN uplinks to an external network
- NsxtEdgeUplink2: VLAN and subnet for VLAN uplinks to an external network
- NsxtEdgeTransport: VLAN and subnet for transport zones, which control the reach of Layer 2 networks in NSX-T
- NsxtHostTransport: VLAN and subnet for host transport zone
Management network CIDR range breakdown
The vSphere/vSAN subnets CIDR range you specify is divided into multiple subnets. The following table shows an example of the breakdown for allowed prefixes. The example uses 192.168.0.0 as the CIDR range.
| Specified vSphere/vSAN subnets CIDR/prefix | 192.168.0.0/21 | 192.168.0.0/22 | 192.168.0.0/23 | 192.168.0.0/24 |
| --- | --- | --- | --- | --- |
| NSX-T host transport | 192.168.4.0/23 | 192.168.2.0/24 | 192.168.1.0/25 | 192.168.0.128/26 |
| NSX-T edge transport | 192.168.7.208/28 | 192.168.3.208/28 | 192.168.1.208/28 | 192.168.0.208/28 |
| NSX-T edge uplink1 | 192.168.7.224/28 | 192.168.3.224/28 | 192.168.1.224/28 | 192.168.0.224/28 |
| NSX-T edge uplink2 | 192.168.7.240/28 | 192.168.3.240/28 | 192.168.1.240/28 | 192.168.0.240/28 |
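You can verify a column of the breakdown yourself with the standard `ipaddress` module. This sketch reproduces the /24 column from the table as data and confirms that each carved subnet fits inside the parent range without overlapping its neighbors:

```python
import ipaddress

# The 192.168.0.0/24 column of the breakdown table, as data.
parent = ipaddress.ip_network("192.168.0.0/24")
carved = {
    "NSX-T host transport": ipaddress.ip_network("192.168.0.128/26"),
    "NSX-T edge transport": ipaddress.ip_network("192.168.0.208/28"),
    "NSX-T edge uplink1": ipaddress.ip_network("192.168.0.224/28"),
    "NSX-T edge uplink2": ipaddress.ip_network("192.168.0.240/28"),
}

# Every management subnet must fall inside the parent CIDR range.
for name, subnet in carved.items():
    assert subnet.subnet_of(parent), f"{name} falls outside {parent}"

# And no two carved subnets may overlap each other.
nets = list(carved.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
```

The same pattern works for the other columns; only the parent network and the subnet literals change.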
HCX deployment network CIDR range
When you create a private cloud on VMware Engine, HCX is installed on the private cloud automatically. You can specify a network CIDR range for use by HCX components. The CIDR range must be /27 or higher.
The network provided is split into three subnets. HCX manager is installed in the HCX management subnet. The HCX vMotion subnet is used for vMotion of virtual machines between your on-premises environment and VMware Engine private cloud. The HCX WANUplink subnet is used for establishing the tunnel between your on-premises environment and VMware Engine private cloud.
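A quick pre-flight check for the HCX range can mirror the constraints described above. This sketch reads "/27 or higher" as a prefix of /27 or shorter (that is, at least 32 addresses), and also rejects overlap with the private cloud's vSphere/vSAN range; the function name and example ranges are ours, for illustration:

```python
import ipaddress

def validate_hcx_cidr(cidr: str, vsphere_vsan: str) -> ipaddress.IPv4Network:
    """Check an HCX deployment network CIDR range (illustrative sketch).

    Requires a /27 or larger network and no overlap with the private
    cloud's vSphere/vSAN subnets CIDR range.
    """
    net = ipaddress.ip_network(cidr)
    if net.prefixlen > 27:
        raise ValueError(
            f"/{net.prefixlen} is too small; provide a /27 or larger range")
    if net.overlaps(ipaddress.ip_network(vsphere_vsan)):
        raise ValueError("HCX range overlaps the vSphere/vSAN range")
    return net

# 192.168.8.0/27 sits just past a 192.168.0.0/21 management range.
validate_hcx_cidr("192.168.8.0/27", "192.168.0.0/21")
```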
Recommended MTU settings
The maximum transmission unit (MTU) is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. To avoid fragmentation-related issues, we recommend the following MTU settings.
For VMs that communicate only with other endpoints within a private cloud, you can use MTU settings up to 8800 bytes.
For VMs that communicate to or from a private cloud without encapsulation, use the standard 1500 byte MTU setting. This common default setting is valid for VM interfaces that send traffic in the following ways:
- From a VM in a private cloud to a VM in another private cloud
- From an on-premises endpoint to a private cloud
- From a VM in a private cloud to an on-premises endpoint
- From the internet to a private cloud
- From a VM in a private cloud to the internet
For VMs that communicate to or from a private cloud with encapsulation, calculate the best MTU setting based on VPN endpoint configurations. This generally results in an MTU setting of 1350–1390 bytes or lower for VM interfaces that send traffic in the following ways:
- From an on-premises endpoint to a private cloud with encapsulation
- From a private cloud VM to an on-premises endpoint with encapsulation
- From a VM in one private cloud to a VM in another private cloud with encapsulation
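The calculation itself is simple subtraction: take the path MTU and deduct the header overhead of every encapsulation layer on the path. The overhead figures below are assumed, typical values, not numbers from this document; always confirm them against your actual VPN endpoint configuration:

```python
def effective_mtu(path_mtu: int, *layer_overheads: int) -> int:
    """Best VM MTU = path MTU minus the overhead of each encapsulation layer."""
    return path_mtu - sum(layer_overheads)

# Assumed figures: a 1500-byte path carrying IPsec in tunnel mode, with
# 20 bytes for the outer IPv4 header and up to ~73 bytes for ESP (SPI,
# sequence number, IV, padding, ICV). Additional stacked tunnels reduce
# the result further, toward the 1350-1390 range mentioned above.
print(effective_mtu(1500, 20, 73))  # -> 1407
```

Setting the VM interface MTU at or below this result avoids fragmentation on the encapsulated path even when the application can't control its payload size.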
These recommendations are especially important in cases where an application isn't able to control the maximum payload size. For additional guidance on calculating encapsulation overhead, see the following resources: