VLANs and subnets on VMware Engine
Google Cloud VMware Engine creates a network in each region where your VMware Engine service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets created in a region can communicate with each other without any additional configuration. You can create network segments (subnets) using NSX-T for your workload virtual machines (VMs).
Google creates a VLAN (Layer 2 network) for each private cloud. Layer 2 traffic stays within the boundary of a private cloud, letting you isolate local traffic within that private cloud. These VLANs are used for the management network.
For workload VMs, you must create network segments on the NSX-T Manager for your private cloud. A single private Layer 3 address space is assigned per customer and region. You can configure any IP address range that doesn't overlap with other networks in your private cloud, your on-premises network, your private cloud management network, or subnet IP address ranges in your Virtual Private Cloud (VPC) network. For a detailed breakdown of how VMware Engine allocates subnet IP address ranges, see Networking requirements.
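Before assigning a range to a new workload segment, you can verify that it doesn't overlap any range already in use. A minimal sketch using Python's standard `ipaddress` module follows; the specific CIDR ranges shown are hypothetical placeholders, not values assigned by VMware Engine:

```python
import ipaddress

def overlaps_any(candidate: str, existing: list) -> bool:
    """Return True if the candidate CIDR overlaps any existing range."""
    cand = ipaddress.ip_network(candidate)
    return any(cand.overlaps(ipaddress.ip_network(r)) for r in existing)

# Hypothetical ranges already in use: private cloud management network,
# an on-premises network, and a VPC subnet.
in_use = ["192.168.0.0/24", "10.0.0.0/16", "172.16.10.0/24"]

print(overlaps_any("10.0.50.0/24", in_use))      # True: inside 10.0.0.0/16
print(overlaps_any("192.168.100.0/24", in_use))  # False: clear of all ranges
```

Running a check like this against every range listed above (private cloud networks, on-premises networks, the management network, and VPC subnets) catches conflicts before the segment is created.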
All subnets can communicate with each other by default, reducing the configuration overhead for routing between private clouds. East-west traffic between private clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region. No egress is required for communication between private clouds in a region. This approach eliminates any WAN/egress performance penalty when deploying different workloads in different private clouds of the same project.
Management subnets created on a private cloud
When you create a private cloud, VMware Engine creates the following management subnets:
- System management: VLAN and subnet for the ESXi hosts' management network, DNS server, and vCenter Server
- vMotion: VLAN and subnet for the ESXi hosts' vMotion network
- vSAN: VLAN and subnet for the ESXi hosts' vSAN network
- NsxtEdgeUplink1: VLAN and subnet for VLAN uplinks to an external network
- NsxtEdgeUplink2: VLAN and subnet for VLAN uplinks to an external network
- NsxtEdgeTransport: VLAN and subnet for the edge transport zone; transport zones control the reach of Layer 2 networks in NSX-T
- NsxtHostTransport: VLAN and subnet for the host transport zone
HCX deployment network CIDR range
When you create a private cloud on VMware Engine, HCX is installed on the private cloud automatically. You can specify a network CIDR range for use by HCX components. The CIDR range prefix must be /26 or /27.
The network you provide is split into three subnets. HCX Manager is installed in the HCX management subnet. The HCX vMotion subnet is used for vMotion of virtual machines between your on-premises environment and your VMware Engine private cloud. The HCX WANUplink subnet is used to establish the tunnel between your on-premises environment and your VMware Engine private cloud.
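To see how a /26 deployment range divides into smaller blocks, you can enumerate its subnets with the `ipaddress` module. This is a sketch only: the example range is hypothetical, the /28 split size is chosen for illustration, and the actual carving of the range into the three HCX subnets is performed by VMware Engine, not by you:

```python
import ipaddress

# Hypothetical HCX deployment CIDR range (must be a /26 or /27).
hcx_range = ipaddress.ip_network("10.10.0.0/26")

# A /26 holds four /28 blocks; VMware Engine decides the real subnet sizes.
for sub in hcx_range.subnets(new_prefix=28):
    print(sub)
# 10.10.0.0/28
# 10.10.0.16/28
# 10.10.0.32/28
# 10.10.0.48/28
```

The point of the /26-or-/27 requirement is simply that the range must be large enough to accommodate all three HCX subnets.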
Recommended MTU settings
The maximum transmission unit (MTU) is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. To avoid fragmentation-related issues, we recommend the following MTU settings.
For VMs that communicate only with other endpoints within a private cloud, you can use MTU settings up to 8800 bytes.
For VMs that communicate to or from a private cloud without encapsulation, use the standard 1500 byte MTU setting. This common default setting is valid for VM interfaces that send traffic in the following ways:
- From a VM in a private cloud to a VM in another private cloud
- From an on-premises endpoint to a private cloud
- From a VM in a private cloud to an on-premises endpoint
- From the internet to a private cloud
- From a VM in a private cloud to the internet
For VMs that communicate to or from a private cloud with encapsulation, calculate the best MTU setting based on VPN endpoint configurations. This generally results in an MTU setting of 1350–1390 bytes or lower for VM interfaces that send traffic in the following ways:
- From an on-premises endpoint to a private cloud with encapsulation
- From a private cloud VM to an on-premises endpoint with encapsulation
- From a VM in one private cloud to a VM in another private cloud with encapsulation
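The encapsulated-traffic calculation above can be sketched as simple arithmetic: start from the standard 1500-byte MTU and subtract the overhead of each encapsulation layer. The overhead figures below are typical approximations that vary with cipher, mode, and NAT traversal; treat them as assumptions to replace with values from your actual VPN endpoint configuration:

```python
# Rough effective-MTU estimate for traffic traversing an IPsec tunnel.
BASE_MTU = 1500  # standard Ethernet MTU, in bytes

# Assumed per-layer overheads (approximate; depends on your VPN config).
overheads = {
    "ipsec_esp_tunnel": 73,  # ESP header/trailer plus outer IP header
    "nat_t_udp": 8,          # UDP encapsulation for NAT traversal
}

effective_mtu = BASE_MTU - sum(overheads.values())
print(effective_mtu)  # 1419 here; rounding down to 1390 adds safety margin
```

Rounding the result down, rather than using it exactly, leaves headroom for overhead variations you didn't account for, which is consistent with the 1350–1390 byte guidance above.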
These recommendations are especially important in cases where an application isn't able to control the maximum payload size. For additional guidance on calculating encapsulation overhead, see the following resources: