VLANs and subnets on VMware Engine

Google Cloud VMware Engine creates a network per region in which your VMware Engine service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets created in this region can communicate with each other without any additional configuration. You can create network segments (subnets) using NSX-T for your workload virtual machines (VMs).


Management VLANs

Google creates a VLAN (Layer 2 network) for each private cloud. Layer 2 traffic stays within the boundary of a private cloud, isolating local traffic to that private cloud. These VLANs are used for the management network. For workload VMs, you must create network segments on NSX-T Manager for your private cloud.

Subnets

You must create a network segment on the NSX-T Manager for your private cloud. A single private Layer 3 address space is assigned per customer and region. You can configure any IP address range that doesn't overlap with other networks in your private cloud, your on-premises network, your private cloud management network, or subnet IP address ranges in your Virtual Private Cloud (VPC) network. For a detailed breakdown of how VMware Engine allocates subnet IP address ranges, see Networking requirements.
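
Before you create a segment, you can check the non-overlap requirement programmatically. The following Python sketch uses the standard ipaddress module; the address ranges shown are placeholders for illustration, not values assigned by VMware Engine.

    import ipaddress

    # Placeholder ranges: replace with the ranges in use in your environment
    # (management network, on-premises network, VPC subnets, existing segments).
    existing_ranges = [
        ipaddress.ip_network("192.168.0.0/24"),   # example management range
        ipaddress.ip_network("10.0.0.0/24"),      # example VPC subnet
        ipaddress.ip_network("172.16.0.0/16"),    # example on-premises network
    ]

    def overlaps_existing(candidate_cidr: str) -> bool:
        """Return True if the proposed segment CIDR overlaps any known range."""
        candidate = ipaddress.ip_network(candidate_cidr)
        return any(candidate.overlaps(existing) for existing in existing_ranges)

    print(overlaps_existing("10.0.1.0/24"))    # False: safe to use for a new segment
    print(overlaps_existing("172.16.5.0/24"))  # True: overlaps the on-premises range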

All subnets can communicate with each other by default, reducing the configuration overhead for routing between private clouds. East-west traffic between private clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region. No egress is required for communication between private clouds in a region. This approach eliminates any WAN or egress performance penalty when you deploy different workloads in different private clouds of the same project.

Management subnets created on a private cloud

When you create a private cloud, VMware Engine creates the following management subnets:

  • System management: VLAN and subnet for ESXi hosts' management network, DNS server, vCenter Server
  • VMotion: VLAN and subnet for ESXi hosts' vMotion network
  • VSAN: VLAN and subnet for ESXi hosts' vSAN network
  • NsxtEdgeUplink1: VLAN and subnet for VLAN uplinks to an external network
  • NsxtEdgeUplink2: VLAN and subnet for VLAN uplinks to an external network
  • HCXUplink: VLAN and subnet used by HCX IX (mobility) and NE (extension) appliances to reach their peers and enable creation of the HCX Service Mesh
  • NsxtHostTransport: VLAN and subnet for host transport zone

HCX deployment network CIDR range

When you create a private cloud on VMware Engine, HCX is installed on the private cloud automatically. You can specify a network CIDR range for use by HCX components. The CIDR range prefix must be /26 or /27.

The network you provide is split into three subnets. HCX Manager is installed in the HCX management subnet. The HCX vMotion subnet is used for vMotion of virtual machines between your on-premises environment and your VMware Engine private cloud. The HCX WANUplink subnet is used for establishing the tunnel between your on-premises environment and your VMware Engine private cloud.
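
The exact way VMware Engine divides the HCX deployment range isn't described here, but you can preview how a /26 breaks into smaller blocks with the standard ipaddress module. This sketch assumes an even split into /28 blocks and a placeholder range; treat both as illustrative.

    import ipaddress

    # Placeholder HCX deployment range; an even split into /28s is an assumption.
    hcx_range = ipaddress.ip_network("192.168.96.0/26")

    management, vmotion, wan_uplink, spare = hcx_range.subnets(new_prefix=28)

    print("HCX management:", management)   # 192.168.96.0/28
    print("HCX vMotion:   ", vmotion)      # 192.168.96.16/28
    print("HCX WANUplink: ", wan_uplink)   # 192.168.96.32/28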

Service subnets

When you create a private cloud, VMware Engine automatically creates additional service subnets. You can target service subnets for appliance or service deployment scenarios such as storage, backup, disaster recovery (DR), and media streaming. Service subnets provide high-scale, linear throughput and packet processing, even for the largest private clouds. The service subnet names are as follows:

  • service-1
  • service-2
  • service-3
  • service-4
  • service-5

Virtual machine traffic across a service subnet exits the VMware ESXi host directly into the Google Cloud networking infrastructure, enabling high-speed communication.

Configuring service subnets

When VMware Engine creates a service subnet, it doesn't allocate a CIDR range or prefix. You must specify a non-overlapping CIDR range and prefix; the first usable address of that range becomes the gateway address. To allocate a CIDR range and prefix, edit one of the service subnets.
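
Because the gateway is always the first usable address of the range you choose, you can compute it ahead of time. This is a minimal sketch using the standard ipaddress module with a placeholder CIDR range.

    import ipaddress

    def service_subnet_gateway(cidr: str) -> str:
        """Return the first usable address of a CIDR range, which VMware Engine
        assigns as the service subnet's gateway."""
        network = ipaddress.ip_network(cidr)
        return str(next(network.hosts()))

    # Placeholder service subnet range for illustration.
    print(service_subnet_gateway("10.100.10.0/24"))  # 10.100.10.1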

You can update a service subnet if your CIDR requirements change. Modifying the CIDR range of an existing service subnet can disrupt network availability for VMs attached to that subnet.

Configuring vSphere distributed port groups

To connect a VM to a service subnet, create a new distributed port group. This port group maps the service subnet ID to a network name in your private cloud's vCenter.

To do this, navigate to the network configuration section of the vCenter interface, select Datacenter-dvs, and then select New Distributed Port Group.

After the distributed port group has been created, you can attach VMs by selecting the corresponding name in the network configuration of the VM properties.

The following are the critical distributed port group configuration values; a scripted sketch that applies them follows the list:

  • Port binding: static binding
  • Port allocation: elastic
  • Number of ports: 120
  • VLAN type: VLAN
  • VLAN ID: the corresponding subnet ID within the subnets section of the Google Cloud VMware Engine interface
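
If you script this step instead of using the vCenter interface, the same values map onto the vSphere API. The following pyVmomi sketch is one possible way to apply them; the vCenter address, credentials, port group name, and VLAN ID are placeholder assumptions, and the lookup of the distributed switch depends on your inventory layout.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details: replace with your private cloud's vCenter.
    si = SmartConnect(host="vcsa.example.com", user="CloudOwner@gve.local", pwd="...")
    content = si.RetrieveContent()

    # Find the distributed switch by name (Datacenter-dvs in the vCenter UI).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "Datacenter-dvs")
    view.DestroyView()

    # Port group spec mirroring the values listed above.
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = "service-1-pg"         # placeholder port group name
    spec.type = "earlyBinding"         # static binding
    spec.autoExpand = True             # elastic port allocation
    spec.numPorts = 120
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=1234, inherited=False)  # placeholder: use the subnet's VLAN ID
    spec.defaultPortConfig = port_config

    dvs.AddDVPortgroup_Task([spec])
    Disconnect(si)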

MTU settings

The maximum transmission unit (MTU) is the size, in bytes, of the largest packet supported by a network layer protocol, including both headers and data. To avoid fragmentation-related issues, we recommend the following MTU settings.

For VMs that communicate only with other endpoints within a private cloud, you can use MTU settings up to 8800 bytes.

For VMs that communicate to or from a private cloud without encapsulation, use the standard 1500 byte MTU setting. This common default setting is valid for VM interfaces that send traffic in the following ways:

  • From a VM in a private cloud to a VM in another private cloud
  • From an on-premises endpoint to a private cloud
  • From a VM in a private cloud to an on-premises endpoint
  • From the internet to a private cloud
  • From a VM in a private cloud to the internet

For VMs that communicate to or from a private cloud with encapsulation, calculate the best MTU setting based on your VPN endpoint configuration. This generally results in an MTU setting of 1350–1390 bytes or lower for VM interfaces that send traffic in the following ways (a worked example of the overhead arithmetic follows this list):

  • From an on-premises endpoint to a private cloud with encapsulation
  • From a private cloud VM to an on-premises endpoint with encapsulation
  • From a VM in one private cloud to a VM in another private cloud with encapsulation
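
The overhead arithmetic behind these numbers can be made explicit. The following sketch subtracts assumed IPsec tunnel-mode overheads from a standard 1500 byte path MTU; the actual overhead depends on your VPN endpoint configuration (cipher, integrity algorithm, NAT traversal), so every value here is an illustrative assumption.

    # Assumed per-layer overheads, in bytes, for an IPsec tunnel with NAT traversal.
    PATH_MTU = 1500
    overheads = {
        "outer IPv4 header": 20,
        "UDP (NAT traversal)": 8,
        "ESP header + IV": 24,
        "ESP padding, pad length, next header": 17,
        "ESP integrity check value": 16,
    }

    vm_mtu = PATH_MTU - sum(overheads.values())
    print(vm_mtu)  # 1415 with these assumed values; round down (for example,
                   # into the 1350-1390 range above) to leave headroom for any
                   # additional encapsulation on the path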

These recommendations are especially important in cases where an application isn't able to control the maximum payload size. For additional guidance on calculating encapsulation overhead, see the following resources: