Private cloud VMware components

A private cloud is an isolated VMware stack environment (ESXi hosts, vCenter Server, vSAN, and NSX) managed by a vCenter Server in a management domain. Google Cloud VMware Engine deploys private clouds with the following VMware stack components:

  • VMware ESXi: hypervisor on dedicated nodes
  • VMware vCenter: centralized management of private cloud vSphere environment
  • VMware vSAN: hyper-converged, software-defined storage platform
  • VMware NSX Data Center: network virtualization and security software
  • VMware HCX: application migration and workload rebalancing across data centers and clouds

You can retrieve generated sign-in credentials for VMware stack components from the private cloud details page.

VMware component versions

A private cloud VMware stack has the following software versions:

| Component | Version | Licensed version |
|---|---|---|
| ESXi | 7.0 Update 2c | vSphere Enterprise Plus |
| vCenter | 7.0 Update 2d | vCenter Standard |
| vSAN | 7.0 Update 2c | Advanced, plus select vSAN Enterprise features |
| NSX Data Center | 3.1.2 | Select features available; see the NSX Data Center section for details |
| HCX | 4.5¹ | Enterprise |

¹ VMware Engine deploys a version of HCX made available to Google Cloud by VMware. Update HCX after private cloud creation to retrieve the latest version of HCX for your environment.

ESXi

When you create a private cloud, VMware ESXi is installed on provisioned Google Cloud VMware Engine nodes. ESXi provides the hypervisor for deploying workload virtual machines (VMs). Nodes provide hyper-converged infrastructure (compute and storage) and are part of the vSphere cluster on your private cloud.

Each node has four physical network interfaces connected to the underlying network. VMware Engine creates a vSphere distributed switch (VDS) on the vCenter using these physical network interfaces as uplinks. Network interfaces are configured in active mode for high availability.

vCenter Server Appliance

vCenter Server Appliance (VCSA) provides the authentication, management, and orchestration functions for VMware Engine. When you create and deploy your private cloud, VMware Engine deploys a VCSA with an embedded Platform Services Controller (PSC) on the vSphere cluster. Each private cloud has its own VCSA. Nodes that you add to a private cloud are added to that private cloud's VCSA inventory.

vCenter Single Sign-On

The embedded Platform Services Controller on the VCSA is associated with a vCenter Single Sign-On domain, gve.local. To access vCenter, use the default user, CloudOwner@gve.local, which is created for you. You can add your on-premises Active Directory identity sources to vCenter.

vSAN storage

Clusters in private clouds have fully configured all-flash vSAN storage. The all-flash storage is provided by local SSDs. At least three nodes of the same SKU are required to create a vSphere cluster with a vSAN datastore. Each node of the vSphere cluster has two disk groups. Each disk group contains one cache disk and three capacity disks.

You can enable deduplication and compression on the vSAN datastore in VMware Engine. This service enables vSAN compression by default when a new cluster is created. Each cluster on your private cloud contains a vSAN datastore. If the stored virtual machine data isn't a good fit for the current space-efficiency setting (deduplication and compression, or compression only), you can change the vSAN space-efficiency configuration on the individual vSAN datastore.

In addition to vSAN Advanced features, VMware Engine also provides access to vSAN Enterprise data encryption for data at rest and data in transit.

vSAN storage policies

A vSAN storage policy defines the Failures to tolerate (FTT) and the Failure tolerance method. You can create new storage policies and apply them to VMs. To maintain the SLA, you must keep at least 20% spare capacity on the vSAN datastore.
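
For example, here is a minimal Python sketch of that spare-capacity check, with hypothetical capacity figures (the function name and numbers are illustrative, not part of any VMware Engine API):

```python
def meets_spare_capacity_sla(capacity_tib: float, used_tib: float) -> bool:
    """Check the 20% spare-capacity requirement on a vSAN datastore.

    To stay within the SLA, used capacity must not exceed 80% of the
    raw datastore capacity.
    """
    return used_tib <= 0.8 * capacity_tib

# Hypothetical example: a 57.6 TiB datastore with 48 TiB used exceeds
# the 80% threshold, so you would add nodes or offload data first.
print(meets_spare_capacity_sla(57.6, 48.0))  # False
```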

On each vSphere cluster, there's a default vSAN storage policy that applies to the vSAN datastore. The storage policy determines how to provision and allocate VM storage objects within the datastore to guarantee a level of service.

The following table shows the default vSAN storage policy parameters:

| FTT | Failure tolerance method | Number of nodes in vSphere cluster |
|---|---|---|
| 1 | RAID 1 (mirroring), creates 2 copies | 3 and 4 nodes |
| 2 | RAID 1 (mirroring), creates 3 copies | 5 to 32 nodes |

Supported vSAN storage policies

The following table shows the supported vSAN storage policies and the minimum number of nodes required to enable the policy:

| FTT | Failure tolerance method | Minimum number of nodes required in vSphere cluster |
|---|---|---|
| 1 | RAID 1 (mirroring) | 3 |
| 1 | RAID 5 (erasure coding) | 4 |
| 2 | RAID 1 (mirroring) | 5 |
| 2 | RAID 6 (erasure coding) | 6 |
| 3 | RAID 1 (mirroring) | 7 |
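
To make the relationship concrete, the following minimal Python sketch encodes the table above as a lookup; the MIN_NODES mapping and policy_supported function are illustrative names, not part of any VMware or Google Cloud API:

```python
# Minimum nodes per supported vSAN storage policy, keyed by
# (failures to tolerate, failure tolerance method). Values mirror
# the table above.
MIN_NODES = {
    (1, "RAID 1"): 3,
    (1, "RAID 5"): 4,
    (2, "RAID 1"): 5,
    (2, "RAID 6"): 6,
    (3, "RAID 1"): 7,
}

def policy_supported(ftt: int, method: str, cluster_nodes: int) -> bool:
    """Return True if a cluster of the given size can use the policy."""
    required = MIN_NODES.get((ftt, method))
    return required is not None and cluster_nodes >= required

print(policy_supported(2, "RAID 6", 6))  # True
print(policy_supported(3, "RAID 1", 5))  # False: RAID 1 with FTT=3 needs 7 nodes
```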

NSX Data Center

NSX Data Center provides network virtualization, microsegmentation, and network security capabilities on your private cloud. You can configure the services that NSX Data Center supports on your private cloud by using NSX.

Available features

The following list describes NSX-T features supported by VMware Engine, organized by category:

  • Switching, DNS, DHCP, and IPAM (DDI):
    • Optimized ARP learning and broadcast suppression
    • Unicast replication
    • Head-end replication
    • SpoofGuard
    • IP address management
    • IP blocks
    • IP subnets
    • IP pools
    • IPv4 DHCP server
    • IPv4 DHCP relay
    • IPv4 DHCP static bindings/fixed addresses
    • IPv4 DNS relay/DNS proxy
  • Routing:
    • Null routes
    • Static routing
    • Device routing
    • BGP route controls using route maps and prefix lists
  • NAT:
    • NAT on North/South and East/West logical routers
    • Source NAT
    • Destination NAT
    • N:N NAT
  • Firewall:
    • Edge Firewall
    • Distributed Firewall
    • Common firewall user interface
    • Firewall sections
    • Firewall logging
    • Stateful Layer 2 and Layer 3 firewall rules
    • Tag-based rules
    • Distributed firewall-based IPFIX
  • Firewall policies, tags, and groups:
    • Object tagging/security tags
    • Network-centric grouping
    • Workload-centric grouping
    • IP-based grouping
    • MAC-based grouping
  • VPN:
    • Layer 2 VPN
    • Layer 3 VPN (IPv4)
  • Integrations:
    • Container networking and security using Tanzu Kubernetes Grid (TKG) only
    • VMware Cloud Director service
    • VMware Aria Automation
    • VMware Aria Operations for Logs
  • Authentication and authorization:
    • Direct Active Directory integration using LDAP
    • Authentication using OpenLDAP
    • Role-based access control (RBAC)
  • Automation:
    • REST API
    • Java SDK
    • Python SDK
    • Terraform provider
    • Ansible modules
    • OpenAPI/Swagger specifications and auto-generated API documentation for REST API
  • Inspection:
    • Port mirroring
    • Traceflow
    • Switch-based IPFIX

Feature limitations

Some NSX Data Center features address very specific networking and security use cases. Customers who created their Google Cloud account on or before August 30, 2022 can request access to features for those use cases by contacting Cloud Customer Care.

The following table describes those features, their corresponding use cases, and potential alternatives:

| Feature | Use case | Recommended alternative | Google Cloud customers on or before August 30, 2022 | Google Cloud customers after August 30, 2022 |
|---|---|---|---|---|
| Layer 3 multicast | Multi-hop Layer 3 multicast routing | Layer 2 multicast is supported within an NSX-T subnet, which allows all multicast traffic to be delivered to workloads on the same NSX-T subnet. | Supported | Unsupported |
| Quality of Service (QoS) | VoIP and latency-sensitive applications where network oversubscription occurs | None required, as VMware Engine delivers a non-oversubscribed network architecture. Further, any QoS tags exiting a private cloud are stripped when entering the VPC through a peering connection. | Supported | Unsupported |
| Simple Network Management Protocol (SNMP) traps | Legacy alerting protocol for notifying users of events | Events and alarms can be configured within NSX-T using modern protocols. | Supported | Unsupported |
| NAT features such as stateless NAT, NAT logging, and NAT64 | Carrier-grade NAT in large telecommunication deployments | NSX-T supports source/destination NAT and N:N NAT on North/South and East/West logical routers. | Supported | Unsupported |
| Intent-based networking and security policies | Used in conjunction with VMware Aria to create business-based firewall policies within NSX-T | NSX-T Gateway and Distributed Firewall features can be used to create and enforce security policies. | Supported | Unsupported |
| Identity-based groups using Active Directory | VDI deployments where the user logged into a specific VDI guest can be detected and receive a custom set of NSX-T firewall rules | Users can be assigned specific workstations using the dedicated-assignment pool. Use NSX-T tags to then apply specific firewall rules by pool. | Supported | Unsupported |
| Layer 7 attribute (App ID) rules | Used in NSX-T firewall rules | Use NSX-T Service Groups to define a set of ports and services for easy reference when creating one or more firewall rules. | Supported | Unsupported |
| Stateless Layer 2 and Layer 3 firewall rules | Carrier-grade high-speed firewalls in large telecommunication deployments | NSX-T supports stateful high-performance Layer 2 and Layer 3 rules. | Supported | Unsupported |
| NSX-T service insertion | Automating the North/South or East/West deployment of third-party network services by using NSX-T to secure and inspect traffic | For third-party security vendor deployments, VMware Engine recommends a routed model over service insertion to ensure that routine service upgrades do not impact network availability. | Contact Cloud Customer Care | Unsupported |

HCX

VMware Engine handles initial installation, configuration, and monitoring of HCX in private clouds. You are responsible for lifecycle management of HCX Cloud and service appliances like HCX-IX Interconnect.

VMware provides updates for HCX Cloud through its HCX service. You can upgrade HCX Manager and deployed HCX service appliances from the HCX Cloud interface. To find the end of support date for a product release, refer to the VMware Product Lifecycle Matrix.

vSphere cluster

To ensure high availability of the private cloud, ESXi hosts are configured as a cluster. When you create a private cloud, VMware Engine deploys management components of vSphere on the first cluster. VMware Engine creates a resource pool for management components, and deploys all management VMs in this resource pool.

The first cluster cannot be deleted to shrink the private cloud. The vSphere cluster uses vSphere HA to provide high availability for VMs. Failures to tolerate (FTT) are based on the number of available nodes in the cluster. The formula Number of nodes = 2N+1, where N is the FTT, describes the relationship between available nodes in a cluster and FTT.
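
For example, solving the formula for N gives the largest FTT a cluster of a given size can support, as in this minimal Python sketch (the function name is illustrative, and the default storage policy applied to a cluster may use a lower FTT than this maximum):

```python
def max_ftt(node_count: int) -> int:
    """Largest N satisfying node_count >= 2N + 1 (maximum failures to tolerate)."""
    return (node_count - 1) // 2

# A 3-node cluster supports FTT=1; a 5-node cluster supports FTT=2.
print(max_ftt(3))  # 1
print(max_ftt(5))  # 2
```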

For production workloads, use a private cloud that contains at least 3 nodes.

Single node private clouds

For testing and proofs of concept with VMware Engine, you can create a private cloud that contains only a single node and cluster. VMware Engine deletes private clouds that contain only 1 node after 60 days, along with any associated workload VMs and data.

You can resize a single node private cloud to contain 3 or more nodes. When you do so, VMware Engine initiates vSAN data replication and no longer attempts to delete the private cloud. A private cloud must contain at least 3 nodes and complete vSAN data replication to be eligible for coverage based on the SLA.

Features or operations that require more than 1 node won't work with a single node private cloud. For example, you won't be able to use vSphere Distributed Resource Scheduler (DRS) or High Availability (HA).

vSphere cluster limits

The following table describes vSphere cluster limits in private clouds that meet SLA requirements:

| Resource | Limit |
|---|---|
| Minimum number of nodes to create a private cloud (first cluster) | 3 |
| Minimum number of nodes to create a cluster | 3 |
| Maximum number of nodes per cluster | 32 |
| Maximum number of nodes per private cloud | 110 |
| Maximum number of clusters per private cloud | 21 |
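
For capacity planning, here is a minimal Python sketch that checks a proposed private cloud layout against these limits (the function is illustrative, not part of the VMware Engine API):

```python
def validate_private_cloud(clusters: list[int]) -> list[str]:
    """Check a proposed layout against the SLA limits in the table above.

    `clusters` holds the node count of each cluster; clusters[0] is the
    first (management) cluster. Returns a list of limit violations.
    """
    problems = []
    if not clusters or clusters[0] < 3:
        problems.append("first cluster needs at least 3 nodes")
    if len(clusters) > 21:
        problems.append("more than 21 clusters per private cloud")
    for i, nodes in enumerate(clusters):
        if nodes < 3:
            problems.append(f"cluster {i} is below the 3-node minimum")
        if nodes > 32:
            problems.append(f"cluster {i} exceeds the 32-node maximum")
    if sum(clusters) > 110:
        problems.append("private cloud exceeds 110 nodes")
    return problems

# Hypothetical layout totaling 115 nodes: each cluster is valid on its
# own, but the private cloud exceeds the 110-node maximum.
print(validate_private_cloud([3, 32, 32, 32, 16]))
```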

Guest operating system support

You can install a VM with any guest operating system supported by VMware for the ESXi version in your private cloud. For a list of supported guest operating systems, see the VMware Compatibility Guide for Guest OS.

VMware infrastructure maintenance

Occasionally, it's necessary for Google to make changes to the configuration of the VMware infrastructure. Currently, these maintenance intervals can occur every 1-2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal use of the services.

During a VMware maintenance interval, the following services continue to function without interruption:

  • VMware management plane and applications
  • vCenter access
  • All networking and storage
  • All cloud traffic

Updates and upgrades

Google is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the private cloud.

Software updates include:

  • Patches: security patches or bug fixes released by VMware
  • Updates: minor version change of a VMware stack component
  • Upgrades: major version change of a VMware stack component

Google tests critical security patches as soon as they become available from VMware. Per the SLA, Google rolls out security patches to private cloud environments within a week.

Google provides quarterly maintenance updates to VMware software components. For a new major version of VMware software, Google works with you to coordinate a suitable maintenance window for the upgrade.

External storage

You can expand the storage capacity of a Google Cloud VMware Engine cluster by adding more nodes. Alternatively, if you only want to scale storage, you can use external storage. Scaling storage increases the storage capacity without increasing the compute capacity of the cluster, letting you scale resources independently.

Contact Google Support or your sales representative for more information about using external storage.
