Private cloud networking for Google Cloud VMware Engine

First published: January 25, 2021

This document provides an overview of Google Cloud VMware Engine, explains its networking concepts, reviews the most common traffic flows, and discusses what to consider when designing an architecture that uses VMware Engine.

This document is relevant to you if you are a network engineer, cloud engineer, architect, or operator tasked with designing, maintaining, or troubleshooting the security, connectivity, and availability of VMware-backed applications hosted in Google Cloud.

This document also helps you learn about the requirements and capabilities of VMware Engine. Whether you are evaluating the technology, piloting it, or deploying it in production, this document helps you understand how VMware Engine works and how to integrate it with a new or existing Google Cloud environment, so that you can determine the best networking design for your use case.

VMware Engine networking integrates with virtual private cloud (VPC) networks that have existing connectivity with on-premises networks, and also with other Google Cloud services. VMware Engine is built on high-performance, reliable, and high-capacity infrastructure, to provide customers with an optimal cost-efficient VMware experience.

Overview

VMware Engine is a service provided by Google that helps you to migrate and run your VMware workloads on Google Cloud.

VMware Engine delivers a fully managed VMware software-defined data center (SDDC), which consists of the following components: VMware vSphere, vCenter Server, vSAN, and NSX-T. VMware Engine includes HCX for cloud migration in a dedicated environment on Google Cloud to support enterprise production workloads. You can use VMware Engine to migrate or extend your on-premises workloads to Google Cloud by connecting to a dedicated VMware environment directly through the Google Cloud Console. This capability lets you migrate to the cloud without the cost or complexity of refactoring applications, and lets you run and manage workloads consistently with your on-premises environment. When you run your VMware workloads on Google Cloud, you reduce your operational burden while benefiting from scale and agility, and maintain continuity with your existing tools, policies, and processes.

VMware Engine is built on Google Cloud's highly performant, scalable infrastructure with fully redundant and dedicated 100 Gbps networking, providing up to 99.99% availability to meet the needs of your VMware stack. Cloud networking services such as Cloud Interconnect and Cloud VPN provide access from your on-premises environments to the cloud. The high bandwidth of these connections to cloud services is optimized for performance and flexibility while minimizing costs and operational overhead. End-to-end, one-stop support is integrated to provide a seamless experience across this service and the rest of Google Cloud.

After you move workloads, you have ready access to Google Cloud services like BigQuery, Cloud Operations, Cloud Storage, Anthos, and Cloud AI. Google Cloud also offers fully integrated billing and identity management and access control to unify your experience with other Google Cloud products and services.

Use cases

The following diagram is a representative reference architecture that shows how you can migrate or extend your VMware environment to Google Cloud while also benefiting from Google Cloud services. Migration and extension are the primary use cases that VMware Engine supports.

Reference architecture that shows how to migrate or extend your VMware environment to Google Cloud.

Onboarding requirements

Before you begin your VMware Engine deployment to Google Cloud, make sure that you read the requirements for onboarding.

System components

At a high level, the VMware Engine components are the following:

  • Google Cloud:
    • VMware Engine:
      • NSX-T
      • HCX
      • vCenter
      • vSAN
    • Your Google Cloud Organization:
      • Your Google Cloud project that has a VPC network
      • Cloud Interconnect (using Dedicated Interconnect or Partner Interconnect) or Cloud VPN connection to on-premises systems
      • Private services access connection to the VMware Engine region
    • Google-managed services integration
  • (Optional) On-premises resources:
    • Networking
    • Storage
    • HCX (recommended for L2 connections, also known as L2 stretch)
    • vCenter

A private cloud is an isolated VMware stack that consists of ESXi hosts, vCenter, vSAN, NSX-T, and HCX. These components are collectively known as the Google Cloud VMware Engine components, and they are deployed when the cloud administrator creates a private cloud. Users in your organization can then privately access VMware Engine private clouds from their VPC networks by establishing a private services access connection. The following diagram shows this architecture.

Private services access.

Private services access is a private connection between your VPC network and a network owned by Google or a third party. Google and third parties that offer services this way are known as service producers.

For each customer VPC network connected to VMware Engine, a service producer VPC network is created when you create a private services access connection in Google Cloud. The service producer's project contains a Shared VPC network that can also be used to connect to other Google Cloud services, such as Cloud SQL and Memorystore.

VMware Engine doesn't require you to use NSX-T on-premises. Additionally, using HCX isn't mandatory for any of the use cases, because you can use other mechanisms to achieve layer 2 (L2) stretch and workload migration. However, we recommend HCX for efficient workload migration and convenience, because it is deployed automatically when you create your private cloud. In other words, HCX is deployed whether you use it or not.

Networking capabilities

Following is a summary of the networking capabilities in VMware Engine:

  • Subnets: You can create management and workload subnets in your private clouds.
  • Dynamic routing: VMware Engine subnets administered by NSX are automatically advertised to the rest of the Google network, as well as to your on-premises locations, through Border Gateway Protocol (BGP).
  • Public IP service and Internet Gateway (ingress and egress): VMware Engine has its own public IP service for ingress and Internet Gateway egress. This is a regional service.
  • Firewall policies: VMware Engine lets you create L4 and L7 firewall policies using the NSX distributed firewall (east-west) and the NSX gateway firewall (north-south). For example, you can enforce granular controls to access workloads with public IP addresses, like web servers.
  • Service chaining for east-west security: Provides the ability to register a partner security solution (IDS, IPS, or NGFW) to configure network services to inspect east-west traffic moving between VMs.
  • NSX-T Distributed Intrusion Detection Service (IDS): The version of NSX running in VMware Engine supports NSX Distributed IDS for threat detection and reporting. This capability requires an additional license from VMware. For more information, see VMware NSX Distributed IDS/IPS.
  • Endpoint protection: VMs backed by VMware Engine can be protected against malware and viruses through partner integration with supported third parties. For more information, see the official VMware documentation.
  • Private access to other Google services: VMware Engine integrates with other Google Cloud services using Private Google Access and private services access. For a complete list of supported services, see Private access options for services.
  • DNS profiles
    • DNS for management: For every private cloud, VMware Engine automatically deploys a pair of DNS servers to resolve the various management components (vCenter, NSX Manager, and so on).
    • DNS for workloads: For every private cloud, you decide what DNS setup is best for you. The NSX-T DNS service lets you forward DNS queries to certain DNS servers, and you can create your DNS server in VMware Engine or on-premises, or you can use Cloud DNS.
  • DHCP server: DHCP server support for NSX-T segments is included.
  • DHCP relay: DHCP relay lets organizations integrate with an on-premises IP address management (IPAM) system, for example.
  • Point-to-site VPN: The point-to-site VPN gateway lets you connect remotely from a client computer to your private cloud over the VMware Engine network. To connect from your computer, you need a VPN client: you can download the OpenVPN client for Windows or Viscosity for macOS.
  • Site-to-site L2 VPN and L3 VPN: These services are directly configured in NSX using L2VPN and IPsec VPN, respectively.
  • Load balancing: This service uses the built-in load balancing capabilities of NSX-T, which includes L4 and L7 support as well as health checks.
  • Partial IP Multicast routing support: Protocol Independent Multicast (PIM) is supported, as described in the VMware documentation.
  • Partial IPv6 support: This feature lets organizations take advantage of IPv6 for their VMware Engine–backed applications, as described in the NSX-T 3.0 Design Guide.
  • Long distance VM migration (vMotion): Both live (applications continue to run) and cold (applications are powered off) migrations are supported between on-premises and cloud, or cloud to cloud within VMware Engine with embedded WAN optimization, encryption, and replication-backed mobility. This is possible because of VMware HCX, which is included with the service.
  • Advanced network operations: You can use built-in NSX tools and instrumentation, such as port mirroring, Traceflow, packet capture, and SNMP v1/v2/v3 with polling and traps, among others.
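As an illustration of the "DNS for workloads" option above, you could forward a corporate domain to on-premises DNS servers by using a Cloud DNS private forwarding zone. The following sketch uses hypothetical names and addresses (`my-vpc`, `corp.example.com`, `10.0.0.53`); adapt them to your environment.

```shell
# Sketch: forward queries for corp.example.com to an on-premises DNS
# server, for workloads attached to the VPC network "my-vpc".
# All names and addresses are illustrative.
gcloud dns managed-zones create onprem-forward \
    --description="Forward corp.example.com to on-premises DNS" \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=my-vpc \
    --forwarding-targets=10.0.0.53
```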

Networks and address ranges

Google Cloud VMware Engine creates a network for each region in which your VMware Engine service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets that you create in this region can communicate with each other without any additional configuration. You can create network segments (subnets) using NSX-T for your workload virtual machines. You can configure any RFC 1918 address range that doesn't overlap with on-premises networks, the management network of your private cloud, or any VPC subnets.

By default, all subnets can communicate with each other, reducing the configuration overhead for routing between the private clouds. Data traffic across private clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region, so egress isn't required for communication between these private clouds. This approach eliminates any WAN or egress performance penalty when you deploy different workloads in different private clouds.

Conceptually, a private cloud is created as an isolated VMware stack (ESXi hosts, vCenter, vSAN, and NSX) environment managed by a vCenter server. The management components are deployed in the network selected for the vSphere/vSAN subnets CIDR range. The network CIDR range is divided into different subnets during the deployment.

Management subnets in VMware Engine

VMware Engine uses several IP address ranges. Some of the IP address ranges are mandatory and some depend on the services that you plan to deploy. The IP address space must not overlap with any of your on-premises subnets, VPC subnets, or planned workload subnets. The following sections describe the set of address ranges and corresponding services that use those ranges.
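Because overlapping ranges are a common cause of failed deployments, it can help to check candidate ranges programmatically before you deploy. The following Bash sketch is purely illustrative (it isn't part of VMware Engine) and tests whether two IPv4 CIDR blocks intersect:

```shell
#!/usr/bin/env bash
# Illustrative helper: detect whether two IPv4 CIDR blocks overlap.
# Two blocks intersect exactly when they agree on the first
# min(prefix1, prefix2) bits.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Print "overlap" if the two CIDR blocks intersect, "ok" otherwise.
check_cidrs() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local min=$(( len1 < len2 ? len1 : len2 ))
  local shift=$(( 32 - min ))
  if (( ($(ip2int "$net1") >> shift) == ($(ip2int "$net2") >> shift) )); then
    echo overlap
  else
    echo ok
  fi
}

# Example: a planned /21 vSphere/vSAN CIDR against two existing ranges.
check_cidrs 192.168.0.0/21 192.168.4.0/24   # prints "overlap"
check_cidrs 192.168.0.0/21 10.0.0.0/24      # prints "ok"
```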

The following diagram provides an overview of the management subnets for the services explained in the following sections:

Management subnets.

vSphere/vSAN CIDR

The following IP address ranges are required to initialize and create a private cloud:

| Name | Description | Address range |
| --- | --- | --- |
| vSphere/vSAN CIDR | Required for VMware management networks, including vMotion. Must be specified during private cloud creation. | /21, /22, /23, or /24 |

When you create a private cloud, several management subnets are automatically created. The management subnets use the required vSphere/vSAN CIDR allocation described earlier in this document. Following is an example architecture and description of the different subnets created using this CIDR range:

  • System management: VLAN and subnet for the ESXi hosts' management network, DNS server, and vCenter server.
  • vMotion: VLAN and subnet for the ESXi hosts' vMotion network.
  • vSAN: VLAN and subnet for the ESXi hosts' vSAN network.
  • NsxtEdgeUplink1: VLAN and subnet for VLAN uplinks to an external network.
  • NsxtEdgeUplink2: VLAN and subnet for VLAN uplinks to an external network.
  • NsxtEdgeTransport: VLAN and subnet for transport zones that control the reach of Layer 2 networks in NSX-T.
  • NsxtHostTransport: VLAN and subnet for host transport zone.

Example:

The vSphere/vSAN subnets CIDR range that's specified is divided into multiple subnets. The following table provides an example of the breakdown for allowed prefixes (from /21 to /24) using 192.168.0.0 as the CIDR range.

| Subnet | 192.168.0.0/21 | 192.168.0.0/22 | 192.168.0.0/23 | 192.168.0.0/24 |
| --- | --- | --- | --- | --- |
| System management | 192.168.0.0/24 | 192.168.0.0/24 | 192.168.0.0/25 | 192.168.0.0/26 |
| vMotion | 192.168.1.0/24 | 192.168.1.0/25 | 192.168.0.128/26 | 192.168.0.64/27 |
| vSAN | 192.168.2.0/24 | 192.168.1.128/25 | 192.168.0.192/26 | 192.168.0.96/27 |
| NSX-T host transport | 192.168.4.0/23 | 192.168.2.0/24 | 192.168.1.0/25 | 192.168.0.128/26 |
| NSX-T edge transport | 192.168.7.208/28 | 192.168.3.208/28 | 192.168.1.208/28 | 192.168.0.208/28 |
| NSX-T edge uplink1 | 192.168.7.224/28 | 192.168.3.224/28 | 192.168.1.224/28 | 192.168.0.224/28 |
| NSX-T edge uplink2 | 192.168.7.240/28 | 192.168.3.240/28 | 192.168.1.240/28 | 192.168.0.240/28 |

Depending on the CIDR range you select, the subnet mask for each subnet changes. For example, if you select a vSphere/vSAN CIDR of /21, the following subnets are created: a /24 system management subnet, a /24 vMotion subnet, a /24 vSAN subnet, a /23 NSX-T host transport subnet, a /28 NSX-T edge transport subnet, a /28 NSX-T edge uplink1, and a /28 NSX-T edge uplink2.

HCX deployment CIDR range

The following IP address ranges are required for HCX on your private cloud:

| Name | Description | Address range |
| --- | --- | --- |
| HCX deployment CIDR range | Required for deploying HCX networks. Optional during private cloud creation. | /27 or larger |

Assigned address range

The following IP addresses are required for private services access to VMware Engine:

| Name | Description | Address range |
| --- | --- | --- |
| Assigned address range | Address range used for the private services access connection to Google Cloud services, including VMware Engine. | /24 or larger |

Edge services and client subnet

The following IP address ranges are required for enabling edge networking services provided by VMware Engine:

| Name | Description | Address range |
| --- | --- | --- |
| Edge services CIDR range | Required if optional edge services, such as point-to-site VPN, internet access, and public IP addresses, are enabled. Ranges are determined on a per-region basis. | /26 |
| Client subnet | Required for point-to-site VPN. DHCP addresses are provided to VPN connections from the client subnet. | /24 |

Google private access options for services

Google Cloud provides several private access options for your workloads running in VMware Engine or in your Google Virtual Private Cloud networks. Private services access provides a private connection between your VPC network and the VMware Engine service. When you use Private Google Access, your VMware workloads can access other Cloud APIs and services without leaving the Google Cloud network.

Private services access

VMware Engine uses private services access to connect your VPC network to a service producer VPC network in a tenant folder in your Google Organization using private IP addressing.

For information about how to configure private services access, see Configuring private services access. Because private services access creates a VPC Network Peering connection, it's important to configure that connection to import and export custom routes.

Google and third parties (together known as service producers) can offer services with internal IP addresses that are hosted in a VPC network. Private services access lets you reach those internal IP addresses. The private connection enables the following functionality:

  • VM instances in your VPC network and the VMware VMs communicate exclusively by using internal IP addresses. VM instances don't need internet access or external IP addresses to reach services that are available through private services access.

  • VMware VMs can communicate with supported Google Cloud services that offer private services access, using internal IP addresses.

  • If you have on-premises connectivity using Cloud VPN or Cloud Interconnect to your VPC network, you can use existing on-premises connections to connect to your VMware Engine private cloud.

If you're using a shared VPC network in your own project, the allocated IP range and private connection must be created in the host project to allow VM instances in service projects to have connectivity with the environment.

You can set up private services access independently of VMware Engine private cloud creation. You can also create a private connection before or after you create the private cloud that you want to connect your VPC network to.

When you configure private services access, you first allocate an internal IP address range and then create a private connection. The allocated range is a reserved CIDR block that is used for the private services access connection and can't be used for subnets in your own VPC network. It is set aside for service producers and prevents overlap between your VPC network and the service producer's VPC network. When you create a private connection, you must specify this allocation. For information about the service producer side, see Enabling private services access.
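For example, the two steps (allocating a range, then creating the private connection) might look like the following sketch, which assumes a VPC network named `my-vpc` and an unused 10.100.0.0/24 block; these names and ranges are illustrative:

```shell
# 1. Allocate a /24 range for service producers. This range is reserved
#    and can't be used by subnets in your own VPC network.
gcloud compute addresses create gcve-psa-range \
    --global \
    --purpose=VPC_PEERING \
    --addresses=10.100.0.0 \
    --prefix-length=24 \
    --network=my-vpc

# 2. Create the private services access connection using that allocation.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=gcve-psa-range \
    --network=my-vpc
```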

To avoid IP address overlaps, we recommend allocating all of the VMware Engine subnets in the private services connection. In the following screenshot, the VMware Engine CIDR block is used for the private service connection and the gcve-2 CIDR block is allocated to prevent IP address overlaps:

VMware Engine CIDR block is used for the private service connection.

Service Networking doesn't check received dynamic routes for overlapping addresses, so you should also allocate, in the private services access connection, the prefixes that are reserved for non-VMware services. Because allocated CIDR ranges can't be reused in your VPC network, this prevents issues caused by overlapping IP addresses.

When you configure private services access, make sure that the VPC peering connection is configured correctly to import and export all the routes on the servicenetworking-googleapis-com private connection. You also need to note the peered project ID so that you can use it when you set up a private connection in VMware Engine.
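Assuming the same illustrative VPC network name, enabling route import and export on the `servicenetworking-googleapis-com` peering could look like this:

```shell
# Enable custom route exchange on the private services access peering.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc \
    --import-custom-routes \
    --export-custom-routes
```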

The service producer VPC network is automatically connected to the VMware Engine service, which contains your private cloud (a private vCenter and single NSX-T installation).

The same private connection is used for other Google Cloud services that are provisioned within service producer VPC networks and that use private services access, such as Cloud SQL, Memorystore for Redis, Memorystore for Memcached, AI Platform Training, and GKE private clusters. If you want to use these services, you can select the CIDR range that you used to establish the private connection with VMware Engine.

In the VMware Engine service portal, when the region status is connected, you can review the private connection using its tenant project ID for the corresponding region. The private connection details display the routes learned over VPC peering. The exported routes display private clouds learned from the region and exported over VPC peering.

A private cloud represents a single vCenter and a single NSX-T installation with a maximum of 64 nodes. You can deploy multiple private clouds, and if you reach the 64-node limit for one private cloud, you can create another private cloud. This means that you manage two private clouds, two vCenter installations, and two NSX-T installations.

Depending on your use case, you can deploy a single private cloud or deploy multiple private clouds without reaching the 64-node limit. For example, you can deploy one private cloud with database workloads, and a separate private cloud for a VDI use case, or a private cloud for Americas workloads and a different private cloud for EMEA workloads. Alternatively, you can separate workloads in multiple clusters within the same private cloud, depending on your use case.

Private Google Access

Private Google Access allows you to connect to Google APIs and services without assigning external IP addresses to your VMware Engine VMs. After Private Google Access is configured, traffic is routed to the internet gateway and then to the requested service, without leaving the Google network.

For more information, see Private Google Access: a closer look.

Key traffic flows

This section reviews some key traffic flows and describes the architecture that is used to cover all of the different networking flows.

Following are examples of what to consider when you create a design for VMware Engine.

VMware Engine on-premises and remote user connectivity

Following are options that you can use to access the VMware Engine environment from an on-premises data center or from a remote location:

VPN gateways provide secure connectivity between multiple sites, such as on-premises, VPC networks, and private clouds. These encrypted VPN connections traverse the internet and are terminated in a VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

Point-to-site VPN gateways let users remotely connect to VMware Engine from their computers. Note that you can create only one point-to-site VPN gateway per region.

The point-to-site VPN gateway supports both TCP and UDP connections, and you choose the protocol when you connect from your computer. The configured client subnet serves both protocols: the range defined by the CIDR block is divided into two subnets, one for TCP clients and one for UDP clients.

The point-to-site VPN sends encrypted traffic between a VMware Engine region network and a client computer. You can use the point-to-site VPN to access your private cloud network, including your private cloud vCenter and workload VMs. For information about point-to-site VPN gateway setup, see Configuring VPN gateways on the VMware Engine network.

You can also use Cloud VPN for site-to-site VPN connectivity, or Cloud Interconnect to establish connections between your on-premises network and your VMware Engine private cloud. You provision Cloud VPN and Cloud Interconnect in your VPC network. For more information, see the Cloud VPN and Cloud Interconnect documentation.

Another option for VPN connectivity is NSX-T IPsec VPN, NSX-T L2 VPN, and HCX L2 VPN—for example, to configure an L2 stretch. A use case for NSX-T IPsec VPN is end-to-end encryption with VPN termination directly within your VMware Engine private cloud. For more information about NSX-T VPN capabilities, see the VMware Virtual Private Network documentation.

We recommend that you configure private services access in the VPC network where the Cloud Router or Cloud VPN is located (and where the VLAN attachments exist in case Border Gateway Protocol is being used); otherwise, VPC routes need to be configured. If the architecture contains multiple VPC peering connections, remember that VPC peering is not transitive.

The on-premises routes advertised either over Interconnect or VPN are automatically propagated through private services access if import and export of routes is configured. This requires you to manually edit import and export routes in the VPC Peering connection.

Also keep in mind that the routes learned through private services access aren't automatically propagated to on-premises systems, because VPC Network Peering doesn't support transitive routing; imported routes from other networks aren't automatically advertised by Cloud Routers in your VPC network. However, you can use custom IP range advertisements from Cloud Routers in your VPC network to share routes to destinations in the peer network. For Cloud VPN tunnels using static routing, you must configure static routes to the peer network's destination ranges in your on-premises network.
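As a sketch, custom route advertisements on a Cloud Router could propagate the private services access allocation and the vSphere/vSAN CIDR toward on-premises; the router name, region, and ranges here are hypothetical:

```shell
# Advertise the VPC subnets plus the private services access range and
# the vSphere/vSAN CIDR to on-premises. Names and ranges are illustrative.
gcloud compute routers update my-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=10.100.0.0/24,192.168.0.0/21
```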

Ingress to VMware Engine

This section describes the following options for ingress to VMware Engine:

  • Ingress using the VMware Engine public IP service
  • Ingress from the customer VPC
  • Ingress from an on-premises data center

The option that you select depends on where you want to peer your Google Cloud infrastructure with the internet. The following diagram shows these ingress options.

Ingress options.

The following sections cover each option in detail.

Ingress using VMware Engine

When you use the VMware Engine internet gateway, you can create and delete on-demand public IP addresses for resources inside your private cloud from the VMware Engine portal. In this scenario, each public IP address corresponds to a unique configured private cloud IP address.

Public ingress can be provided through the public IP gateway, which is also responsible for NAT, so users coming from the public internet connect to the Google public IP address. The Google public IP address is translated to a virtual machine private IP address in VMware Engine (backed by NSX-T segments).

When you create firewall rules to allow inbound connections from the internet to an exposed public IP address, those rules are applied at the public IP gateway and must be provisioned in the VMware Engine portal.

A tier-0 logical router is usually used for north-south routing, such as a virtual machine connecting to the internet. A tier-1 logical router is used for east-west routing, and you can configure multiple subnets for tier 1.

A public IP address lets internet resources communicate inbound to private cloud resources at a private IP address. The private IP address is a virtual machine or a software load balancer on your private cloud vCenter. The public IP address lets you expose services that are running on your private cloud to the internet.

A resource that is associated with a public IP address always uses the public IP address for internet access. By default, only outbound internet access is allowed on a public IP address, and incoming traffic on the public IP address is denied. In order to allow inbound traffic, create a firewall rule for the public IP address with the specific port.

Users in your organization can expose workloads, such as VMs, to the internet by allocating public IP addresses to specific nodes in their private cloud. A public IP address can be assigned to only one private IP address, and it remains dedicated to that private IP address until you unassign it. We recommend that you don't expose ESXi hosts or vCenter.

To allocate a public IP address, you provide a name, a location or region, and the attached local IP address.

Ingress using a customer VPC network

You can provide ingress to VMware Engine through the customer VPC network by using Cloud Load Balancing. After you select the load balancer that matches the capabilities you require, you create a managed instance group (MIG) or an unmanaged instance group as the backend for that load balancer to proxy traffic over the VPC peering connection. In this scenario, you can also use a third-party virtual network appliance from Google Cloud Marketplace.
You can combine Cloud Load Balancing with Bring Your Own IP (BYOIP) if you want to bring your own public IP address space to Google, and with Google Cloud Armor to help protect your applications and websites against distributed denial-of-service (DDoS) attacks and web attacks such as SQL injection or cross-site scripting.
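For example, attaching a Google Cloud Armor policy to the load balancer's backend service might look like the following sketch; the policy, rule, and backend-service names are hypothetical:

```shell
# Create a policy and a rule that denies a known-bad source range, then
# attach the policy to the backend service fronting the instance group.
gcloud compute security-policies create edge-policy \
    --description="Edge protection for VMware Engine ingress"

gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --src-ip-ranges=203.0.113.0/24 \
    --action=deny-403

gcloud compute backend-services update my-backend-service \
    --global \
    --security-policy=edge-policy
```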

Ingress using an on-premises data center

To provide on-premises ingress, we recommend using Cloud Interconnect. For a proof of concept, or if you have low throughput needs and no strict latency requirements, you can use Cloud VPN instead.

Egress from VMware Engine

There are multiple options for VMware Engine egress:

  • Egress through the VMware Engine internet gateway
  • Egress through the customer VPC network
  • Egress through an on-premises data center

In the following architecture you can see these options for an egress flow from a VMware Engine perspective.

Egress flow from a VMware Engine perspective.

Public egress using VMware Engine

You can configure internet access and public IP services for your workloads separately for each region. You can direct internet-bound traffic from your VMware workloads by using Google Cloud's internet edge or an on-premises connection.

The traffic from a virtual machine hosted in a VMware Engine private cloud destined to the public internet egresses through the tier-0 gateway. The tier-0 gateway forwards traffic to the internet gateway. The internet gateway performs source port address translation (PAT). The internet service is regional, meaning that it is enabled for each region separately.

Public egress using your VPC network

Alternatively, from the VMware Engine service portal, you can disable internet and public IP services and provide public egress from the customer VPC network. In this case, internet access is through your VPC network if you have advertised a default 0.0.0.0/0 route. If you want to use this packet flow, disable internet access for the VMware Engine region and inject a default 0.0.0.0/0 route.

You must also remove any allocated public IP addresses and point-to-site VPN gateways before you can egress traffic through your VPC network.

The default route must be visible in the customer VPC network; it is then automatically propagated to VMware Engine. A prerequisite is to enable VPC Service Controls on the VPC peering connection between your VPC network and VMware Engine.

To perform network address translation (NAT), you can deploy a Compute Engine instance, or you can create a default route 0.0.0.0/0 that points to an internal load balancer in front of a centralized third-party virtual network appliance cluster (available on Cloud Marketplace), and perform source NAT on the appliance to egress from your VPC network to the public internet. For more information, see how to use routes in your VPC network.
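A default route that points at an internal load balancer in front of such an appliance cluster could be created as follows; the forwarding-rule, route, and network names are illustrative:

```shell
# Send internet-bound traffic to a NAT appliance cluster behind an
# internal TCP/UDP load balancer. Names are illustrative.
gcloud compute routes create default-egress-via-appliance \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=my-nat-forwarding-rule \
    --next-hop-ilb-region=us-central1 \
    --priority=900
```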

Public egress using an on-premises data center

You can egress through your on-premises data center if you disable internet and public IP services and provide public egress from on-premises. When you do this, internet access flows through your VPC network before it reaches the on-premises data center through Cloud VPN or Cloud Interconnect.

To implement public egress through your on-premises data center, you advertise a default 0.0.0.0/0 route and then enable VPC Service Controls on the peering connection so that the default route is imported properly. For more information about VPC Service Controls, see the official documentation.

If VPC Service Controls is disabled on the VPC peering connection, internet access through an on-premises connection is also disabled, even if a default route (0.0.0.0/0) is advertised and propagated over the VPC peering connection.
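Enabling VPC Service Controls on the private connection, so that the default route can be exchanged, might look like this (the VPC network name is illustrative):

```shell
# Allow the 0.0.0.0/0 route to be exchanged over the private services
# access connection.
gcloud services vpc-peerings enable-vpc-service-controls \
    --service=servicenetworking.googleapis.com \
    --network=my-vpc
```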

Service overview: private cloud to private cloud

Private clouds are managed through the VMware Engine service portal. Private clouds have their own vCenter server that is in its own management domain. The VMware stack runs on dedicated, isolated bare metal hardware nodes in Google Cloud locations. The private cloud environment is designed to eliminate single points of failure.

The following diagram shows the architecture and traffic flow when two private clouds communicate with each other.

Traffic flow when two private clouds communicate with each other.

Two private clouds in the same region communicate with each other through direct connectivity within the VMware Engine service. You can deploy multiple private clouds in the same region so that communication between them happens locally.

If private clouds are in different regions, then connectivity goes through the service producer VPC network, which is managed and owned by Google.

Customers start with a single private cloud and can add or remove nodes on demand. New nodes can be placed in an existing vSphere cluster or used to create a new cluster, all within the same private cloud.

Service overview: private cloud to VPC

This section reviews the connectivity between your VPC network and the private cloud. Your VPC network uses the private services access model to peer with the service producer VPC network, which then extends the connectivity to the VMware Engine region. Global routing is enabled on the service producer VPC network, and any networks that you create in the VMware Engine service portal are advertised automatically by the tier-0 router in NSX-T. Make sure that the peering connection has the import and export flags enabled so that routes are exchanged and there is connectivity between VMware Engine and your VPC network.

The following diagram shows the traffic flow in this case.

Private cloud to VPC: traffic flow.
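Enabling the import and export flags can be sketched as follows. The peering name servicenetworking-googleapis-com is the name typically created for a private services access connection, and customer-vpc is a placeholder network name:

```shell
# Update the private services access peering to exchange custom routes in
# both directions between the customer VPC network and the service
# producer VPC network.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=customer-vpc \
    --import-custom-routes \
    --export-custom-routes
```

You can verify the peering state and flags with `gcloud compute networks peerings list --network=customer-vpc`.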

Service overview: private cloud to Shared VPC

When you use a Shared VPC network, the connectivity is similar to the preceding example of connecting a private cloud to a VPC network. The difference is that when you connect a private cloud to a Shared VPC network, the workloads live in the service project and use the IP address space of the host project's Shared VPC network. Because of this, you must configure private services access in the host project where the Shared VPC network and Cloud Interconnect or Cloud VPN are configured.

For example, if you want to have the private cloud, IAM, and billing in a service project, make sure that the private services access connection is established in the host project Shared VPC network.

Service overview: private cloud to on-premises

In the private cloud to on-premises case, the customer has a VPC network in their own project and organization, and the connectivity runs between the private cloud and the on-premises data center.

As mentioned previously, when setting up VMware Engine, the customer needs to allocate an IP address range for the private services access connection (and ideally also reserve the ranges used by the VMware Engine service, to avoid future conflicts). Allocating that range and creating the connection provisions a service producer VPC network that connects them to the VMware Engine region where the private cloud exists.

After the VPC peering connection is created and provisioned, the service producer VPC network is connected to the customer's VPC network, which makes all subnets and IP addresses in the customer VPC network reachable from the private cloud. Make sure to enable the import and export of routes on the VPC peering connection when you configure private services access.
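The allocation and connection steps described above can be sketched as follows. All names and the CIDR range are placeholders; choose a range that doesn't conflict with your on-premises or VMware Engine networks:

```shell
# Reserve an internal IP address range for private services access.
gcloud compute addresses create vmware-engine-psa-range \
    --global \
    --purpose=VPC_PEERING \
    --addresses=192.168.255.0 \
    --prefix-length=24 \
    --network=customer-vpc

# Create the peering connection to the service producer VPC network
# using the reserved range.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=vmware-engine-psa-range \
    --network=customer-vpc
```

After the connection is established, update the peering to import and export custom routes as described earlier.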

The following diagram shows the end-to-end connectivity between a private cloud in VMware Engine and an on-premises data center.

End-to-end connectivity between a private cloud in VMware Engine and an on-premises data center.

Private Google Access: a closer look

As mentioned earlier, you can use Private Google Access to connect to Google APIs and services without giving your Google Cloud resources external IP addresses. The VMware Engine service can take advantage of this by using the internet gateway to reach Google APIs.

VM instances that only have internal IP addresses can take advantage of Private Google Access to reach the external IP addresses of Google APIs and services. When Private Google Access is configured, traffic is routed to the internet gateway and then to the service requested.

To enable Private Google Access for VMware Engine, configure the DNS server in your VMware Engine environment to resolve Google API domains to the private virtual IP address range. For more information, see Private Google Access for on-premises hosts and Configuring Private Google Access for on-premises hosts. The domain private.googleapis.com resolves to 199.36.153.8/30.

To manage DNS resolution, you can use the DNS service provided in NSX-T to forward requests to a specified DNS server. The DNS server can be a VM in VMware Engine, Cloud DNS, or an on-premises DNS server. Which of those options is used depends on how you access the internet, as described in an earlier section.
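If Cloud DNS is the forwarding target, the private.googleapis.com mapping could be sketched as follows. The zone and network names are placeholders, and the A records cover the 199.36.153.8/30 range mentioned above:

```shell
# Create a private zone so that googleapis.com names resolve to the
# private virtual IP address range for Private Google Access.
gcloud dns managed-zones create googleapis-private \
    --description="Private Google Access zone" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=customer-vpc

# A record for private.googleapis.com (199.36.153.8/30).
gcloud dns record-sets create private.googleapis.com. \
    --zone=googleapis-private --type=A --ttl=300 \
    --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11

# CNAME all other googleapis.com names to the private VIP.
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis-private --type=CNAME --ttl=300 \
    --rrdatas=private.googleapis.com.
```

The NSX-T DNS forwarder can then send googleapis.com queries to the Cloud DNS inbound forwarding address.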

Private Google Access supports most Google APIs and services, like Cloud Storage, Cloud Spanner, and BigQuery. Currently, Private Google Access doesn't support App Engine, Memorystore, Filestore, or Cloud SQL.

Following are examples of how you can use VMware Engine with Google Cloud services:

  • Access Cloud Storage from VMware VMs to export data or as an extended storage target.
  • Monitor all of your public, private, and hybrid applications using Cloud Monitoring.
  • Import data from databases into BigQuery for analytics.
  • Deploy Anthos for high-performance, private, containerized application deployments.

Private Google Access configuration depends on how internet access is enabled for VMware Engine, as shown in the following diagram. Access can be provided through either (1) the VMware Engine service internet gateway or (2) the service producer VPC network internet gateway.

Private Google Access configuration.

If internet access is provided through VMware Engine and enabled for the region, both Private Google Access and internet access use the internet gateway.

If you provide internet access on-premises or through your VPC network, then disable the VMware Engine public IP service and configure VPC Service Controls on the peering connection. This removes the default route (0.0.0.0/0) with the internet gateway as the next hop from the tenant VPC network in the Google organization. In this case, the default route from on-premises or from your VPC network is accepted, and a route to the restricted virtual IP address range 199.36.153.4/30, pointing to the internet gateway in the tenant VPC network, is added.

Option 1: Private Google Access with internet access provided by VMware Engine

If you provide internet access through VMware Engine for a region, Private Google Access and internet traffic use the internet gateway, which performs port address translation (PAT). In this case, you don't need any additional configuration beyond DNS resolution for Private Google Access.

Option 2: Private Google Access with internet access provided by on-premises or customer VPC

If you provide internet access on-premises or through your VPC network, then configure the VMware Engine service to route packets destined for Google APIs through the internet gateway in the organization's tenant VPC network, and to route all other traffic through the customer VPC network, which reaches on-premises through VPN or Interconnect. For this to work, internet access and public IP services should be disabled for the VMware Engine region, and a default route (0.0.0.0/0) should be advertised from on-premises.

If you provide internet access on-premises or through your VPC network, remove any allocated public IP addresses and point-to-site VPN gateways before you disable the public IP service. Make sure that the default route is visible in the customer VPC network and that it is exported to the tenant VPC network.

Also, you need to enable VPC Service Controls on the VPC peering connection between your VPC and VMware Engine.

Finally, access to Google APIs must use the restricted virtual IP address range, so configure DNS with the required CNAME and A records. API access then flows through the Google organization, not through the customer VPC network.
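The required records for the restricted virtual IP address range could be sketched as follows, assuming a private Cloud DNS zone as in the earlier Private Google Access example (zone and network names are placeholders):

```shell
# Create a private zone for googleapis.com in the customer VPC network.
gcloud dns managed-zones create googleapis-restricted \
    --description="Restricted Google API access zone" \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=customer-vpc

# A record for restricted.googleapis.com (199.36.153.4/30).
gcloud dns record-sets create restricted.googleapis.com. \
    --zone=googleapis-restricted --type=A --ttl=300 \
    --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7

# CNAME all other googleapis.com names to the restricted VIP.
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis-restricted --type=CNAME --ttl=300 \
    --rrdatas=restricted.googleapis.com.
```

If DNS for VMware Engine workloads is served by the NSX-T DNS forwarder or an on-premises server instead, create the equivalent A and CNAME records there.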

Private cloud to Managed Partner Services

Managed Partner Services (MPS) lets you provide mission-critical software-as-a-service (SaaS) offerings to Google Cloud customers from hardware and software that you host in an MPS colocation facility.

Traffic flow between a private cloud and MPS cannot rely on VPC peering, because VPC peering is not transitive. Instead, you need to set up a VPN between the NSX-T tier-0 router in VMware Engine and Cloud VPN in your VPC network. Routes are exchanged using Border Gateway Protocol (BGP), and the same private services access model that uses VPC peering applies to connecting to MPS. When you use Cloud VPN and BGP, make sure to configure custom route advertisements so that the MPS routes are advertised to the NSX-T tier-0 router and there is end-to-end connectivity. An example of this approach is NetApp Cloud Volumes Service for Google Cloud, which uses private services access to create a high-throughput, low-latency data-path connection. However, private cloud to managed service provider connectivity doesn't work with data stores for VMware workloads.
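The custom route advertisements described above can be sketched on the Cloud Router BGP session toward NSX-T tier-0. The router, peer, region, and advertised range are all placeholders:

```shell
# Advertise the MPS service range toward the NSX-T tier-0 router over
# the BGP session on the Cloud VPN tunnel, so that VMware Engine
# workloads learn a route to the managed service.
gcloud compute routers update-bgp-peer customer-cloud-router \
    --peer-name=nsxt-tier0-peer \
    --region=us-central1 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=10.100.0.0/16
```

Adding `--set-advertisement-groups=ALL_SUBNETS` alongside the custom range keeps the VPC subnet advertisements in place as well.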

The following diagram shows end-to-end connectivity between a private cloud in VMware Engine and MPS.

End-to-end connectivity between a private cloud in VMware Engine and MPS.

If you need to use an MPS, you have disabled internet access from VMware Engine, and your default route comes from the service producer VPC network, you can set up a VPN connection that uses RFC 1918 IP addresses. The VPN connection runs between NSX-T and a virtual appliance in the customer VPC network, such as a third-party virtual network appliance from Cloud Marketplace.

Additional resources: VMware

For more information about the VMware stack, see VMware component versions and the official NSX 3.0 Design Guide.