VMware Engine network security using centralized appliances

Last reviewed 2023-07-26 UTC

As part of your organization's defense-in-depth strategy, you might have security policies that require the use of centralized network appliances for in-line detection and blocking of suspicious network activity. This document helps you to design the following advanced network-protection features for Google Cloud VMware Engine workloads:

  • Mitigation of distributed denial-of-service (DDoS) attacks
  • SSL offloading
  • Next-generation firewalls (NGFW)
  • Intrusion Prevention System (IPS) and Intrusion Detection System (IDS)
  • Deep packet inspection (DPI)

The architectures in this document use Cloud Load Balancing and network appliances from Google Cloud Marketplace. Cloud Marketplace offers production-ready, vendor-supported network appliances from Google Cloud security partners for your enterprise IT needs.

The guidance in this document is intended for security architects and network administrators who design, provision, and manage network connectivity for VMware Engine workloads. The document assumes that you're familiar with Virtual Private Cloud (VPC), VMware vSphere, VMware NSX, network address translation (NAT), and Cloud Load Balancing.

Architecture

The following diagram shows an architecture for network connectivity to VMware Engine workloads from on-premises networks and from the internet. Later in this document, this architecture is extended to meet the requirements of specific use cases.

Figure 1. Basic architecture for network connectivity to VMware Engine workloads.

Figure 1 shows the following main components of the architecture:

  1. VMware Engine private cloud: an isolated VMware stack that consists of virtual machines (VMs), storage, networking infrastructure, and a VMware vCenter Server. VMware NSX-T provides networking and security features such as microsegmentation and firewall policies. The VMware Engine VMs use IP addresses from network segments that you create in your private cloud.
  2. Public IP address service: provides external IP addresses to the VMware Engine VMs to enable ingress access from the internet. The internet gateway provides egress access by default for the VMware Engine VMs.
  3. VMware Engine tenant VPC network: a dedicated, Google-managed VPC network that's used with every VMware Engine private cloud to enable communication with Google Cloud services.
  4. Customer VPC networks:

    • Customer VPC network 1 (external): a VPC network that hosts the public-facing interface of your network appliance and load balancer.
    • Customer VPC network 2 (internal): a VPC network that hosts the internal interface of the network appliance and is peered with the VMware Engine tenant VPC network by using the private services access model.
  5. Private services access: a private access model that uses VPC Network Peering to enable connectivity between Google-managed services and your VPC networks.

  6. Network appliances: networking software that you choose from Cloud Marketplace and deploy on Compute Engine instances. For more information about deploying third-party network appliances on Google Cloud, see Centralized network appliances on Google Cloud.

  7. Cloud Load Balancing: a Google-managed service that you can use to manage traffic to highly available distributed workloads in Google Cloud. You can choose a load balancer type that suits your traffic protocol and access requirements. The architectures in this document don't use the built-in NSX-T load balancers.

Configuration notes

The following diagram shows the resources that are required to provide network connectivity for VMware Engine workloads:

Figure 2. Resources required for network connectivity to VMware Engine workloads.

Figure 2 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions.

  1. Create the external and internal VPC networks and subnets by following the instructions in Creating a custom mode VPC network. For example commands, see the sketch after this list.

    • For each subnet, choose an IP address range that's unique across the VPC networks.
    • The Management VPC network that's shown in the architecture diagram is optional. If needed, you can use it to host the management network interfaces for your network appliances.
  2. Deploy the required network appliances from Cloud Marketplace.

    • For high availability of the network appliances, deploy each appliance in a pair of VMs distributed across two zones.

      You can deploy the network appliances in instance groups. The instance groups can be managed instance groups (MIGs) or unmanaged instance groups, depending on your management or vendor-support requirements.

    • Provision the network interfaces as follows:

      • nic0 in the external VPC network to handle traffic to and from public sources.
      • nic1 for management operations, if the appliance vendor requires it.
      • nic2 in the internal VPC network for internal communication with the VMware Engine resources.

      Deploying the network interfaces in separate VPC networks helps you to ensure security-zone segregation at the interface level for public and on-premises connections.

  3. Set up VMware Engine: create a private cloud and the network segments that your workload VMs use.

  4. Use private services access to set up VPC Network Peering to connect the internal VPC network to the VPC network that VMware Engine manages.

  5. If you need hybrid connectivity to your on-premises network, use Cloud VPN or Cloud Interconnect.
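
The following commands sketch steps 1 and 4 from the preceding list by using the gcloud CLI. This is a minimal sketch, not a complete deployment: the network names, region, and IP ranges are illustrative assumptions, and the reserved private services access range must not overlap with any of your other networks.

    # Step 1: custom mode VPC networks and subnets (names, region, and
    # ranges are illustrative; keep each range unique across networks).
    gcloud compute networks create external-vpc --subnet-mode=custom
    gcloud compute networks create internal-vpc --subnet-mode=custom
    gcloud compute networks subnets create external-subnet \
        --network=external-vpc --region=us-central1 --range=10.0.1.0/24
    gcloud compute networks subnets create internal-subnet \
        --network=internal-vpc --region=us-central1 --range=10.0.2.0/24

    # Step 4: private services access from the internal VPC network.
    # Reserve a range for the service producer, then create the peering.
    gcloud compute addresses create psa-range \
        --global --purpose=VPC_PEERING \
        --addresses=192.168.0.0 --prefix-length=16 \
        --network=internal-vpc
    gcloud services vpc-peerings connect \
        --service=servicenetworking.googleapis.com \
        --ranges=psa-range --network=internal-vpc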

You can extend the architecture in figure 2 for the following use cases:

  • Use case: NGFW for public-facing VMware Engine workloads
    Products and services used: network appliances from Cloud Marketplace and external passthrough Network Load Balancers
  • Use case: NGFW, DDoS mitigation, SSL offloading, and Content Delivery Network (CDN) for public-facing VMware Engine workloads
    Products and services used: network appliances from Cloud Marketplace and external Application Load Balancers
  • Use case: NGFW for private communication between VMware Engine workloads and on-premises data centers or other cloud providers
    Products and services used: network appliances from Cloud Marketplace and internal passthrough Network Load Balancers
  • Use case: Centralized egress points to the internet for VMware Engine workloads
    Products and services used: network appliances from Cloud Marketplace and internal passthrough Network Load Balancers

The following sections describe these use cases and provide an overview of the configuration tasks to implement the use cases.

NGFW for public-facing workloads

This use case has the following requirements:

  • A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L4 load balancer as the common frontend.
  • Protection for public VMware Engine workloads by using an IPS/IDS, NGFW, DPI, or NAT solution.
  • More public IP addresses than are supported by the public IP address service of VMware Engine.

The following diagram shows the resources that are required to provision an NGFW for your public-facing VMware Engine workloads:

Figure 3. Resources required to provision an NGFW for public-facing VMware Engine workloads.

Figure 3 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions.

  1. Provision an external passthrough Network Load Balancer in the external VPC network as the public-facing ingress entry point for VMware Engine workloads.

    • Create multiple forwarding rules to support multiple VMware Engine workloads.
    • Configure each forwarding rule with a unique IP address and a TCP or UDP port number.
    • Configure the network appliances as backends for the load balancer.
  2. Configure the network appliances to perform destination-NAT (DNAT), translating the forwarding rule's public IP address to the internal IP addresses of the VMs that host the public-facing applications in VMware Engine.

    • The network appliances must perform source-NAT (SNAT) for the traffic from the nic2 interface to ensure a symmetric return path.
    • The network appliances must also route traffic destined for VMware Engine networks through the nic2 interface to the subnet's gateway (the first IP address of the subnet).
    • For the health checks to pass, the network appliances must use secondary or loopback interfaces to respond to the forwarding rules' IP addresses.
  3. Set up the internal VPC network's route table to forward VMware Engine traffic to VPC Network Peering as a next hop.

In this configuration, the VMware Engine VMs use the internet gateway service of VMware Engine for egress to internet resources. However, ingress is managed by the network appliances for the public IP addresses that are mapped to the VMs.
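
The following commands are a minimal gcloud sketch of step 1 in the preceding list. All names, the region, the zone, and the port are illustrative assumptions; the DNAT, SNAT, and loopback configuration in step 2 happens inside the appliances and depends on your vendor.

    # Regional health check and backend service for the appliance
    # instance groups (add one backend instance group per zone).
    gcloud compute health-checks create tcp appliance-hc \
        --region=us-central1 --port=443
    gcloud compute backend-services create public-nlb-backend \
        --load-balancing-scheme=EXTERNAL --protocol=TCP \
        --region=us-central1 \
        --health-checks=appliance-hc --health-checks-region=us-central1
    gcloud compute backend-services add-backend public-nlb-backend \
        --region=us-central1 \
        --instance-group=appliance-mig-a --instance-group-zone=us-central1-a

    # One forwarding rule per VMware Engine workload, each with its own
    # reserved public IP address and TCP or UDP port.
    gcloud compute addresses create workload1-ip --region=us-central1
    gcloud compute forwarding-rules create workload1-fr \
        --region=us-central1 --load-balancing-scheme=EXTERNAL \
        --ip-protocol=TCP --ports=443 \
        --address=workload1-ip --backend-service=public-nlb-backend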

NGFW, DDoS mitigation, SSL offloading, and CDN

This use case has the following requirements:

  • A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L7 load balancer as the common frontend, and URL mapping to route traffic to the appropriate backend.
  • Protection for public VMware Engine workloads by using an IPS/IDS, NGFW, DPI, or NAT solution.
  • L3 to L7 DDoS mitigation for public VMware Engine workloads by using Google Cloud Armor.
  • SSL termination using Google-managed SSL certificates, or SSL policies to control the SSL versions and ciphers that are used for HTTPS or SSL connections to public-facing VMware Engine workloads.
  • Accelerated network delivery for VMware Engine workloads by using Cloud CDN to serve content from locations that are close to the users.

The following diagram shows the resources that are required to provision NGFW capability, DDoS mitigation, SSL offloading, and CDN for your public-facing VMware Engine workloads:

Figure 4. Resources required to provision an NGFW, DDoS mitigation, SSL offloading, and CDN for public-facing VMware Engine workloads.

Figure 4 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions.

  1. Provision a global external Application Load Balancer in the external VPC network as the public-facing ingress entry point for VMware Engine workloads.

    • Create multiple forwarding rules to support multiple VMware Engine workloads.
    • Configure each forwarding rule with a unique public IP address, and set it up to listen for HTTP(S) traffic.
    • Configure the network appliances as backends for the load balancer.

    In addition, you can do the following:

    • To protect the network appliances, set up Google Cloud Armor security policies on the load balancer.
    • To serve cached content from locations that are close to your users, set up Cloud CDN for the MIGs that host the network appliances, which act as the CDN backends. The load balancer provides routing, health checks, and an anycast IP address for these backends.
    • To route requests to different backends, set up URL mapping on the load balancer. For example, route requests to /api to Compute Engine VMs, requests to /images to a Cloud Storage bucket, and requests to /app through the network appliances to your VMware Engine VMs.
  2. Configure each network appliance to perform destination-NAT (DNAT), translating the internal IP address of its nic0 interface to the internal IP addresses of the VMs that host the public-facing applications in VMware Engine.

    • The network appliances must perform SNAT for the source traffic from the nic2 interface (internal IP address) to ensure a symmetric return path.
    • In addition, the network appliances must route traffic destined for VMware Engine networks through the nic2 interface to the subnet gateway (the first IP address of the subnet).

    The DNAT step is necessary because the load balancer is a proxy-based service that's implemented on a Google Front End (GFE) service. Depending on the location of your clients, multiple GFEs can initiate HTTP(S) connections to the internal IP addresses of the backend network appliances. The packets from the GFEs have source IP addresses from the same range that's used for the health-check probes (35.191.0.0/16 and 130.211.0.0/22), not the original client IP addresses. The load balancer preserves the original client IP addresses in the X-Forwarded-For header.

    For the health checks to pass, configure the network appliances to respond to the forwarding rule's IP address by using secondary or loopback interfaces.

  3. Set up the internal VPC network's route table to forward VMware Engine traffic to VPC Network Peering.

    In this configuration, the VMware Engine VMs use the internet gateway service of VMware Engine for egress to the internet. However, ingress is managed by the network appliances for the public IP addresses of the VMs.
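
The following commands are a minimal gcloud sketch of the optional hardening in step 1: attaching a Google Cloud Armor policy, enabling Cloud CDN, and creating a URL map. The backend service, bucket, and host names are illustrative assumptions, and the security policy shown is an empty shell for your own rules.

    # Attach a Google Cloud Armor security policy to the global backend
    # service that fronts the network appliances.
    gcloud compute security-policies create edge-policy \
        --description="Ingress protection for VMware Engine workloads"
    gcloud compute backend-services update appliance-backend \
        --global --security-policy=edge-policy

    # Enable Cloud CDN on the backend service for the appliance MIGs.
    gcloud compute backend-services update appliance-backend \
        --global --enable-cdn

    # URL map that routes /api to Compute Engine VMs, /images to a
    # Cloud Storage bucket, and /app through the appliances to
    # VMware Engine.
    gcloud compute url-maps create vmware-ingress-map \
        --default-service=appliance-backend
    gcloud compute url-maps add-path-matcher vmware-ingress-map \
        --path-matcher-name=workloads \
        --default-service=appliance-backend \
        --new-hosts=app.example.com \
        --backend-service-path-rules="/api/*=gce-backend,/app/*=appliance-backend" \
        --backend-bucket-path-rules="/images/*=images-bucket"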

NGFW for private connectivity

This use case has the following requirements:

  • A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L4 load balancer as the common frontend.
  • Protection for your private VMware Engine workloads by using an IPS/IDS, NGFW, DPI, or NAT solution.
  • Cloud Interconnect or Cloud VPN for connectivity with the on-premises network.

The following diagram shows the resources that are required to provision an NGFW for private connectivity between your VMware Engine workloads and on-premises networks or other cloud providers:

Figure 5. Resources required to provision an NGFW for private connectivity to VMware Engine workloads.

Figure 5 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions.

  1. Provision an internal passthrough Network Load Balancer in the external VPC network, with a single forwarding rule that listens for all traffic. Configure the network appliances as backends for the load balancer.

  2. Set up the external VPC network's route table to point to the forwarding rule as the next hop for traffic that's destined for VMware Engine networks.

  3. Configure the network appliances as follows:

    • Route traffic destined for VMware Engine networks through the nic2 interface to the subnet gateway (the first IP address of the subnet).
    • For the health checks to pass, configure the network appliances to respond to the forwarding rule's IP address by using secondary or loopback interfaces.
    • For the health checks to pass for the internal load balancers, configure multiple virtual routing domains to ensure proper routing. This step is necessary to allow the nic2 interface to return health-check traffic that is sourced from the public ranges (35.191.0.0/16 and 130.211.0.0/22), while the default route of the network appliances points to the nic0 interface. For more information about IP ranges for load-balancer health checks, see Probe IP ranges and firewall rules.
  4. Set up the route table of the internal VPC network to forward VMware Engine traffic to VPC Network Peering as a next hop.

  5. For return traffic, and for traffic that's initiated from VMware Engine to remote networks, configure a route that uses the internal passthrough Network Load Balancer as its next hop, and advertise that route over VPC Network Peering to the private services access VPC network.
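
The following commands are a minimal gcloud sketch of steps 1, 2, and 5 in the preceding list. The names, region, and the VMware Engine destination range are illustrative assumptions; the peering name shown is the one that private services access creates by default.

    # Internal passthrough Network Load Balancer in the external VPC
    # network, with the appliances as backends (health check assumed
    # to exist, as in the earlier sketches).
    gcloud compute backend-services create private-ilb-backend \
        --load-balancing-scheme=INTERNAL --protocol=TCP \
        --region=us-central1 \
        --health-checks=appliance-hc --health-checks-region=us-central1
    gcloud compute forwarding-rules create private-ilb-fr \
        --region=us-central1 --load-balancing-scheme=INTERNAL \
        --network=external-vpc --subnet=external-subnet \
        --ip-protocol=TCP --ports=ALL \
        --backend-service=private-ilb-backend

    # Step 2: send traffic destined for VMware Engine networks to the
    # load balancer as the next hop.
    gcloud compute routes create to-vmware-engine \
        --network=external-vpc --destination-range=10.100.0.0/16 \
        --next-hop-ilb=private-ilb-fr --next-hop-ilb-region=us-central1

    # Step 5: after creating an equivalent load balancer and route in the
    # internal VPC network for return traffic, export the custom route
    # over the private services access peering so VMware Engine learns it.
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=internal-vpc --export-custom-routes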

Centralized egress to the internet

This use case has the following requirements:

  • Centralized URL filtering, logging, and traffic enforcement for internet egress.
  • Customized protection for VMware Engine workloads by using network appliances from Cloud Marketplace.

The following diagram shows the resources required to provision centralized egress points from VMware Engine workloads to the internet:

Figure 6. Resources required to provision centralized egress to the internet for VMware Engine workloads.

Figure 6 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions.

  1. Provision an internal passthrough Network Load Balancer in the internal VPC network as the egress point for VMware Engine workloads.

  2. Configure the network appliances to perform SNAT for egress traffic by using their public (nic0) IP addresses. For the health checks to pass, the network appliances must respond to the forwarding rule's IP address by using secondary or loopback interfaces.

  3. Configure the internal VPC network to advertise a default route over VPC Network Peering to the private services access VPC network, with the internal load balancer's forwarding rule as the next hop.

  4. To allow traffic to egress through the network appliances instead of the internet gateway, use the same process as you would to enable the routing of internet traffic through an on-premises connection.
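
The following commands are a minimal gcloud sketch of steps 1 and 3 in the preceding list. Names and the region are illustrative assumptions, and the route priority is an arbitrary value that must take precedence over any other default route in the internal VPC network.

    # Internal passthrough Network Load Balancer in the internal VPC
    # network as the centralized egress point.
    gcloud compute backend-services create egress-ilb-backend \
        --load-balancing-scheme=INTERNAL --protocol=TCP \
        --region=us-central1 \
        --health-checks=appliance-hc --health-checks-region=us-central1
    gcloud compute forwarding-rules create egress-ilb-fr \
        --region=us-central1 --load-balancing-scheme=INTERNAL \
        --network=internal-vpc --subnet=internal-subnet \
        --ip-protocol=TCP --ports=ALL \
        --backend-service=egress-ilb-backend

    # Custom default route whose next hop is the load balancer, exported
    # to VMware Engine over the private services access peering.
    gcloud compute routes create egress-default \
        --network=internal-vpc --destination-range=0.0.0.0/0 \
        --next-hop-ilb=egress-ilb-fr --next-hop-ilb-region=us-central1 \
        --priority=900
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=internal-vpc --export-custom-routes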

What's next