Hybrid and Multi-Cloud Network Topologies

This article is the third part of a multi-part series that discusses hybrid and multi-cloud deployments, architecture patterns, and network topologies. This part explores common network topologies that you can use for hybrid and multi-cloud setups. The article describes which scenarios and architectural patterns these topologies are best suited for, and provides best practices for implementing them by using Google Cloud Platform (GCP).


Connecting private computing environments to GCP in a secure and reliable manner is key to any successful hybrid or multi-cloud deployment. The network topology that you choose for a hybrid and multi-cloud setup needs to meet the unique requirements of your enterprise workloads and suit the architecture patterns that you intend to apply. Although each topology might need tailoring, there are common topologies that can be used as a blueprint.

Mirrored

The idea of the mirrored topology is to have the cloud computing environment and private computing environment mirror each other. This topology applies primarily to setups that follow the environment hybrid pattern, where you run development and testing workloads in one environment while you run staging and production workloads in the other.

Testing and production workloads are not supposed to communicate directly with one another. However, it should be possible to manage and deploy both groups of workloads in a consistent manner. Accordingly, you connect the two computing environments in a way that meets the following requirements:

  • Continuous integration/continuous deployment (CI/CD) and administrative systems can deploy and manage workloads across computing environments.
  • Monitoring and other administrative tooling works across computing environments.
  • Workloads cannot communicate across computing environments.

Reference architecture

The following diagram shows a reference architecture that matches these requirements.

[Diagram: reference architecture that meets the preceding requirements]

  • On GCP, you use two separate virtual private clouds (VPCs)—one Shared VPC for development and testing workloads, and an additional VPC for all CI/CD and administrative tooling. The two VPCs are peered, allowing cross-VPC communication that uses internal IP addresses. The peering allows CI/CD and administrative systems to deploy and manage development and testing workloads.
  • Additionally, you connect the CI/CD VPC to the network running the production workloads in the private computing environment. You establish this connection by using either Cloud Interconnect or Cloud VPN. This connection allows you to deploy and manage production workloads.
  • All environments share a common, overlap-free RFC 1918 IP address space.
  • Optionally, you can set up firewall rules to ensure that production workloads and development and testing workloads cannot communicate with one another.
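
The peering and isolation steps above can be sketched with gcloud. All network names and the production IP range in this example are hypothetical placeholders:

```shell
# Peer the CI/CD VPC with the development/testing Shared VPC.
# (Peering must be created from both sides to become active.)
gcloud compute networks peerings create cicd-to-devtest \
    --network=cicd-vpc \
    --peer-network=devtest-vpc

gcloud compute networks peerings create devtest-to-cicd \
    --network=devtest-vpc \
    --peer-network=cicd-vpc

# Block traffic from the production range (assumed to be
# 10.10.0.0/16 here) into the development/testing VPC.
gcloud compute firewall-rules create deny-prod-to-devtest \
    --network=devtest-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.10.0.0/16 \
    --priority=100
```

If the two VPCs live in different projects, add `--peer-project` to each peering command.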

Variations

For a variation of this topology, you can use separate VPCs for development and different testing stages, all peered with the CI/CD VPC.

When you run VM-based workloads, it might also be beneficial to run CI agents in each environment that take over the job of conducting deployments. Depending on the capabilities of your CI system, using agents might allow you to further lock down communication between environments and to enforce tighter control over which systems are allowed to communicate with one another.

The following diagram shows what this variation might look like.

[Diagram: mirrored topology with agents]

When you exclusively run Kubernetes workloads, you might not need to peer the two VPCs. Because the Google Kubernetes Engine (GKE) control plane is exposed through a public endpoint, you can use the master authorized networks feature and role-based access control (RBAC) to ensure that only your CI systems are permitted to perform deployments. The following diagram shows this topology.

[Diagram: mirrored topology]
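
As a sketch of this GKE-only variation, the following command restricts API server access to a CI system's address range. The cluster name, zone, and CIDR range are hypothetical:

```shell
# Allow access to the GKE API server only from the CI system's
# address range (203.0.113.0/28 is a documentation placeholder).
gcloud container clusters update dev-cluster \
    --zone=us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/28
```

Combine this with RBAC rules inside the cluster so that the CI system's identity is the only one allowed to create or modify deployments.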

Best practices

  • Ensure that any CI/CD systems required for deploying or reconfiguring production deployments are deployed in a highly available fashion. Additionally, consider using redundant virtual private network (VPN) or interconnect links to increase availability.
  • To allow VM instances in the development and testing VPC to access the internet, deploy NAT gateways in the same VPC. That way, you don't need to equip the instances with public IP addresses.
  • To avoid communication over the public internet, use Private Google Access so that workloads in the VPC can reach Google APIs and services by using private IP addresses.
  • Also consider the general best practices for hybrid and multi-cloud networking topologies.
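
A minimal sketch of enabling Private Google Access on a subnet; the subnet name and region are placeholders:

```shell
# Enable Private Google Access so that instances without external IP
# addresses in this subnet can still reach Google APIs and services.
gcloud compute networks subnets update devtest-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```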

Meshed

The idea of the meshed topology is to establish a flat network that spans multiple computing environments in which all systems can communicate with one another. This topology applies primarily to tiered, partitioned, or bursting setups, and requires that you connect computing environments in a way that meets the following requirements:

  • Workloads can communicate with one another across environment boundaries over UDP or TCP by using private RFC 1918 IP addresses.

  • You can use firewall rules to restrict traffic flows in a fine-grained fashion, both between and within computing environments.

Reference architecture

The following diagram shows a reference architecture that satisfies these requirements.

[Diagram: meshed reference architecture]

  • On the GCP side, you deploy workloads into a single shared VPC.

  • You connect the VPC to the network in the private computing environment by using either Cloud Interconnect or Cloud VPN. This setup ensures that communication between environments can be conducted using private IP addresses.

  • You use Cloud Router to dynamically exchange routes between environments.

  • All environments share a common, overlap-free RFC 1918 IP address space.
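
The dynamic route exchange described above might look as follows in gcloud; the router name, region, ASNs, tunnel name, and link-local addresses are all placeholders:

```shell
# Create a Cloud Router that speaks BGP on behalf of the VPC.
gcloud compute routers create hybrid-router \
    --network=app-vpc \
    --region=us-central1 \
    --asn=65001

# Attach the router to an existing VPN tunnel.
gcloud compute routers add-interface hybrid-router \
    --interface-name=if-onprem \
    --ip-address=169.254.0.1 \
    --mask-length=30 \
    --vpn-tunnel=onprem-tunnel \
    --region=us-central1

# Add the on-premises router as a BGP peer.
gcloud compute routers add-bgp-peer hybrid-router \
    --peer-name=onprem-peer \
    --interface=if-onprem \
    --peer-ip-address=169.254.0.2 \
    --peer-asn=65002 \
    --region=us-central1
```

With this in place, subnet routes are advertised to the on-premises side automatically, and routes learned from the peer are programmed into the VPC.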

Variations

You can implement additional deep packet inspection or other advanced firewalling mechanisms that exceed the capabilities of GCP firewall rules. To implement these mechanisms, you can extend the topology to pass all cross-environment traffic through a firewall appliance, as shown in the following diagram.

[Diagram: extended meshed topology]

  • You establish the VPN or interconnect link between a dedicated transit VPC and the VLAN in the private computing environment.

  • You establish the connection between the transit VPC and the application VPC by using VMs that are running the firewall appliance. You configure these VMs for IP forwarding. The VMs use multiple networking interfaces: one connected to the transit VPC, and one to the application VPC.

  • The firewall appliance can also serve as a NAT gateway for VM instances that are deployed in the application VPC. This gateway allows these instances to access the internet without needing public IP addresses.
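
A sketch of provisioning such a multi-NIC appliance VM follows. The zone, subnet names, and image are placeholders; a real deployment would typically use a vendor-provided firewall appliance image rather than the plain Debian image assumed here:

```shell
# Appliance VM with IP forwarding enabled and one NIC in each VPC.
# Both subnets must be in the region that contains the chosen zone.
gcloud compute instances create fw-appliance-1 \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=subnet=transit-subnet,no-address \
    --network-interface=subnet=app-subnet,no-address \
    --image-family=debian-12 \
    --image-project=debian-cloud
```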

Best practices

  • Consider the general best practices for hybrid and multi-cloud networking topologies, which are discussed later in this article.

Gated egress

The idea of the gated egress topology is to expose selected APIs from the private computing environment to workloads that are deployed in GCP without exposing them to the public internet. You can facilitate this limited exposure through an API gateway that serves as a facade for existing workloads. You deploy the gateway in a DMZ while deploying actual workloads in a dedicated, more highly secured network within the private computing environment.

The gated egress topology applies primarily to tiered setups and requires that you connect computing environments in a way that meets the following requirements:

  • Workloads that you deploy in GCP can communicate with the API gateway by using private IP addresses. Other systems in the private computing environment cannot be reached from within GCP.

  • Communication from the private computing environment to any workloads deployed in GCP is not allowed.

Reference architecture

The following diagram shows a reference architecture that satisfies these requirements:

[Diagram: gated egress topology]

  • On the GCP side, you deploy workloads into a Shared VPC.

  • Using either Cloud Interconnect or Cloud VPN, you connect the VPC to a DMZ in the private computing environment, allowing calls to the API gateway.

  • Using firewall rules, you disallow incoming connections to the VPC.

  • Optionally, you use Cloud Router to dynamically exchange routes between environments.

  • All environments share a common, overlap-free RFC 1918 IP address space.
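
The ingress restriction described above might be expressed as follows; the network name and the on-premises range are placeholders. Because VPC firewall rules are stateful, replies to connections that workloads in the VPC initiate toward the API gateway are still allowed:

```shell
# Deny new inbound connections from the private computing
# environment (10.10.0.0/16 is a placeholder for its range).
gcloud compute firewall-rules create deny-ingress-from-onprem \
    --network=app-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.10.0.0/16
```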

Variations

To access the internet, VM instances that you deploy in the application VPC must have external IP addresses. To avoid having to set these external addresses, you can deploy NAT gateways into the same VPC, as the following diagram shows.

[Diagram: variations in gated egress topology]

Best practices

  • Consider the general best practices for hybrid and multi-cloud networking topologies, which are discussed later in this article.

Gated ingress

The idea of the gated ingress topology is to expose selected APIs of workloads running in GCP to the private computing environment without exposing them to the public internet. This topology is the counterpart to the gated egress scenario and is well suited for edge hybrid scenarios.

You expose selected APIs through an API gateway that you make accessible to the private computing environment, as follows:

  • Workloads that you deploy in the private computing environment are able to communicate with the API gateway by using private IP addresses. Other systems deployed in GCP cannot be reached.

  • Communication from GCP to the private computing environment is not allowed.

Reference architecture

The following diagram shows a reference architecture that meets these requirements.

[Diagram: gated ingress topology]

  • On the GCP side, you deploy workloads into an application VPC.

  • You establish a Cloud Interconnect or Cloud VPN connection between a dedicated transit VPC and the private computing environment.

  • You establish the connection between the transit VPC and the application VPC by using VMs that are running the API gateway. These VMs use two networking interfaces: one connected to the transit VPC, and one to the application VPC. To balance traffic to multiple API gateway instances, you configure an internal load balancer in the transit VPC.

  • You deploy a NAT gateway in the application VPC to allow workloads to access the internet. This gateway avoids having to equip VM instances with external IP addresses, which you don't want in systems that are deployed behind an API gateway.

  • Optionally, you can use Cloud Router to dynamically exchange routes between environments.

  • All environments share a common, overlap-free RFC 1918 IP address space.
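
The internal load balancer in front of the API gateway instances might be set up roughly as follows; all resource names, the region, and the port are placeholders, and the gateway VMs are assumed to be grouped in an instance group:

```shell
# Health check for the API gateway instances.
gcloud compute health-checks create tcp gw-health-check --port=443

# Internal backend service pointing at the gateway instance group.
gcloud compute backend-services create gw-ilb-backend \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=gw-health-check \
    --region=us-central1

gcloud compute backend-services add-backend gw-ilb-backend \
    --instance-group=api-gw-group \
    --instance-group-zone=us-central1-a \
    --region=us-central1

# Internal forwarding rule in the transit VPC; this is the address
# that on-premises clients use to reach the API gateway.
gcloud compute forwarding-rules create gw-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL \
    --network=transit-vpc \
    --subnet=transit-subnet \
    --ip-protocol=TCP \
    --ports=443 \
    --backend-service=gw-ilb-backend \
    --region=us-central1
```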

Best practices

  • Consider the general best practices for hybrid and multi-cloud networking topologies, which are discussed later in this article.

Gated ingress and egress

Combining gated ingress and egress allows bidirectional usage of selected APIs between workloads that are running in GCP and in a private computing environment. On both sides, you use API gateways to expose selected APIs and to optionally authenticate, authorize, and audit calls.

The API communication works as follows:

  • Workloads that you deploy in GCP can communicate with the API gateway by using private IP addresses. Other systems deployed in the private computing environment cannot be reached.

  • Conversely, workloads that you deploy in the private computing environment can communicate with the GCP-side API gateway by using private IP addresses. Other systems deployed in GCP cannot be reached.

Reference architecture

The following diagram shows a reference architecture for the gated ingress and egress topology.

[Diagram: gated ingress and egress topology]

  • On the GCP side, you deploy workloads to a Shared VPC and do not expose them to the internet.

  • You establish a Cloud Interconnect or Cloud VPN connection between a dedicated transit VPC and the private computing environment.

  • You establish the connection between the transit VPC and the application VPC by using VMs that are running the API gateway. These VMs use two networking interfaces: one connected to the transit VPC, and one to the application VPC. To balance traffic to multiple API gateway instances, you configure an internal load balancer in the transit VPC.

  • You also use VM instances that are serving as a NAT gateway that connects to both VPCs. These instances allow workloads to access the internet and to communicate with the API gateway that is running in the private computing environment.

  • Optionally, you can use Cloud Router to dynamically exchange routes between environments.

  • All environments share a common, overlap-free RFC 1918 IP address space.

Best practices

  • Consider the general best practices for hybrid and multi-cloud networking topologies, which are discussed later in this article.

Handover

The idea of the handover topology is to use GCP-provided storage services to connect a private computing environment to projects in GCP. This topology applies primarily to setups that follow the analytics pattern, where:

  • Workloads that are running in a private computing environment upload data to shared storage locations. Depending on use cases, uploads might happen in bulk or in small messages.

  • GCP-hosted workloads then consume data from these locations and process it in a streaming or batch fashion.

Reference architecture

The following diagram shows a reference architecture for the handover topology.

[Diagram: reference architecture for the handover topology]

  • On the GCP side, you deploy workloads into an application VPC. These workloads can include data processing, analytics, and analytics-related frontend applications.

  • To securely expose frontend applications to corporate users, you can use Cloud Identity-Aware Proxy.

  • You use a set of Cloud Storage buckets or Cloud Pub/Sub queues to upload data from the private computing environment and to make it available for further processing by workloads deployed in GCP. Using IAM policies, you can restrict access to trusted workloads.

  • Because there is no private network connectivity between environments, RFC 1918 IP address spaces are allowed to overlap between environments.
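
A sketch of provisioning hypothetical handover locations: a Cloud Storage bucket that the on-premises upload identity can write to but not read, plus a Cloud Pub/Sub topic for small messages. All names are placeholders:

```shell
# Regional bucket that serves as the handover location.
gsutil mb -l us-central1 gs://example-handover-bucket

# Let the uploader create objects, but not read or list them.
gsutil iam ch \
    serviceAccount:uploader@example-project.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://example-handover-bucket

# Alternatively, hand over small messages through Cloud Pub/Sub.
gcloud pubsub topics create handover-topic
gcloud pubsub subscriptions create handover-subscription \
    --topic=handover-topic
```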

Best practices

  • Consider the general best practices for hybrid and multi-cloud networking topologies, which are discussed in the next section.

Best practices for hybrid and multi-cloud networking topologies

  • On the Google Cloud Platform side, take advantage of Shared VPCs. A Shared VPC is a VPC that can be used across multiple Google Cloud Platform projects, avoiding the need to maintain many individual VPCs. Shared VPCs also allow you to manage peering configuration, subnets, firewall rules, and permissions in a centralized fashion.

  • When managing firewall rules, prefer using service account–based filtering over network tag–based filtering.

  • Avoid using VPCs to isolate individual workloads from one another. Instead, prefer using subnets and firewall rules. When using GKE, you can complement this approach by using network policies.

  • While Cloud VPN ensures that traffic between environments is encrypted, Cloud Interconnect does not. To secure communication between workloads, consider using Transport Layer Security (TLS).

  • Follow our best practices for creating high-throughput VPNs.

  • Allocate enough address space from your existing RFC 1918 IP address space to accommodate all cloud-hosted systems.

  • When using GKE, consider using private clusters and be mindful of the network size requirements.

  • Use Private Google Access to allow VM instances running in GCP that do not have an assigned external IP address to access Google services.
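
Several of the preceding topologies require a common, overlap-free RFC 1918 address space. As a quick sanity check when planning allocations, the following small bash sketch tests whether two IPv4 CIDR ranges overlap:

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "overlap" if two CIDR ranges intersect, "disjoint" otherwise.
# Two ranges overlap exactly when they agree under the shorter prefix mask.
cidr_overlap() {
  local ip1 ip2 len1=${1#*/} len2=${2#*/} min mask
  ip1=$(ip_to_int "${1%/*}")
  ip2=$(ip_to_int "${2%/*}")
  min=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  if [ $(( ip1 & mask )) -eq $(( ip2 & mask )) ]; then
    echo overlap
  else
    echo disjoint
  fi
}

cidr_overlap 10.0.0.0/8 10.128.0.0/16     # prints: overlap
cidr_overlap 192.168.0.0/16 172.16.0.0/12 # prints: disjoint
```

Run the check for every pair of ranges across environments before committing to an allocation plan.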

