This article is the third part of a multi-part series that discusses hybrid and multi-cloud deployments, architecture patterns, and network topologies. This part explores common network topologies that you can use for hybrid and multi-cloud setups. The article describes which scenarios and architectural patterns these topologies are best suited for, and provides best practices for implementing them by using Google Cloud.
The series consists of these parts:
- Hybrid and multi-cloud patterns and practices
- Hybrid and multi-cloud architecture patterns
- Hybrid and multi-cloud network topologies (this article)
Connecting private computing environments to Google Cloud in a secure and reliable manner is key to any successful hybrid or multi-cloud deployment. The network topology that you choose for a hybrid and multi-cloud setup needs to meet the unique requirements of your enterprise workloads and suit the architecture patterns that you intend to apply. Although each topology might need tailoring, there are common topologies that can be used as a blueprint.
Mirrored
The idea of the mirrored topology is to have the cloud computing environment and private computing environment mirror each other. This topology applies primarily to setups that follow the environment hybrid pattern, where you run development and testing workloads in one environment while you run staging and production workloads in the other.
Testing and production workloads are not supposed to communicate directly with one another. However, it should be possible to manage and deploy both groups of workloads in a consistent manner. Accordingly, you connect the two computing environments in a way that meets the following requirements:
- Continuous integration/continuous deployment (CI/CD) and administrative systems can deploy and manage workloads across computing environments.
- Monitoring and other administrative tooling works across computing environments.
- Workloads cannot communicate across computing environments.
Reference architecture
The following diagram shows a reference architecture that matches these requirements.
- On Google Cloud, you use two separate Virtual Private Cloud (VPC) networks: one Shared VPC for development and testing workloads, and a second VPC for all CI/CD and administrative tooling. The two VPCs are peered, which allows cross-VPC communication over internal IP addresses. This peering allows CI/CD and administrative systems to deploy and manage development and testing workloads.
- Additionally, you connect the CI/CD VPC to the network running the production workloads in the private computing environment. You establish this connection by using either Cloud Interconnect or Cloud VPN. This connection allows you to deploy and manage production workloads.
- All environments share a common, overlap-free RFC 1918 IP address space.
- Optionally, you can set up firewall rules to ensure that production workloads and development and testing workloads cannot communicate with one another.
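The peering and the optional firewall rules can be sketched with gcloud commands as follows. This is a minimal illustration, not a complete setup: the network names, project IDs, and the production address range are placeholder assumptions.

```shell
# Peer the CI/CD VPC with the development and testing Shared VPC.
# Network names and project IDs are placeholders for illustration.
gcloud compute networks peerings create cicd-to-devtest \
    --network=cicd-vpc \
    --peer-project=devtest-project \
    --peer-network=devtest-vpc

gcloud compute networks peerings create devtest-to-cicd \
    --project=devtest-project \
    --network=devtest-vpc \
    --peer-project=cicd-project \
    --peer-network=cicd-vpc

# Optionally, deny traffic from the (assumed) production address range
# so that production and development/testing workloads cannot communicate.
gcloud compute firewall-rules create deny-from-production \
    --project=devtest-project \
    --network=devtest-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.0.0.0/16 \
    --priority=900
```

Note that peering must be created from both sides before it becomes active.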
Variations
For a variation of this topology, you can use separate VPCs for development and different testing stages, all peered with the CI/CD VPC.
When you run VM-based workloads, it might also be beneficial to run CI agents in each environment that perform the deployments locally. Depending on the capabilities of your CI system, using agents might allow you to further lock down communication between environments and to enforce tighter control over which systems are allowed to communicate with one another.
The following diagram shows what this variation might look like.
When you exclusively run Kubernetes workloads, you might not need to peer the two VPCs. Because the Google Kubernetes Engine (GKE) control plane is publicly reachable, you can use the master authorized networks feature and role-based access control (RBAC) to ensure that only your CI systems are permitted to perform deployments. The following diagram shows this topology.
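As an illustration of the master authorized networks feature, the following sketch restricts control plane access to an assumed CI system address range. The cluster name, zone, and CIDR range are placeholders:

```shell
# Allow control plane access only from the CI system's address range.
# Cluster name, zone, and CIDR are placeholder assumptions.
gcloud container clusters update dev-cluster \
    --zone=us-central1-a \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24
```

You would then combine this with RBAC role bindings in the cluster so that the CI system's identity is the only one granted deployment permissions.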
Best practices
- Ensure that any CI/CD systems required for deploying or reconfiguring production deployments are deployed in a highly available fashion. Additionally, consider using redundant virtual private network (VPN) or interconnect links to increase availability.
- To let VM instances in the development and testing VPC access the internet, either assign them external IP addresses or, preferably, deploy Cloud NAT in the same VPC to handle egress traffic.
- Use Private Google Access so that workloads in the VPCs can reach Google APIs and services without sending traffic over the public internet.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Meshed
The idea of the meshed topology is to establish a flat network that spans multiple computing environments in which all systems can communicate with one another. This topology applies primarily to tiered, partitioned, or bursting setups, and requires that you connect computing environments in a way that meets the following requirements:
- Workloads can communicate with one another across environment boundaries over UDP or TCP by using private RFC 1918 IP addresses.
- You can use firewall rules to restrict traffic flows in a fine-grained fashion, both between and within computing environments.
Reference architecture
The following diagram shows a reference architecture that satisfies these requirements.
- On the Google Cloud side, you deploy workloads into a single Shared VPC.
- You connect the VPC to the network in the private computing environment by using either Cloud Interconnect or Cloud VPN. This setup ensures that communication between environments is conducted by using private IP addresses.
- You use Cloud Router to dynamically exchange routes between environments.
- All environments share a common, overlap-free RFC 1918 IP address space.
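The dynamic route exchange can be sketched as follows, assuming a Cloud VPN tunnel named `tunnel-to-onprem` already exists. The router name, region, ASNs, and link-local addresses are placeholder assumptions:

```shell
# Create a Cloud Router that exchanges routes over BGP.
# Names, region, ASNs, and addresses are placeholders.
gcloud compute routers create hybrid-router \
    --network=shared-vpc \
    --region=us-central1 \
    --asn=65001

# Attach the router to the VPN tunnel so that subnet routes are
# advertised dynamically instead of being configured statically.
gcloud compute routers add-interface hybrid-router \
    --region=us-central1 \
    --interface-name=if-tunnel1 \
    --vpn-tunnel=tunnel-to-onprem \
    --ip-address=169.254.0.1 \
    --mask-length=30

gcloud compute routers add-bgp-peer hybrid-router \
    --region=us-central1 \
    --peer-name=onprem-peer \
    --interface=if-tunnel1 \
    --peer-ip-address=169.254.0.2 \
    --peer-asn=65002
```

With this in place, new subnets on either side become reachable without manual route updates.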
Variations
You can implement additional deep packet inspection or other advanced firewalling mechanisms that exceed the capabilities of Google Cloud firewall rules. To implement these mechanisms, you can extend the topology to pass all cross-environment traffic through a firewall appliance, as shown in the following diagram.
- You establish the VPN or interconnect link between a dedicated transit VPC and the VLAN in the private computing environment.
- You establish the connection between the transit VPC and the application VPC by using VMs that are running the firewall appliance. You configure these VMs for IP forwarding. The VMs use multiple network interfaces: one connected to the transit VPC, and one connected to the application VPC.
- The firewall appliance can also serve as a NAT gateway for VM instances that are deployed in the application VPC. This gateway allows these instances to access the internet without needing public IP addresses.
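Creating such a multi-NIC appliance VM might look like the following sketch. The instance name, zone, subnets, and image are placeholder assumptions; a real deployment would typically use a vendor-provided firewall appliance image:

```shell
# Launch an appliance VM with IP forwarding enabled and one NIC in
# each VPC. All names and the image are placeholder assumptions.
gcloud compute instances create fw-appliance-1 \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=network=transit-vpc,subnet=transit-subnet,no-address \
    --network-interface=network=app-vpc,subnet=app-subnet,no-address \
    --image-family=debian-12 \
    --image-project=debian-cloud
```

The `--can-ip-forward` flag is required so that the VM can forward packets that are not addressed to itself.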
Best practices
- If you intend to enforce stricter isolation between the cloud and private computing environments, consider using one of the gated topologies instead.
- When using Kubernetes within the private computing environment, use Open Service Broker to provision and access Google Cloud services and resources in a unified way.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Gated egress
The idea of the gated egress topology is to expose selected APIs from the private computing environment to workloads that are deployed in Google Cloud without exposing them to the public internet. You can facilitate this limited exposure through an API gateway that serves as a facade for existing workloads. You deploy the gateway in a perimeter network (DMZ) while deploying workloads in a dedicated, more highly secured network within the private computing environment.
The gated egress topology applies primarily to tiered setups and requires that you connect computing environments in a way that meets the following requirements:
- Workloads that you deploy in Google Cloud can communicate with the API gateway by using private IP addresses. Other systems in the private computing environment cannot be reached from within Google Cloud.
- Communication from the private computing environment to any workloads deployed in Google Cloud is not allowed.
Reference architecture
The following diagram shows a reference architecture that satisfies these requirements:
- On the Google Cloud side, you deploy workloads into a Shared VPC.
- Using either Cloud Interconnect or Cloud VPN, you connect the VPC to a perimeter network in the private computing environment, allowing calls to the API gateway.
- Using firewall rules, you disallow incoming connections to the VPC.
- Optionally, you use Cloud Router to dynamically exchange routes between environments.
- All environments share a common, overlap-free RFC 1918 IP address space.
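Although VPC networks deny unsolicited ingress by default, an explicit rule makes the intent visible and guards against permissive rules added later. A minimal sketch, in which the network name and the on-premises address range are placeholder assumptions:

```shell
# Explicitly deny all ingress from the on-premises range so that only
# outbound calls to the API gateway are possible. The network name and
# source range are placeholder assumptions.
gcloud compute firewall-rules create deny-onprem-ingress \
    --network=shared-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=192.168.0.0/16 \
    --priority=1000
```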
Variations
To access the internet, VM instances that you deploy in the application VPC must have external IP addresses. To avoid having to set these external addresses, you can deploy Cloud NAT into the same VPC network, as the following diagram shows.
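Deploying Cloud NAT can be sketched as follows. Cloud NAT is configured on a Cloud Router, so the sketch creates one first; the names and region are placeholder assumptions:

```shell
# Cloud NAT is configured on a Cloud Router in the same region and
# network. Names and region are placeholder assumptions.
gcloud compute routers create nat-router \
    --network=app-vpc \
    --region=us-central1

gcloud compute routers nats create app-nat \
    --router=nat-router \
    --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
```

With this configuration, instances without external IP addresses can reach the internet for egress traffic only.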
Best practices
- Consider using Apigee for Private Cloud as the API gateway solution.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Gated ingress
The idea of the gated ingress topology is to expose selected APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. This topology is the counterpart to the gated egress scenario and is well suited for edge hybrid scenarios.
You expose selected APIs through an API gateway that you make accessible to the private computing environment, as follows:
- Workloads that you deploy in the private computing environment are able to communicate with the API gateway by using private IP addresses. Other systems deployed in Google Cloud cannot be reached.
- Communication from Google Cloud to the private computing environment is not allowed.
Reference architecture
The following diagram shows a reference architecture that meets these requirements.
- On the Google Cloud side, you deploy workloads into an application VPC.
- You establish a Cloud Interconnect or Cloud VPN connection between a dedicated transit VPC and the private computing environment.
- You establish the connection between the transit VPC and the application VPC by using VMs that are running the API gateway. These VMs use two network interfaces: one connected to the transit VPC, and one connected to the application VPC. To balance traffic across multiple API gateway instances, you configure an internal load balancer in the transit VPC.
- You deploy Cloud NAT in the application VPC to allow workloads to access the internet. This gateway avoids having to equip VM instances with external IP addresses, which you don't want for systems that are deployed behind an API gateway.
- Optionally, you can use Cloud Router to dynamically exchange routes between environments.
- All environments share a common, overlap-free RFC 1918 IP address space.
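The internal load balancer in front of the API gateway VMs can be sketched as follows, assuming the gateway VMs are already grouped in an unmanaged or managed instance group. All names, the region, and the port are placeholder assumptions:

```shell
# Internal TCP load balancer in front of the API gateway VMs.
# Names, region, zone, and port are placeholder assumptions.
gcloud compute health-checks create tcp gw-health-check --port=443

gcloud compute backend-services create gw-backend \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=us-central1 \
    --health-checks=gw-health-check

gcloud compute backend-services add-backend gw-backend \
    --region=us-central1 \
    --instance-group=gw-instance-group \
    --instance-group-zone=us-central1-a

# The forwarding rule lives in the transit VPC, so clients in the
# private computing environment reach the gateways through it.
gcloud compute forwarding-rules create gw-forwarding-rule \
    --load-balancing-scheme=internal \
    --network=transit-vpc \
    --subnet=transit-subnet \
    --region=us-central1 \
    --ports=443 \
    --backend-service=gw-backend
```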
Best practices
- Consider using Apigee for Private Cloud as the API gateway solution.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Gated ingress and egress
Combining gated ingress and egress allows bidirectional usage of selected APIs between workloads that are running in Google Cloud and in a private computing environment. On both sides, you use API gateways to expose selected APIs and to optionally authenticate, authorize, and audit calls.
The API communication works as follows:
- Workloads that you deploy in Google Cloud can communicate with the API gateway by using private IP addresses. Other systems deployed in the private computing environment cannot be reached.
- Conversely, workloads that you deploy in the private computing environment can communicate with the Google Cloud-side API gateway by using private IP addresses. Other systems deployed in Google Cloud cannot be reached.
Reference architecture
The following diagram shows a reference architecture for the gated ingress and egress topology.
- On the Google Cloud side, you deploy workloads into a Shared VPC and do not expose them to the internet.
- You establish a Cloud Interconnect or Cloud VPN connection between a dedicated transit VPC and the private computing environment.
- You establish the connection between the transit VPC and the application VPC by using VMs that are running the API gateway. These VMs use two network interfaces: one connected to the transit VPC, and one connected to the application VPC. To balance traffic across multiple API gateway instances, you configure an internal load balancer in the transit VPC.
- You also deploy Cloud NAT, which allows workloads to access the internet and to communicate with the API gateway that is running in the private computing environment.
- Optionally, you can use Cloud Router to dynamically exchange routes between environments.
- All environments share a common, overlap-free RFC 1918 IP address space.
Best practices
- Consider using Apigee for Private Cloud as the API gateway solution.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Handover
The idea of the handover topology is to use Google Cloud-provided storage services to connect a private computing environment to projects in Google Cloud. This topology applies primarily to setups that follow the analytics pattern, where:
- Workloads that are running in a private computing environment upload data to shared storage locations. Depending on use cases, uploads might happen in bulk or in small messages.
- Google Cloud-hosted workloads then consume data from these locations and process it in a streaming or batch fashion.
Reference architecture
The following diagram shows a reference architecture for the handover topology.
- On the Google Cloud side, you deploy workloads into an application VPC. These workloads can include data processing, analytics, and analytics-related frontend applications.
- To securely expose frontend applications to corporate users, you can use Identity-Aware Proxy.
- You use a set of Cloud Storage buckets or Pub/Sub topics to upload data from the private computing environment and to make it available for further processing by workloads deployed in Google Cloud. By using IAM policies, you can restrict access to trusted workloads.
- Because there is no private network connectivity between environments, RFC 1918 IP address spaces can overlap between environments.
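Provisioning the handover resources and restricting them to trusted identities can be sketched as follows. The bucket, topic, project, and service account names are placeholder assumptions:

```shell
# Create a landing bucket and a topic, then grant the on-premises
# uploader write-only access and nothing more. All names are
# placeholder assumptions.
gsutil mb -l us-central1 gs://example-handover-bucket

gsutil iam ch \
    serviceAccount:uploader@example-project.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://example-handover-bucket

gcloud pubsub topics create handover-events

gcloud pubsub topics add-iam-policy-binding handover-events \
    --member=serviceAccount:uploader@example-project.iam.gserviceaccount.com \
    --role=roles/pubsub.publisher
```

Granting only `objectCreator` and `publisher` roles means the uploading workload can deposit data but cannot read or delete what other systems have written.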
Best practices
- Lock down access to Cloud Storage buckets and Pub/Sub topics.
- To reduce latency and to avoid passing data over the public internet, consider using Direct Peering or Carrier Peering.
- Use VPC Service Controls to restrict access to the Cloud Storage and Pub/Sub resources to specific IP address ranges.
- To let VM instances in the VPC access the internet, either assign them external IP addresses or, preferably, deploy Cloud NAT in the same VPC to handle egress traffic.
- Also consider the general best practices for hybrid and multi-cloud networking topologies.
Best practices for hybrid and multi-cloud networking topologies
- On the Google Cloud side, take advantage of Shared VPCs. A Shared VPC is a VPC that can be used across multiple Google Cloud projects, which avoids the need to maintain many individual VPCs. Shared VPCs also allow you to manage peering configuration, subnets, firewall rules, and permissions in a centralized fashion.
- When managing firewall rules, prefer service account-based filtering over network tag-based filtering.
- Avoid using VPCs to isolate individual workloads from one another. Instead, prefer using subnets and firewall rules. When using GKE, you can complement this approach by using network policies.
- While Cloud VPN ensures that traffic between environments is encrypted, Cloud Interconnect does not. To help secure communication between workloads, consider using Transport Layer Security (TLS).
- Follow our best practices for creating high-throughput VPNs.
- Allocate enough address space from your existing RFC 1918 IP address space to accommodate all cloud-hosted systems.
- When using GKE, consider using private clusters and be mindful of the network size requirements.
- Use Private Google Access to allow VM instances that are running in Google Cloud without an assigned external IP address to access Google services.
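Service account-based firewall filtering can be sketched as follows. Identity-based rules keep working when instances are recreated or rescheduled, whereas network tags can be changed by anyone with instance-level permissions. The network, port, and service account names are placeholder assumptions:

```shell
# Allow traffic only between workloads that run as specific service
# accounts, instead of relying on network tags. The network, port,
# and service accounts are placeholder assumptions.
gcloud compute firewall-rules create allow-frontend-to-backend \
    --network=shared-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-service-accounts=frontend@example-project.iam.gserviceaccount.com \
    --target-service-accounts=backend@example-project.iam.gserviceaccount.com
```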
What's next
- Learn more about the common architecture patterns that you can realize by using the network topologies discussed in this article.
- Learn how to approach hybrid and multi-cloud and how to choose suitable workloads.
- Explore reference architectures, diagrams, and best practices for Google Cloud in the Cloud Architecture Center.