Partitioned multicloud pattern

Last reviewed 2023-12-14 UTC

The partitioned multicloud architecture pattern combines multiple public cloud environments that are operated by different cloud service providers. This architecture provides the flexibility to deploy an application in an optimal computing environment that accounts for the multicloud drivers and considerations discussed in the first part of this series.

The following diagram shows a partitioned multicloud architecture pattern.

Figure: Data flow from an application in Google Cloud to an application in a different cloud environment.

This architecture pattern can be built in two different ways. The first approach is based on deploying the components of an application in different public cloud environments. This approach, also referred to as a composite architecture, is the same approach used by the tiered hybrid architecture pattern, except that it combines at least two cloud environments instead of an on-premises environment with a public cloud. In a composite architecture, a single workload or application uses components from more than one cloud. The second approach deploys different applications in different public cloud environments. The following non-exhaustive list describes some of the business drivers for the second approach:

  • To fully integrate applications hosted in disparate cloud environments during a merger and acquisition scenario between two enterprises.
  • To promote flexibility and cater to diverse cloud preferences within your organization. Adopt this approach to encourage organizational units to choose the cloud provider that best suits their specific needs and preferences.
  • To operate in a multi-regional or global-cloud deployment. If an enterprise must adhere to data residency regulations in specific regions or countries, and its primary cloud provider doesn't have a cloud region there, it needs to choose from the cloud providers that are available in that location.

With the partitioned multicloud architecture pattern, you can optionally maintain the ability to shift workloads as needed from one public cloud environment to another. In that case, the portability of your workloads becomes a key requirement. When you deploy workloads to multiple computing environments, and want to maintain the ability to move workloads between environments, you must abstract away the differences between the environments. By using GKE Enterprise, you can design and build a solution to solve multicloud complexity with consistent governance, operations, and security postures. For more information, see GKE Multi-Cloud.
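For example, when workloads are packaged as containers, the same Kubernetes workload definition can be applied to clusters that run in different clouds. The following Python sketch illustrates that idea with the open source Kubernetes Python client; the kubeconfig context names, container image, and namespace are hypothetical placeholders, and a production setup would more likely rely on GKE Enterprise fleet management or GitOps tooling than on an ad hoc script.

```python
# Minimal sketch: apply one portable Deployment definition to Kubernetes
# clusters that run in different clouds. Assumes the `kubernetes` Python
# client is installed and that kubeconfig contexts named "gke-cluster" and
# "other-cloud-cluster" exist (both names are hypothetical placeholders).
from kubernetes import client, config

CONTEXTS = ["gke-cluster", "other-cloud-cluster"]  # hypothetical contexts


def build_deployment() -> client.V1Deployment:
    """One portable Deployment spec with no provider-specific fields."""
    container = client.V1Container(
        name="web",
        image="registry.example.com/web:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name="web"), spec=spec)


for context in CONTEXTS:
    # Load credentials for the target cluster, then create the Deployment.
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=build_deployment())
    print(f"Deployment applied to context {context}")
```

Because the workload definition contains nothing that is specific to one provider, the same code path works against any conformant Kubernetes cluster, which is what keeps the workload portable between environments.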

As previously mentioned, there are some situations where there might be both business and technical reasons to combine Google Cloud with another cloud provider and to partition workloads across those cloud environments. Multicloud solutions offer you the flexibility to migrate, build, and optimize application portability across multicloud environments while minimizing lock-in and helping you to meet your regulatory requirements. For example, you might connect Google Cloud with Oracle Cloud Infrastructure (OCI) to build a multicloud solution that harnesses the capabilities of each platform, using a private Cloud Interconnect to combine components running in OCI with resources running on Google Cloud. For more information, see Google Cloud and Oracle Cloud Infrastructure – making the most of multicloud. In addition, Cross-Cloud Interconnect facilitates high-bandwidth dedicated connectivity between Google Cloud and other supported cloud service providers, which lets you architect and build multicloud solutions that handle high volumes of inter-cloud traffic.

Advantages

While using a multicloud architecture offers several business and technical benefits, as discussed in Drivers, considerations, strategy, and approaches, it's essential to perform a detailed feasibility assessment of each potential benefit. Your assessment should carefully consider any associated direct or indirect challenges or potential roadblocks, and your ability to navigate them effectively. Also, consider that the long-term growth of your applications or services can introduce complexities that might outweigh the initial benefits.

Here are some key advantages of the partitioned multicloud architecture pattern:

  • In scenarios where you might need to minimize committing to a single cloud provider, you can distribute applications across multiple cloud providers. As a result, you can reduce vendor lock-in to some extent and retain the ability to change plans across your cloud providers. Open Cloud helps to bring Google Cloud capabilities, like GKE Enterprise, to different physical locations. By extending Google Cloud capabilities on-premises, in multiple public clouds, and on the edge, Open Cloud provides flexibility and agility, and drives transformation.

  • For regulatory reasons, you can serve a certain segment of your user base and data from a country where Google Cloud doesn't have a cloud region.

  • The partitioned multicloud architecture pattern can help to reduce latency and improve the overall quality of the user experience in locations where the primary cloud provider does not have a cloud region or a point of presence. This pattern is especially useful when you use high-capacity, low-latency multicloud connectivity, such as Cross-Cloud Interconnect and CDN Interconnect with a distributed CDN.

  • You can deploy applications across multiple cloud providers in a way that lets you choose among the best services that the other cloud providers offer.

  • The partitioned multicloud architecture pattern can help facilitate and accelerate merger and acquisition scenarios, where the applications and services of the two enterprises might be hosted in different public cloud environments.

Best practices

  • Start by deploying a non-mission-critical workload. This initial deployment in the secondary cloud can then serve as a pattern for future deployments or migrations. However, this approach probably isn't applicable in situations where the specific workload is required by law or regulation to reside in a specific cloud region, and the primary cloud provider doesn't have a region in the required territory.
  • Minimize dependencies between systems that are running in different public cloud environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. For one way to decouple such communication asynchronously, see the Pub/Sub sketch after this list.
  • To abstract away the differences between environments, consider using containers and Kubernetes where supported by the applications and feasible.
  • Ensure that CI/CD pipelines and tooling for deployment and monitoring are consistent across cloud environments.
  • Select the optimal network architecture pattern that provides the most efficient and effective communication solution for the applications you're using.
  • To meet your availability and performance expectations, design for end-to-end high availability (HA), low latency, and appropriate throughput levels.
  • To protect sensitive information, we recommend encrypting all communications in transit.

    • If encryption is required at the connectivity layer, various options are available, based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect.
  • If you're using multiple CDNs as part of your partitioned multicloud architecture pattern, and you're populating your other CDN with large data files from Google Cloud, consider using CDN Interconnect links between Google Cloud and supported providers to optimize this traffic and, potentially, its cost.

  • Extend your identity management solution between environments so that systems can authenticate securely across environment boundaries.

  • To effectively balance requests across Google Cloud and another cloud platform, you can use Cloud Load Balancing. For more information, see Routing traffic to an on-premises location or another cloud.

    • If the outbound data transfer volume from Google Cloud toward other environments is high, consider using Cross-Cloud Interconnect.
  • To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade (for a generic illustration, see the API facade sketch after this list). This gateway or proxy acts as a centralized control point and performs the following measures:

    • Implements additional security measures.
    • Shields client apps and other services from backend code changes.
    • Facilitates audit trails for communication between all cross-environment applications and their decoupled components.
    • Acts as an intermediate communication layer between legacy and modernized services.
      • Apigee and Apigee hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, the edge, other clouds, and Google Cloud environments.
  • In cases like the following, using Cloud Load Balancing with an API gateway can provide a robust and secure solution for managing, securing, and distributing API traffic at scale across multiple regions:

    • Deploying multi-region failover for Apigee API runtimes in different regions.
    • Increasing performance with Cloud CDN.

    • Providing WAF and DDoS protection through Google Cloud Armor.

  • Use consistent tools for logging and monitoring across cloud environments where possible. You might consider using open source monitoring systems. For more information, see Hybrid and multicloud monitoring and logging patterns. For a minimal example of emitting consistently structured logs, see the structured-logging sketch after this list.

  • If you're deploying application components in a distributed manner where the components of a single application are deployed in more than one cloud environment, see the best practices for the tiered hybrid architecture pattern.
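To illustrate the best practice of minimizing synchronous dependencies between clouds, the following Python sketch publishes an event to Pub/Sub instead of calling a service in the other cloud directly; a subscriber running in the other environment can then pull and process the events at its own pace. The project ID, topic name, and event payload are hypothetical.

```python
# Minimal sketch: publish an event to Pub/Sub instead of calling a service in
# the other cloud synchronously. A consumer in the other cloud can pull these
# events on its own schedule, so a slow or unreachable remote environment
# doesn't block this application. Project and topic IDs are hypothetical.
import json

from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"        # hypothetical
TOPIC_ID = "orders-to-other-cloud"   # hypothetical

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

event = {"order_id": "12345", "status": "created"}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))

# result() blocks only until Pub/Sub acknowledges the message, not until the
# consumer in the other cloud has finished processing it.
print(f"Published message {future.result()}")
```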
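The following sketch shows the idea of an API gateway or proxy as a unifying facade in front of backends that are hosted in different clouds. It's a generic illustration built with Flask and the requests library, not an Apigee configuration; the backend URLs, path layout, and API key check are hypothetical.

```python
# Minimal sketch of a unifying API facade in front of backends that live in
# different clouds. It centralizes a simple security check and hides backend
# locations from client apps. All URLs and header values are hypothetical.
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)

# Map public API paths to backends hosted in different cloud environments.
BACKENDS = {
    "billing": "https://billing.cloud-a.example.com",  # hypothetical
    "catalog": "https://catalog.cloud-b.example.com",  # hypothetical
}


@app.route("/api/<service>/<path:subpath>", methods=["GET", "POST"])
def proxy(service: str, subpath: str):
    backend = BACKENDS.get(service)
    if backend is None:
        abort(404)
    # Additional security measure applied at the facade: require an API key
    # before any request reaches a backend.
    if request.headers.get("x-api-key") != "expected-key":  # hypothetical check
        abort(401)
    upstream = requests.request(
        method=request.method,
        url=f"{backend}/{subpath}",
        params=request.args,
        data=request.get_data(),
        timeout=10,
    )
    # Returning only the upstream body and status keeps clients unaware of
    # which cloud served the request and shields them from backend changes.
    return Response(upstream.content, status=upstream.status_code)


if __name__ == "__main__":
    app.run(port=8080)
```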
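Finally, as a minimal illustration of consistent logging across environments, the following sketch uses only the Python standard library to emit JSON-structured log entries with the same fields regardless of which cloud a component runs in. The field names and environment variables are hypothetical conventions; a real deployment would more likely rely on a logging agent or an open source observability framework.

```python
# Minimal sketch: emit JSON-structured logs with identical fields in every
# cloud environment so one monitoring pipeline can parse them consistently.
# Uses only the standard library; SERVICE_NAME and CLOUD_PROVIDER are
# hypothetical environment variables set per deployment.
import json
import logging
import os
from datetime import datetime, timezone


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "severity": record.levelname,
            "message": record.getMessage(),
            "service": os.environ.get("SERVICE_NAME", "unknown"),
            "cloud": os.environ.get("CLOUD_PROVIDER", "unknown"),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger(__name__).info("checkout completed")
```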