Architecting disaster recovery for cloud infrastructure outages

This article is the sixth part of a series that discusses disaster recovery (DR) in Google Cloud. This part discusses the process for architecting workloads using Google Cloud and building blocks that are resilient to cloud infrastructure outages.

Introduction

As enterprises move workloads onto the public cloud, they need to translate their understanding of building resilient on-premises systems to the hyperscale infrastructure of cloud providers like Google Cloud. This article maps industry-standard disaster recovery concepts, such as RTO (Recovery Time Objective) and RPO (Recovery Point Objective), to the Google Cloud infrastructure.

The guidance in this document follows one of Google's key principles for achieving extremely high service availability: plan for failure. While Google Cloud provides extremely reliable service, disasters will strike - natural disasters, fiber cuts, and complex unpredictable infrastructure failures - and these disasters cause outages. Planning for outages enables Google Cloud customers to build applications that perform predictably through these inevitable events, by making use of Google Cloud products with "built-in" DR mechanisms.

Disaster recovery is a broad topic that covers much more than infrastructure failures (for example, software bugs and data corruption), so you should have a comprehensive end-to-end plan. However, this article focuses on one part of an overall DR plan: how to design applications that are resilient to cloud infrastructure outages. Specifically, this article walks through:

  1. The Google Cloud infrastructure, how disaster events manifest as Google Cloud outages, and how Google Cloud is architected to minimize the frequency and scope of outages.
  2. An architecture planning guide that provides a framework for categorizing and designing applications based on the desired reliability outcomes.
  3. A detailed list of select Google Cloud products that offer built-in DR capabilities which you may want to use in your application.

For further details on general DR planning and on using Google Cloud as a component in your on-premises DR strategy, see the Disaster Recovery Planning Guide. Also, while high availability is closely related to disaster recovery, it is not covered in this article. For further details on architecting for high availability, see the Google Cloud architecture framework.

A note on terminology: this article refers to availability when discussing the ability for a product to be meaningfully accessed and used over time, while reliability refers to a set of attributes including availability but also things like durability and correctness.

How Google Cloud is designed for resilience

Google data centers

Traditional data centers rely on maximizing availability of individual components. In the cloud, scale allows operators like Google to spread services across many components using virtualization technologies and thus exceed traditional component reliability. This means you can shift your reliability architecture mindset away from the myriad details you once worried about on-premises. Rather than worry about the various failure modes of components -- such as cooling and power delivery -- you can plan around Google Cloud products and their stated reliability metrics. These metrics reflect the aggregate outage risk of the entire underlying infrastructure. This frees you to focus much more on application design, deployment, and operations rather than infrastructure management.

Google designs its infrastructure to meet aggressive availability targets based on our extensive experience building and running modern data centers. Google is a world leader in data center design. From power to cooling to networks, each data center technology has its own redundancies and mitigations, including failure mode and effects analysis (FMEA) plans. Google's data centers are built in a way that balances these many different risks and presents to customers a consistent expected level of availability for Google Cloud products. Google uses its experience to model the availability of the overall physical and logical system architecture to ensure that the data center design meets expectations. Google's engineers go to great lengths operationally to help ensure those expectations are met. Actual measured availability normally exceeds our design targets by a comfortable margin.

By distilling all of these data center risks and mitigations into user-facing products, Google Cloud relieves you from those design and operational responsibilities. Instead, you can focus on the reliability designed into Google Cloud regions and zones.

Regions and zones

Google Cloud products are provided across a large number of regions and zones. Regions are physically independent geographic areas that contain three or more zones. Zones represent groups of physical computing resources within a region that have a high degree of independence from one another in terms of physical and logical infrastructure. They provide high-bandwidth, low-latency network connections to other zones in the same region. For example, the asia-northeast1 region in Japan contains three zones: asia-northeast1-a, asia-northeast1-b, and asia-northeast1-c.

Google Cloud products are divided into zonal resources, regional resources, or multi-regional resources.

Zonal resources are hosted within a single zone. A service interruption in that zone can affect all of the resources in that zone. For example, a Compute Engine instance runs in a single, specified zone; if a hardware failure interrupts service in that zone, that Compute Engine instance is unavailable for the duration of the interruption.

Regional resources are redundantly deployed across multiple zones within a region. This gives them higher reliability relative to zonal resources.

Multi-regional resources are distributed within and across regions. In general, multi-regional resources have higher reliability than regional resources. However, at this level products must optimize availability, performance, and resource efficiency. As a result, it is important to understand the tradeoffs made by each multi-regional product you decide to use. These tradeoffs are documented on a product-specific basis later in this document.

Examples of zonal, regional, and multi-regional Google Cloud products

How to leverage zones and regions to achieve reliability

Google SREs manage and scale highly reliable, global user products like Gmail and Search through a variety of techniques and technologies that seamlessly leverage computing infrastructure around the world. This includes redirecting traffic away from unavailable locations using global load balancing, running multiple replicas in many locations around the planet, and replicating data across locations. These same capabilities are available to Google Cloud customers through products like Cloud Load Balancing, Google Kubernetes Engine, and Cloud Spanner.

Google Cloud generally designs products to deliver the following levels of availability for zones and regions:

Resource type | Examples | Availability design goal | Implied downtime
Zonal | Compute Engine, Persistent Disk | 99.9% | 8.75 hours / year
Regional | Regional Cloud Storage, Replicated Persistent Disk, Regional Google Kubernetes Engine | 99.99% | 52 minutes / year

Compare the Google Cloud availability design goals against your acceptable level of downtime to identify the appropriate Google Cloud resources. While traditional designs focus on improving component-level availability to improve the resulting application availability, cloud models focus instead on composition of components to achieve this goal. Many products within Google Cloud use this technique. For example, Cloud Spanner offers a multi-region database that composes multiple regions in order to deliver 99.999% availability.

Composition is important because without it, your application availability cannot exceed that of the Google Cloud products you use; in fact, unless your application never fails, it will have lower availability than the underlying Google Cloud products. The remainder of this section shows generally how you can use a composition of zonal and regional products to achieve higher application availability than a single zone or region would provide. The next section gives a practical guide for applying these principles to your applications.
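
To make the effect of composition concrete, the following Python sketch estimates the availability of an application deployed redundantly across independent zonal deployments. It assumes uncorrelated failures and a perfect, instantaneous failover layer, which real systems only approximate.

```python
# Rough estimate of composed availability for N independent zonal deployments.
# Assumes uncorrelated failures and instantaneous, perfect failover, which real
# systems only approximate.

def composed_availability(single_copy_availability: float, copies: int) -> float:
    unavailability = 1.0 - single_copy_availability
    return 1.0 - unavailability ** copies

zonal = 0.999  # 99.9% design goal for a zonal resource
print(f"1 zone : {zonal:.6f}")
print(f"2 zones: {composed_availability(zonal, 2):.6f}")
print(f"3 zones: {composed_availability(zonal, 3):.9f}")
```

The gap between these idealized numbers and the 99.99% regional design goal reflects correlated failures and imperfect failover, which is why testing your own composition matters.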

Planning for zone outage scopes

Infrastructure failures usually cause service outages in a single zone. Within a region, zones are designed to minimize the risk of correlated failures with other zones, and a service interruption in one zone usually does not affect service from another zone in the same region. An outage scoped to a zone doesn't necessarily mean that the entire zone is unavailable; it just defines the boundary of the incident. It is possible for a zone outage to have no tangible effect on your particular resources in that zone.

Although it is rarer, it is also critical to note that multiple zones within a single region can still eventually experience a correlated outage. When two or more zones experience an outage, the regional outage scope strategy below applies.

Regional resources are designed to be resistant to zone outages by delivering service from a composition of multiple zones. If one of the zones backing a regional resource is interrupted, the resource automatically makes itself available from another zone. Carefully check the product capability description in the appendix for further details.

Google Cloud only offers a few zonal resources, namely Compute Engine virtual machines (VMs) and Persistent Disk. If you plan to use zonal resources, you'll need to perform your own resource composition by designing, building, and testing failover and recovery between zonal resources located in multiple zones. Some strategies include:

  • Route your traffic quickly to virtual machines in another zone using Cloud Load Balancing when a health check determines that a zone is experiencing issues.
  • Use Compute Engine instance templates and managed instance groups to run and scale identical VM instances in multiple zones (a minimal sketch follows this list).
  • Use a regional Persistent Disk to synchronously replicate data to another zone in a region. See High availability options using regional PDs for more details.
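
As an illustration of the managed instance group strategy, here is a minimal Python sketch using the google-cloud-compute client library to create a regional MIG that spreads identical VMs across three zones. The project, region, instance template, and group names are illustrative, and error handling is omitted.

```python
# Sketch: create a regional managed instance group (MIG) that spreads identical
# VMs across three zones from a pre-existing instance template. Project,
# region, template, and group names are illustrative.
from google.cloud import compute_v1

project = "my-project"
region = "us-central1"
zone_url = "https://www.googleapis.com/compute/v1/projects/{}/zones/{}"

mig = compute_v1.InstanceGroupManager(
    name="web-regional-mig",
    base_instance_name="web",
    instance_template=f"projects/{project}/global/instanceTemplates/web-template",
    target_size=3,
    distribution_policy=compute_v1.DistributionPolicy(
        zones=[
            compute_v1.DistributionPolicyZoneConfiguration(
                zone=zone_url.format(project, f"{region}-{suffix}")
            )
            for suffix in ("a", "b", "c")
        ]
    ),
)

client = compute_v1.RegionInstanceGroupManagersClient()
operation = client.insert(
    project=project, region=region, instance_group_manager_resource=mig
)
operation.result()  # wait for the long-running operation to finish
```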

Planning for regional outage scopes

A regional outage is a service interruption affecting more than one zone in a single region. These are larger scale, less frequent outages and can be caused by natural disasters or large scale infrastructure failures.

For a regional product that is designed to provide 99.99% availability, an outage can still translate to nearly an hour of downtime for a particular product every year. Therefore, your critical applications may need to have a multi-region DR plan in place if this outage duration is unacceptable.

Multi-regional resources are designed to be resistant to region outages by delivering service from multiple regions. As described above, multi-region products trade off between latency, consistency, and cost. The most common tradeoff is between synchronous and asynchronous data replication. Asynchronous replication offers lower latency at the cost of a risk of data loss during an outage. So, it is important to check the product capability description in the appendix for further details.

If you want to use regional resources and remain resilient to regional outages, then you must perform your own resource composition by designing, building, and testing failover and recovery between regional resources located in multiple regions. In addition to the zonal strategies above, which you can apply across regions as well, consider the following:

  • Replicate data from regional resources to a secondary region, to a multi-regional storage option such as Cloud Storage, or to a hybrid cloud option such as Anthos (a minimal copy sketch follows this list).
  • After you have a regional outage mitigation in place, test it regularly. There are few things worse than thinking you're resistant to a single-region outage, only to find that this isn't the case when it happens for real.
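
As one example of cross-region replication for backups, the following sketch copies a backup object from a bucket in the primary region to a bucket in a secondary region using the google-cloud-storage library. Bucket and object names are illustrative.

```python
# Sketch: copy a nightly backup object from a bucket in the primary region to a
# bucket in a secondary region. Bucket and object names are illustrative.
from google.cloud import storage

client = storage.Client()
source_bucket = client.bucket("app-backups-us-central1")
destination_bucket = client.bucket("app-backups-us-east1")

blob = source_bucket.blob("db/backup-2024-01-01.dump")
source_bucket.copy_blob(blob, destination_bucket, new_name=blob.name)
```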

Google Cloud resilience and availability approach

Google Cloud regularly beats its availability design targets, but you should not assume that this strong past performance is the minimum availability you can design for. Instead, you should select Google Cloud dependencies whose designed-for targets exceed your application's intended reliability, such that your application downtime plus the Google Cloud downtime delivers the outcome you are seeking.

A well-designed system can answer the question: "What happens when a zone or region has a 1, 5, 10, or 30 minute outage?" This should be considered at many layers, including:

  • What will my customers experience during an outage?
  • How will I detect that an outage is happening?
  • What happens to my application during an outage?
  • What happens to my data during an outage?
  • What happens to my other applications due to an outage (due to cross-dependencies)?
  • What do I need to do in order to recover after an outage is resolved? Who does it?
  • Who do I need to notify about an outage, within what time period?

Step-by-step guide to designing disaster recovery for applications in Google Cloud

The previous sections covered how Google builds cloud infrastructure, and some approaches for dealing with zonal and regional outages.

This section helps you develop a framework for applying the principle of composition to your applications based on your desired reliability outcomes.

Step 1: Gather existing requirements

The first step is to define the availability requirements for your applications. Most companies already have some level of design guidance in this space, which may be internally developed or derived from regulations or other legal requirements. This design guidance is normally codified in two key metrics: Recovery Time Objective (RTO) and Recovery Point Objective (RPO). In business terms, RTO translates as "How long after a disaster before I'm up and running?" RPO translates as "How much data can I afford to lose in the event of a disaster?"

(Figure: a timeline with the disaster event in the middle. To the left is the RPO, labeled "These writes are lost"; to the right is the RTO, labeled "Normal service resumes.")

Historically, enterprises have defined RTO and RPO requirements for a wide range of disaster events, from component failures to earthquakes. This made sense in the on-premises world, where planners had to map the RTO/RPO requirements through the entire software and hardware stack. In the cloud, you no longer need to define your requirements in such detail because the provider takes care of that. Instead, you can define your RTO and RPO requirements in terms of the scope of loss (entire zones or regions) without being specific about the underlying reasons. For Google Cloud, this simplifies your requirement gathering to three scenarios: a zonal outage, a regional outage, or the extremely unlikely outage of multiple regions.

Recognizing that not every application has equal criticality, most customers categorize their applications into criticality tiers against which a specific RTO/RPO requirement can be applied. When taken together, RTO/RPO and application criticality streamline the process of architecting a given application by answering:

  1. Does the application need to run in multiple zones in the same region, or in multiple zones in multiple regions?
  2. On which Google Cloud products can the application depend?

This is an example of the output of the requirements gathering exercise:

RTO and RPO by Application Criticality for Example Organization Co:

Application criticality | % of apps | Example apps | Zone outage | Region outage
Tier 1 (most important) | 5% | Typically global or external customer-facing applications such as real-time payments and eCommerce storefronts. | RTO zero, RPO zero | RTO 4 hrs, RPO 1 hr
Tier 2 | 35% | Typically regional applications or important internal applications such as CRM or ERP. | RTO 4 hrs, RPO zero | RTO 24 hrs, RPO 4 hrs
Tier 3 (least important) | 60% | Typically team or departmental applications, such as back office, leave booking, internal travel, accounting, and HR. | RTO 12 hrs, RPO 24 hrs | RTO 28 days, RPO 24 hrs

Step 2: Capability mapping to available products

The second step is to understand the resilience capabilities of Google Cloud products that your applications will be using. Most companies review the relevant product information and then add guidance on how to modify their architectures to accommodate any gaps between the product capabilities and their resilience requirements. This section covers some common areas and recommendations around data and application limitations in this space.

As mentioned previously, Google's DR-enabled products broadly cater for two types of outage scopes: regional and zonal. Partial outages should be planned for the same way as a full outage when it comes to DR. This gives an initial high level matrix of which products are suitable for each scenario by default:

Google Cloud Product General Capabilities
(see Appendix for specific product capabilities)

Outage scenario | All Google Cloud products | Regional Google Cloud products with automatic replication across zones | Multi-regional or global Google Cloud products with automatic replication across regions
Failure of a component within a zone | Covered* | Covered | Covered
Zone outage | Not covered | Covered | Covered
Region outage | Not covered | Not covered | Covered

* All Google Cloud products are resilient to component failure, except in specific cases noted in product documentation. These are typically scenarios where the product offers direct access or static mapping to a piece of speciality hardware such as memory or Solid State Disks (SSD).

How RPO limits product choices

In most cloud deployments, data integrity is the most architecturally significant aspect to be considered for a service. At least some applications have an RPO requirement of zero, meaning there should be no data loss in the event of an outage. This typically requires data to be synchronously replicated to another zone or region. Synchronous replication has cost and latency tradeoffs, so while many Google Cloud products provide synchronous replication across zones, only a few provide it across regions. This cost and complexity tradeoff means that it's not unusual for different types of data within an application to have different RPO values.

For data with an RPO greater than zero, applications can take advantage of asynchronous replication. Asynchronous replication is acceptable when lost data can either be recreated easily, or can be recovered from a golden source of data if needed. It can also be a reasonable choice when a small amount of data loss is an acceptable tradeoff in the context of zonal and regional expected outage durations. It is also relevant that during a transient outage, data written to the affected location but not yet replicated to another location generally becomes available after the outage is resolved. This means that the risk of permanent data loss is lower than the risk of losing data access during an outage.

Key actions: Establish whether you definitely need RPO zero, and if so whether you can do this for a subset of your data - this dramatically increases the range of DR-enabled services available to you. In Google Cloud, achieving RPO zero means using predominantly regional products for your application, which by default are resilient to zone-scale, but not region-scale, outages.

How RTO limits product choices

One of the primary benefits of cloud computing is the ability to deploy infrastructure on demand; however, this isn't the same as instantaneous deployment. The RTO value for your application needs to accommodate the combined RTO of the Google Cloud products your application utilizes and any actions your engineers or SREs must take to restart your VMs or application components. An RTO measured in minutes means designing an application which recovers automatically from a disaster without human intervention, or with minimal steps such as pushing a button to failover. The cost and complexity of this kind of system historically has been very high, but Google Cloud products like load balancers and instance groups make this design both much more affordable and simpler. Therefore, you should consider automated failover and recovery for most applications. Be aware that designing a system for this kind of hot failover across regions is both complicated and expensive; only a very small fraction of critical services warrant this capability.

Most applications will have an RTO of between an hour and a day, which allows for a warm failover in a disaster scenario, with some components of the application running all the time in a standby mode--such as databases--while others are scaled out in the event of an actual disaster, such as web servers. For these applications, you should strongly consider automation for the scale-out events. Services with an RTO over a day are the lowest criticality and can often be recovered from a backup or recreated from scratch.

Key actions: Establish whether you definitely need an RTO of (near) zero for regional failover, and if so whether you can do this for a subset of your services. This changes the cost of running and maintaining your service.

Step 3: Develop your own reference architectures and guides

The final recommended step is building your own company-specific architecture patterns to help your teams standardize their approach to disaster recovery. Most Google Cloud customers produce a guide for their development teams that matches their individual business resilience expectations to the two major categories of outage scenarios on Google Cloud. This allows teams to easily categorize which DR-enabled products are suitable for each criticality level.

Create product guidelines

Looking again at the example RTO/RPO table from above, the following tables show a hypothetical guide that lists which products would be allowed by default for each criticality tier. Note that where certain products have been identified as not suitable by default, you can always add your own replication and failover mechanisms to enable cross-zone or cross-region synchronization, but this exercise is beyond the scope of this article. The tables also link to more information about each product to help you understand their capabilities with respect to managing zone or region outages.

Sample Architecture Patterns for Example Organization Co -- Zone Outage Resilience:

Does the product meet zone outage requirements for Example Organization (with appropriate product configuration)?

Google Cloud product | Tier 1 | Tier 2 | Tier 3
Compute Engine | Yes | Yes | Yes
Dataflow | No | No | No
BigQuery | No | No | Yes
Google Kubernetes Engine | Yes | Yes | Yes
Cloud Storage | Yes | Yes | Yes
Cloud SQL | No | Yes | Yes
Cloud Spanner | Yes | Yes | Yes
Cloud Load Balancing | Yes | Yes | Yes

This table is an example only based on hypothetical tiers shown above.

Sample Architecture Patterns for Example Organization Co -- Region Outage Resilience:

Does the product meet region outage requirements for Example Organization (with appropriate product configuration)?

Google Cloud product | Tier 1 | Tier 2 | Tier 3
Compute Engine | Yes | Yes | Yes
Dataflow | No | No | No
BigQuery | No | No | Yes
Google Kubernetes Engine | Yes | Yes | Yes
Cloud Storage | No | No | No
Cloud SQL | Yes | Yes | Yes
Cloud Spanner | Yes | Yes | Yes
Cloud Load Balancing | Yes | Yes | Yes

This table is an example only based on hypothetical tiers shown above.

To show how these products would be used, the following sections walk through some reference architectures for each of the hypothetical application criticality levels. These are deliberately high level descriptions to illustrate the key architectural decisions, and aren't representative of a complete solution design.

Example tier 3 architecture

Application criticality | Zone outage | Region outage
Tier 3 (least important) | RTO 12 hours, RPO 24 hours | RTO 28 days, RPO 24 hours

An example tier 3 architecture using Google Cloud products

(Greyed-out icons indicate infrastructure to be enabled for recovery)

This architecture describes a traditional client/server application: internal users connect to an application running on a compute instance which is backed by a database for persistent storage.

It's important to note that this architecture supports better RTO and RPO values than required. However, you should also consider eliminating additional manual steps when they could prove costly or unreliable. For example, recovering a database from a nightly backup could support the RPO of 24 hours, but this would usually need a skilled individual such as a database administrator who might be unavailable, especially if multiple services were impacted at the same time. With Google Cloud's on-demand infrastructure you can build this capability without making a major cost tradeoff, and so this architecture uses Cloud SQL HA rather than a manual backup/restore for zonal outages.

Key architectural decisions for zone outage - RTO of 12hrs and RPO of 24hrs:

  • An internal load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone. Even though the RTO is 12 hours, manual changes to IP addresses or even DNS updates can take longer than expected.
  • A regional managed instance group is configured with multiple zones but minimal resources. This optimizes for cost but still allows for virtual machines to be quickly scaled out in the backup zone.
  • A high availability Cloud SQL configuration provides automatic failover to another zone. Databases are significantly harder to recreate and restore compared to the Compute Engine virtual machines (a minimal configuration sketch follows this list).
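
The following sketch shows one way to request the high availability (REGIONAL) configuration when creating a Cloud SQL instance through the Cloud SQL Admin API's Python client. The instance name, region, tier, and database version are illustrative, and the request body is trimmed to the fields relevant to zonal resilience.

```python
# Sketch: create a Cloud SQL instance with the high availability (REGIONAL)
# configuration through the Cloud SQL Admin API. Instance name, region, tier,
# and database version are illustrative.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "name": "tier3-orders-db",
    "region": "us-central1",
    "databaseVersion": "POSTGRES_13",
    "settings": {
        "tier": "db-custom-2-7680",
        "availabilityType": "REGIONAL",  # primary and standby in separate zones
        "backupConfiguration": {"enabled": True},
    },
}

response = sqladmin.instances().insert(project="my-project", body=body).execute()
print(response["name"])  # name of the long-running operation to poll
```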

Key architectural decisions for region outage - RTO of 28 Days and RPO of 24 hours:

  • A load balancer would be constructed in region 2 only in the event of a regional outage. Cloud DNS is used to provide an orchestrated but manual regional failover capability, since the infrastructure in region 2 would only be made available in the event of a region outage.
  • A new managed instance group would be constructed only in the event of a region outage. This optimizes for cost and is unlikely to be invoked given the short length of most regional outages. Note that for simplicity the diagram doesn't show the associated tooling needed to redeploy, or the copying of the Compute Engine images needed.
  • A new Cloud SQL instance would be recreated and the data restored from a backup. Again the risk of an extended outage to a region is extremely low so this is another cost optimization trade-off.
  • Multi-regional Cloud Storage is used to store these backups. This provides automatic zone and regional resilience within the RTO and RPO (a minimal bucket creation sketch follows this list).
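
For the backup bucket itself, a multi-region location can be requested at creation time. The following sketch uses the google-cloud-storage library; the bucket name is illustrative and must be globally unique.

```python
# Sketch: create a multi-region Cloud Storage bucket (US) for database backups
# so they survive both zone and region outages. The bucket name is illustrative
# and must be globally unique.
from google.cloud import storage

client = storage.Client()
bucket = storage.Bucket(client, name="example-co-tier3-db-backups")
bucket.storage_class = "STANDARD"
client.create_bucket(bucket, location="US")  # "US" is a multi-region location
```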

Example tier 2 architecture

Application criticality | Zone outage | Region outage
Tier 2 | RTO 4 hours, RPO zero | RTO 24 hours, RPO 4 hours

An example tier 2 architecture using Google Cloud products

This architecture describes a data warehouse with internal users connecting to a compute instance visualization layer, and a data ingest and transformation layer which populates the backend data warehouse.

Some individual components of this architecture do not directly support the RPO required for their tier; however, because of how they are used together, the overall service does meet the RPO. In this case, since Dataflow is a zonal product, recommendations for high availability design should be followed to prevent data loss during a zonal outage. However, the Cloud Storage layer is the golden source of this data and supports an RPO of zero. This means any lost data can be re-ingested into BigQuery via zone b in the event of an outage in zone a.
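
Re-ingesting lost data from the Cloud Storage golden source can be done with a standard BigQuery load job. The following sketch uses the google-cloud-bigquery library; the bucket URI, dataset, table, and format settings are illustrative.

```python
# Sketch: re-ingest data from the Cloud Storage golden source into BigQuery
# with a standard load job. URI, table, and format settings are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-co-staging/events/2024-01-01/*.csv",
    "my-project.analytics.events",
    job_config=job_config,
)
load_job.result()  # wait for the load to complete
```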

Key architectural decisions for zone outage - RTO of 4hrs and RPO of zero:

  • A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone. Even though the RTO is 4 hours, manual changes to IP addresses or even DNS updates can take longer than expected.
  • A regional managed instance group for the data visualization compute layer is configured with multiple zones but minimal resources. This optimizes for cost but still allows for virtual machines to be quickly scaled out.
  • Regional Cloud Storage is used as a staging layer for the initial ingest of data, providing automatic zone resilience.
  • Dataflow is used to extract data from Cloud Storage and transform it before loading it into BigQuery. In the event of a zone outage this is a stateless process that can be restarted in another zone.
  • BigQuery provides the data warehouse backend for the data visualization front end. In the event of a zone outage, any data lost would be re-ingested from Cloud Storage.

Key architectural decisions for region outage - RTO of 24hrs and RPO of 4 hours:

  • A load balancer in each region is used to provide a scalable access point for users. Cloud DNS is used to provide an orchestrated but manual regional failover capability, since the infrastructure in region 2 would only be made available in the event of a region outage (a minimal DNS failover sketch follows this list).
  • A regional managed instance group for the data visualization compute layer is configured with multiple zones but minimal resources. This isn't accessible until the load balancer is reconfigured but doesn't require manual intervention otherwise.
  • Regional Cloud Storage is used as a staging layer for the initial ingest of data. This is being loaded at the same time into both regions to meet the RPO requirements.
  • Dataflow is used to extract data from Cloud Storage and transform it before loading it into BigQuery. In the event of a region outage this would populate BigQuery with the latest data from Cloud Storage.
  • BigQuery provides the data warehouse backend. Under normal operations this would be intermittently refreshed. In the event of a region outage the latest data would be re-ingested via Dataflow from Cloud Storage.
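
The manual Cloud DNS failover step could be scripted along the following lines with the google-cloud-dns library, swapping the application's A record from the region 1 load balancer address to the region 2 address. The zone, record names, TTL, and IP addresses are illustrative.

```python
# Sketch: repoint the application's A record at the standby region's load
# balancer during a manual regional failover. Zone, record names, TTL, and
# IP addresses are illustrative.
from google.cloud import dns

client = dns.Client(project="my-project")
zone = client.zone("example-co-zone", "example.co.")

# The record set passed to delete must match the existing record exactly.
old_rrset = zone.resource_record_set("app.example.co.", "A", 300, ["203.0.113.10"])
new_rrset = zone.resource_record_set("app.example.co.", "A", 300, ["198.51.100.20"])

changes = zone.changes()
changes.delete_record_set(old_rrset)  # remove the region 1 address
changes.add_record_set(new_rrset)     # add the region 2 address
changes.create()                      # apply the change set
```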

Example tier 1 architecture

Application criticality | Zone outage | Region outage
Tier 1 (most important) | RTO zero, RPO zero | RTO 4 hours, RPO 1 hour

An example tier 1 architecture using Google Cloud products

This architecture describes a mobile app backend infrastructure with external users connecting to a set of microservices running in Google Kubernetes Engine. Cloud Spanner provides the backend data storage layer for real time data, and historical data is streamed to a BigQuery data lake in each region.

Again, some individual components of this architecture do not directly support the RPO required for their tier, but because of how they are used together the overall service does. In this case BigQuery is being used for analytic queries. Each region is fed simultaneously from Cloud Spanner.

Key architectural decisions for zone outage - RTO of zero and RPO of zero:

  • A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone.
  • A regional Google Kubernetes Engine cluster is used for the application layer which is configured with multiple zones. This accomplishes the RTO of zero within each region.
  • Multi-region Cloud Spanner is used as a data persistence layer, providing automatic zone data resilience and transaction consistency.
  • BigQuery provides the analytics capability for the application. Each region is independently fed data from Cloud Spanner, and independently accessed by the application.

Key architectural decisions for region outage - RTO of 4 hrs and RPO of 1 hour:

  • A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another region.
  • A regional Google Kubernetes Engine cluster is used for the application layer which is configured with multiple zones. In the event of a region outage, the cluster in the alternate region automatically scales to take on the additional processing load.
  • Multi-region Cloud Spanner is used as a data persistence layer, providing automatic regional data resilience and transaction consistency. This is the key component in achieving an effective RPO of zero for transactional data across regions (a minimal instance creation sketch follows this list).
  • BigQuery provides the analytics capability for the application. Each region is independently fed data from Cloud Spanner, and independently accessed by the application. This compensates for BigQuery not meeting the region outage requirement on its own, allowing the overall application to meet its requirements.
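
Provisioning the multi-region Cloud Spanner instance that anchors this design might look like the following sketch, which uses the google-cloud-spanner library and the nam3 multi-region configuration. The instance ID, display name, and node count are illustrative.

```python
# Sketch: create a multi-region Cloud Spanner instance using the nam3
# (North America) multi-region configuration. Instance ID, display name,
# and node count are illustrative.
from google.cloud import spanner

client = spanner.Client(project="my-project")

instance = client.instance(
    "payments-instance",
    configuration_name="projects/my-project/instanceConfigs/nam3",
    display_name="Payments (multi-region)",
    node_count=3,
)

operation = instance.create()
operation.result(timeout=600)  # wait for instance creation to finish
```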

Appendix: Product reference

This section describes the architecture and DR capabilities of Google Cloud products that are most commonly used in customer applications and that can be easily leveraged to achieve your DR requirements.

Common themes

Many Google Cloud products offer regional or multi-regional configurations. Regional products are resilient to zone outages, and multi-region and global products are resilient to region outages. In general, this means that during an outage, your application experiences minimal disruption. Google achieves these outcomes through a few common architectural approaches, which mirror the architectural guidance above.

  • Redundant deployment: The application backends and data storage are deployed across multiple zones within a region and multiple regions within a multi-region location.
  • Data replication: Products use either synchronous or asynchronous replication across the redundant locations.

    • Synchronous replication means that when your application makes an API call to create or modify data stored by the product, it receives a successful response only once the product has written the data to multiple locations. Synchronous replication ensures that you do not lose access to any of your data during a Google Cloud infrastructure outage because all of your data is available in one of the available backend locations.

      Although this technique provides maximum data protection, it can have tradeoffs in terms of latency and performance. Multi-region products using synchronous replication experience this tradeoff most significantly -- typically on the order of 10s or 100s of milliseconds of added latency.

    • Asynchronous replication means that when your application makes an API call to create or modify data stored by the product, it receives a successful response once the product has written the data to a single location. Subsequent to your write request, the product replicates your data to additional locations.

      This technique provides lower latency and higher throughput at the API than synchronous replication, but at the expense of data protection. If the location in which you have written data suffers an outage before replication is complete, you lose access to that data until the location outage is resolved.

  • Handling outages with load balancing: Google Cloud uses software load balancing to route requests to the appropriate application backends. Compared to other approaches like DNS load balancing, this approach reduces the system response time to an outage. When a Google Cloud location outage occurs, the load balancer quickly detects that the backend deployed in that location has become "unhealthy" and directs all requests to a backend in an alternate location. This enables the product to continue serving your application's requests during a location outage. When the location outage is resolved, the load balancer detects the availability of the product backends in that location, and resumes sending traffic there.

Compute Engine

Compute Engine is Google Cloud's infrastructure-as-a-service offering. It uses Google's worldwide infrastructure to provide virtual machines (and related services) to customers.

Compute Engine instances are zonal resources, so in the event of a zone outage instances are unavailable by default. Compute Engine does offer managed instance groups (MIGs) which can automatically scale up additional VMs from pre-configured instance templates, both within a single zone and across multiple zones within a region. MIGs are ideal for applications that require resilience to zone loss and are stateless, but require configuration and resource planning. Multiple regional MIGs can be used to achieve region outage resilience for stateless applications.

Applications that have stateful workloads can still use stateful MIGs (beta), but extra care needs to be taken in capacity planning since they do not scale horizontally. In either scenario, it's important to correctly configure and test Compute Engine instance templates and MIGs ahead of time to ensure working failover capabilities to other zones. See the Architecture Patterns section above for more information.


Dataproc

Dataproc provides streaming and batch data processing capabilities. Dataproc is architected as a regional control plane that enables users to manage Dataproc clusters. The control plane does not depend on an individual zone in a given region. Therefore, during a zonal outage, you retain access to the Dataproc APIs, including the ability to create new clusters.

Clusters are run in Compute Engine. Because the cluster is a zonal resource, a zonal outage makes the cluster unavailable, or destroys the cluster. Dataproc does not automatically snapshot cluster status, so a zone outage could cause loss of data being processed. Dataproc does not persist user data within the service. Users can configure their pipelines to write results to many data stores; you should consider the architecture of the data store and choose a product that offers the required disaster resilience.

If a zone suffers an outage, you may choose to create a new cluster in another zone, either by selecting a different zone yourself or by using the Auto Placement feature in Dataproc to automatically select an available zone. Once the cluster is available, data processing can resume. You can also run a cluster with High Availability mode enabled, reducing the likelihood that a partial zone outage will impact a master node and, therefore, the whole cluster.


Dataflow

Dataflow is Google Cloud's fully managed and serverless data processing service for streaming and batch pipelines. Dataflow jobs are zonal in nature, and in the default configuration they do not preserve intermediate calculation results during a zonal outage. A typical recovery approach for such default Dataflow pipelines is to restart the job in a different zone or region and reprocess the input data.

Architecting Dataflow pipelines for high availability

In case of a zone or region outage, you can avoid data loss by reusing the same subscription to the Pub/Sub topic. As part of the exactly-once guarantee of Dataflow, Dataflow only acks the messages in Pub/Sub if they were persisted in the destination, or if a message went through a grouping/time-windowing operation and was saved in Dataflow's durable pipeline state. If there are no grouping/time-windowing operations, a failover to another Dataflow job in another zone or region by reusing the subscription leads to no data loss in pipeline output data.

If the pipeline is using grouping or time-windowing, you can use the Seek functionality of Pub/Sub or Replay functionality of Kafka after a zonal or regional outage to reprocess data elements to arrive at the same calculation results. The data loss of pipeline outputs can be minimized all the way down to 0 elements, if the business logic used by the pipeline does not rely on data before the outage. If the pipeline business logic does rely on data that was processed before the outage (for example, if long sliding windows are used, or if a global time window is storing ever-increasing counters), Dataflow offers a snapshotting feature (currently in Preview) that provides a snapshot backup of a pipeline's state.
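
To illustrate the subscription-reuse approach, the following Apache Beam (Python) sketch reads from an existing Pub/Sub subscription, so a replacement Dataflow job launched in another zone or region with the same subscription resumes from unacknowledged messages. The project, region, subscription, bucket, and table names are illustrative, and the transform logic is deliberately trivial.

```python
# Sketch: a streaming Beam pipeline that reads from an existing Pub/Sub
# subscription. A replacement Dataflow job started in another zone or region
# with the same subscription resumes from unacknowledged messages.
# All names (project, region, subscription, bucket, table) are illustrative.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="my-project",
    region="us-east1",  # launch the replacement job in the alternate region
    temp_location="gs://example-co-temp/dataflow",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadOrders" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/orders-sub"
        )
        | "ToRow" >> beam.Map(lambda msg: {"payload": msg.decode("utf-8")})
        | "WriteRows" >> beam.io.WriteToBigQuery(
            "my-project:analytics.orders",
            schema="payload:STRING",
        )
    )
```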


BigQuery

BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse designed for business agility. BigQuery supports two different availability-related configuration options for user datasets.

Single region configuration

In a single region configuration, data is stored redundantly in two zones within a single region. Data written to BigQuery is first written to the primary zone and then asynchronously replicated to a secondary zone. This protects against unavailability of a single zone within the region. Data that was written to the primary zone but that hasn't been replicated to the secondary zone at the time of a zone outage is unavailable until the outage is resolved. In the unlikely case of a zone being destroyed, that data may be permanently lost.

Multi-region (US / EU) configuration

Similar to the single region configuration, in the US / EU multi-region configuration, data is stored redundantly in two zones within one region. In addition, BigQuery keeps an additional backup copy of the data in a second region. If the primary region experiences an outage, data is served from the secondary region. Data that hasn't been replicated is unavailable until the primary region is restored.
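
The dataset location is chosen at creation time and cannot be changed afterwards. The following sketch creates a dataset in the US multi-region using the google-cloud-bigquery library; the project and dataset IDs are illustrative.

```python
# Sketch: create a BigQuery dataset in the US multi-region location. The
# location is fixed at creation time; project and dataset IDs are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
dataset = bigquery.Dataset("my-project.dr_analytics")
dataset.location = "US"  # use a single region such as "us-central1" if preferred
client.create_dataset(dataset, exists_ok=True)
```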


Google Kubernetes Engine

Google Kubernetes Engine (GKE) offers a managed Kubernetes service that streamlines the deployment of containerized applications on Google Cloud. You can choose between regional or zonal cluster topologies.

  • When creating a zonal cluster, GKE provisions one control plane machine in the chosen zone, as well as worker machines (nodes) within the same zone.
  • For regional clusters, GKE provisions three control plane machines in three different zones within the chosen region. By default, nodes are also spread across three zones, though you can choose to create a regional cluster with nodes provisioned in only one zone.
  • Multi-zonal clusters are similar to zonal clusters in that they include one control plane machine, but they additionally offer the ability to span nodes across multiple zones.

Zonal Outage: To avoid zonal outages, use regional clusters. The control plane and the nodes are distributed across three different zones within a region. A zone outage does not impact control plane and worker nodes deployed in the other two zones.
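
Creating a regional cluster might look like the following sketch, which uses the google-cloud-container library and passes a region (rather than a zone) as the cluster location. The project, region, cluster name, and sizing are illustrative.

```python
# Sketch: create a regional GKE cluster by passing a region (not a zone) as the
# cluster location, so the control plane and nodes span three zones.
# Project, region, cluster name, and sizing are illustrative.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
parent = "projects/my-project/locations/us-central1"  # region => regional cluster

cluster = container_v1.Cluster(
    name="app-regional-cluster",
    initial_node_count=1,  # per zone, so three nodes in total by default
)

operation = client.create_cluster(parent=parent, cluster=cluster)
print(operation.name)  # poll this operation until it completes
```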

Regional Outage: Mitigation of a regional outage requires deployment across multiple regions. Although currently not being offered as a built-in product capability, multi-region topology is an approach taken by several GKE customers today, and can be manually implemented. You can create multiple regional clusters to replicate your workloads across multiple regions, and control the traffic to these clusters using multi-cluster ingress.


Cloud Key Management Service

Cloud Key Management Service (Cloud KMS) provides scalable and highly-durable cryptographic key resource management. Cloud KMS stores all of its data and metadata in Cloud Spanner databases which provide high data durability and availability with synchronous replication.

Cloud KMS resources can be created in a single region, multiple regions, or globally.

In the case of zonal outage, Cloud KMS continues to serve requests from another zone in the same or different region without interruption. Because data is replicated synchronously, there is no data loss or corruption. When the zone outage is resolved, full redundancy is restored.

In the case of a regional outage, regional resources in that region are offline until the region becomes available again. Note that even within a region, at least 3 replicas are maintained in separate zones. When higher availability is required, resources should be stored in a multi-region or global configuration. Multi-region and global configurations are designed to stay available through a regional outage by geo-redundantly storing and serving data in more than one region.
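
As an example of choosing a multi-region location for key material, the following sketch creates a key ring in the "us" multi-region using the google-cloud-kms library; the project and key ring ID are illustrative.

```python
# Sketch: create a Cloud KMS key ring in the "us" multi-region location so key
# material remains served during a single-region outage. Project and key ring
# ID are illustrative.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = "projects/my-project/locations/us"  # a multi-region; "global" also works

key_ring = client.create_key_ring(
    request={"parent": parent, "key_ring_id": "dr-keyring", "key_ring": {}}
)
print(key_ring.name)
```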


Cloud Identity

Cloud Identity services are distributed across multiple regions and use dynamic load balancing. Cloud Identity does not allow users to select a resource scope. If a particular zone or region experiences an outage, traffic is automatically distributed to other zones or regions.

Persistent data is mirrored in multiple regions with synchronous replication in most cases. For performance reasons, a few systems, such as caches or changes affecting large numbers of entities, are asynchronously replicated across regions. If the primary region in which the most current data is stored experiences an outage, Cloud Identity serves stale data from another location until the primary region becomes available.


Persistent Disk

Persistent Disks are available in zonal and regional configurations.

Zonal Persistent Disks are hosted in a single zone. If the disk's zone is unavailable, the Persistent Disk is unavailable until the zone outage is resolved.

Regional Persistent Disks provide synchronous replication of data between two zones in a region. In the event of an outage in your virtual machine's zone, you can force attach a regional Persistent Disk to a VM instance in the disk's secondary zone. To perform this task, you must either start another VM instance in that zone or maintain a hot standby VM instance in that zone.
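
Creating a regional Persistent Disk amounts to specifying two replica zones at creation time. The following sketch uses the google-cloud-compute client library; names, sizes, and zones are illustrative.

```python
# Sketch: create a regional Persistent Disk replicated synchronously across two
# zones. Project, region, zones, name, and size are illustrative.
from google.cloud import compute_v1

project = "my-project"
region = "us-central1"
zone_url = "https://www.googleapis.com/compute/v1/projects/{}/zones/{}"

disk = compute_v1.Disk(
    name="app-data-regional",
    size_gb=200,
    replica_zones=[
        zone_url.format(project, f"{region}-a"),
        zone_url.format(project, f"{region}-b"),
    ],
)

client = compute_v1.RegionDisksClient()
operation = client.insert(project=project, region=region, disk_resource=disk)
operation.result()  # wait for the long-running operation to finish
```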


Cloud Storage

Cloud Storage provides globally unified, scalable, and highly durable object storage. Cloud Storage buckets can be created in a single region, dual regions, or multi-regions within a continent.

If a zone experiences an outage, data in the unavailable zone is automatically and transparently served from elsewhere in the region. Data and metadata are stored redundantly across zones, starting with the initial write. No writes are lost upon a zone becoming unavailable.

In the case of a regional outage, regional buckets in that region are offline until the region becomes available again. When higher availability is required, you should consider storing data in a dual-region or multi-region configuration. Dual-regions and multi-regions are designed to stay available through a regional outage by geo-redundantly storing data in more than one region using asynchronous replication.

Cloud Storage uses Cloud Load Balancing to serve dual- and multi-region buckets from multiple regions. In the case of a regional outage, serving is not interrupted. Because geo-redundancy is achieved asynchronously, some recently written data may not be accessible during the outage and could be lost in the case of physical destruction of the data in the affected region.


Container Registry

Container Registry provides a scalable hosted Docker Registry implementation that securely and privately stores Docker container images. Container Registry implements the HTTP Docker Registry API.

Container Registry is a global service that synchronously stores image metadata redundantly across multiple zones and regions by default. Container images are stored in Cloud Storage multi-regional buckets. With this storage strategy, Container Registry provides zonal outage resilience in all cases, and regional outage resilience for any data that has been asynchronously replicated to multiple regions by Cloud Storage.


Pub/Sub

Pub/Sub is a messaging service for application integration and stream analytics. Pub/Sub topics are global, meaning that they are visible and accessible from any Google Cloud location. However, any given message is stored in a single Google Cloud region, closest to the publisher and allowed by the resource location policy. Thus, a topic may have messages stored in different regions throughout Google Cloud. The Pub/Sub message storage policy can restrict the regions in which messages are stored.

Zonal outage: When a Pub/Sub message is published, it is synchronously written to storage in at least two zones within the region. Therefore, if a single zone becomes unavailable, there is no customer-visible impact.

Regional outage: During a region outage, messages stored within the affected region are inaccessible. Administrative operations, such as creation and deletion of topics and subscriptions, are multi-regional and resilient to an outage of any single Google Cloud region. Publishing operations are also resilient to region outages, provided that at least one region allowed by the message storage policy is available and your application uses the global endpoint (pubsub.googleapis.com) or multiple regional endpoints. By default, Pub/Sub does not restrict message storage location.
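
If you do restrict message storage, allowing more than one region preserves the ability to publish during a single-region outage. The following sketch creates a topic whose message storage policy allows two regions, using the google-cloud-pubsub library; the project, topic, and regions are illustrative.

```python
# Sketch: create a Pub/Sub topic whose message storage policy allows two
# regions, so publishing can continue if one of them is unavailable.
# Project, topic, and regions are illustrative.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders")

publisher.create_topic(
    request={
        "name": topic_path,
        "message_storage_policy": {
            "allowed_persistence_regions": ["us-central1", "us-east1"],
        },
    }
)
```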

If your application relies on message ordering, review the detailed recommendations from the Pub/Sub team. Message ordering guarantees are provided on a per-region basis, and can become disrupted if you use a global endpoint.


Cloud Composer

Cloud Composer is Google Cloud's managed Apache Airflow implementation. Cloud Composer is architected as a regional control plane that enables users to manage Cloud Composer clusters. The control plane does not depend on an individual zone in a given region. Therefore, during a zonal outage, you retain access to the Cloud Composer APIs, including the ability to create new clusters.

Clusters are run in Google Kubernetes Engine. Because the cluster is a zonal resource, a zonal outage makes the cluster unavailable, or destroys the cluster. Workflows that are executing at the time of the outage may be stopped before completion. This means that the outage causes loss of state for partially executed workflows, including any external actions that the user configured the workflow to perform. Depending on the workflow, this could cause inconsistencies externally, such as if the workflow stops in the middle of a multi-step execution to modify external data stores. Therefore, you should consider the recovery process when designing your Airflow workflow, including how to detect partially executed workflow states, repair any partial data changes, and so on.

During a zone outage, you may choose to use Cloud Composer to start a new instance of the cluster in another zone. Once the cluster is available, the workflow can resume. Depending on your preferences, you may wish to run an idle replica cluster in another zone and switch to that replica in the event of a zonal outage.


Cloud SQL

Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. Cloud SQL uses managed Compute Engine virtual machines to run the database software. It offers a high availability configuration for regional redundancy, protecting the database from a zone outage. Cross-region replicas can be provisioned to protect the database from a region outage. Because the product also offers a zonal option, which is not resilient to a zone or region outage, you should be careful to select the high availability configuration, cross-region replicas, or both.

Zone outage: The high availability option creates a primary and standby VM instance in two separate zones within one region. During normal operation, the primary VM instance serves all requests, writing database files to a Regional Persistent Disk, which is synchronously replicated to the primary and standby zones. If a zone outage affects the primary instance, Cloud SQL initiates a failover during which the Persistent Disk is attached to the standby VM and traffic is rerouted.

During this process, the database must be initialized, which includes processing any transactions written to the transaction log but not yet applied to the database. The number and type of unprocessed transactions can increase the recovery time; a high volume of recent writes leads to a backlog of unprocessed transactions. The recovery time is most heavily impacted by (a) high recent write activity and (b) recent changes to database schemas.

Finally, when the zonal outage has been resolved, you can manually trigger a failback operation to resume serving in the primary zone.

For more details on the high availability option, please see the Cloud SQL high availability documentation.

Region outage: The cross-region replica option protects your database from regional outages by creating read replicas of your primary instance in other regions. The cross-region replication uses asynchronous replication, which allows the primary instance to commit transactions before they are committed on replicas. The time difference between when a transaction is committed on the primary instance and when it is committed on the replica is known as "replication lag" (which can be monitored). This metric reflects both transactions which have not been sent from the primary to replicas, as well as transactions that have been received but have not been processed by the replica. Transactions not sent to the replica would become unavailable during a regional outage. Transactions received but not processed by the replica impact the recovery time, as described below.

Cloud SQL recommends testing your workload with a configuration that includes a replica to establish a "safe transactions per second (TPS)" limit, which is the highest sustained TPS that doesn't accumulate replication lag. If your workload exceeds the safe TPS limit, replication lag accumulates, negatively affecting RPO and RTO values. As general guidance, avoid using small instance configurations (<2 vCPU cores, <100GB disks, or PD-HDD), which are susceptible to replication lag.

In the event of a regional outage, you must decide whether to manually promote a read replica. This is a manual operation because promotion can cause a split brain scenario in which the promoted replica accepts new transactions despite having lagged the primary instance at the time of the promotion. This can cause problems when the regional outage is resolved and you must reconcile the transactions that were never propagated from the primary to replica instances. If this is problematic for your needs, you may consider a cross-region synchronous replication database product like Cloud Spanner.

Once triggered by the user, the promotion process follows steps similar to the activation of a standby instance in the high availability configuration: The read replica must process the transaction log and the load balancer must redirect traffic. Processing the transaction log drives the total recovery time.

For more details on the cross-region replica option, please see the Cloud SQL cross-region replica documentation.


Cloud Logging

Cloud Logging consists of two main parts: the Logs Router and Cloud Logging storage.

The Logs Router handles streaming log events and directs the logs to Cloud Storage, Pub/Sub, BigQuery, or Cloud Logging storage.

Cloud Logging storage is a service for storing, querying, and managing compliance for logs. It supports many users and workflows including development, compliance, troubleshooting, and proactive alerting.

Logs Router & incoming logs: During a zonal outage, the Cloud Logging API routes logs to other zones in the region. Normally, logs being routed by the Logs Router to Cloud Logging, BigQuery, or Pub/Sub are written to their end destination as soon as possible, while logs sent to Cloud Storage are buffered and written in batches hourly.

Log Entries: In the event of a zonal or regional outage, log entries that have been buffered in the affected zone or region and not written to the export destination become inaccessible. Logs-based metrics are also calculated in the Logs Router and subject to the same constraints. Once delivered to the selected log export location, logs are replicated according to the destination service. Logs that are exported to Cloud Logging storage are synchronously replicated across two zones in a region. For the replication behavior of other destination types, please see the relevant section in this article.

Log Metadata: Metadata such as sink and exclusion configuration is stored globally but cached regionally, so in the event of an outage, the regional Logs Router instances continue to operate. Single-region outages have no impact outside of the region.