
BigQuery under the hood: How zone assignments work

February 12, 2020
Purujit Saha

Staff Software Engineer, BigQuery Storage

BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. It’s designed to be flexible and easy to use. Many interesting features and design decisions went into creating BigQuery; in this post, we’ll dive into how zone assignments work. Like other Google Cloud services, BigQuery takes advantage of our global cloud regions to make sure your data is available when you need it.

One key aspect of BigQuery's architecture is that it is multi-tenant; it runs workloads from different customers on compute and storage infrastructure that does not require any customer involvement in capacity planning. When loading data into BigQuery, customers choose a region to load the data and, optionally, purchase a compute reservation. Then, the service takes care of provisioning.

Zone assignments keep your data flowing

Each BigQuery region is internally deployed in multiple availability zones. Customer data is replicated between these zones, and there is fast automatic failover to the secondary zone if the primary zone is experiencing issues. The failover is designed to be transparent to customers and have no downtime. We’re always expanding our capacity footprint to support customer growth and onboard new customers. To make sure that each customer gets to think of storage as infinite, and gets sufficient compute resources to load and analyze their data, we continuously recompute the best placement for the primary and secondary zones in the region.

To compute the best primary and secondary zone assignments, the algorithm takes into account each customer's storage and compute usage and the available capacity in each zone. It then checks whether that usage still fits in the currently assigned zones. If it doesn't, the algorithm finds another suitable zone for that customer and orchestrates a move from the current zone to the new one. All of this happens in the background, without disrupting your workloads.
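To make the fit-check-and-reassign idea concrete, here is a minimal sketch in Python. All names, capacity units, and the "pick the zone with the most free storage" tiebreak are illustrative assumptions; BigQuery's real placement algorithm is internal and far more sophisticated.

```python
from dataclasses import dataclass


@dataclass
class Zone:
    name: str
    free_storage_tb: float
    free_compute_slots: int


@dataclass
class CustomerUsage:
    storage_tb: float
    compute_slots: int


def fits(zone: Zone, usage: CustomerUsage) -> bool:
    """Check whether a customer's projected usage fits in a zone."""
    return (zone.free_storage_tb >= usage.storage_tb
            and zone.free_compute_slots >= usage.compute_slots)


def reassign_if_needed(current: Zone, candidates: list[Zone],
                       usage: CustomerUsage) -> Zone:
    """Keep the current zone if usage still fits; otherwise pick a
    candidate zone with enough capacity (here: the one with the most
    free storage, an arbitrary illustrative tiebreak)."""
    if fits(current, usage):
        return current
    viable = [z for z in candidates if fits(z, usage)]
    if not viable:
        raise RuntimeError("no zone has sufficient capacity")
    return max(viable, key=lambda z: z.free_storage_tb)
```

The key property mirrored from the post: a customer stays put as long as their usage fits, and a move is only orchestrated when the current zone can no longer accommodate projected growth.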

Any datasets that share the same region can be joined together in a single query. To ensure good query performance, we attempt to colocate compute and storage so that I/O stays within a single zone and can take advantage of high-throughput intra-zone networking. I/O bandwidth within a zone is very high (Google's Jupiter network fabric can sustain more than 1 petabit/second of total bisection bandwidth), but network capacity between zones is much more constrained.

Our assignment algorithm therefore makes sure that Google Cloud projects within the same Google Cloud organization are assigned to the same subset of zones in every region. To support very large organizations, we compute project cliques based on cross-project query patterns within the organization; this breaks an organization into more manageable chunks that can be placed separately.

To handle cross-organization reads, the algorithm also looks at past query patterns to discover relationships between organizations, and makes an effort to keep at least one common zone between related organizations. The query engine can also handle small amounts of data that aren't colocated, either by reading them remotely or by copying them to the compute zone before running the query. In rare cases, when the algorithm cannot ensure colocation or a new cross-org query pattern appears, queries that read large amounts of data may fail.
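The "project cliques" step above can be sketched as grouping projects into connected components over a graph whose edges are observed cross-project queries. The snippet below is an illustrative union-find implementation under that assumption; the actual signals, thresholds, and grouping logic BigQuery uses are internal.

```python
from collections import defaultdict


def project_cliques(projects, query_edges):
    """Group projects by cross-project query patterns using union-find.

    projects: iterable of project IDs.
    query_edges: iterable of (project_a, project_b) pairs, one per
    observed cross-project query relationship.

    Returns a list of sets, one per group of projects that query each
    other (directly or transitively) and so should be placed together.
    """
    parent = {p: p for p in projects}

    def find(p):
        # Walk up to the root, compressing the path as we go.
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in query_edges:
        union(a, b)

    groups = defaultdict(set)
    for p in projects:
        groups[find(p)].add(p)
    return list(groups.values())
```

Each resulting group can then be placed in zones independently of the others, which is what makes very large organizations manageable for the placement algorithm.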

Best practices for organizing and moving your data

  • To get the best performance for workloads that read/write data in datasets belonging to different projects, ensure that the projects are in the same Google Cloud org.

  • If you want to make your data available to other BigQuery users in your Google Cloud organization, you can use IAM permissions to grant access. 

  • You can move data across regions using the dataset copy feature.

Learn more about regions and zones in Google Cloud services, and find more details on how Compute Engine handles zones.
