Hidden gems of Google BigQuery
Chief Software Architect and Champion Innovator
BigQuery is amazing. It is one of my favorite tools in Google Cloud. Luckily, it looks like Google feels the same way and, to the joy of BigQuery fans, keeps adding new features to it.
No matter how long I have been working with BigQuery, there is always something new I discover once in a while. Today I want to share with you the following four things:
the AUTO column,
multi-statement transactions,
partitioning and clustering,
search indexes.
These are great features that, in my opinion, receive too little attention in documentation and talks.
The AUTO column
This one is truly a “hidden gem”. You won’t find it in the official documentation (at least I didn’t manage to); it is only mentioned in the Google Issue Tracker.
Let’s say you push some data into BigQuery, and then another system wants to run a scheduled job to process the newly arrived data. For example, a system can try to pull data from BigQuery into another storage system, or it needs to run hourly reports based on the data, etc. In each of those cases, you would prefer to avoid processing the same records multiple times. As a result, you need a way to know which records have already been processed and which arrived after the processing took place.
Unlike traditional OLTP databases, BigQuery does not support auto-increment columns or columns that are automatically populated with the current date-time. This is true unless you are using the Streaming API together with the AUTO feature! In that case, you can add a TIMESTAMP column to the table schema and assign the value “AUTO” to it in your JSON payload. BigQuery then auto-populates that column with the timestamp of when the record reached BigQuery.
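As a sketch, a Streaming API (tabledata.insertAll) request body using this feature might look like the following. The column name ingest_ts and the other field names are hypothetical; ingest_ts is assumed to be declared as a TIMESTAMP column in the table schema:

```json
{
  "rows": [
    {
      "insertId": "row-1",
      "json": {
        "user_id": "u-123",
        "action": "click",
        "ingest_ts": "AUTO"
      }
    }
  ]
}
```

On insert, BigQuery replaces the literal string "AUTO" with the record’s arrival timestamp. Since the feature is undocumented, treat the exact behavior as subject to change.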
Multi-statement transactions
While not being an OLTP database, BigQuery supports multi-statement transactions!
Technically speaking, at the time of writing, multi-statement transactions are in preview, but I hope this feature will become GA soon.
There is probably no need to explain what transactions are and how they can be used. More details on multi-statement transactions can be found in the official documentation.
There are however a couple of interesting things you need to pay attention to:
If a transaction mutates (updates or deletes) rows in a table, then other transactions or DML statements that mutate rows in the same table cannot run concurrently. Conflicting transactions are canceled.
Transactions cannot use DDL statements that affect permanent entities.
You cannot use the FOR SYSTEM_TIME AS OF clause to read a table beyond the timestamp of the transaction start time. Doing so returns an error.
Within a transaction, materialized views are interpreted as logical views. You can still query a materialized view inside a transaction, but it doesn’t result in any performance improvement or cost reduction compared with the equivalent logical view.
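As a minimal sketch of how this looks in practice, here is a transaction that atomically moves newly arrived rows from a staging table into a main table. The dataset and table names (mydataset.events, mydataset.events_staging) are hypothetical:

```sql
BEGIN TRANSACTION;

-- Copy newly arrived rows into the main table.
INSERT INTO mydataset.events
SELECT * FROM mydataset.events_staging;

-- Clear the staging table in the same transaction, so the two
-- statements either both succeed or both roll back.
DELETE FROM mydataset.events_staging WHERE TRUE;

COMMIT TRANSACTION;
```

If anything fails between BEGIN and COMMIT, the whole unit is rolled back, so no record is lost or processed twice.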
Partitioning and clustering
When it comes to query optimization in BigQuery (which, in the case of BigQuery, is mostly about cost optimization), the first approach that comes to mind is table partitioning.
You may have also heard about table sharding, so let’s say a few words about both. You can find information about table partitioning in the official documentation. As for table sharding, you can find it… nowhere, except for the “Partitioning versus sharding” section of the partitioning documentation. This is a hint that sharding is a legacy feature and that, by default, partitioning should be used.
Partitioning, however, is not the only option available. There is also clustering. While partitioning divides data into partitions (allowing the query to read only specific partitions when needed), clustering works by colocating related data and allowing queries to access particular segments of the data if a filter for clustering columns is used.
So, which one is better — partitioning or clustering?
First, there is the official documentation on this topic that gives you a lot of useful recommendations.
Second, please note that partitioning is only possible on time-unit and integer columns, as well as on ingestion time.
Clustering, by contrast, supports many other types, such as BOOL, GEOGRAPHY, and STRING. Additionally, clustering supports groups of up to four columns.
Finally, why not use both? You can apply both partitioning and clustering to the same table either to the same field or different fields.
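For illustration, a table that combines both techniques could be defined like this; all names below are hypothetical:

```sql
-- Hypothetical table: one partition per day, with rows inside each
-- partition colocated by customer_id and then country.
CREATE TABLE mydataset.orders (
  event_ts    TIMESTAMP,
  customer_id STRING,
  country     STRING,
  amount      NUMERIC
)
PARTITION BY DATE(event_ts)
CLUSTER BY customer_id, country;
```

A query that filters on DATE(event_ts) prunes whole partitions; adding a filter on customer_id further reduces the data scanned within each surviving partition.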
It is important to note that clustering is not the same as indexes (we will discuss the latter in detail later). Clustering on a specific field does not give you point lookups. It just means that related records are stored together, and you are only charged for querying the relevant clusters.
How big are those clusters? Documentation does not specify it. If you use both partitioning and clustering based on the same field, then, of course, each cluster will not exceed the respective partition. However, if your partitions are small, then, most likely, each partition will only consist of one cluster. As a result, you won’t see any benefits from clustering.
What is the threshold for small/big partitions? Based on the documentation, it looks like this is around 1 GB. If your whole table (or each partition) is smaller than that, then, probably, clustering won’t bring any benefits.
Search indexes
This is another amazing feature that is currently in preview. It is described in the official documentation.
As you may guess from the name, it is designed for point lookups, though not over arbitrary fields. Currently, indexes can be used to easily find unique data elements buried in unstructured text or semi-structured JSON data.
Indexes are only used when the SEARCH function is executed.
You can create an index over a specified list of columns. You can also specify ALL COLUMNS, which results in indexing all STRING and JSON data in the table.
Regardless of how your index was created, the SEARCH function provides a lot of flexibility that allows for selecting (or excluding) specific fields for participation in each specific query.
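As a short sketch, creating and querying such an index could look like this; the table mydataset.logs, the index name, and the search term are all hypothetical:

```sql
-- Index every STRING and JSON column of the table.
CREATE SEARCH INDEX my_index
ON mydataset.logs (ALL COLUMNS);

-- Find rows where any indexed column contains the given token.
SELECT *
FROM mydataset.logs AS t
WHERE SEARCH(t, 'error-42');

-- Or restrict the lookup to a single column.
SELECT *
FROM mydataset.logs AS t
WHERE SEARCH(t.payload, 'error-42');
```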
As with multi-statement transactions, there is not much to say about indexes. The moment you find out about their existence in BigQuery and understand their limitations, you will immediately know how to use them within your specific scenario.
Please let me know about your favorite BigQuery features that you think have lacked attention so far. Twitter: @ArtemNikulchen1