Optimize query computation

When evaluating the computation that a query requires, consider how much work it performs. How much CPU time is required? Are you using functions, such as JavaScript user-defined functions, that require additional CPU resources?

The following best practices provide guidance on controlling query computation.

Avoid repeatedly transforming data through SQL queries

Best practice: If you are using SQL to perform ETL operations, avoid situations where you are repeatedly transforming the same data.

For example, if you are using SQL to trim strings or extract data by using regular expressions, it is more performant to materialize the transformed results in a destination table. Functions like regular expressions require additional computation. Querying the destination table without the added transformation overhead is much more efficient.
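
For example, the following sketch materializes a regular expression extraction once into a destination table so that downstream queries don't repeat the transformation. The dataset, table, and column names are hypothetical placeholders:

CREATE OR REPLACE TABLE mydataset.clean_logs AS
SELECT
  log_id,
  REGEXP_EXTRACT(message, r'user=(\w+)') AS user_name,
  TRIM(message) AS trimmed_message
FROM mydataset.raw_logs;

-- Downstream queries read the destination table without re-running the transformation.
SELECT user_name, COUNT(*) AS events
FROM mydataset.clean_logs
GROUP BY user_name;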

Optimize your join patterns

Best practice: For queries that join data from multiple tables, optimize your join patterns. Start with the largest table.

When you create a query by using a JOIN, consider the order in which you are merging the data. The GoogleSQL query optimizer can determine which table should be on which side of the join, but it is still recommended to order your joined tables appropriately. As a best practice, place the table with the largest number of rows first, followed by the table with the fewest rows, and then place the remaining tables by decreasing size.

When you have a large table as the left side of the JOIN and a small one on the right side of the JOIN, a broadcast join is created. A broadcast join sends all the data in the smaller table to each slot that processes the larger table. It is advisable to perform the broadcast join first.
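
For example, assuming three hypothetical tables ordered by row count (big_table largest, small_table smallest, and medium_table in between), the recommended ordering places the largest table first and the smallest table second:

SELECT b.join_key, s.label, m.category
FROM mydataset.big_table AS b
JOIN mydataset.small_table AS s
  ON b.join_key = s.join_key
JOIN mydataset.medium_table AS m
  ON b.join_key = m.join_key;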

To view the size of the tables in your JOIN, see Getting information about tables.

Use INT64 data types in joins to reduce cost and improve comparison performance

Best practice: If your use case supports it, use INT64 data types in joins instead of STRING data types.

BigQuery does not index primary keys the way traditional databases do, so the wider the join column is, the longer the comparison takes. Therefore, using INT64 data types in joins is cheaper and more efficient than using STRING data types.
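
For example, the following sketch joins two hypothetical tables on an INT64 surrogate key (customer_id) rather than on a STRING column such as an email address:

SELECT o.order_id, c.customer_name
FROM mydataset.orders AS o
JOIN mydataset.customers AS c
  ON o.customer_id = c.customer_id;  -- INT64 comparison instead of STRING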

Prune partitioned queries

Best practice: When querying a partitioned table, use the following columns to filter on partitions:

  • For ingestion-time partitioned tables, use the _PARTITIONTIME pseudocolumn.
  • For time-unit column-partitioned and integer-range partitioned tables, use the partitioning column.

For time-unit partitioned tables, filtering the data with the _PARTITIONTIME pseudocolumn or the partitioning column lets you specify a date or a range of dates. For example, the following WHERE clause uses the _PARTITIONTIME pseudocolumn to specify partitions between January 1, 2016 and January 31, 2016:

WHERE _PARTITIONTIME
BETWEEN TIMESTAMP("2016-01-01")
AND TIMESTAMP("2016-01-31")

The query processes data only in the partitions that are indicated by the date range. Filtering your partitions improves query performance and reduces costs.
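
Similarly, for a table partitioned on a time-unit column, filter directly on the partitioning column. The following sketch assumes a hypothetical table partitioned on a DATE column named transaction_date:

SELECT *
FROM mydataset.transactions
WHERE transaction_date BETWEEN DATE "2016-01-01" AND DATE "2016-01-31"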

Avoid multiple evaluations of the same Common Table Expressions (CTEs)

Best practice: Use procedural language, variables, temporary tables, and automatically expiring tables to persist calculations and use them later in the query.

When your query contains Common Table Expressions that are used in multiple places in the query, they are evaluated each time they are referenced. This may increase internal query complexity and resource consumption.

You can store the result of a CTE in a scalar variable or a temporary table depending on the data that the CTE returns. You are not charged for storage of temporary tables.
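
For example, instead of referencing the same CTE several times, you could materialize its result once in a temporary table inside a multi-statement query. The table and column names below are hypothetical:

CREATE TEMP TABLE daily_totals AS
SELECT order_date, SUM(amount) AS total
FROM mydataset.orders
GROUP BY order_date;

-- The materialized result is computed once and read twice.
SELECT order_date, total FROM daily_totals WHERE total > 1000;
SELECT AVG(total) AS avg_total FROM daily_totals;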

Split complex queries into multiple smaller ones

Best practice: Use multi-statement query capabilities and stored procedures to break up a computation that was designed as one complex query into multiple smaller, simpler queries.

Complex queries, REGEX functions, and layered subqueries or joins can be slow and resource intensive to run. Trying to fit all computations into one huge SELECT statement, for example to make it a view, is sometimes an antipattern, and it can result in a slow, resource-intensive query. In extreme cases, the internal query plan becomes so complex that BigQuery is unable to execute it.

Splitting up a complex query allows for materializing intermediate results in variables or temporary tables. You can then use these intermediate results in other parts of the query. This is especially useful when the results are needed in more than one place in the query. You are not charged for storage of temporary tables.

Splitting the query also often lets you better express the true intent of its parts, with temporary tables serving as the data materialization points.
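
The following sketch shows one way to split a single large query into a multi-statement query that materializes an intermediate result. The variable, table, and column names are hypothetical:

DECLARE cutoff DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY);

-- Step 1: materialize an intermediate result in a temporary table.
CREATE TEMP TABLE recent_orders AS
SELECT customer_id, SUM(amount) AS total_spent
FROM mydataset.orders
WHERE order_date >= cutoff
GROUP BY customer_id;

-- Step 2: use the intermediate result in a simpler final query.
SELECT c.customer_name, r.total_spent
FROM recent_orders AS r
JOIN mydataset.customers AS c
  ON r.customer_id = c.customer_id
ORDER BY r.total_spent DESC;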