Using cached query results

BigQuery writes all query results to a table. The table is either explicitly identified by the user (a destination table), or it is a temporary, cached results table. Temporary, cached results tables are maintained per-user, per-project. There are no storage costs for temporary tables, but if you write query results to a permanent table, you are charged for storing the data.

All query results, including both interactive and batch queries, are cached in temporary tables for approximately 24 hours with some exceptions.


Using the query cache is subject to the following limitations:

  • When you run a duplicate query, BigQuery attempts to reuse cached results. To retrieve data from the cache, the duplicate query text must be exactly the same as the original query.
  • For query results to persist in a cached results table, the result set must be smaller than the maximum response size. For more information about managing large result sets, see Returning large query results.
  • You cannot target cached result tables with DML statements.
  • Although current semantics allow it, the use of cached results as input for dependent jobs is strongly discouraged. For example, you should not submit query jobs that retrieve results from the cache table. Instead, write your results to a named destination table. To enable easy cleanup, features such as the dataset level defaultTableExpirationMs property can expire the data automatically after a given duration.
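To illustrate the exact-match requirement above: cache lookup compares the query text byte for byte, so even a change in letter casing or whitespace counts as a new query. A minimal sketch of that comparison (the query text is illustrative):

```python
# Cache lookup keys on the exact query text: any difference, even in
# casing or whitespace, makes BigQuery treat the query as new.
original = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus"
duplicate = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus"
lowercased = "select corpus from `bigquery-public-data.samples.shakespeare` group by corpus"

print(original == duplicate)   # True: eligible for a cache hit
print(original == lowercased)  # False: treated as a different query
```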

Pricing and quotas

When query results are retrieved from a cached results table, the job statistics property statistics.query.cacheHit returns as true, and you are not charged for the query. Though you are not charged for queries that use cached results, the queries are subject to the BigQuery quota policies. In addition to reducing costs, queries that use cached results are significantly faster because BigQuery does not need to compute the result set.
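The cacheHit statistic can be read straight off the jobs.get response for the job. A sketch against a trimmed, illustrative job resource (the field names follow the BigQuery REST API; the values here are made up):

```python
# Trimmed jobs.get response for a query that was served from cache
# (values are illustrative).
job_resource = {
    "statistics": {
        "query": {
            "cacheHit": True,
            # A cached query processes no bytes, so nothing is billed.
            "totalBytesProcessed": "0",
        }
    }
}

cache_hit = job_resource["statistics"]["query"].get("cacheHit", False)
print("cache hit, not billed" if cache_hit else "computed, billed")
```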

Exceptions to query caching

Query results are not cached:

  • When a destination table is specified in the job configuration, the Cloud Console, the bq command-line tool, or the API
  • If any of the referenced tables or logical views have changed since the results were previously cached
  • When any of the tables referenced by the query have recently received streaming inserts (a streaming buffer is attached to the table) even if no new rows have arrived
  • If the query uses non-deterministic functions; for example, date and time functions such as CURRENT_TIMESTAMP() and NOW(), and other functions such as CURRENT_USER() return different values depending on when a query is executed
  • If you are querying multiple tables using a wildcard
  • If the cached results have expired; typical cache lifetime is 24 hours, but the cached results are best-effort and may be invalidated sooner
  • If the query runs against an external data source

How cached results are stored

When you run a query, a temporary, cached results table is created in a special dataset referred to as an "anonymous dataset". Unlike regular datasets, which inherit permissions from the IAM resource hierarchy model (project and organization permissions), access to anonymous datasets is restricted to the dataset owner. The owner of an anonymous dataset is the user who ran the query that produced the cached result.

When an anonymous dataset is created, the user that runs the query job is explicitly given bigquery.dataOwner access to the anonymous dataset. bigquery.dataOwner access gives only the user who ran the query job full control over the dataset. This includes full control over the cached results tables in the anonymous dataset. If you intend to share query results, do not use the cached results stored in an anonymous dataset. Instead, write the results to a named destination table.

Though the user that runs the query has full access to the dataset and the cached results table, using them as inputs for dependent jobs is strongly discouraged.

The names of anonymous datasets begin with an underscore. This hides them from the datasets list in the Cloud Console. You can list anonymous datasets and audit anonymous dataset access controls by using the bq command-line tool or the API.
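Because anonymous dataset names always start with an underscore, they are easy to pick out of a full dataset listing on the client side. A sketch with made-up dataset IDs:

```python
# Dataset IDs as they might come back from a listing call (names illustrative).
dataset_ids = ["babynames", "_0b1e2f3a4c5d6e7f", "weblogs", "_9f8e7d6c5b4a3210"]

# Anonymous datasets are the ones whose IDs begin with an underscore;
# the Cloud Console hides these from the dataset list.
anonymous = [d for d in dataset_ids if d.startswith("_")]
print(anonymous)
```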

Disabling retrieval of cached results

The Use cached results option, which is enabled by default, reuses results from a previous run of the same query unless the tables being queried have changed. Using cached results is only beneficial for repeated queries; for a query that is run for the first time, the option has no effect.

When you repeat a query with the Use cached results option disabled, the existing cached result is overwritten. This requires BigQuery to compute the query result, and you are charged for the query. This is particularly useful in benchmarking scenarios.

If you want to disable retrieving cached results and force live evaluation of a query job, you can set the configuration.query.useQueryCache property of your query job to false.

To disable the Use cached results option:


  1. In the Cloud Console, go to the BigQuery page.

  2. Click Compose new query.

  3. Enter a valid SQL query in the Query editor text area.

  4. Click More and select Query settings.


  5. Under Cache preference, uncheck Use cached results.



Use the nouse_cache flag to overwrite the query cache. The following example forces BigQuery to process the query without using the existing cached results (the dataset and table names are illustrative):

 bq query \
 --nouse_cache \
 --batch \
 'SELECT
    name,
    count
  FROM
    mydataset.names2013
  WHERE
    gender = "M"
  ORDER BY
    count DESC'


To process a query without using the existing cached results, set the useQueryCache property to false in the query job configuration.
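In a raw jobs.insert call, the property lives in the request body under configuration.query. A minimal, illustrative request body built as a plain dictionary (the query text is a placeholder):

```python
# Minimal jobs.insert request body that forces live evaluation
# (query text and other values are illustrative).
job_body = {
    "configuration": {
        "query": {
            "query": "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus",
            "useLegacySql": False,
            "useQueryCache": False,  # disable cache retrieval
        }
    }
}

print(job_body["configuration"]["query"]["useQueryCache"])
```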


Before trying this sample, follow the Go setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Go API reference documentation.

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

// queryDisableCache demonstrates issuing a query and requesting that the query cache is bypassed.
func queryDisableCache(w io.Writer, projectID string) error {
	// projectID := "my-project-id"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	q := client.Query(
		"SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;")
	q.DisableQueryCache = true
	// Location must match that of the dataset(s) referenced in the query.
	q.Location = "US"

	// Run the query and print results when the query job is completed.
	job, err := q.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	it, err := job.Read(ctx)
	if err != nil {
		return err
	}
	for {
		var row []bigquery.Value
		err := it.Next(&row)
		if err == iterator.Done {
			break
		}
		if err != nil {
			return err
		}
		fmt.Fprintln(w, row)
	}
	return nil
}


To process a query without using the existing cached results, set useQueryCache to false when creating a QueryJobConfiguration.


import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

// Sample to run a query with the cache disabled.
public class QueryDisableCache {

  public static void runQueryDisableCache() {
    String query = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";
    queryDisableCache(query);
  }

  public static void queryDisableCache(String query) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      QueryJobConfiguration queryConfig =
          QueryJobConfiguration.newBuilder(query)
              // Disable the query cache to force live query evaluation.
              .setUseQueryCache(false)
              .build();

      TableResult results = bigquery.query(queryConfig);

      results
          .iterateAll()
          .forEach(row -> row.forEach(val -> System.out.printf("%s,", val.toString())));

      System.out.println("Query disable cache performed successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Query not performed \n" + e.toString());
    }
  }
}

Before trying this sample, follow the Node.js setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Node.js API reference documentation.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');

async function queryDisableCache() {
  // Queries the Shakespeare dataset with the cache disabled.

  // Create a client
  const bigquery = new BigQuery();

  const query = `SELECT corpus
    FROM \`bigquery-public-data.samples.shakespeare\`
    GROUP BY corpus`;
  const options = {
    query: query,
    // Location must match that of the dataset(s) referenced in the query.
    location: 'US',
    useQueryCache: false,
  };

  // Run the query as a job
  const [job] = await bigquery.createQueryJob(options);
  console.log(`Job ${job.id} started.`);

  // Wait for the query to finish
  const [rows] = await job.getQueryResults();

  // Print the results
  rows.forEach(row => console.log(row));
}


Before trying this sample, follow the Python setup instructions in the BigQuery Quickstart Using Client Libraries. For more information, see the BigQuery Python API reference documentation.

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

job_config = bigquery.QueryJobConfig(use_query_cache=False)
sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""
query_job = client.query(sql, job_config=job_config)  # Make an API request.

for row in query_job:
    print(row)

Ensuring use of the cache

If you use the jobs.insert method to run a query, you can force a query job to fail unless cached results can be used by setting the createDisposition property of the query job configuration to CREATE_NEVER.

If the query result does not exist in the cache, a NOT_FOUND error is returned.
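Combined with cache retrieval left enabled, a jobs.insert configuration for a cache-only query might look like the following sketch (the query text is illustrative):

```python
# jobs.insert request body that fails with NOT_FOUND unless the result
# can be served from cache (query text is illustrative).
job_body = {
    "configuration": {
        "query": {
            "query": "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus",
            "useLegacySql": False,
            "useQueryCache": True,                # cache lookup must stay enabled
            "createDisposition": "CREATE_NEVER",  # never create a results table
        }
    }
}

print(job_body["configuration"]["query"]["createDisposition"])
```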

Verifying use of the cache

There are two ways to determine if BigQuery returned a result using the cache:

  • If you are using the Cloud Console, the result string does not contain information about the number of processed bytes, and displays the word cached.


  • If you are using the BigQuery API, the cacheHit property in the query result is set to true.

Impact of Column-level security

By default, BigQuery caches query results for 24 hours, with the exceptions noted previously. The 24-hour cache also applies to queries on data that is protected by column-level security, which uses policy tags. A change such as removing a group or a user from the Data Catalog Fine Grained Reader role used for a policy tag does not invalidate the 24-hour cache. A change to the Data Catalog Fine Grained Reader access control group itself is propagated immediately, but the change does not invalidate the cache.

The impact is that if a user ran a query, the query results remain visible to that user on screen, and the user can also retrieve those results from the cache even if they lost access to the underlying data within the last 24 hours.

During the 24 hours after a user was removed from the Data Catalog Fine Grained Reader role for a policy tag, the user has access to the cached data only for data that the user was previously allowed to see. If rows are added to the table, the user will not see the added rows, even if the results are cached.