Troubleshoot query errors

This document is intended to help you troubleshoot the most common errors returned by failed queries.

Avro schema resolution

Error string: Cannot skip stream

This error can occur when loading multiple Avro files with different schemas, resulting in a schema resolution issue and causing the import job to fail at a random file.

To address this error, ensure that the last alphabetical file in the load job contains the superset (union) of the differing schemas. This is a requirement based on how Avro handles schema resolution.
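
For example, a load job expressed with the LOAD DATA statement might look like the following sketch, where the bucket, path, and table names are placeholders and the file that sorts last alphabetically contains the superset schema:

LOAD DATA INTO mydataset.mytable
FROM FILES (
  format = 'AVRO',
  -- Name the file that holds the superset (union) schema so that it sorts last,
  -- for example zz_superset.avro.
  uris = ['gs://mybucket/events/*.avro']
);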

Conflicting concurrent queries

Error string: Concurrent jobs in the same session are not allowed

This error can occur when multiple queries are running concurrently in a session, which is not supported. See session limitations.

Conflicting DML statements

Error string: Could not serialize access to table due to concurrent update

This error can occur when mutating data manipulation language (DML) statements that are running concurrently on the same table conflict with each other, or when the table is truncated during a mutating DML statement. For more information, see DML statement conflicts.

To address this error, run DML operations that affect a single table such that they don't overlap.
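
For example, rather than running two mutating statements against the same table at the same time, you can combine the changes into a single statement or run them one after the other. The following sketch uses placeholder table and column names:

-- Running these two statements concurrently can fail with a serialization error:
--   UPDATE mydataset.inventory SET quantity = quantity - 1 WHERE product_id = 'A';
--   UPDATE mydataset.inventory SET quantity = quantity + 5 WHERE product_id = 'B';
-- Applying both changes in one MERGE statement avoids the overlap:
MERGE mydataset.inventory AS t
USING (
  SELECT 'A' AS product_id, -1 AS delta UNION ALL
  SELECT 'B', 5
) AS s
ON t.product_id = s.product_id
WHEN MATCHED THEN
  UPDATE SET quantity = t.quantity + s.delta;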

Insufficient column-level access control permissions

Error string: Requires raw access permissions on the read columns to execute the DML statements

This error occurs when you attempt a DML DELETE, UPDATE, or MERGE statement without the Fine-Grained Reader permission on scanned columns that are restricted by column-level access control. For more information, see Impact on writes from column-level access control.

Invalid credentials for scheduled queries

Error strings:

  • Error code: INVALID_USERID
  • Error code 5: Authentication failure: User Id not found
  • PERMISSION_DENIED: BigQuery: Permission denied while getting Drive credentials

These errors can occur when a scheduled query fails because its credentials are outdated. This is especially common when querying Google Drive data.

To address this error, update the scheduled query credentials.

Invalid service account credentials

Error string: HttpError 403 when requesting returned: The caller does not have permission

This error might appear when you attempt to set up a scheduled query with a service account. To resolve this error, see the troubleshooting steps in Authorization and permission issues.

Invalid snapshot time

Error string: Invalid snapshot time

This error can occur when trying to query historical data that is outside of the time travel window for the dataset. To address this error, change the query to access historical data within the dataset's time travel window.
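
For example, assuming placeholder table names, the following query reads the table as it was two days ago, which succeeds only if that point in time is still inside the dataset's time travel window (by default, up to seven days):

-- Read historical data from within the time travel window.
SELECT *
FROM mydataset.mytable
  FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY);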

This error can also appear if one of the tables used in the query is dropped and re-created after the query starts. Check to see if there is a scheduled query or application that performs this operation that ran at the same time as the failed query. If there is, try moving the process that performs the drop and re-create operation to run at a time that doesn't conflict with queries that read that table.

Job already exists

Error string: Already Exists: Job <job name>

This error can occur for query jobs that must evaluate large arrays, such that it takes longer than average to create a query job. For example, a query with a WHERE clause like WHERE column IN (<2000+ elements array>).

To address this error, reduce the time that it takes to create the query job, for example by avoiding very large inline arrays in the WHERE clause.

Job not found

Error string: Job not found

This error can occur in response to a getQueryResults call, where no value is specified for the location field. If that is the case, try the call again and provide a location value.

Query exceeds execution time limit

Error string: Query fails due to reaching the execution time limit

If your query is hitting the query execution time limit, check the execution time of previous runs of the query by querying the INFORMATION_SCHEMA.JOBS view with a query similar to the following example:

SELECT TIMESTAMP_DIFF(end_time, start_time, SECOND) AS runtime_in_seconds
FROM `region-us`.INFORMATION_SCHEMA.JOBS
WHERE statement_type = 'QUERY'
AND query = "my query string";

If previous runs of the query have taken significantly less time, use query performance insights to determine and address the underlying issue.

Query response is too large

Error string: responseTooLarge

This error occurs when your query's results are larger than the maximum response size.

To address this error, follow the guidance provided for the responseTooLarge error message.
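
If you need the full result set, one option is to write the results to a table instead of returning them in the query response, for example with a CREATE TABLE ... AS SELECT statement. The table and column names in this sketch are placeholders:

-- Write large results to a table rather than returning them in the response.
CREATE TABLE mydataset.query_results AS
SELECT user_id, event_time, payload
FROM mydataset.events
WHERE event_date = DATE '2024-06-01';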

Too many DML statements

Error string: Too many DML statements outstanding against <table-name>, limit is 20

This error occurs when you exceed the limit of 20 DML statements in PENDING status in the queue for a single table. This error usually occurs when you submit DML jobs against a single table faster than BigQuery can process them.

One possible solution is to group multiple smaller DML operations into larger but fewer jobs. When you group smaller jobs into larger ones, the cost to run the larger jobs is amortized and the execution is faster. Consolidating DML statements that affect the same data generally improves the efficiency of DML jobs, and is less likely to exceed the queue size quota limit. For more information about optimizing your DML operations, see DML statements that update or insert single rows.
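
For example, rather than submitting one job per row, a single job can insert the whole batch. The table and column names in this sketch are placeholders:

-- Many single-row jobs like these can pile up in the PENDING queue:
--   INSERT INTO mydataset.events (id, payload) VALUES (1, 'a');
--   INSERT INTO mydataset.events (id, payload) VALUES (2, 'b');
-- One consolidated job inserts the same rows in a single DML statement:
INSERT INTO mydataset.events (id, payload)
VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');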

Other solutions to improve your DML efficiency could be to partition or cluster your tables. For more information, see Best practices.

Resources exceeded issues

The following issues result when BigQuery has insufficient resources to complete your query.

Query exceeds CPU resources

Error string: Query exceeded resource limits

This error occurs when on-demand queries use too much CPU relative to the amount of data scanned. For information on how to resolve these issues, see Troubleshoot resources exceeded issues.

Query exceeds memory resources

Error string: Resources exceeded during query execution: The query could not be executed in the allotted memory

For SELECT statements, this error occurs when the query uses too many resources. To address this error, see Troubleshoot resources exceeded issues.

Query exceeds shuffle resources

Error string: Resources exceeded during query execution: Your project or organization exceeded the maximum disk and memory limit available for shuffle operations

This error occurs when a query can't access sufficient shuffle resources.

To address this error, provision more slots or reduce the amount of data processed by the query. For more information about ways to do this, see Insufficient shuffle quota.

For additional information on how to resolve these issues, see Troubleshoot resources exceeded issues.

Query is too complex

Error string: Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex

This error occurs when a query is too complex. The primary causes of complexity are:

  • WITH clauses that are deeply nested or used repeatedly.
  • Views that are deeply nested or used repeatedly.
  • Repeated use of the UNION ALL operator.

To address this error, try the following options:

  • Split the query into multiple queries, then use procedural language to run those queries in a sequence with shared state.
  • Use temporary tables instead of WITH clauses (see the sketch after this list).
  • Rewrite your query to reduce the number of referenced objects and comparisons.
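
The following sketch illustrates the temporary table option with placeholder names. CREATE TEMP TABLE must run inside a multi-statement query or a session:

-- Materialize the intermediate result once instead of repeating it in a WITH clause.
CREATE TEMP TABLE recent_orders AS
SELECT order_id, customer_id, total
FROM mydataset.orders
WHERE order_date >= DATE '2024-01-01';

-- Later statements can reference the temporary table any number of times.
SELECT customer_id, SUM(total) AS total_spend
FROM recent_orders
GROUP BY customer_id;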

For additional information on how to resolve these issues, see Troubleshoot resources exceeded issues.

Troubleshoot resources exceeded issues

For query jobs:

To optimize your queries, try the following steps:

  • Try removing an ORDER BY clause.
  • If your query uses JOIN, ensure that the larger table is on the left side of the clause.
  • If your query uses FLATTEN, determine if it's necessary for your use case. For more information, see nested and repeated data.
  • If your query uses EXACT_COUNT_DISTINCT, consider using COUNT(DISTINCT) instead.
  • If your query uses COUNT(DISTINCT <value>, <n>) with a large <n> value, consider using GROUP BY instead. For more information, see COUNT(DISTINCT).
  • If your query uses UNIQUE, consider using GROUP BY instead, or a window function inside of a subselect.
  • If your query materializes many rows using a LIMIT clause, consider filtering on another column, for example ROW_NUMBER(), or removing the LIMIT clause altogether to allow write parallelization (see the sketch after this list).
  • If your query uses deeply nested views and WITH clauses, complexity can grow exponentially, causing you to reach the limits.
  • Don't replace temporary tables with WITH clauses. A WITH clause might have to be recalculated several times, which can make the query complex and therefore slow. Persisting intermediate results in temporary tables instead reduces complexity.
  • Avoid using UNION ALL queries.
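
The following sketch illustrates the ROW_NUMBER() suggestion with placeholder table and column names:

-- Materializing a large, globally ordered result with LIMIT can funnel work through a single stage:
--   SELECT * FROM mydataset.events ORDER BY event_time DESC LIMIT 1000;
-- Filtering with a window function keeps only a few rows per group instead:
SELECT *
FROM mydataset.events
WHERE TRUE  -- QUALIFY must be accompanied by a WHERE, GROUP BY, or HAVING clause
QUALIFY ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_time DESC) <= 10;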

For more information, see the following resources:

  • Avoid multiple evaluations of the same Common Table Expressions (CTEs)

For load jobs:

If you are loading Avro or Parquet files, reduce the row size in the files. Check the specific size restrictions for the file format that you are loading.

If you get this error when loading ORC files, contact Support.

For the Storage Read API:

Error string: Stream memory usage exceeded

During a Storage Read API ReadRows call, some streams with high memory usage might get a RESOURCE_EXHAUSTED error with this message. This can happen when reading from wide tables or tables with a complex schema. As a resolution, reduce the result row size by selecting fewer columns to read (using the selected_fields parameter), or by simplifying the table schema.

What's next