Troubleshoot query errors

This document is intended to help you troubleshoot the most common errors returned by failed queries.

Already Exists: Job <job name>

This error can occur for query jobs that must evaluate large arrays, which causes job creation to take longer than usual. For example, a query with a WHERE clause like WHERE column IN (<2000+ elements array>). When job creation is slow, the client can time out and retry the request; because the first request eventually succeeded in creating the job, the retry is rejected with Already Exists.

To address this error, reduce the size of the query. For example, instead of embedding a very large array literal in the query text, load the values into a table and reference them with a subquery or a JOIN.
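As a sketch of this approach, assuming the filter values have been staged in a table (the table and column names below are hypothetical), the long IN list can be replaced with a subquery:

```sql
-- Instead of: WHERE column IN (value1, value2, ..., value2000)
-- stage the values in a table and reference them with a subquery.
-- `mydataset.filter_values` is a hypothetical staging table.
SELECT *
FROM mydataset.mytable
WHERE column IN (SELECT value FROM mydataset.filter_values);
```

This keeps the query text small, so the job can be created quickly.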

Cannot skip stream

This error can occur when loading multiple Avro files with different schemas, resulting in a schema resolution issue and causing the import job to fail at a random file.

To address this error, ensure that the last alphabetical file in the load job contains the superset (union) of the differing schemas. This is a requirement based on how Avro handles schema resolution.

Concurrent jobs in the same session are not allowed

This error can occur when multiple queries are running concurrently in a session, which is not supported. See session limitations.

Could not serialize access to table due to concurrent update

This error can occur when mutating data manipulation language (DML) statements that are running concurrently on the same table conflict with each other, or when the table is truncated during a mutating DML statement. For more information, see DML statement conflicts.

To address this error, run DML operations that affect a single table so that they don't overlap in time.
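For example, mutating statements submitted together in a single multi-statement query run sequentially, so they can't conflict with one another (the table and column names below are hypothetical):

```sql
-- These two UPDATE statements run one after the other within a single
-- script job, so they never mutate mydataset.inventory concurrently.
UPDATE mydataset.inventory SET quantity = quantity - 10 WHERE product = 'A';
UPDATE mydataset.inventory SET quantity = quantity + 10 WHERE product = 'B';
```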

Error code: INVALID_USERID

This error can occur when a scheduled query fails because its credentials are outdated. This is especially common when querying Google Drive data.

To address this error, update the scheduled query credentials.

Error code 5: Authentication failure: User Id not found

This error has the same cause and resolution as Error code: INVALID_USERID.

Invalid snapshot time

This error can occur when trying to query historical data that is outside of the time travel window for the dataset. To address this error, change the query to access historical data within the dataset's time travel window.

This error can also appear if a table used in the query is dropped and re-created after the query starts. Check whether a scheduled query or application that performs this operation ran at the same time as the failed query. If one did, try moving the drop-and-re-create process to a time that doesn't conflict with queries that read that table.
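As a sketch, a historical query that stays inside the default seven-day time travel window might look like the following (the table name is hypothetical):

```sql
-- Read the table as it existed two days ago, which is inside the
-- default seven-day time travel window.
SELECT *
FROM mydataset.mytable
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 DAY);
```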

Job not found

This error can occur in response to a getQueryResults call where no value is specified for the location field. If that is the case, retry the call and provide a location value.

PERMISSION_DENIED: BigQuery: Permission denied while getting Drive credentials

This error has the same cause and resolution as Error code: INVALID_USERID.

Query exceeded resource limits

This error occurs when on-demand queries use too much CPU relative to the amount of data scanned. If your query returns this error, follow the guidance provided for the billingTierLimitExceeded error message.

Query fails due to reaching the execution time limit

If your query is hitting the query execution time limit, check the execution time of previous runs of the query by querying the INFORMATION_SCHEMA.JOBS view with a query similar to the following example:

SELECT
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS runtime_in_seconds
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  statement_type = 'QUERY'
  AND query = 'my query string';

If previous runs of the query have taken significantly less time, use query performance insights to determine and address the underlying issue.

Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex

This error occurs when a query is overly complex. The primary causes of complexity are:

  • WITH clauses that are deeply nested or used repeatedly.
  • Views that are deeply nested or used repeatedly.
  • Repeated use of the UNION ALL operator.

To address this error, try the following options:

  • Split the query into multiple queries, then use procedural language to run those queries in a sequence with shared state.
  • Use temporary tables instead of WITH clauses.
  • Rewrite your query to reduce the number of referenced objects and comparisons.

For more information about these options, see Avoid multiple evaluations of the same Common Table Expressions (CTEs).
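For example, a WITH clause that is referenced many times can be materialized once as a temporary table instead (the table and column names below are hypothetical):

```sql
-- Materialize the expensive subquery once, instead of repeating it in a
-- WITH clause that is referenced multiple times.
CREATE TEMP TABLE recent_orders AS
SELECT customer_id, SUM(amount) AS total
FROM mydataset.orders
WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY customer_id;

SELECT * FROM recent_orders WHERE total > 1000;
```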

Resources exceeded during query execution: The query could not be executed in the allotted memory

For SELECT statements, this error occurs when the query uses too many resources. To address this error, follow the guidance provided for the resourcesExceeded error message.

Resources exceeded during query execution: Your project or organization exceeded the maximum disk and memory limit available for shuffle operations

This error occurs when a query can't access sufficient shuffle resources.

To address this error, provision more slots or reduce the amount of data processed by the query. For more information about ways to do this, see Insufficient shuffle quota.
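One common way to reduce the data a query processes, assuming the table is partitioned by date (the names below are hypothetical), is to filter on the partition column so that joins and aggregations shuffle less data:

```sql
-- Restricting the scan to recent partitions reduces the data that
-- the aggregation needs to shuffle.
SELECT user_id, COUNT(*) AS events
FROM mydataset.events
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY user_id;
```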

responseTooLarge

This error occurs when your query's results are larger than the maximum response size.

To address this error, follow the guidance provided for the responseTooLarge error message.
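One option, sketched here with hypothetical table names, is to write large results to a destination table instead of returning them directly to the client:

```sql
-- Writing results to a table avoids the maximum response size that
-- applies when results are returned directly to the client.
CREATE OR REPLACE TABLE mydataset.large_results AS
SELECT *
FROM mydataset.mytable
WHERE status = 'ACTIVE';
```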

Too many DML statements outstanding against <table-name>, limit is 20

This error occurs when you exceed the limit of 20 DML statements in PENDING status in a queue for a single table. This error usually occurs when you submit DML jobs against a single table faster than BigQuery can process them.

One possible solution is to group multiple smaller DML operations into larger but fewer jobs. When you group smaller jobs into larger ones, the cost to run the larger jobs is amortized and the execution is faster. Consolidating DML statements that affect the same data generally improves the efficiency of DML jobs, and is less likely to exceed the queue size quota limit. For more information about optimizing your DML operations, see DML statements that update or insert single rows.

Other solutions to improve your DML efficiency could be to partition or cluster your tables. For more information, see Best practices.
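For example, many single-row INSERT jobs can be combined into one multi-row INSERT (the table and column names below are hypothetical):

```sql
-- One job that inserts three rows, instead of three separate DML jobs
-- queued against the same table.
INSERT INTO mydataset.mytable (id, name)
VALUES (1, 'a'),
       (2, 'b'),
       (3, 'c');
```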

What's next