Troubleshoot query issues
This document helps you troubleshoot common issues related to running queries, such as identifying the reasons for slow queries or resolving common errors returned by failed queries.
Troubleshoot slow queries
When troubleshooting slow query performance, consider the following common causes:
- Check the Google Cloud Service Health page for known BigQuery service outages that might impact query performance.
- Review the job timeline for your query on the job details page to see how long each stage of the query took to run.
- If most of the elapsed time was due to long creation times, contact Cloud Customer Care for assistance.
- If most of the elapsed time was due to long execution times, review your query performance insights. Query performance insights can tell you whether your query ran longer than the average execution time and suggest possible causes, such as query slot contention or an insufficient shuffle quota. For more information about each query performance issue and possible resolutions, see Interpret query performance insights.
- Review the bytes processed on the query job details page to see if it is higher than expected. You can do this by comparing the number of bytes processed by the current query with another query job that completed in an acceptable amount of time. If there is a large discrepancy in bytes processed between the two queries, the query might have been slow because of a large data volume. For information on optimizing your queries to handle large data volumes, see Optimize query computation. You can also identify queries in your project that process a large amount of data by searching for the most expensive queries in the INFORMATION_SCHEMA.JOBS view, as shown in the example after this list.
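For example, a query similar to the following (a sketch; adjust the region qualifier and the time window for your project) lists the jobs that processed the most bytes in the last day:

SELECT
  job_id,
  user_email,
  total_bytes_processed
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY
  total_bytes_processed DESC
LIMIT
  10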
If you still can't find an explanation for slower-than-expected query performance, contact Cloud Customer Care for assistance.
Avro schema resolution
Error string: Cannot skip stream
This error can occur when loading multiple Avro files with different schemas, resulting in a schema resolution issue and causing the import job to fail at a random file.
To address this error, ensure that the alphabetically last file in the load job contains the superset (union) of the differing schemas. This requirement is based on how Avro handles schema resolution.
Conflicting concurrent queries
Error string: Concurrent jobs in the same session are not allowed
This error can occur when multiple queries are running concurrently in a session, which is not supported. See session limitations.
Conflicting DML statements
Error string: Could not serialize access to table due to concurrent update
This error can occur when mutating data manipulation language (DML) statements that are running concurrently on the same table conflict with each other, or when the table is truncated during a mutating DML statement. For more information, see DML statement conflicts.
To address this error, run DML operations that affect a single table such that they don't overlap.
Correlated subqueries
Error string: Correlated subqueries that reference other tables are not supported unless they can be de-correlated
This error can occur when your query contains a subquery that references a column from outside that subquery, called a correlation column. The correlated subquery is evaluated using an inefficient, nested execution strategy, in which the subquery is evaluated for every row from the outer query that produces the correlation columns. Sometimes, BigQuery can internally rewrite queries with correlated subqueries so that they execute more efficiently. The correlated subqueries error occurs when BigQuery can't sufficiently optimize the query.
To address this error, try the following:
- Remove any ORDER BY, LIMIT, EXISTS, NOT EXISTS, or IN clauses from your subquery.
- Use a multi-statement query to create a temporary table to reference in your subquery.
- Rewrite your query to use a CROSS JOIN instead, as shown in the sketch after this list.
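As an illustration, the following sketch (using hypothetical customers and orders tables) shows a correlated subquery and an equivalent join-based rewrite:

-- Correlated form: the subquery references c.customer_id from the outer query,
-- so it must be re-evaluated for every outer row.
SELECT
  c.customer_id
FROM
  customers AS c
WHERE
  (SELECT SUM(o.amount) FROM orders AS o WHERE o.customer_id = c.customer_id) > 100

-- De-correlated form: aggregate once, then join on the correlation column.
SELECT
  c.customer_id
FROM
  customers AS c
JOIN
  (SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id) AS o
ON
  o.customer_id = c.customer_id
WHERE
  o.total > 100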
Insufficient column-level access control permissions
Error string: Requires raw access permissions on the read columns to execute the DML statements
This error occurs when you attempt a DML DELETE, UPDATE, or MERGE statement without having the Fine-Grained Reader permission on the scanned columns that use column-level access control to restrict access at the column level. For more information, see Impact on writes from column-level access control.
Invalid credentials for scheduled queries
Error strings:
Error code: INVALID_USERID
Error code 5: Authentication failure: User Id not found
PERMISSION_DENIED: BigQuery: Permission denied while getting Drive credentials
This error can occur when a scheduled query fails due to having outdated credentials, especially when querying Google Drive data.
To address this error, follow these steps:
- Ensure that you've enabled the BigQuery Data Transfer Service, which is a prerequisite for using scheduled queries.
- Update the scheduled query credentials.
Invalid service account credentials
Error string: HttpError 403 when requesting returned: The caller does not have permission
This error might appear when you attempt to set up a scheduled query with a service account. To resolve this error, see the troubleshooting steps in Authorization and permission issues.
Invalid snapshot time
Error string: Invalid snapshot time
This error can occur when trying to query historical data that is outside of the time travel window for the dataset. To address this error, change the query to access historical data within the dataset's time travel window.
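For example, the following sketch (with a hypothetical mydataset.mytable) reads table data as of one hour ago, which must fall within the dataset's time travel window:

SELECT
  *
FROM
  mydataset.mytable
-- The timestamp must be within the dataset's time travel window.
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)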
This error can also appear if one of the tables used in the query is dropped and re-created after the query starts. Check whether a scheduled query or application that performs this operation ran at the same time as the failed query. If there is one, try moving the process that performs the drop and re-create operation to run at a time that doesn't conflict with queries that read that table.
Job already exists
Error string: Already Exists: Job <job name>
This error can occur for query jobs that must evaluate large arrays, such that it takes longer than average to create a query job; for example, a query with a WHERE clause like WHERE column IN (<2000+ elements array>).
To address this error, follow these steps:
- Allow BigQuery to generate a random jobId value instead of specifying one.
- Use a parameterized query to load the array, as in the sketch after this list.
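For example, instead of inlining thousands of literals into the query text, you can pass them as an array query parameter. The following is a sketch; mydataset.mytable, column_name, and the parameter name ids are placeholders:

-- @ids is supplied as an ARRAY<INT64> query parameter when the job is
-- submitted, which keeps the query text short and fast to create.
SELECT
  *
FROM
  mydataset.mytable
WHERE
  column_name IN UNNEST(@ids)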
Job not found
Error string: Job not found
This error can occur in response to a getQueryResults call where no value is specified for the location field. If that is the case, try the call again and provide a location value.
Location not found
Error string: Dataset [project_id]:[dataset_id] was not found in location [region]
This error returns when you refer to a dataset resource that doesn't exist, or when the location in the request does not match the location of the dataset.
To address this issue, specify the location of the dataset in the query or confirm that the dataset is available in the same location.
Query exceeds execution time limit
Error string: Query fails due to reaching the execution time limit
If your query is hitting the query execution time limit, check the execution time of previous runs of the query by querying the INFORMATION_SCHEMA.JOBS view with a query similar to the following example:

SELECT
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS runtime_in_seconds
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  statement_type = 'QUERY'
  AND query = "my query string"
If previous runs of the query have taken significantly less time, use query performance insights to determine and address the underlying issue.
Query response is too large
Error string: responseTooLarge
This error occurs when your query's results are larger than the maximum response size.
To address this error, follow the guidance provided for the responseTooLarge error message.
Too many DML statements
Error string: Too many DML statements outstanding against <table-name>, limit is 20
This error occurs when you exceed the limit of 20 DML statements in PENDING status in the queue for a single table. It usually occurs when you submit DML jobs against a single table faster than BigQuery can process them.
One possible solution is to group multiple smaller DML operations into fewer, larger jobs, for example by batching updates and inserts (see the sketch that follows). When you group smaller jobs into larger ones, the cost to run the larger jobs is amortized and the execution is faster. Consolidating DML statements that affect the same data generally improves the efficiency of DML jobs, and is less likely to exceed the queue size quota limit. For more information about optimizing your DML operations, see Avoid DML statements that update or insert single rows.
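For example, the following sketch (with a hypothetical mydataset.mytable) replaces several single-row statements with one batched DML job:

-- Instead of queueing three separate jobs:
--   INSERT INTO mydataset.mytable (id, name) VALUES (1, 'a')
--   INSERT INTO mydataset.mytable (id, name) VALUES (2, 'b')
--   INSERT INTO mydataset.mytable (id, name) VALUES (3, 'c')
-- submit one job that inserts all three rows:
INSERT INTO mydataset.mytable (id, name)
VALUES (1, 'a'), (2, 'b'), (3, 'c')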
You can also improve DML efficiency by partitioning or clustering your tables. For more information, see Best practices.
User does not have permission
Error strings:
Access Denied: Project [project_id]: User does not have bigquery.jobs.create permission in project [project_id].
User does not have permission to query table project-id:dataset.table.
This error occurs when you run a query without the bigquery.jobs.create permission on the project from which you are running the query, regardless of your permissions on the project that contains the data. You must also have the bigquery.tables.getData permission on all tables and views that your query references. To query views, you also need this permission on all underlying tables and views. This error can also occur if the table does not exist in the queried region, such as asia-south1. For more information about required permissions, see Run a query.
When addressing this error, consider the following:
- Service accounts: Service accounts must have the bigquery.jobs.create permission on the project from which they run.
- Custom roles: Custom IAM roles must have the bigquery.jobs.create permission explicitly included in the relevant role.
- Shared datasets: When working with shared datasets in a separate project, you might still need the bigquery.jobs.create permission in that project to run queries or jobs.
To give permission to access the table
To grant a principal access to a table, follow these steps:
- Go to the BigQuery page.
- In Explorer, browse to the table you need to access, select View actions, select Share, and then click Manage Permissions.
- In Add principals, enter the names of the users, groups, domains, or service accounts you want to add.
- In Assign roles, select the bigquery.jobs.create permission. As an alternative, granting the roles/bigquery.jobUser role in the project from which the query is made provides the necessary permissions.
- Click Save.
Resources exceeded issues
The following issues result when BigQuery has insufficient resources to complete your query.
Query exceeds CPU resources
Error string: Query exceeded resource limits
This error occurs when on-demand queries use too much CPU relative to the amount of data scanned. For information on how to resolve these issues, see Troubleshoot resources exceeded issues.
Query exceeds memory resources
Error string: Resources exceeded during query execution: The query could not be executed in the allotted memory
For SELECT statements, this error occurs when the query uses too many resources. To address this error, see Troubleshoot resources exceeded issues.
Query exceeds shuffle resources
Error string: Resources exceeded during query execution: Your project or organization exceeded the maximum disk and memory limit available for shuffle operations
This error occurs when a query can't access sufficient shuffle resources.
To address this error, provision more slots or reduce the amount of data processed by the query. For more information about ways to do this, see Insufficient shuffle quota.
For additional information on how to resolve these issues, see Troubleshoot resources exceeded issues.
Query is too complex
Error string: Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex
This error occurs when a query is too complex. The primary causes of complexity are:
- WITH clauses that are deeply nested or used repeatedly.
- Views that are deeply nested or used repeatedly.
- Repeated use of the UNION ALL operator.
To address this error, try the following options:
- Split the query into multiple queries, then use procedural language to run those queries in a sequence with shared state.
- Use temporary tables instead of WITH clauses, as in the sketch after this list.
- Rewrite your query to reduce the number of referenced objects and comparisons.
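For example, the following sketch (with hypothetical table names) persists an intermediate result in a temporary table inside a multi-statement query instead of repeating a WITH clause:

-- Persist the intermediate result once...
CREATE TEMP TABLE daily_totals AS
SELECT
  user_id,
  SUM(amount) AS total
FROM
  mydataset.transactions
GROUP BY
  user_id;

-- ...then reference it as many times as needed without recomputation.
SELECT
  COUNT(*) AS big_spenders
FROM
  daily_totals
WHERE
  total > 1000;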
You can proactively monitor queries that are approaching the complexity limit by using the query_info.resource_warning field in the INFORMATION_SCHEMA.JOBS view.

The following example returns queries with high resource usage for the last three days:
SELECT
ANY_VALUE(query) AS query,
MAX(query_info.resource_warning) AS resource_warning
FROM
<your_project_id>.`region-us`.INFORMATION_SCHEMA.JOBS
WHERE
creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)
AND query_info.resource_warning IS NOT NULL
GROUP BY
query_info.query_hashes.normalized_literals
LIMIT
1000
For additional information on how to resolve these issues, see Troubleshoot resources exceeded issues.
Troubleshoot resources exceeded issues
For query jobs:
To optimize your queries, try the following steps:
- Try removing an ORDER BY clause.
- If your query uses JOIN, ensure that the larger table is on the left side of the clause.
- If your query uses FLATTEN, determine if it's necessary for your use case. For more information, see nested and repeated data.
- If your query uses EXACT_COUNT_DISTINCT, consider using COUNT(DISTINCT) instead.
- If your query uses COUNT(DISTINCT <value>, <n>) with a large <n> value, consider using GROUP BY instead (see the example after this list). For more information, see COUNT(DISTINCT).
- If your query uses UNIQUE, consider using GROUP BY instead, or a window function inside of a subselect.
- If your query materializes many rows using a LIMIT clause, consider filtering on another column, for example ROW_NUMBER(), or removing the LIMIT clause altogether to allow write parallelization.
- If your query uses deeply nested views and a WITH clause, this can cause an exponential growth in complexity, thereby reaching the limits.
- Don't replace temporary tables with WITH clauses. The clause might have to be recalculated several times, which can make the query complex and therefore slow. Persisting intermediate results in temporary tables instead helps with the complexity.
- Avoid using UNION ALL queries.
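As an example of the GROUP BY rewrite mentioned in the list, the following sketch (with a hypothetical mydataset.events table) computes an exact distinct count without using COUNT(DISTINCT <value>, <n>):

-- Group first, then count the groups for an exact distinct count.
SELECT
  COUNT(*) AS distinct_users
FROM
  (
    SELECT
      user_id
    FROM
      mydataset.events
    GROUP BY
      user_id
  )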
For more information, see the following resources:
- Optimize query computation.
- Get more details about the resource warning
- Monitor health, resource utilization, and jobs
For load jobs:
If you are loading Avro or Parquet files, reduce the row size in the files. Check the specific size restrictions for the file format that you are loading.
If you get this error when loading ORC files, contact Support.
For Storage API:
Error string: Stream memory usage exceeded
During a Storage Read API ReadRows call, some streams with high memory usage might get a RESOURCE_EXHAUSTED error with this message. This can happen when reading from wide tables or tables with a complex schema. As a resolution, reduce the result row size by selecting fewer columns to read (using the selected_fields parameter), or by simplifying the table schema.
Troubleshoot connectivity problems
The following sections describe how to troubleshoot connectivity issues when trying to interact with BigQuery:
Allowlist Google DNS
Use the Google IP Dig tool to resolve the BigQuery DNS endpoint bigquery.googleapis.com to a single 'A' record IP address. Make sure this IP address is not blocked in your firewall settings.

In general, we recommend allowlisting Google DNS names rather than IP addresses, because the IP ranges shared in the https://www.gstatic.com/ipranges/goog.json and https://www.gstatic.com/ipranges/cloud.json files change often. Here is a list of common DNS names we recommend adding to the allowlist:
*.1e100.net
*.google.com
*.gstatic.com
*.googleapis.com
*.googleusercontent.com
*.appspot.com
*.gvt1.com
Identify the proxy or firewall dropping packets
To identify all packet hops between the client and the Google Front End (GFE), run a traceroute command on your client machine. The output can highlight the server that is dropping packets directed towards the GFE. Here is a sample traceroute command:
traceroute -T -p 443 bigquery.googleapis.com
It is also possible to identify packet hops for specific GFE IP addresses if the problem is related to a particular IP address:
traceroute -T -p 443 142.250.178.138
If there's a Google-side timeout issue, you'll see the request make it all the way to the GFE.
If you see that the packets never reach the GFE, reach out to your network administrator to resolve this problem.
Generate a PCAP file and analyze your firewall or proxy
Generate a packet capture file (PCAP) and analyze the file to make sure the firewall or proxy is not filtering out packets to Google IPs and is allowing packets to reach the GFE.
Here is a sample command that can be run with the tcpdump tool:
tcpdump -s 0 -w debug.pcap -K -n host bigquery.googleapis.com
Set up retries for intermittent connectivity problems
There are situations in which GFE load balancers might drop connections from a client IP address, for example if they detect DDoS traffic patterns or if a load balancer instance is scaled down, which can result in the endpoint IP address being recycled. If a GFE load balancer drops the connection, the client should catch the timed-out request and retry it against the DNS endpoint. Don't reuse the same IP address until the request eventually succeeds, because the IP address might have changed.
If you've identified an issue with consistent Google-side timeouts where retries don't help, contact Cloud Customer Care and make sure to include a fresh PCAP file generated by running a packet capturing tool like tcpdump.
What's next
- Get query performance insights.
- Learn more about optimizing queries for performance.
- Review quotas and limits for queries.
- Learn more about other BigQuery error messages.