Error messages
This document describes error messages that you might encounter when working with BigQuery, including HTTP error codes and suggested troubleshooting steps.
For more information about query errors, see Troubleshoot query errors.
For more information about streaming insert errors, see Troubleshoot streaming inserts.
Error table
Responses from the BigQuery API include an HTTP error code and an error object in the response body. An error object is typically one of the following:
- An errors object, which contains an array of ErrorProto objects.
- An errorResults object, which contains a single ErrorProto object.

The Error message column in the following table maps to the reason property in an ErrorProto object.
The table does not include all possible HTTP errors or other networking errors. Therefore, don't assume that an error object is present in every error response from BigQuery. In addition, you might receive different errors or error objects if you use the Cloud Client Libraries for the BigQuery API. For more information, see BigQuery API Client Libraries.
If you receive an HTTP response code that doesn't appear in the following table, the response code indicates an issue or an expected result with the HTTP request. Response codes in the 5xx range indicate a server-side error. If you receive a 5xx response code, then retry the request later. In some cases, a 5xx response code might be returned by an intermediate server such as a proxy. Examine the response body and response headers for details about the error. For a full list of HTTP response codes, see HTTP response codes.
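The "retry later" guidance for 5xx responses is usually implemented as exponential backoff with jitter. The sketch below is a generic, illustrative helper, not part of any BigQuery client library; the request_fn callable, its response shape, and the delay values are assumptions.

```python
import random
import time

# Status codes from the table below that warrant a retry (server-side,
# typically transient). 502 and 504 are included as common proxy errors.
RETRYABLE_STATUS = {500, 502, 503, 504}

def call_with_backoff(request_fn, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Call request_fn() until it returns a non-5xx status or attempts run out.

    request_fn must return an object with a .status_code attribute.
    """
    for attempt in range(max_attempts):
        response = request_fn()
        if response.status_code not in RETRYABLE_STATUS:
            return response
        if attempt < max_attempts - 1:
            # Exponential backoff with full jitter: sleep a random amount
            # between 0 and base_delay * 2^attempt, capped at max_delay.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    return response
```

The jitter spreads out retries from many clients so they don't all hit the server again at the same moment.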
If you use the bq command-line tool to check job status, the error object is not returned by default. To view the error object and the corresponding reason property that maps to the following table, use the --format=prettyjson flag. For example, bq --format=prettyjson show -j <job id>. To view verbose logging for the bq tool, use --apilog=stdout. To learn more about troubleshooting the bq tool, see Debugging.
Error message | HTTP code | Description | Troubleshooting |
---|---|---|---|
accessDenied | 403 | This error returns when you try to access a resource such as a dataset, table, view, or job that you don't have access to. This error also returns when you try to modify a read-only object. | Contact the resource owner and request access to the resource for the user identified by the principalEmail value in the error's audit log. |
backendError | 500 or 503 | This error returns when there is a temporary server failure such as a network connection problem or a server overload. | In general, wait a few seconds and retry. If the issue recurs, retry with exponential backoff. However, there are two special cases for troubleshooting this error: jobs.get calls and jobs.insert calls. |
badRequest | 400 | The error 'UPDATE or DELETE statement over table <project.dataset.table> would affect rows in the streaming buffer, which is not supported' can occur when some recently streamed rows in a table might not be available for DML operations (DELETE, UPDATE, MERGE), typically for a few minutes, but in rare cases, up to 90 minutes. For more information, see Streaming data availability and DML Limitations. | To see if data is available for table DML operations, check the tables.get response for the streamingBuffer section. If the streamingBuffer section is absent, then table data is available for DML operations. You can also use the streamingBuffer.oldestEntryTime field to identify the age of records in the streaming buffer. |
billingNotEnabled | 403 | This error returns when billing isn't enabled for the project. | Enable billing for the project in the Google Cloud console. |
billingTierLimitExceeded | 400 | This error returns when the value of statistics.query.billingTier for an on-demand job exceeds 100. This occurs when on-demand queries use too much CPU relative to the amount of data scanned. For instructions on how to inspect job statistics, see Managing jobs. | This error most often results from executing inefficient cross-joins, either explicitly or implicitly, for example due to an inexact join condition. These types of queries are not suitable for on-demand pricing due to high resource consumption, and in general they may not scale well. You can either optimize the query or switch to the capacity-based (slots) pricing model to resolve this error. For information about optimizing queries, see Avoiding SQL anti-patterns. |
blocked | 403 | This error returns when BigQuery has temporarily denylisted the operation you attempted to perform, usually to prevent a service outage. | Contact support for more information. |
duplicate | 409 | This error returns when trying to create a job, dataset, or table that already exists. The error also returns when a job's writeDisposition property is set to WRITE_EMPTY and the destination table accessed by the job already exists. | Rename the resource you're trying to create, or change the writeDisposition value in the job. |
internalError | 500 | This error returns when an internal error occurs within BigQuery. | Wait according to the back-off requirements described in the BigQuery Service Level Agreement, then try the operation again. If the error continues to occur, contact support or file a bug using the BigQuery issue tracker. You can also reduce the frequency of this error by using Reservations. |
invalid | 400 | This error returns when there is any type of invalid input other than an invalid query, such as missing required fields or an invalid table schema. Invalid queries return an invalidQuery error. | |
invalidQuery | 400 | This error returns when you attempt to run an invalid query. | Check your query for syntax errors. The query reference contains descriptions and examples of how to construct valid queries. |
invalidUser | 400 | This error returns when you attempt to schedule a query with invalid user credentials. | Refresh the user credentials, as explained in Scheduling queries. |
jobBackendError | 400 | This error returns when the job was created successfully, but failed with an internal error. You may see this error in jobs.query or jobs.getQueryResults. | Retry the job with a new jobId. If the error continues to occur, contact support. |
jobInternalError | 400 | This error returns when the job was created successfully, but failed with an internal error. You may see this error in jobs.query or jobs.getQueryResults. | Retry the job with a new jobId. If the error continues to occur, contact support. |
notFound | 404 | This error returns when you refer to a resource (a dataset, a table, or a job) that doesn't exist, or when the location in the request does not match the location of the resource (for example, the location in which a job is running). This can also occur when using table decorators to refer to deleted tables that have recently been streamed to. | Fix the resource names, correctly specify the location, or wait at least 6 hours after streaming before querying a deleted table. |
notImplemented | 501 | This job error returns when you try to access a feature that isn't implemented. | Contact support for more information. |
proxyAuthenticationRequired | 407 | This error returns between the client environment and the proxy server when the request lacks valid authentication credentials for the proxy server. For more information, see 407 Proxy Authentication Required. | Troubleshooting is specific to your environment. If you receive this error while working in Java, ensure you have set both the jdk.http.auth.tunneling.disabledSchemes= and jdk.http.auth.proxying.disabledSchemes= properties with no value following the equal sign. |
quotaExceeded | 403 | This error returns when your project exceeds a BigQuery quota, a custom quota, or when you haven't set up billing and you have exceeded the free tier for queries. | View the message property of the error object for more information about which quota was exceeded. To reset or raise a BigQuery quota, contact support. To modify a custom quota, submit a request from the Google Cloud console page. If you receive this error using the BigQuery sandbox, you can upgrade from the sandbox. For more information, see Troubleshooting BigQuery quota errors. |
rateLimitExceeded | 403 | This error returns if your project exceeds a short-term rate limit by sending too many requests too quickly. For example, see the rate limits for query jobs and rate limits for API requests. | Slow down the request rate. If you believe that your project did not exceed one of these limits, contact support. For more information, see Troubleshooting BigQuery quota errors. |
resourceInUse | 400 | This error returns when you try to delete a dataset that contains tables or when you try to delete a job that is currently running. | Empty the dataset before attempting to delete it, or wait for a job to complete before deleting it. |
resourcesExceeded | 400 | This error returns when your job uses too many resources. | For troubleshooting information, see Troubleshoot resources exceeded errors. |
responseTooLarge | 403 | This error returns when your query's results are larger than the maximum response size. Some queries execute in multiple stages, and this error returns when any stage returns a response size that is too large, even if the final result is smaller than the maximum. This error commonly returns when queries use an ORDER BY clause. | Adding a LIMIT clause or removing the ORDER BY clause can sometimes help. If you want to ensure that large results can return, you can set the allowLargeResults property to true and specify a destination table. For more information, see Writing large query results. |
stopped | 200 | This status code returns when a job is canceled. | |
tableUnavailable | 400 | Certain BigQuery tables are backed by data managed by other Google product teams. This error indicates that one of these tables is unavailable. | When you encounter this error message, you can retry your request (see internalError troubleshooting suggestions) or contact the Google product team that granted you access to their data. |
timeout | 400 | The job timed out. | Consider reducing the amount of work performed by your operation so that it can complete within the set limit. See Quotas and Limits. |
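The table above can be condensed into a coarse client-side policy: a few reasons are transient and worth retrying, while most indicate a request or permission problem that a retry will not fix. The grouping below is a sketch derived from the table's troubleshooting advice, not an official client-library retry policy; your application may need different categories.

```python
# Transient reasons from the table above: the suggested fix is to retry
# (backendError, internalError) or slow down and then retry
# (rateLimitExceeded). All other reasons require changing the request,
# the resource, or your permissions.
RETRYABLE_REASONS = {"backendError", "internalError", "rateLimitExceeded"}

def should_retry(error_proto):
    """Return True if an ErrorProto-style dict looks transient.

    error_proto is one entry of the "errors" array in a BigQuery error
    response, e.g. {"reason": "backendError", "message": "..."}.
    """
    return error_proto.get("reason") in RETRYABLE_REASONS
```

A dict without a reason key is treated as non-retryable, which is the safer default when the error shape is unexpected.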
Sample error response
```
GET https://bigquery.googleapis.com/bigquery/v2/projects/12345/datasets/foo

Response: [404]
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "notFound",
        "message": "Not Found: Dataset myproject:foo"
      }
    ],
    "code": 404,
    "message": "Not Found: Dataset myproject:foo"
  }
}
```
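The reason and message in a response like the sample above can be pulled straight out of the JSON body. This sketch hard-codes the sample payload rather than issuing a real request.

```python
import json

# The body of the sample 404 response above, as a JSON string.
body = '''
{
  "error": {
    "errors": [
      {"domain": "global", "reason": "notFound",
       "message": "Not Found: Dataset myproject:foo"}
    ],
    "code": 404,
    "message": "Not Found: Dataset myproject:foo"
  }
}
'''

payload = json.loads(body)
# The first ErrorProto entry carries the reason that maps to the error table.
first_error = payload["error"]["errors"][0]
print(first_error["reason"])   # notFound
print(first_error["message"])  # Not Found: Dataset myproject:foo
```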
Authentication errors
Errors thrown by the OAuth token generation system return the following JSON object, as defined by the OAuth2 specification.
{"error" : "description_string"}
The error is accompanied by either an HTTP 400 Bad Request error or an HTTP 401 Unauthorized error. description_string is one of the error codes defined by the OAuth2 specification. For example:
{"error":"invalid_client"}
Review errors
You can use the logs explorer to view authentication errors for specific jobs, users, or other scopes. Below are several examples of logs explorer filters that you can use to review authentication errors.
- Search for failed jobs with permission issues in the Policy Denied audit logs:
resource.type="bigquery_resource" protoPayload.status.message=~"Access Denied" logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access"
- Search for a specific user or service account used for authentication:
resource.type="bigquery_resource" protoPayload.authenticationInfo.principalEmail="EMAIL"
Replace EMAIL with the email address of the user or service account.

- Search for IAM policy changes in the Admin Activity audit logs:
protoPayload.methodName=~"SetIamPolicy" logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity"
- Search for changes to a specific BigQuery dataset in the Data Access audit logs:
resource.type="bigquery_resource" protoPayload.resourceName="projects/PROJECT_ID/datasets/DATASET_ID" logName=projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access
Replace the following:

- PROJECT_ID: the ID of the project containing the resource
- DATASET_ID: the ID of the dataset containing the resource
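If you build these filters in code, the placeholder substitution above is a simple string-formatting step. The helper below is a hypothetical convenience, not part of any Google Cloud SDK; only the filter text itself comes from the example above.

```python
# The Data Access filter from the example above, with the placeholders
# expressed as format fields.
DATASET_FILTER = (
    'resource.type="bigquery_resource" '
    'protoPayload.resourceName="projects/{project_id}/datasets/{dataset_id}" '
    'logName=projects/{project_id}/logs/'
    'cloudaudit.googleapis.com%2Fdata_access'
)

def dataset_audit_filter(project_id, dataset_id):
    """Return the Logs Explorer filter for one dataset's Data Access logs."""
    return DATASET_FILTER.format(project_id=project_id, dataset_id=dataset_id)
```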
Connectivity error messages
The following table lists error messages that you might see because of intermittent connectivity issues when using the client libraries or calling the BigQuery API from your code:
Error message | Client library or API | Troubleshooting |
---|---|---|
com.google.cloud.bigquery.BigQueryException: Read timed out | Java | Set a larger timeout value. |
Connection has been shutdown: javax.net.ssl.SSLException: java.net.SocketException: Connection reset at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:115) | Java | Implement a retry mechanism and set a larger timeout value. |
javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake | Java | Implement a retry mechanism and set a larger timeout value. |
Connection aborted. RemoteDisconnected('Remote end closed connection without response' | Python | Set a larger timeout value. |
TaskCanceledException: A task was canceled | .NET library | Increase the timeout value on the client side. |
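A common pattern behind the advice in this table (retry, and use a larger timeout) is to re-issue the call with progressively longer deadlines. The generic sketch below assumes the callable accepts a timeout keyword argument; the real parameter name and exception types vary by client library and language.

```python
# Generic retry-with-growing-timeout helper; fn and the timeout values
# are assumptions, not a BigQuery client-library API.
def call_with_growing_timeout(fn, timeouts=(10, 30, 60)):
    """Call fn(timeout=t) with each timeout in turn until one succeeds."""
    last_exc = None
    for t in timeouts:
        try:
            return fn(timeout=t)
        except (TimeoutError, ConnectionError) as exc:
            last_exc = exc  # transient: try again with a longer deadline
    raise last_exc
```

If every attempt fails, the last exception is re-raised so the caller still sees the underlying connectivity error.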
Google Cloud console error messages
The following table lists error messages that you might see while you work in the Google Cloud console.
Error message | Description | Troubleshooting |
---|---|---|
Unknown error response from the server. | This error displays when the Google Cloud console receives an unknown error from the server; for example, when you click a dataset or other type of link, and the page cannot be displayed. | Switch to your browser's incognito, or private, mode and repeat the action that resulted in the error. If no error results in incognito mode, then the error might be due to a browser extension, such as an ad blocker. Disable your browser extensions while not in incognito mode, and see if that resolves the issue. |