This page describes various error codes and error messages you might encounter when working with BigQuery, including HTTP error codes, job errors, and Google Cloud Console error messages. Job errors are represented in the status object when calling jobs.get.
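For example, with the google-cloud-bigquery Python client library, the job's status object is surfaced through the error_result and errors attributes of the job returned by get_job. The sketch below assumes a job ID and location from a job you have already run; both values are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# "my_job_id" and the location are placeholders for a job you already ran.
job = client.get_job("my_job_id", location="US")  # calls jobs.get under the hood

if job.error_result:
    # error_result mirrors status.errorResult; its "reason" value corresponds
    # to the "Error code" column in the table below.
    print("Job failed:", job.error_result["reason"], "-", job.error_result["message"])
for err in job.errors or []:
    print("Detail:", err)
```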
Error table
The following table lists error codes that BigQuery might return. Responses from the BigQuery API include an HTTP error code and an errors object in the response body. The "Error code" column below maps to the reason property in the error object. This table does not include all possible HTTP errors or other networking errors. Therefore, don't assume that an errors object is present in every error response from BigQuery.
In addition, you might receive different errors or error objects if you use the Cloud Client Libraries for the BigQuery API. For more information, see BigQuery API Client Libraries.
If you receive an HTTP response code that doesn't appear in the list below, the response code indicates an issue or an expected result with the HTTP request. Response codes in the 5xx range indicate a server-side error. If you receive a 5xx response code, then retry the request later. In some cases, a 5xx response code might be returned by an intermediate server such as a proxy. Examine the response body and response headers for details about the error. For a full list of HTTP response codes, see HTTP response codes.
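The sketch below shows one way to apply this retry guidance with the Python client library; the exception classes correspond to 500 and 503 responses, and the backoff values are illustrative defaults rather than prescribed settings.

```python
import random
import time

from google.api_core.exceptions import InternalServerError, ServiceUnavailable
from google.cloud import bigquery

client = bigquery.Client()

def query_with_backoff(sql, max_attempts=5):
    """Retry a query when BigQuery returns a 5xx error (backendError/internalError)."""
    for attempt in range(max_attempts):
        try:
            return list(client.query(sql).result())
        except (InternalServerError, ServiceUnavailable):
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter before retrying.
            time.sleep((2 ** attempt) + random.random())

rows = query_with_backoff("SELECT 1")
```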
If you use the bq command-line tool to check job status, the error object is not returned by default. To view the object and the corresponding reason property that maps to the following table, use the --format=prettyjson flag. For example, bq --format=prettyjson show -j <job id>.
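An abbreviated, illustrative view of the prettyjson output for a failed job (most fields omitted); the reason value under status.errorResult is what the "Error code" column in the following table refers to:

```json
{
  "status": {
    "errorResult": {
      "message": "Not Found: Dataset myproject:foo",
      "reason": "notFound"
    },
    "state": "DONE"
  }
}
```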
Error code | HTTP code | Description | Troubleshooting |
---|---|---|---|
accessDenied | 403 | This error returns when you try to access a resource such as a dataset, table, view, or job that you don't have access to. This error also returns when you try to modify a read-only object. | Contact the resource owner and ask for access to the resource. |
backendError | 500 or 503 | This error returns when there is a temporary server failure such as a network connection problem or a server overload. | In general, wait a few seconds and try again. However, there are two special cases for troubleshooting this error: jobs.get calls and jobs.insert calls. |
billingNotEnabled | 403 | This error returns when billing isn't enabled for the project. | Enable billing for the project in the Google Cloud Console. |
blocked | 403 | This error returns when BigQuery has temporarily denylisted the operation you attempted to perform, usually to prevent a service outage. This error rarely occurs. | Contact support for more information. |
duplicate | 409 | This error returns when trying to create a job, dataset, or table that already exists. The error also returns when a job's writeDisposition property is set to WRITE_EMPTY and the destination table accessed by the job already exists. | Rename the resource you're trying to create, or change the writeDisposition value in the job. |
internalError | 500 | This error returns when an internal error occurs within BigQuery. | Wait according to the back-off requirements described in the BigQuery Service Level Agreement, then try the operation again. If the error continues to occur, contact support or file a bug using the BigQuery issue tracker. |
invalid | 400 | This error returns when there is any kind of invalid input other than an invalid query, such as missing required fields or an invalid table schema. Invalid queries return an invalidQuery error instead. | |
invalidQuery | 400 | This error returns when you attempt to run an invalid query. | Double check your query for syntax errors. The query reference contains descriptions and examples of how to construct valid queries. |
invalidUser | 400 | This error returns when you attempt to schedule a query with invalid user credentials. | Refresh the user credentials, as explained in Scheduling queries. |
notFound | 404 | This error returns when you refer to a resource (a dataset, a table, or a job) that doesn't exist. This can also occur when using snapshot decorators to refer to deleted tables that have recently been streamed to. | Fix the resource names, or wait at least 6 hours after streaming before querying a deleted table. |
notImplemented | 501 | This job error returns when you try to access a feature that isn't implemented. | Contact support for more information. |
quotaExceeded | 403 | This error returns when your project exceeds a BigQuery quota, a custom quota, or when you haven't set up billing and you have exceeded the free tier for queries. | View the message property of the error object for more information about which quota was exceeded. To reset or raise a BigQuery quota, contact support. To modify a custom quota, submit a request from the Google Cloud Console page. If you receive this error using the BigQuery sandbox, you can upgrade. For more information, see Troubleshooting BigQuery quota errors. |
rateLimitExceeded | 403 | This error returns if your project exceeds a short-term rate limit by sending too many requests too quickly. For example, see the rate limits for query jobs and rate limits for API requests. | Slow down the request rate. If you believe that your project did not exceed one of these limits, contact support. For more information, see Troubleshooting BigQuery quota errors. |
resourceInUse | 400 | This error returns when you try to delete a dataset that contains tables or when you try to delete a job that is currently running. | Empty the dataset before attempting to delete it, or wait for a job to complete before deleting it. |
resourcesExceeded | 400 | This error returns when your query uses too many resources. | |
responseTooLarge | 403 | This error returns when your query's results are larger than the maximum response size. Some queries execute in multiple stages, and this error returns when any stage returns a response size that is too large, even if the final result is smaller than the maximum. This error commonly returns when queries use an ORDER BY clause. | Adding a LIMIT clause or removing the ORDER BY clause can sometimes help. If you want to ensure that large results can return, you can set the allowLargeResults property to true and specify a destination table. |
stopped | 200 | This status code returns when a job is canceled. | |
tableUnavailable | 400 | Certain BigQuery tables are backed by data managed by other Google product teams. This error indicates that one of these tables is unavailable. | When you encounter this error message, you can retry your request (see internalError troubleshooting suggestions) or contact the Google product team that granted you access to their data. |
timeout | 400 | The job timed out. | Consider reducing the amount of work performed by your operation so that it can complete within the set limit. See Quotas and Limits. |
Sample error response
GET https://bigquery.googleapis.com/bigquery/v2/projects/12345/datasets/foo

Response: [404]

{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "notFound",
        "message": "Not Found: Dataset myproject:foo"
      }
    ],
    "code": 404,
    "message": "Not Found: Dataset myproject:foo"
  }
}
Authentication errors
Errors thrown by the OAuth token generation system return the following JSON object, as defined by the OAuth2 specification.
{"error" : "description_string"}
The error is accompanied by either an HTTP 400 Bad Request error or an HTTP 401 Unauthorized error. description_string is one of the error codes defined by the OAuth2 specification. For example:
{"error":"invalid_client"}
Troubleshooting streaming inserts
The following sections discuss how to troubleshoot errors that occur when you stream data into BigQuery.
Failure HTTP response codes
If you receive a failure HTTP response code such as a network error, there's no way to tell whether the streaming insert succeeded. If you simply re-send the request, you might end up with duplicated rows in your table. To help protect your table against duplication, set the insertId property when sending your request. BigQuery uses the insertId property for de-duplication.
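A minimal sketch of setting insertId with the Python client library, where insert_rows_json accepts the IDs through its row_ids argument; the table ID and row contents are placeholders:

```python
import uuid

from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # placeholder

rows = [{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]

# One stable ID per row: BigQuery uses these as insertId values for
# best-effort de-duplication if the same request has to be re-sent.
row_ids = [str(uuid.uuid4()) for _ in rows]

errors = client.insert_rows_json(table_id, rows, row_ids=row_ids)
```

If the request fails with an ambiguous error such as a network timeout, re-sending it with the same row_ids lets BigQuery discard any rows that were already inserted.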
If you receive a permission error, an invalid table name error, or an exceeded quota error, no rows are inserted and the entire request fails.
Success HTTP response codes
Even if you receive a success HTTP response code, you'll need to check the insertErrors property of the response to determine whether the row insertions were successful, because it's possible that BigQuery was only partially successful at inserting the rows.
If the insertErrors property is an empty list, all of the rows were inserted successfully. Otherwise, except in cases where there was a schema mismatch in any of the rows, rows indicated in the insertErrors property were not inserted, and all other rows were inserted successfully. The errors property contains detailed information about why each unsuccessful row failed. The index property indicates the 0-based row index of the request that the error applies to.
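With the Python client library, insert_rows_json returns the insertErrors content directly as a list, so the check looks roughly like this (table_id, rows, and row_ids as in the earlier sketch):

```python
errors = client.insert_rows_json(table_id, rows, row_ids=row_ids)

if not errors:
    print("All rows were inserted successfully.")
else:
    for entry in errors:
        # "index" is the 0-based position of the failed row in the request;
        # "errors" describes why that row failed.
        print("Row", entry["index"], "failed:", entry["errors"])
```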
If BigQuery encounters a schema mismatch on individual rows in the request, none of the rows are inserted and an insertErrors entry is returned for each row, even the rows that did not have a schema mismatch. Rows that did not have a schema mismatch will have an error with the reason property set to stopped, and can be re-sent as-is. Rows that failed include detailed information about the schema mismatch.
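A sketch of handling that case with the Python client library, re-sending only the rows whose errors all have reason set to stopped (same placeholder variables as above):

```python
failed = client.insert_rows_json(table_id, rows, row_ids=row_ids)

# Rows rejected only because another row in the request had a schema
# mismatch carry reason "stopped" and can be re-sent as-is.
retry_rows, retry_ids = [], []
for entry in failed:
    if entry["errors"] and all(err.get("reason") == "stopped" for err in entry["errors"]):
        retry_rows.append(rows[entry["index"]])
        retry_ids.append(row_ids[entry["index"]])

if retry_rows:
    client.insert_rows_json(table_id, retry_rows, row_ids=retry_ids)
```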
Metadata errors for streaming inserts
Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. Most of the time, metadata changes are propagated within minutes, but during this period API responses might reflect the inconsistent state of the table.
Some scenarios include:
- Schema Changes. Modifying the schema of a table that has recently received streaming inserts can cause responses with schema mismatch errors because the streaming system might not immediately pick up the schema change.
- Table Creation/Deletion. Streaming to a nonexistent table returns a variation of a notFound response. A table created in response might not immediately be recognized by subsequent streaming inserts. Similarly, deleting or recreating a table can create a period of time where streaming inserts are effectively delivered to the old table. The streaming inserts might not be present in the new table.
- Table Truncation. Truncating a table's data (by using a query job that uses writeDisposition of WRITE_TRUNCATE) can similarly cause subsequent inserts during the consistency period to be dropped.
Missing/Unavailable data
Streaming inserts reside temporarily in the streaming buffer, which has different availability characteristics than managed storage. Certain operations in BigQuery do not interact with the streaming buffer, such as table copy jobs and API methods like tabledata.list. Recent streaming data won't be present in the destination table or output.
Cloud Console error messages
The following table lists error messages that you might see while you work in the Cloud Console.
Error message | Description | Troubleshooting |
---|---|---|
Unknown error response from the server. | This error displays when the Cloud Console receives an unknown error from the server; for example, when you click a dataset or other type of link, and the page cannot be displayed. | Try switching to your browser's incognito, or private, mode and repeating the action that resulted in the error. If no error results in incognito mode, then the error might be due to a browser extension, such as an ad blocker. Try disabling your browser extensions while not in incognito mode, and see if that resolves the issue. |