Troubleshooting Errors

You'll encounter two types of errors when working with BigQuery: HTTP error codes and job errors. Job errors are represented in the status object returned when you call jobs.get().

Error table

The following table lists possible error codes that return when making a request to the BigQuery API. API responses include an HTTP error code and an errors object. The "Error code" column below maps to the reason property in the error object.

If you use the bq command-line tool to check job status, the error object is not returned by default. To view the object and the corresponding reason property that maps to the table below, use the --format=prettyjson flag. For example: bq --format=prettyjson show -j <job id>

If you receive an HTTP response code that doesn't appear in the list below, the response code indicates an issue with, or an expected result of, the HTTP request itself. For example, a 502 error indicates there is an issue with your network connection. For a full list of HTTP response codes, see HTTP response codes.

Error code HTTP code Description Troubleshooting
accessDenied 403 This error returns when you try to access a resource, such as a table, dataset, or job, that you don't have access to. This error also returns when you try to modify a read-only object. Contact the resource owner and ask for access to the resource.
backendError 500 or 503 This error returns when there is a temporary server failure such as a network connection problem or a server overload. In general, wait a few seconds and try again. However, there are two special cases for troubleshooting this error: jobs.get() calls and jobs.insert() calls.

jobs.get() calls

  • If you received a 503 error when polling jobs.get(), wait a few seconds and poll again.
  • If the job completes but includes an error object that contains backendError, the job failed. You can safely retry the job without concerns about data consistency.

jobs.insert() calls

If you receive this error when making a jobs.insert() call, it's unclear whether the job succeeded. In this situation, you'll need to retry the job.
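The retry semantics above can be sketched in Python. This is a minimal illustration, not the official client library: call_jobs_insert is a hypothetical callable standing in for the actual jobs.insert() request, and the status codes follow the table in this document (500/503 for backendError, 409 for duplicate).

```python
import uuid

def insert_job_with_retry(call_jobs_insert, max_attempts=3):
    """Retry a jobs.insert() call on backendError (HTTP 500/503).

    Supplying a client-generated job ID makes the retry idempotent: if
    the first attempt actually succeeded, retrying with the same job ID
    yields a duplicate (409) error, which confirms the job exists.
    `call_jobs_insert` is a hypothetical callable that takes a job ID,
    performs the request, and returns the HTTP status code.
    """
    job_id = "job_" + uuid.uuid4().hex  # client-generated job ID
    for _ in range(max_attempts):
        status = call_jobs_insert(job_id)
        if status in (500, 503):   # backendError: outcome unknown, retry
            continue
        if status in (200, 409):   # created now, or created on an earlier attempt
            return job_id
        raise RuntimeError("jobs.insert failed with HTTP %d" % status)
    raise RuntimeError("jobs.insert still failing after %d attempts" % max_attempts)
```

Reusing one job ID across attempts is the key design choice: it converts the ambiguous backendError outcome into a definite answer on the next attempt.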

billingNotEnabled 403 This error returns when billing isn't enabled for the project. Enable billing for the project in the Google Cloud Platform Console.
blocked 403 This error returns when BigQuery has temporarily blocked the operation you attempted to perform, usually to prevent a service outage. This error rarely occurs. Contact support for more information.
duplicate 409 This error returns when trying to create a job, dataset or table that already exists. The error also returns when a job's writeDisposition property is set to WRITE_EMPTY and the destination table accessed by the job already exists. Rename the resource you're trying to create, or change the writeDisposition value in the job.
internalError 500 This error returns when an internal error occurs within BigQuery. Contact support or file a bug using the BigQuery issue tracker.
invalid 400 This error returns when there is any kind of invalid input other than an invalid query, such as missing required fields or an invalid table schema. Invalid queries return an invalidQuery error instead.
invalidQuery 400 This error returns when you attempt to run an invalid query. Double check your query for syntax errors. The query reference contains descriptions and examples of how to construct valid queries.
notFound 404 This error returns when you refer to a resource (a dataset, a table, or a job) that doesn't exist. This may also occur when using snapshot decorators to refer to deleted tables that have recently been streamed to. Fix the resource names or wait at least 6 hours after streaming before querying a deleted table.
notImplemented 501 This job error returns when you try to access a feature that isn't implemented. Contact support for more information.
quotaExceeded 403 This error returns when your project exceeds a BigQuery quota, a custom quota, or when you haven't set up billing and exceed the free tier for queries. View the message property of the error object for more information about which quota was exceeded. To reset or raise a BigQuery quota, contact support. To modify a custom quota, submit the Custom Quota Request form.
rateLimitExceeded 403 This error returns if your project exceeds the concurrent rate limit or the API requests limit by sending too many requests too quickly. Slow down the request rate.

If you believe that your project did not exceed one of these limits, please contact support.
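"Slow down the request rate" is usually implemented as truncated exponential backoff. The sketch below is an assumed pattern, not a BigQuery-mandated algorithm; the caller sleeps for each yielded delay before retrying a request that returned rateLimitExceeded.

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=32.0):
    """Yield sleep durations for truncated exponential backoff with jitter.

    Each delay is base * 2**n, capped at `cap`, plus up to one second of
    random jitter so that many clients don't retry in lockstep.
    """
    for n in range(max_retries):
        yield min(cap, base * (2 ** n)) + random.uniform(0.0, 1.0)
```

For example, with the defaults the delays grow roughly 1, 2, 4, 8, 16 seconds (plus jitter) before the client gives up.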

resourceInUse 400 This error returns when you try to delete a dataset that contains tables or when you try to delete a job that is currently running. Empty the dataset before attempting to delete it, or wait for a job to complete before deleting it.
resourcesExceeded 400 This error returns when your query uses too many resources.
  • Try breaking up the query into smaller pieces.
  • Try removing an ORDER BY clause.
  • If your query uses JOIN, ensure that the larger table is on the left-hand side of the clause.
  • If your query uses FLATTEN, determine if it's necessary for your use case. For more information, see nested and repeated data.
  • If your query uses EXACT_COUNT_DISTINCT, consider using COUNT(DISTINCT) instead.
  • If your query uses COUNT(DISTINCT <value>, <n>) with a large <n> value, consider using GROUP BY instead. For more information, see COUNT(DISTINCT).
  • If your query uses UNIQUE, consider using GROUP BY instead, or a window function inside of a subselect.
responseTooLarge 403 This error returns when your query's results are larger than the maximum response size. Some queries execute in multiple stages, and this error returns when any stage returns a response size that is too large, even if the final result is smaller than the maximum. This error commonly returns when queries use an ORDER BY clause. Adding a LIMIT clause or removing the ORDER BY clause can sometimes help. If you want to ensure that large results can be returned, set the allowLargeResults property to true and specify a destination table.
stopped 200 This status code returns when a job is canceled.
tableUnavailable 400 Certain BigQuery tables are backed by data managed by other Google product teams. This error indicates that one of these tables is unavailable. When you encounter this error message, you can retry your request (see internalError troubleshooting suggestions) or contact the Google product team that granted you access to their data.

Sample error response

  "error": {
  "errors": [
    "domain": "global",
    "reason": "notFound",
    "message": "Not Found: Dataset myproject:foo"
  "code": 404,
  "message": "Not Found: Dataset myproject:foo"

Authentication errors

Errors thrown by the OAuth token generation system return the following JSON object, as defined by the OAuth2 specification.

{"error" : "description_string"}

The error is accompanied by either an HTTP 400 Bad Request error or an HTTP 401 Unauthorized error. description_string is one of the error codes defined by the OAuth2 specification. For example:

{"error" : "invalid_grant"}

Troubleshooting streaming inserts

The following sections discuss how to troubleshoot errors that occur when you stream data into BigQuery.

Failure HTTP response codes

If you receive a failure HTTP response code such as a network error, there's no way to tell if the streaming insert succeeded. If you try to simply re-send the request, you might end up with duplicated rows in your table. To help protect your table against duplication, set the insertId property when sending your request. BigQuery uses the insertId property for de-duplication.
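The insertId pattern can be sketched as follows. This is an illustrative helper, not part of any official client: it builds a tabledata.insertAll request body and attaches a generated insertId to each row, so that resending the identical payload after an ambiguous failure lets BigQuery de-duplicate.

```python
import uuid

def build_insert_all_payload(rows):
    """Build a tabledata.insertAll request body with one insertId per row.

    Keeping the generated insertId with each row means a retry can resend
    the exact same payload; BigQuery uses the insertId to drop rows that
    were already inserted by a request whose response was lost.
    """
    return {
        "kind": "bigquery#tableDataInsertAllRequest",
        "rows": [{"insertId": uuid.uuid4().hex, "json": row} for row in rows],
    }
```

The important point is to generate the insertId once, when the payload is first built, and reuse it on every retry, rather than generating fresh IDs per attempt.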

If you receive a permission error, an invalid table name error or an exceeded quota error, no rows are inserted and the entire request fails.

Success HTTP response codes

Even if you receive a success HTTP response code, you'll need to check the insertErrors property of the response to determine if the row insertions were successful, because it's possible that BigQuery was only partially successful at inserting the rows.

If the insertErrors property is an empty list, all of the rows were inserted successfully. Otherwise, unless there was a schema mismatch in any of the rows, the rows listed in the insertErrors property were not inserted, and all other rows were inserted successfully. Each entry's errors property contains detailed information about why that row failed, and its index property indicates the 0-based index of the row in the request that the error applies to.

If BigQuery encounters a schema mismatch on individual rows in the request, none of the rows are inserted and an insertErrors entry is returned for each row, even the rows that did not have a schema mismatch. Rows that did not have a schema mismatch will have an error with the reason property set to stopped, and can be re-sent as-is. Rows that failed include detailed information about the schema mismatch.
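The stopped-row handling above can be sketched as a small helper. This is an assumed client-side pattern: given the original rows and the insertErrors list from an insertAll response, it separates rows that can be re-sent as-is from rows that genuinely failed.

```python
def split_insert_errors(rows, insert_errors):
    """Split streamed rows into (retryable, failed) lists.

    `insert_errors` is the insertErrors list from an insertAll response:
    each entry carries a 0-based row `index` and an `errors` list. A row
    whose only reason is "stopped" was rejected because another row in
    the same request had a schema mismatch, so it can be re-sent as-is;
    any other reason means the row itself is bad.
    """
    retryable, failed = [], []
    for entry in insert_errors:
        row = rows[entry["index"]]
        reasons = {e.get("reason") for e in entry.get("errors", [])}
        if reasons == {"stopped"}:
            retryable.append(row)
        else:
            failed.append(row)
    return retryable, failed
```

The retryable rows would then go back through the same insertAll path (ideally with their original insertId values) while the failed rows are logged or repaired.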

Metadata errors for streaming inserts

Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. In most cases metadata changes propagate within minutes, but during this period API responses may reflect the inconsistent state of the table.

Some scenarios include:

  • Schema Changes - Modifying the schema of a table that has recently received streaming inserts can cause schema mismatch errors, because the streaming system may not immediately pick up the schema change.
  • Table Creation/Deletion - Streaming to a nonexistent table returns a variation of a notFound response. A table created in response may not immediately be recognized by subsequent streaming inserts. Similarly, deleting or recreating a table can create a period during which streaming inserts are effectively delivered to the old table and do not appear in the newly created table.
  • Table Truncation - Truncating a table's data (for example, via a query job that uses a writeDisposition of WRITE_TRUNCATE) can similarly cause subsequent inserts during the consistency period to be dropped.

Missing/Unavailable data

Streaming inserts reside temporarily in the streaming buffer, which has different availability characteristics than managed storage. Certain operations in BigQuery do not interact with the streaming buffer, such as table copy jobs and API methods like tabledata.list. As such, recent streaming data will not be present in the destination table or output.
