Error messages

This document describes error messages you might encounter when working with BigQuery, including HTTP error codes, job errors, and Google Cloud console error messages. Job errors are represented in the status object when calling jobs.get.

Error table

Responses from the BigQuery API include an HTTP error code and an error object in the response body. An error object is typically one of the following:

  • An errors object, which contains an array of ErrorProto objects.
  • An errorResult object, which contains a single ErrorProto object.

The error message in each of the following entries maps to the reason property in an ErrorProto object.

The list does not include all possible HTTP errors or other networking errors. Therefore, don't assume that an error object is present in every error response from BigQuery. In addition, you might receive different errors or error objects if you use the Cloud Client Libraries for the BigQuery API. For more information, see BigQuery API Client Libraries.

If you receive an HTTP response code that doesn't appear in the following list, the response code indicates an issue with, or an expected result of, the HTTP request. Response codes in the 5xx range indicate a server-side error. If you receive a 5xx response code, then retry the request later. In some cases, a 5xx response code might be returned by an intermediate server such as a proxy. Examine the response body and response headers for details about the error. For a full list of HTTP response codes, see HTTP response codes.

If you use the bq command-line tool to check job status, the error object is not returned by default. To view the error object and the corresponding reason property that maps to the entries below, use the --format=prettyjson flag. For example, bq --format=prettyjson show -j <job id>. To view verbose logging for the bq tool, use --apilog=stdout. To learn more about troubleshooting the bq tool, see Debugging.
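
For example, here is a minimal sketch that inspects a job's error object with the google-cloud-bigquery Python client. The job ID and location are hypothetical placeholders; the reason value it prints is what maps to the entries below:

from google.cloud import bigquery

client = bigquery.Client()
job = client.get_job("my_job_id", location="US")  # hypothetical job ID and location

if job.error_result:
    # error_result is a single ErrorProto-style dict; errors holds the full array
    print("reason:", job.error_result.get("reason"))
    print("message:", job.error_result.get("message"))
for err in job.errors or []:
    print("detail:", err)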

Each of the following entries lists an error message, the HTTP code that accompanies it, a description of the error, and troubleshooting steps.
accessDenied (HTTP 403): This error returns when you try to access a resource, such as a dataset, table, view, or job, that you don't have access to. This error also returns when you try to modify a read-only object. Troubleshooting: Contact the resource owner and request access to the resource for the user identified by the principalEmail value in the error's audit log.
backendError (HTTP 500 or 503): This error returns when there is a temporary server failure, such as a network connection problem or a server overload. Troubleshooting: In general, wait a few seconds and try again. However, there are two special cases for troubleshooting this error: jobs.get calls and jobs.insert calls.

jobs.get calls

  • If you received a 503 error when polling jobs.get, wait a few seconds and poll again.
  • If the job completes but includes an error object that contains backendError, the job failed. You can safely retry the job without concerns about data consistency.

jobs.insert calls

If you receive this error when making a jobs.insert call, it's unclear whether the job succeeded. In this situation, retry the job.
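
For example, here is a minimal retry sketch using the google-cloud-bigquery Python client. The backoff values and attempt count are illustrative assumptions, not BigQuery requirements; each client.query() call submits a new job, so a retried query runs under a new jobId:

import time

from google.api_core.exceptions import InternalServerError, ServiceUnavailable
from google.cloud import bigquery

client = bigquery.Client()

def run_query_with_retry(sql, max_attempts=3):
    """Retry a query on 500/503, which the client raises as these exception types."""
    for attempt in range(max_attempts):
        try:
            return client.query(sql).result()
        except (InternalServerError, ServiceUnavailable):
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff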

badRequest (HTTP 400): The error 'UPDATE or DELETE statement over table <project.dataset.table> would affect rows in the streaming buffer, which is not supported' can occur when recently streamed rows in a table are not yet available for DML operations (DELETE, UPDATE, MERGE), typically for a few minutes, but in rare cases up to 90 minutes. For more information, see Streaming data availability and DML limitations. Troubleshooting: To see whether data is available for table DML operations, check the tables.get response for the streamingBuffer section. If the streamingBuffer section is absent, then table data is available for DML operations. You can also use the streamingBuffer.oldestEntryTime field to identify the age of records in the streaming buffer.
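
For example, here is a minimal sketch of that check using the google-cloud-bigquery Python client; the table name is a hypothetical placeholder:

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("myproject.mydataset.mytable")  # hypothetical table

if table.streaming_buffer is None:
    print("No streaming buffer; table data is available for DML operations.")
else:
    print("Oldest buffered record:", table.streaming_buffer.oldest_entry_time)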
billingNotEnabled (HTTP 403): This error returns when billing isn't enabled for the project. Troubleshooting: Enable billing for the project in the Google Cloud console.
billingTierLimitExceeded (HTTP 400): This error returns when the value of statistics.query.billingTier for an on-demand job exceeds 100. This occurs when on-demand queries use too much CPU relative to the amount of data scanned. For instructions on how to inspect job statistics, see Managing jobs. Troubleshooting: This error most often results from executing inefficient cross-joins, either explicitly or implicitly, for example due to an inexact join condition. These types of queries are not suitable for on-demand pricing due to high resource consumption, and in general they may not scale well. You can either optimize the query or switch to flat-rate pricing to resolve this error. For information about optimizing queries, see Avoiding SQL anti-patterns.
blocked (HTTP 403): This error returns when BigQuery has temporarily denylisted the operation you attempted to perform, usually to prevent a service outage. Troubleshooting: Contact support for more information.
duplicate (HTTP 409): This error returns when you try to create a job, dataset, or table that already exists. The error also returns when a job's writeDisposition property is set to WRITE_EMPTY and the destination table accessed by the job already exists. Troubleshooting: Rename the resource you're trying to create, or change the writeDisposition value in the job.
internalError (HTTP 500): This error returns when an internal error occurs within BigQuery. Troubleshooting: Wait according to the back-off requirements described in the BigQuery Service Level Agreement, then try the operation again. If the error continues to occur, contact support or file a bug using the BigQuery issue tracker. You can also try to reduce the frequency of this error by using Reservations.
invalid (HTTP 400): This error returns when there is any type of invalid input other than an invalid query, such as missing required fields or an invalid table schema. Invalid queries return an invalidQuery error.
invalidQuery (HTTP 400): This error returns when you attempt to run an invalid query. Troubleshooting: Check your query for syntax errors. The query reference contains descriptions and examples of how to construct valid queries.
invalidUser (HTTP 400): This error returns when you attempt to schedule a query with invalid user credentials. Troubleshooting: Refresh the user credentials, as explained in Scheduling queries.
jobBackendError (HTTP 400): This error returns when the job was created successfully, but failed with an internal error. You may see this error in jobs.query or jobs.getQueryResults. Troubleshooting: Retry the job with a new jobId. If the error continues to occur, contact support.
jobInternalError (HTTP 400): This error returns when the job was created successfully, but failed with an internal error. You may see this error in jobs.query or jobs.getQueryResults. Troubleshooting: Retry the job with a new jobId. If the error continues to occur, contact support.
notFound (HTTP 404): This error returns when you refer to a resource (a dataset, a table, or a job) that doesn't exist, or when the location in the request does not match the location of the resource (for example, the location in which a job is running). This can also occur when you use table decorators to refer to deleted tables that have recently been streamed to. Troubleshooting: Fix the resource names, correctly specify the location, or wait at least 6 hours after streaming before querying a deleted table.
notImplemented (HTTP 501): This job error returns when you try to access a feature that isn't implemented. Troubleshooting: Contact support for more information.
quotaExceeded (HTTP 403): This error returns when your project exceeds a BigQuery quota, a custom quota, or when you haven't set up billing and you have exceeded the free tier for queries. Troubleshooting: View the message property of the error object for more information about which quota was exceeded. To reset or raise a BigQuery quota, contact support. To modify a custom quota, submit a request from the Google Cloud console page. If you receive this error using the BigQuery sandbox, you can upgrade from the sandbox.

For more information, see Troubleshooting BigQuery quota errors.

rateLimitExceeded (HTTP 403): This error returns if your project exceeds a short-term rate limit by sending too many requests too quickly. For example, see the rate limits for query jobs and rate limits for API requests. Troubleshooting: Slow down the request rate.

If you believe that your project did not exceed one of these limits, contact support.

For more information, see Troubleshooting BigQuery quota errors.

resourceInUse (HTTP 400): This error returns when you try to delete a dataset that contains tables, or when you try to delete a job that is currently running. Troubleshooting: Empty the dataset before attempting to delete it, or wait for the job to complete before deleting it.
resourcesExceeded (HTTP 400): This error returns when your job uses too many resources.

For query jobs:

  • Get more details about the resource warning for your queries.
  • Try breaking up the query into smaller pieces.
  • Try removing an ORDER BY clause.
  • If your query uses JOIN, ensure that the larger table is on the left side of the JOIN clause.
  • If your query uses FLATTEN, determine if it's necessary for your use case. For more information, see nested and repeated data.
  • If your query uses EXACT_COUNT_DISTINCT, consider using COUNT(DISTINCT) instead.
  • If your query uses COUNT(DISTINCT <value>, <n>) with a large <n> value, consider using GROUP BY instead. For more information, see COUNT(DISTINCT).
  • If your query uses UNIQUE, consider using GROUP BY instead, or a window function inside of a subselect.
  • If your query materializes many rows using a LIMIT clause, consider filtering on another column, for example ROW_NUMBER(), or removing the LIMIT clause altogether to allow write parallelization.
  • If your query uses deeply nested views and a WITH clause, complexity can grow exponentially, thereby reaching the limits.
  • Don't replace temporary tables with WITH clauses. The clause might have to be recalculated several times, which can make the query complex and therefore slow. Persisting intermediate results in temporary tables instead helps reduce the complexity.
  • Avoid using UNION ALL queries.

For more information, see Optimize query computation.

For load jobs:

If you are loading Avro or Parquet files, reduce the row size in the files, and check the specific size restrictions for the file format that you are loading.

If you get this error when loading ORC files, contact support.

responseTooLarge (HTTP 403): This error returns when your query's results are larger than the maximum response size. Some queries execute in multiple stages, and this error returns when any stage returns a response size that is too large, even if the final result is smaller than the maximum. This error commonly returns when queries use an ORDER BY clause. Troubleshooting: Adding a LIMIT clause or removing the ORDER BY clause can sometimes help. If you want to ensure that large results can return, you can set the allowLargeResults property to true and specify a destination table. For more information, see Writing large query results.
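
For example, here is a minimal sketch of setting allowLargeResults with the google-cloud-bigquery Python client. The table names are hypothetical placeholders; note that allowLargeResults applies to legacy SQL queries:

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(
    allow_large_results=True,
    destination="myproject.mydataset.large_results",  # hypothetical destination table
    use_legacy_sql=True,  # allowLargeResults is a legacy SQL option
)
client.query("SELECT * FROM [myproject:mydataset.big_table]", job_config=job_config).result()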
stopped (HTTP 200): This status code returns when a job is canceled.
tableUnavailable (HTTP 400): Certain BigQuery tables are backed by data managed by other Google product teams. This error indicates that one of these tables is unavailable. Troubleshooting: When you encounter this error message, you can retry your request (see the internalError troubleshooting suggestions) or contact the Google product team that granted you access to their data.
timeout (HTTP 400): The job timed out. Troubleshooting: Consider reducing the amount of work performed by your operation so that it can complete within the set limit. For more information, see Quotas and limits.

Sample error response

GET https://bigquery.googleapis.com/bigquery/v2/projects/12345/datasets/foo
Response:
[404]
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "notFound",
        "message": "Not Found: Dataset myproject:foo"
      }
    ],
    "code": 404,
    "message": "Not Found: Dataset myproject:foo"
  }
}

Authentication errors

Errors thrown by the OAuth token generation system return the following JSON object, as defined by the OAuth2 specification.

{"error" : "description_string"}

The error is accompanied by either an HTTP 400 Bad Request error or an HTTP 401 Unauthorized error. description_string is one of the error codes defined by the OAuth2 specification. For example:

{"error":"invalid_client"}

Troubleshooting streaming inserts

The following sections discuss how to troubleshoot errors that occur when you stream data into BigQuery using the legacy streaming API. For more information on how to resolve quota errors for streaming inserts, see Streaming insert quota errors.

Failure HTTP response codes

If you receive a failure HTTP response code such as a network error, there's no way to tell whether the streaming insert succeeded. If you simply re-send the request, you might end up with duplicated rows in your table. To help protect your table against duplication, set the insertId property when sending your request. BigQuery uses the insertId property for de-duplication.
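
For example, here is a minimal sketch using the google-cloud-bigquery Python client; the table, rows, and IDs are hypothetical placeholders. The row_ids argument supplies the insertId for each row:

from google.cloud import bigquery

client = bigquery.Client()
rows = [{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]  # hypothetical rows
errors = client.insert_rows_json(
    "myproject.mydataset.scores",          # hypothetical table
    rows,
    row_ids=["event-0001", "event-0002"],  # stable IDs let BigQuery de-duplicate a re-sent request
)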

If you receive a permission error, an invalid table name error, or an exceeded quota error, no rows are inserted and the entire request fails.

Success HTTP response codes

Even if you receive a success HTTP response code, you need to check the insertErrors property of the response to determine whether the row insertions were successful, because it's possible that BigQuery was only partially successful at inserting the rows. You might encounter one of the following scenarios (see the sketch after this list):

  • All rows inserted successfully. If the insertErrors property is an empty list, all of the rows were inserted successfully.
  • Some rows inserted successfully. Except in cases where there is a schema mismatch in any of the rows, rows indicated in the insertErrors property are not inserted, and all other rows are inserted successfully. The errors property contains detailed information about why each unsuccessful row failed. The index property indicates the 0-based row index of the request that the error applies to.
  • None of the rows inserted successfully. If BigQuery encounters a schema mismatch on individual rows in the request, none of the rows are inserted and an insertErrors entry is returned for each row, even the rows that did not have a schema mismatch. Rows that did not have a schema mismatch have an error with the reason property set to stopped, and can be re-sent as-is. Rows that failed include detailed information about the schema mismatch. To learn about the supported protocol buffer types for each BigQuery data type, see Data type conversions.
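
For example, here is a minimal sketch of checking the per-row errors with the google-cloud-bigquery Python client; the table and rows are hypothetical placeholders. The list returned by insert_rows_json mirrors the insertErrors property:

from google.cloud import bigquery

client = bigquery.Client()
rows = [{"name": "alice", "score": 10}, {"name": "bob", "score": 7}]  # hypothetical rows
errors = client.insert_rows_json("myproject.mydataset.scores", rows)  # hypothetical table

if not errors:
    print("All rows inserted successfully.")
else:
    for entry in errors:
        # index is the 0-based position of the row in the request
        for err in entry["errors"]:
            if err.get("reason") == "stopped":
                print("Row", entry["index"], "was stopped and can be re-sent as-is.")
            else:
                print("Row", entry["index"], "failed:", err.get("message"))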

Metadata errors for streaming inserts

Because BigQuery's streaming API is designed for high insertion rates, modifications to the underlying table metadata are eventually consistent when interacting with the streaming system. Most of the time, metadata changes are propagated within minutes, but during this period API responses might reflect the inconsistent state of the table.

Some scenarios include:

  • Schema Changes. Modifying the schema of a table that has recently received streaming inserts can cause responses with schema mismatch errors because the streaming system might not immediately pick up the schema change.
  • Table Creation/Deletion. Streaming to a nonexistent table returns a variation of a notFound response. Creating the table in response might not immediately be recognized by subsequent streaming inserts. Similarly, deleting or recreating a table can create a period of time during which streaming inserts are effectively delivered to the old table, and might not be present in the new table.
  • Table Truncation. Truncating a table's data (by using a query job that uses writeDisposition of WRITE_TRUNCATE) can similarly cause subsequent inserts during the consistency period to be dropped.

Missing/Unavailable data

Streaming inserts reside temporarily in write-optimized storage, which has different availability characteristics than managed storage. Certain operations in BigQuery do not interact with write-optimized storage, such as table copy jobs and API methods like tabledata.list. In these cases, recent streaming data won't be present in the destination table or output.

Google Cloud console error messages

The following entries describe error messages that you might see while you work in the Google Cloud console.

Unknown error response from the server: This error displays when the Google Cloud console receives an unknown error from the server, for example, when you click a dataset or other type of link, and the page cannot be displayed. Troubleshooting: Try switching to your browser's incognito (private) mode and repeating the action that resulted in the error. If no error results in incognito mode, then the error might be due to a browser extension, such as an ad blocker. Try disabling your browser extensions while not in incognito mode, and see whether that resolves the issue.
