Quotas and limits

The following lists outline the rate limits and quota limits of BigQuery.

BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. Specific policies vary depending on resource availability, user profile, service usage history, and other factors, and are subject to change without notice.

Requesting a quota increase

  • Some BigQuery quotas are displayed in Google Cloud Console. For these quotas, you can request a quota increase by using the Cloud Console. For step-by-step guidance through this process, open the How to request a quota increase tutorial in Google Cloud Console.

  • To request an increase for quotas that are not displayed in Google Cloud Console, contact Cloud Customer Care.

  • Some quotas and limits cannot be increased; for example, quotas and limits that provide system safeguards.

For more information, see Requesting a higher quota limit.

Query jobs

The following limits apply to query jobs created automatically by running interactive queries and to jobs submitted programmatically using jobs.query and query-type jobs.insert method calls.

  • Concurrent rate limit for interactive queries — 100 concurrent queries per project

    Queries with results that are returned from the query cache count against this limit for the duration it takes for BigQuery to determine that it is a cache hit. Dry run queries do not count against this limit. You can specify a dry run query using the --dry_run flag.

    For information about strategies to stay within this limit, see Troubleshooting quota errors. To raise the limit, contact Customer Care or contact sales.

  • Concurrent rate limit for interactive queries against Cloud Bigtable external data sources — 4 concurrent queries

    You are limited to 4 concurrent queries against a Bigtable external data source.

  • Concurrent rate limit for legacy SQL queries that contain user-defined functions (UDFs) — 6 concurrent queries

    The concurrent rate limit for legacy SQL queries that contain UDFs includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent rate limit for interactive queries. This limit does not apply to standard SQL queries.

  • Cross-region federated querying — 1 TB per project per day

    If the BigQuery query processing location and the Cloud SQL instance location are different, this is a cross-region query. You can run up to 1 TB in cross-region queries per project per day. See Cloud SQL federated queries.

  • Daily query size limit — Unlimited by default

    You can specify limits on the amount of data users can query by setting custom quotas.

  • Daily destination table update limit — 1,500 updates per table per day

    Destination tables in a query job are subject to the limit of 1,500 updates per table per day. Destination table updates include append and overwrite operations that are performed by queries that you run by using the Cloud Console, using the bq command-line tool, or calling the jobs.query and query-type jobs.insert API methods.

  • Query/script execution time limit — 6 hours

    This limit cannot be changed. In some cases, queries can be retried. When this happens, the retried query can run for an additional six hours and can be retried up to three times. This might result in a total run time above six hours.

  • Maximum number of tables referenced per query — 1,000

  • Maximum unresolved legacy SQL query length — 256 KB

  • Maximum unresolved standard SQL query length — 1 MB

  • Maximum resolved legacy and standard SQL query length — 12 MB

    The limit on resolved query length includes the length of all views and wildcard tables referenced by the query.

  • Maximum number of standard SQL query parameters — 10,000

  • Maximum response size — 10 GB compressed

    Sizes vary depending on compression ratios for the data. The actual response size may be significantly larger than 10 GB. The maximum response size is unlimited when writing large query results to a destination table.

  • Maximum row size — 100 MB

    The maximum row size limit is approximate, as the limit is based on the internal representation of row data. The maximum row size limit is enforced during certain stages of query job execution.

  • Maximum columns in a table, query result, or view definition — 10,000

  • Maximum concurrent slots per project for on-demand pricing — 2,000

    BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries.

    To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring.

    For information on limits that apply to user-defined functions in SQL queries, see UDF limits.

  • Maximum CPU usage per scanned data for on-demand pricing — 256 CPU seconds per MiB scanned

    With on-demand pricing, you can use up to approximately 256 CPU seconds per MiB of scanned data. If your query is too CPU-intensive for the amount of data being processed, the query fails with a billingTierLimitExceeded error.

    For more information, see billingTierLimitExceeded.

  • Scheduled queries

    Although scheduled queries use features of the BigQuery Data Transfer Service, scheduled queries are not transfers and are not subject to the load jobs quota. Scheduled queries are subject to the same BigQuery quotas and limits as manual queries.
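
A dry run, as mentioned under the concurrent query limit above, lets you validate a query and estimate the bytes it would scan without the query counting against concurrency limits or a custom daily quota. The following is a minimal sketch, assuming the google-cloud-bigquery Python client library; the project, dataset, and table names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application default credentials

    # Configure a dry run: the query is validated and planned but not executed,
    # so it does not count against the concurrent query limit or scan any data.
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

    query_job = client.query(
        "SELECT name, COUNT(*) AS n FROM `myproject.mydataset.mytable` GROUP BY name",
        job_config=job_config,
    )

    # For a dry run, total_bytes_processed reports the estimated bytes the
    # query would scan if it were actually run.
    print(f"Estimated bytes processed: {query_job.total_bytes_processed}")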

Load jobs

The following limits apply when you load data into BigQuery. They apply to jobs created automatically by loading data using the Cloud Console or the bq command-line tool, and to load jobs submitted programmatically by using the load-type jobs.insert API method.

  • Load jobs per table per day — 1,500 (including failures)
  • Load jobs per project per day — 100,000 (including failures)
  • Row and cell size limits:
    CSV — 100 MB (row and cell size)
    JSON — 100 MB (row size)
  • Maximum columns per table — 10,000
  • Maximum file sizes:
    CSV — 4 GB compressed, 5 TB uncompressed
    JSON — 4 GB compressed, 5 TB uncompressed
  • Maximum size per load job — 15 TB across all input files for CSV, JSON, Avro, Parquet, and ORC
  • Maximum size for Avro file data blocks — 16 MB
  • Maximum number of source URIs in job configuration — 10,000 URIs
  • Maximum number of files per load job — 10 million total files, including all files matching all wildcard URIs
  • Load job execution time limit — 6 hours

With the exception of US-based datasets, you must load data from a Cloud Storage bucket in the same location as the dataset (the bucket can be either a multi-regional bucket or a regional bucket in the same region as the dataset). You can load data into a US-based dataset from a Cloud Storage bucket in any region.

If you regularly go over the load job quota due to frequent updates, consider streaming data into BigQuery instead.

For more information, see Introduction to loading data into BigQuery.
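
As a rough illustration of the limits above, the following sketch loads CSV files from a Cloud Storage bucket into a table, assuming the google-cloud-bigquery Python client library; the bucket, dataset, and table names are placeholders. Each call like this counts as one load job toward the per-table and per-project daily limits, whether it succeeds or fails.

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # skip the CSV header row
        autodetect=True,      # let BigQuery infer the schema
    )

    # A wildcard URI counts as one source URI; every file it matches counts
    # toward the 10 million files-per-load-job limit.
    uri = "gs://my-bucket/exports/part-*.csv"

    load_job = client.load_table_from_uri(
        uri,
        "myproject.mydataset.mytable",
        job_config=job_config,
    )
    load_job.result()  # wait for completion (subject to the 6 hour limit)

    table = client.get_table("myproject.mydataset.mytable")
    print(f"Loaded {table.num_rows} rows.")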

Copy jobs

The following limits apply to copying tables in BigQuery. The limits apply to jobs created automatically by copying data using the bq command-line tool or Cloud Console. The limits also apply to copy jobs submitted programmatically by using the copy-type jobs.insert API method.

  • Copy jobs per destination table per day — 1,000 (including failures)

  • Copy jobs per project per day — 100,000 (including failures)

  • Cross-region copy jobs per destination table per day — 100 (including failures)

  • Cross-region copy jobs per project per day — 2,000 (including failures)

The following limits apply to copying datasets:

  • Maximum number of tables in the source dataset — 20,000 tables

  • Maximum number of tables that can be copied per run to a destination dataset in the same region — 20,000 tables per run

  • Maximum number of tables that can be copied per run to a destination dataset in a different region — 1,000 tables per run

    For example, if you configure a cross-region copy of a dataset that contains 8,000 tables, the BigQuery Data Transfer Service automatically creates eight sequential runs. The first run copies 1,000 tables; 24 hours later, another run copies the next 1,000 tables, and so on, until all tables in the dataset are copied, up to the maximum of 20,000 tables per dataset.
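
For reference, a single table copy such as the following counts as one copy job toward the per-destination-table and per-project daily limits; if the destination dataset is in a different region, it also counts toward the cross-region limits. This is a minimal sketch, assuming the google-cloud-bigquery Python client library, with placeholder table names.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Copy one table to another. The job counts toward the copy job limits
    # whether it succeeds or fails.
    copy_job = client.copy_table(
        "myproject.source_dataset.mytable",
        "myproject.dest_dataset.mytable",
    )
    copy_job.result()  # wait for the copy job to complete

    print("Copy finished:", copy_job.state)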

Export jobs

The following limits apply to jobs that export data from BigQuery. They apply to jobs created automatically by exporting data using the bq command-line tool or the Cloud Console, and to export jobs submitted programmatically by using the export-type jobs.insert API method.

  • Exports per day — 100,000 exports per project and up to 50 TB per day (the 50 TB data limit is cumulative across all exports)

    To export more than 50 TB of data per day, use the Storage Read API or the EXPORT DATA statement.

  • Wildcard URIs — 500 wildcard URIs per export
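
For example, an extract (export) job such as the following counts as one export toward the daily limits, and the single wildcard in its destination URI counts toward the 500 wildcard URIs allowed per export. This is a minimal sketch, assuming the google-cloud-bigquery Python client library, with placeholder bucket and table names.

    from google.cloud import bigquery

    client = bigquery.Client()

    # The wildcard lets BigQuery shard the output across multiple files;
    # this destination counts as one wildcard URI for the export.
    destination_uri = "gs://my-bucket/exports/mytable-*.csv"

    extract_job = client.extract_table(
        "myproject.mydataset.mytable",
        destination_uri,
        location="US",  # the job runs in the dataset's location
    )
    extract_job.result()  # wait for the export job to complete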

Dataset limits

The following limits apply to datasets:

  • Number of datasets per project — unrestricted

  • Number of tables per dataset — unrestricted

    When you use an API call, enumeration performance slows as you approach 50,000 tables in a dataset. The Cloud Console can display up to 50,000 tables for each dataset.

  • Maximum number of authorized views in a dataset's access control list — 2,500

    You can create an authorized view to restrict access to your source data. An authorized view is created using a SQL query that excludes columns you do not want users to see when they query the view. You can add up to 2,500 authorized views to a dataset's access control list.

  • Maximum rate of dataset metadata update operations — 5 operations every 10 seconds per dataset

    The dataset metadata update limit includes all metadata update operations performed by using the Cloud Console, the bq command-line tool, or the client libraries, or by calling the datasets.insert, datasets.patch, or datasets.update API methods, or by executing the CREATE SCHEMA or ALTER SCHEMA DDL statements.

  • Maximum length of a dataset description — 16,384 characters

    When you add a description to a dataset, the text can be at most 16,384 characters.
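
Both of the last two limits apply to dataset metadata updates like the following, sketched here with the google-cloud-bigquery Python client library (the dataset name is a placeholder). Each such call counts as one metadata update operation toward the rate of 5 operations every 10 seconds.

    from google.cloud import bigquery

    client = bigquery.Client()

    dataset = client.get_dataset("myproject.mydataset")

    # The description text counts toward the 16,384-character limit, and this
    # call counts as one dataset metadata update operation.
    dataset.description = "Daily sales data, partitioned by ingestion date."
    client.update_dataset(dataset, ["description"])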

Table limits

The following limits apply to BigQuery tables.

All tables

  • Maximum length of a column description — 1,024 characters

    When you add a description to a column, the text can be at most 1,024 characters.

  • Maximum depth of nested records — 15

    Columns of type RECORD can contain nested RECORD types, also called child records. The maximum nested depth limit is 15 levels. This limit is independent of whether the records are scalar or array-based (repeated).

Standard tables

  • Maximum number of table operations per day — 1,500

    You are limited to 1,500 operations per table per day whether the operation appends data to a table or truncates a table.

    The maximum number of table operations includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table or that use a DML INSERT, UPDATE, DELETE, or MERGE statement to write data to a table. DML statements count towards this quota, but are not limited by it. In other words, the number of daily operations that are counted for the quota includes DML statements, but DML statements will not fail due to this limit.

    For example, if you run 500 copy jobs that append data to mytable and 1,000 query jobs that append data to mytable, you would reach the quota.

  • Maximum rate of table metadata update operations — 5 operations every 10 seconds per table

    The table metadata update limit includes all metadata update operations performed by using the Cloud Console, the bq command-line tool, or the client libraries, or by calling the tables.insert, tables.patch, or tables.update API methods, or by executing ALTER TABLE DDL statements. The limit also includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table. If you exceed this limit, you receive a transient error that you can retry with exponential backoff. This limit doesn't apply to DML operations.

  • Maximum columns in a table, query result, or view definition — 10,000

Partitioned tables

  • Maximum number of partitions per partitioned table — 4,000

    Each partitioned table can have up to 4,000 partitions. If you need more than 4,000 partitions, consider using clustering in addition to, or instead of, partitioning. There is no limit for the number of clusters in a table. For information about using partitioning and clustering, see Partitioning versus clustering.

  • Maximum number of partitions modified by a single job — 4,000

    Each job operation (query or load) can affect a maximum of 4,000 partitions. Any query or load job that affects more than 4,000 partitions is rejected by BigQuery.

  • Maximum number of partition modifications per ingestion time partitioned table — 5,000

  • Maximum number of partition modifications per column partitioned table — 30,000

    You are limited to a total of 5,000 partition modifications per day for an ingestion time partitioned table and 30,000 partition modifications per day for a column partitioned table. For information about strategies to stay within these limits, see Troubleshooting quota errors.

    A partition can be modified by using an operation that appends to or overwrites data in the partition. Operations that modify partitions include: a load job, a query that writes results to a partition, or a DML statement (INSERT, DELETE, UPDATE, or MERGE) that modifies data in a partition.

    More than one partition may be affected by a single job. For example, a DML statement can update data in multiple partitions (for both ingestion-time partitioned and column-partitioned tables). Query jobs and load jobs can also write to multiple partitions, but only for column-partitioned tables. DML statements count towards this quota, but are not limited by it. In other words, the number of daily operations that are counted for the quota includes DML statements, but DML statements will not fail due to this limit.

    BigQuery uses the number of partitions affected by a job when determining how much of the quota the job consumes. Streaming inserts do not affect this quota.

  • Maximum rate of partition operations — 50 partition operations every 10 seconds, per table

  • Maximum number of possible ranges for range partitioning — 10,000

    This limit applies to the partition specification when creating the table. After you create the table, the limit on the number of actual partitions also applies.
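
To show where these limits apply, the following sketch creates a column-partitioned, clustered table with the google-cloud-bigquery Python client library; the table name and schema are placeholders. Each distinct day written to the table later becomes one partition that counts toward the 4,000-partition limit, and each job that writes to the table counts one modification for every partition it touches.

    from google.cloud import bigquery

    client = bigquery.Client()

    schema = [
        bigquery.SchemaField("created_at", "TIMESTAMP"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ]

    table = bigquery.Table("myproject.mydataset.sales", schema=schema)

    # Partition by day on the created_at column; each distinct day becomes one
    # partition and counts toward the 4,000-partition limit.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="created_at",
    )

    # Clustering has no equivalent partition-count limit, so it can complement
    # or replace partitioning when 4,000 partitions is not enough.
    table.clustering_fields = ["customer_id"]

    client.create_table(table)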

Table snapshots

  • Maximum number of concurrent table snapshot jobs — 100

    You can run up to 100 concurrent table snapshot jobs per project per region.

  • Maximum number of table snapshot jobs per day — 50,000

    You can run up to 50,000 table snapshot jobs per day per project per region.

  • Maximum number of jobs per table snapshot per day — 50

    You can run up to 50 jobs per day for a table snapshot. To raise the limit, contact support or contact sales.

  • Maximum number of metadata updates per table snapshot per 10 seconds — 5

    You can update a table snapshot's metadata up to 5 times every 10 seconds.
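
A table snapshot job is typically created with a DDL statement such as the one below, here submitted through the google-cloud-bigquery Python client library; the table and snapshot names are placeholders, and the OPTIONS clause is optional. Each such statement counts as one table snapshot job against the limits above.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Create a snapshot of a table. The expiration option cleans the snapshot
    # up automatically after seven days.
    snapshot_sql = """
    CREATE SNAPSHOT TABLE `myproject.mydataset.mytable_snapshot`
    CLONE `myproject.mydataset.mytable`
    OPTIONS (
      expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    );
    """

    client.query(snapshot_sql).result()  # runs as a table snapshot job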

External tables

The following limits apply to tables with data stored on Cloud Storage in Parquet, ORC, Avro, CSV, or JSON format.

  • Maximum number of source URIs per external table — 10,000 URIs

  • Maximum number of files per external table — 10 million total files including all files matching all wildcard URIs

  • Maximum size of stored data on Cloud Storage per external table across all input files — 600 TB

    This limit applies to the file sizes as stored on Cloud Storage; this is not the same size as used in the query pricing formula. For externally partitioned tables, the limit is applied after partition pruning.

Table function limits

The following limits apply to table functions.

  • Maximum length of a table function name — 256 characters
  • Maximum number of arguments — 256
  • Maximum length of an argument name — 128 characters
  • Maximum depth of a table function reference chain — 16
  • Maximum depth of argument or output of type STRUCT — 15
  • Maximum number of fields in argument or return table of type STRUCT per table function — 1024
  • Maximum number of columns in return table — 1024
  • Maximum length of return table column names — 128 characters
  • Maximum update rate per table function — 5 per 10 seconds. After you create a table function, you can update each function up to 5 times per 10 seconds.

Data manipulation language (DML) statements

BigQuery DML statements have no quota limits.

However, DML statements are counted toward the maximum number of table operations per day and partition modifications per day. DML statements will not fail due to these limits.

Row-level security

The following quotas and limits apply to row-level security:

  • Maximum number of row access policies per table — 100
  • Total number of row access policies that a query can reference on tables with row access policies — 100
  • CREATE / DROP DDL statements — 5 per 10 seconds, per row access policy resource on a table
  • DROP ALL ROW ACCESS POLICIES statements — 5 per 10 seconds, per table resource
  • Listing row access policies (using the rowAccessPolicies.list REST API) — default API quota
  • Getting row access policy IAM policies (using the rowAccessPolicies.getIamPolicy API) — default IAM API quota
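
Each row access policy is created and dropped with DDL statements like the following, which is what the per-policy and per-table DDL rates above apply to. This is a minimal sketch submitted through the google-cloud-bigquery Python client library; the table, policy, and grantee names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Each CREATE ROW ACCESS POLICY statement counts toward the DDL rate limit
    # for that policy resource, and the policy itself counts toward the
    # 100-policies-per-table limit.
    create_policy_sql = """
    CREATE ROW ACCESS POLICY us_sales_only
    ON `myproject.mydataset.sales`
    GRANT TO ("group:us-sales@example.com")
    FILTER USING (region = "US");
    """
    client.query(create_policy_sql).result()

    # Dropping all policies on a table is rate limited per table resource.
    client.query(
        "DROP ALL ROW ACCESS POLICIES ON `myproject.mydataset.sales`"
    ).result()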

Streaming inserts

The following limits apply for streaming data into BigQuery.

If you do not populate the insertId field when you insert rows, the following quotas apply. For more information, see Disabling best effort de-duplication. Leaving insertId unpopulated is the recommended way to use BigQuery because it gives you higher streaming ingest quota limits. For information about strategies to stay within these limits, see Troubleshooting quota errors.

  • Maximum bytes per second — 1 GB

    If you don’t populate the insertId field for each row inserted, you are limited to 1 GB per second, per project. This limit applies at the project level. It does not apply to individual tables.

    Exceeding this amount will cause quotaExceeded errors.

If you populate the insertId field when you insert rows, the following quotas apply.

  • Maximum rows per second per project in the us and eu multi-regions — 500,000

    If you populate the insertId field for each row inserted, you are limited to 500,000 rows per second in the us and eu multi-regions, per project. This quota is cumulative within a given multi-region. In other words, the sum of rows per second streamed to all tables for a given project within a multi-region is limited to 500,000. Each table is additionally limited to 100,000 rows per second.

    Exceeding either the per-project limit or the per-table limit will cause quotaExceeded errors.

  • Maximum rows per second per project in all other locations — 100,000

    If you populate the insertId field for each row inserted, you are limited to 100,000 rows per second in all locations except the us and eu multi-regions, per project or table. This quota is cumulative within a given region. In other words, the sum of rows per second streamed to all tables for a given project within a region is limited to 100,000.

    Exceeding this amount will cause quotaExceeded errors.

  • Maximum rows per second per table — 100,000

    If you populate the insertId field for each row inserted, you are limited to 100,000 rows per second per table.

    Exceeding this amount will cause quotaExceeded errors.

  • Maximum bytes per second — 100 MB

    If you populate the insertId field for each row inserted, you are limited to 100 MB per second, per table.

    Exceeding this amount will cause quotaExceeded errors.

The following additional streaming quotas apply whether or not you populate the insertId field:

  • Maximum row size — 10 MB

    Exceeding this value will cause invalid errors.

  • HTTP request size limit — 10 MB (see Note)

    Exceeding this value will cause invalid errors.

  • Maximum rows per request — 50,000 rows per request

    A maximum of 500 rows per request is recommended. Batching can increase performance and throughput to a point, but at the cost of per-request latency. Too few rows per request, and the overhead of each request can make ingestion inefficient; too many rows per request, and throughput can drop. Experiment with representative data (schema and data sizes) to determine the ideal batch size.

  • insertId field length — 128

    Exceeding this value will cause invalid errors.

If you need more streaming quota for your project, you can disable best effort de-duplication. For quota increases beyond those limits, you can submit a request from the Google Cloud Console. Expect a response to your request within 2 to 3 business days.
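
The insertId behavior described above maps onto the streaming API roughly as sketched below, assuming the google-cloud-bigquery Python client library; the table name and rows are placeholders, and the exact way to omit insertId can vary by client library version.

    from google.cloud import bigquery

    client = bigquery.Client()
    table_id = "myproject.mydataset.mytable"

    rows = [
        {"name": "alice", "score": 10},
        {"name": "bob", "score": 7},
    ]

    # Default behavior: the client generates an insertId per row, so the lower
    # de-duplicated streaming quotas apply.
    errors = client.insert_rows_json(table_id, rows)

    # Passing None row IDs omits insertId (best effort de-duplication is
    # disabled), which qualifies for the higher 1 GB per second project quota.
    errors = client.insert_rows_json(table_id, rows, row_ids=[None] * len(rows))

    if errors:
        print("Rows with errors:", errors)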

UDF limits

The following limits apply to temporary and persistent user-defined functions in SQL queries.

  • The amount of data that your JavaScript UDF outputs when processing a single row — approximately 5 MB or less
  • Concurrent rate limit for legacy SQL queries that contain user-defined functions (UDFs) — 6 concurrent queries

    The concurrent rate limit for legacy SQL queries that contain UDFs includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent rate limit for interactive queries. This limit does not apply to standard SQL queries.

  • Maximum number of JavaScript UDF resources, such as inline code blobs or external files, in a query job — 50
  • Maximum size of each inline code blob — 32 KB
  • Maximum size of each external code resource — 1 MB

The following limits apply to persistent user-defined functions.

  • Maximum length of a function name — 256 characters
  • Maximum number of arguments — 256
  • Maximum length of an argument name — 128 characters
  • Maximum depth of a user-defined function reference chain — 16
  • Maximum depth of argument or output of type STRUCT — 15
  • Maximum number of fields in argument or output of type STRUCT per UDF — 1024
  • Maximum number of unique UDF plus table references per query — 1000

    After full expansion, each UDF can reference up to 1000 combined unique tables and UDFs.

  • Maximum number of JavaScript libraries in CREATE FUNCTION statement — 50
  • Maximum length of included JavaScript library paths — 5000 characters
  • Maximum update rate per UDF — 5 per 10 seconds

    After function creation, you can update each function up to 5 times per 10 seconds.

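
For reference, the inline code blob and external code resource limits above apply to JavaScript UDFs such as the following. This is a minimal sketch run through the google-cloud-bigquery Python client library; the function name, library path, and query are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # The JavaScript body is an inline code blob (32 KB limit); each file in
    # the library option is an external code resource (1 MB limit, up to 50).
    udf_query = r"""
    CREATE TEMP FUNCTION multiply(x FLOAT64, y FLOAT64)
    RETURNS FLOAT64
    LANGUAGE js
    OPTIONS (library = ["gs://my-bucket/udfs/helpers.js"])
    AS r'''
      return x * y;
    ''';

    SELECT multiply(3, 4) AS product;
    """

    for row in client.query(udf_query).result():
        print(row.product)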

View limits

  • Maximum number of nested view levels — 16

    BigQuery supports up to 16 levels of nested views. If there are more than 16 levels, an INVALID_INPUT error is returned.

  • Maximum length of a standard SQL query used to define a view — 256 K characters

    When you create a view, the text of the standard SQL query can be at most 256 K characters.

  • Maximum number of authorized views in a dataset's access control list — 2,500

    You can create an authorized view to restrict access to your source data. An authorized view is created using a SQL query that excludes columns you do not want users to see when they query the view. You can add up to 2,500 authorized views to a dataset's access control list.
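
The authorized views counted by this limit are entries in the source dataset's access control list, added roughly as sketched below with the google-cloud-bigquery Python client library; the project, dataset, and view names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    source_dataset = client.get_dataset("myproject.source_dataset")
    view = client.get_table("myproject.shared_dataset.filtered_view")

    # Each authorized view added this way counts toward the 2,500-entry limit
    # on the source dataset's access control list.
    entries = list(source_dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role=None,
            entity_type="view",
            entity_id=view.reference.to_api_repr(),
        )
    )
    source_dataset.access_entries = entries

    client.update_dataset(source_dataset, ["access_entries"])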

API requests

All API requests

The following limits apply to all BigQuery API requests:

  • API requests per second, per user — 100

    If you make more than 100 requests per second, throttling might occur. This limit does not apply to streaming inserts.

  • Concurrent API requests, per user — 300

    If you make more than 300 concurrent requests per user, throttling might occur. This limit does not apply to streaming inserts.

tabledata.list requests

The tabledata.list method retrieves table data from a specified set of rows. Other APIs, including jobs.getQueryResults and fetching results from jobs.query and jobs.insert, can also consume this API's quota. The following limits apply to tabledata.list requests:

  • Maximum number of tabledata.list requests per project — 1,000 per second

    When you call tabledata.list, you can submit up to 1,000 requests per second, per project.

  • Maximum bytes per second per project returned by calls to tabledata.list — 60 MB/second

    When you call tabledata.list, you can return a maximum of 60 MB per second of table row data per project. The limit applies to the project that contains the table being read.

  • Maximum rows per second per project returned by calls to tabledata.list — 150,000/second

    When you call tabledata.list, you can return a maximum of 150,000 table rows per second per project. The limit applies to the project that contains the table being read.

  • Maximum number of rows returned by a call to tabledata.list — 100,000 rows

    When you call tabledata.list, the response can contain a maximum of 100,000 table rows. For more information, see Paging through results using the API.
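
Client libraries page through large result sets for you, which keeps each underlying tabledata.list call within the 100,000-row response limit. The following is a minimal sketch, assuming the google-cloud-bigquery Python client library; the table name and page size are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # list_rows wraps tabledata.list; page_size bounds the rows requested per
    # underlying call, and the iterator issues additional calls as needed.
    rows = client.list_rows("myproject.mydataset.mytable", page_size=10_000)

    total = 0
    for page in rows.pages:  # one tabledata.list request per page
        total += page.num_items
    print(f"Read {total} rows.")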

tables.insert requests

The tables.insert method creates a new, empty table in a dataset. The following limits apply to tables.insert requests:

  • Maximum requests per second per project — 10 requests

    When you call tables.insert, you can make a maximum of 10 requests per second per project. This limit includes statements that create tables, such as the CREATE TABLE DDL statement, and queries that write results to destination tables.

projects.list requests

The projects.list method lists all projects to which you have been granted access. The following limits apply to projects.list requests:

  • Maximum requests per second per project — 2

    When you call projects.list, you can make a maximum of 2 requests per second per project.

jobs.get requests

The jobs.get method returns information about a specific job. The following limits apply to jobs.get requests:

  • Maximum requests per second per project — 1,000

    When you call jobs.get, you can make a maximum of 1,000 requests per second per project.

jobs.query requests

The jobs.query method runs a SQL query synchronously and returns query results if the query completes within a specified timeout.

  • Maximum response size — 10 MB

    By default, there is no maximum number of rows of data to return per page of results. However, you are limited to the 10 MB maximum response size. You can alter the number of rows to return by using the maxResults parameter.

IAM API requests

The following limits apply when using Identity and Access Management functionality in BigQuery to retrieve and set IAM policies, and to test IAM permissions.

  • Maximum requests per second per user — 25

    You are limited to 25 IAM requests per second, per user, per project.

  • Maximum requests per second per project — 50

    You are limited to 50 IAM requests per second, per project.

If you need more IAM quota for your project, you can submit a request from the Google Cloud Console. Expect a response to your request within 2 to 3 business days.

BigQuery Storage Read API requests

The Storage Read API is subject to the following quotas and limits.

  • Maximum row restriction/filter length — 1 MB

    When you use the Storage Read API CreateReadSession call, you are limited to a maximum row restriction/filter length of 1 MB.

  • Read data plane requests — 5,000 calls per minute per user per project

    You are limited to 5,000 ReadRows calls per minute, per user, per project.

  • Read control plane requests — 5,000 calls per minute per user per project

    You are limited to a total of 5,000 Storage Read API metadata operation calls (CreateReadSession and SplitReadStream) per minute, per user, per project.

BigQuery Storage Write API requests

The Storage Write API is subject to the following quotas and limits.

  • Rate limit for creating write streams — 100 per minute per project

    Calls to CreateWriteStream are rate limited. If you reach this limit, retry the operation with exponential backoff. Also, try to space out calls to CreateWriteStream. The default stream is not subject to this quota. If you don't need data de-duplication with committed mode, then consider using the default stream.

  • Pending bytes — 100 GB per project

    This limit is the maximum number of bytes that you can write in pending mode before committing the streams.

  • Concurrent connections — 1,000 per project

  • Maximum number of streams passed to BatchCommitWriteStreams — 100

    In pending mode, the BatchCommitWriteStreams method takes a list of streams to commit. If you exceed this limit, then the method returns an error. As a workaround, call BatchCommitWriteStreams multiple times.

When are quotas refilled?

Daily quotas are replenished at regular intervals throughout the day, reflecting their intent to guide rate-limiting behaviors. Intermittent refreshes are also performed to avoid long disruptions when quota is exhausted. More quota is typically made available within minutes rather than being globally replenished once per day.

Troubleshooting

For information about troubleshooting errors related to quota limits, see Troubleshooting BigQuery quota errors.

Capping quota usage

To learn how you can limit usage of a particular resource, up to the limit specified by Google, see Capping usage.