The following lists outline the rate limits and quota limits of BigQuery.
BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis. Specific policies vary depending on resource availability, user profile, service usage history, and other factors, and are subject to change without notice.
To request a quota increase, see Requesting a higher quota limit.
Query jobs
The following limits apply to query jobs created automatically by running interactive queries and to jobs submitted programmatically using jobs.query and query-type jobs.insert method calls.
- Concurrent rate limit for interactive queries — 100 concurrent queries
Queries with results that are returned from the query cache count against this limit for the duration it takes for BigQuery to determine that it is a cache hit. Dry run queries do not count against this limit. You can specify a dry run query by using the --dry_run flag or by setting the dryRun property in a query job (see the dry run sketch at the end of this section). This limit is applied at the project level. To raise the limit, contact support or contact sales.
- Concurrent rate limit for interactive queries against Bigtable external data sources — 4 concurrent queries
You are limited to 4 concurrent queries against a Bigtable external data source.
- Concurrent rate limit for legacy SQL queries that contain user-defined functions (UDFs) — 6 concurrent queries
The concurrent rate limit for legacy SQL queries that contain UDFs includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent rate limit for interactive queries. This limit does not apply to standard SQL queries.
- Cross-region federated querying — 1 TB per project per day
If the BigQuery query processing location and the Cloud SQL instance location are different, this is a cross-region query. You can run up to 1 TB in cross-region queries per project per day. See Cloud SQL federated queries.
- Daily query size limit — Unlimited by default
You may specify limits on the amount of data users can query by setting custom quotas.
- Daily destination table update limit — 1,500 updates per table per day
Destination tables in a query job are subject to the limit of 1,500 updates per table per day. Destination table updates include append and overwrite operations that are performed by queries that you run by using the Cloud Console, by using the bq command-line tool, or by calling the jobs.query and query-type jobs.insert API methods.
- Query/script execution time limit — 6 hours
This limit cannot be changed. In some cases, queries can be retried. When this happens, the retried query can run for an additional six hours and can be retried up to three times. This might result in a total run time above six hours.
- Maximum number of tables referenced per query — 1,000
- Maximum unresolved legacy SQL query length — 256 KB
- Maximum unresolved standard SQL query length — 1 MB
- Maximum resolved legacy and standard SQL query length — 12 MB
The limit on resolved query length includes the length of all views and wildcard tables referenced by the query.
- Maximum number of standard SQL query parameters — 10,000
- Maximum response size — 10 GB compressed¹
¹Sizes vary depending on compression ratios for the data. The actual response size may be significantly larger than 10 GB.
The maximum response size is unlimited when writing large query results to a destination table.
- Maximum row size — 100 MB²
²The maximum row size limit is approximate, as the limit is based on the internal representation of row data. The maximum row size limit is enforced during certain stages of query job execution.
- Maximum columns in a table, query result, or view definition — 10,000
- Maximum concurrent slots per project for on-demand pricing — 2,000
BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries.
To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring.
- Maximum concurrent queries against a Bigtable external data source — 4
For information on limits that apply to user-defined functions in SQL queries, see UDF limits.
- Scheduled queries
Although scheduled queries use features of the BigQuery Data Transfer Service, scheduled queries are not transfers and are not subject to the load jobs quota. Scheduled queries are subject to the same BigQuery quotas and limits as manual queries.
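To illustrate the dry run behavior mentioned above, the following sketch uses the google-cloud-bigquery Python client to validate a query and estimate the bytes it would process without executing it, so the request does not count against the concurrent query limit. The project and query are illustrative; the referenced table is a BigQuery public dataset.

```python
from google.cloud import bigquery

client = bigquery.Client()

# dry_run=True validates the query and returns an estimate without running it.
# use_query_cache=False makes the estimate reflect a full scan.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

query_job = client.query(
    "SELECT name, SUM(number) AS total "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name",
    job_config=job_config,
)

# A dry run returns immediately; no slots are consumed and the request does
# not count against the concurrent interactive query limit.
print(f"This query would process {query_job.total_bytes_processed} bytes.")
```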
Load jobs
The following limits apply to jobs created automatically by loading data using the Cloud Console or the bq command-line tool. The limits also apply to load jobs submitted programmatically by using the load-type jobs.insert API method.
The following limits apply when you load data into BigQuery.
- Load jobs per table per day — 1,500 (including failures)
- Load jobs per project per day — 100,000 (including failures)
- Row and cell size limits:
  - CSV — 100 MB (row and cell size)
  - JSON — 100 MB (row size)
- Maximum columns per table — 10,000
- Maximum file sizes:
  - CSV — 4 GB compressed, 5 TB uncompressed
  - JSON — 4 GB compressed, 5 TB uncompressed
- Maximum size per load job — 15 TB across all input files for CSV, JSON, Avro, Parquet, and ORC
- Maximum number of source URIs in job configuration — 10,000 URIs
- Maximum number of files per load job — 10 million total files, including all files matching all wildcard URIs
- Load job execution time limit — 6 hours
- With the exception of US-based datasets, you must load data from a Cloud Storage bucket in the same region as the dataset's location (the bucket can be either a multi-regional bucket or a regional bucket in the same region as the dataset). You can load data into a US-based dataset from any region.
For more information, see Introduction to loading data into BigQuery.
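As a sketch of how these limits come into play, here is one way to start a CSV load job from Cloud Storage with the google-cloud-bigquery Python client. The bucket, file pattern, and table ID are hypothetical, and the bucket must be colocated with the dataset as described above.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my_project.my_dataset.my_table"  # hypothetical destination table

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)

# Every matched file counts toward the 10,000 source URI and 10 million file
# limits, and the job counts toward the daily load job quotas even if it fails.
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/*.csv",  # hypothetical bucket and wildcard
    table_id,
    job_config=job_config,
)
load_job.result()  # waits for completion; subject to the 6 hour execution limit
```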
Copy jobs
The following limits apply to copying tables in BigQuery. The limits apply to jobs created automatically by copying data using the bq command-line tool or the Cloud Console. The limits also apply to copy jobs submitted programmatically by using the copy-type jobs.insert API method.
- Copy jobs per destination table per day — 1,000 (including failures)
- Copy jobs per project per day — 100,000 (including failures)
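For reference, a copy job submitted through the Python client looks like the following sketch (both table IDs are hypothetical); each such job, successful or failed, counts toward the per-destination-table and per-project daily quotas.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Failed and successful copy jobs both count toward the daily copy job quotas.
copy_job = client.copy_table(
    "my_project.my_dataset.source_table",       # hypothetical source table
    "my_project.my_dataset.destination_table",  # hypothetical destination table
)
copy_job.result()  # waits for the copy to complete
```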
Export jobs
The following limits apply to jobs that export data from BigQuery. The limits apply to jobs created automatically by exporting data using the bq command-line tool or the Cloud Console, and to export jobs submitted programmatically by using the export-type jobs.insert API method.
- Exports per day — 100,000 exports per project and up to 50 TB per day (the 50 TB data limit is cumulative across all exports)
- Wildcard URIs — 500 wildcard URIs per export
To export more than 50 TB of data per day, use the Storage Read API or the EXPORT DATA statement.
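As an illustration, the following sketch exports a table to Cloud Storage with the Python client. Using a wildcard URI lets BigQuery shard the output into multiple files, and each export counts toward the daily export quotas above; the bucket and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON,
)

# The wildcard URI below counts as one of the 500 wildcard URIs allowed per export.
extract_job = client.extract_table(
    "my_project.my_dataset.my_table",          # hypothetical source table
    "gs://my-bucket/exports/my_table-*.json",  # hypothetical destination bucket
    job_config=job_config,
)
extract_job.result()
```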
Dataset limits
The following limits apply to datasets:
- Number of datasets per project — unrestricted
- Number of tables per dataset — unrestricted
- When you use an API call, enumeration performance slows as you approach 50,000 tables in a dataset. The Cloud Console can display up to 50,000 tables for each dataset.
- Maximum number of authorized views in a dataset's access control list — 2,500
- You can create an authorized view to restrict access to your source data. An authorized view is created using a SQL query that excludes columns you do not want users to see when they query the view. You can add up to 2,500 authorized views to a dataset's access control list.
- Maximum rate of dataset metadata update operations — 5 operations every 10 seconds per dataset
- The dataset metadata update limit includes all metadata update operations performed by using the Cloud Console, the bq command-line tool, or the client libraries, or by calling the datasets.insert, datasets.patch, or datasets.update API methods, or by executing the CREATE SCHEMA or ALTER SCHEMA DDL statements.
- Maximum length of a dataset description — 16,384 characters
- When you add a description to a dataset, the text can be at most 16,384 characters.
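The metadata update rate and description length limits above apply to operations like the following sketch, which updates a dataset description through the Python client; the dataset ID is hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

dataset = client.get_dataset("my_project.my_dataset")  # hypothetical dataset ID
dataset.description = "Daily sales data"  # at most 16,384 characters

# This call counts as one of the 5 allowed metadata updates per 10 seconds.
client.update_dataset(dataset, ["description"])
```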
Table limits
The following limits apply to BigQuery tables.
All tables
- Maximum length of a column description — 1,024 characters
When you add a description to a column, the text can be at most 1,024 characters.
Standard tables
- Maximum number of table operations per day — 1,500
You are limited to 1,500 operations per table per day whether the operation appends data to a table or truncates a table.
The maximum number of table operations includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table, or that use a DML INSERT, UPDATE, DELETE, or MERGE statement to write data to a table. DML statements count toward this quota but are not limited by it. In other words, the number of daily operations that are counted for the quota includes DML statements, but DML statements will not fail due to this limit.
For example, if you run 500 copy jobs that append data to mytable and 1,000 query jobs that append data to mytable, you would reach the quota.
- Maximum rate of table metadata update operations — 5 operations every 10 seconds per table
The table metadata update limit includes all metadata update operations performed by using the Cloud Console, the bq command-line tool, or the client libraries, or by calling the tables.insert, tables.patch, or tables.update API methods, or by executing ALTER TABLE DDL statements. This limit also applies to job output. Exceeding it produces a transient error that you can retry with exponential backoff (a retry sketch follows this list). This limit doesn't apply to DML operations.
- Maximum columns in a table, query result, or view definition — 10,000
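Because exceeding the table metadata update rate produces a transient error, retrying with exponential backoff (as noted above) is the usual remedy. Here is a minimal sketch using the Python client; the table ID is hypothetical, and the exact exception types that surface the rate-limit error can vary, so the caught exceptions are an assumption.

```python
import random
import time

from google.api_core.exceptions import Forbidden, TooManyRequests
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my_project.my_dataset.my_table")  # hypothetical table ID
table.description = "Refreshed nightly"

# Retry the metadata update with exponential backoff if the
# 5-updates-per-10-seconds limit is hit (a transient error).
for attempt in range(5):
    try:
        client.update_table(table, ["description"])
        break
    except (Forbidden, TooManyRequests):  # assumed surface of the rate-limit error
        time.sleep(2 ** attempt + random.random())
```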
Partitioned tables
- Maximum number of partitions per partitioned table — 4,000
- Maximum number of partitions modified by a single job — 4,000
Each job operation (query or load) can affect a maximum of 4,000 partitions. Any query or load job that affects more than 4,000 partitions is rejected by BigQuery.
- Maximum number of partition modifications per ingestion time partitioned table — 5,000
- Maximum number of partition modifications per column partitioned table — 30,000
You are limited to a total of 5,000 partition modifications per day for an ingestion time partitioned table and 30,000 partition modifications per day for a column partitioned table. A partition can be modified by using an operation that appends to or overwrites data in the partition. Operations that modify partitions include: a load job, a query that writes results to a partition, or a DML statement (INSERT, DELETE, UPDATE, or MERGE) that modifies data in a partition.
More than one partition may be affected by a single job. For example, a DML statement can update data in multiple partitions (for both ingestion time and column partitioned tables). Query jobs and load jobs can also write to multiple partitions, but only for column partitioned tables. DML statements count toward this quota but are not limited by it. In other words, the number of daily operations that are counted for the quota includes DML statements, but DML statements will not fail due to this limit. BigQuery uses the number of partitions affected by a job when determining how much of the quota the job consumes. Streaming inserts do not affect this quota.
- Maximum rate of partition operations — 50 partition operations every 10 seconds
External tables
The following limits apply to tables with data stored on Cloud Storage in Parquet, ORC, Avro, CSV, or JSON format.
- Maximum number of source URIs per external table — 10,000 URIs
- Maximum number of files per external table — 10 million total files including all files matching all wildcard URIs
- Maximum size of stored data on Cloud Storage per external table across all input files — 600 TB
This limit applies to the file sizes as stored on Cloud Storage; this is not the same size as used in the query pricing formula. For externally partitioned tables, the limit is applied after partition pruning.
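The external table limits above apply to table definitions like the following sketch, which points a BigQuery table at CSV files in Cloud Storage using the Python client; the bucket and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my_project.my_dataset.my_external_table"  # hypothetical table ID

external_config = bigquery.ExternalConfig("CSV")
# Every file matched by the wildcard counts toward the 10 million file
# and 600 TB stored-data limits for this external table.
external_config.source_uris = ["gs://my-bucket/raw/*.csv"]  # hypothetical bucket
external_config.autodetect = True

table = bigquery.Table(table_id)
table.external_data_configuration = external_config
client.create_table(table)
```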
View limits
- Maximum number of nested view levels — 16
BigQuery supports up to 16 levels of nested views. If there are more than 16 levels, an INVALID_INPUT error is returned.
- Maximum length of a standard SQL query used to define a view — 256 K characters
When you create a view, the text of the standard SQL query can be at most 256 K characters.
- Maximum number of authorized views in a dataset's access control list — 2,500
You can create an authorized view to restrict access to your source data. An authorized view is created using a SQL query that excludes columns you do not want users to see when they query the view. You can add up to 2,500 authorized views to a dataset's access control list.
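Authorizing a view is what consumes an entry in the 2,500-entry access control list. The following minimal sketch uses the Python client to add a view to its source dataset's access entries; the project, dataset, and view names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

source_dataset = client.get_dataset("my_project.source_dataset")   # hypothetical
view = client.get_table("my_project.shared_dataset.my_view")       # hypothetical

# Each authorized view occupies one entry in the dataset's access control
# list, which can hold at most 2,500 entries.
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])
```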
UDF limits
The following limits apply to temporary and persistent user-defined functions in SQL queries.
- The amount of data that your JavaScript UDF outputs when processing a single row — approximately 5 MB or less.
- Concurrent rate limit for legacy SQL queries that contain user-defined functions (UDFs) — 6 concurrent queries.
- Maximum number of JavaScript UDF resources, such as inline code blobs or external files, in a query job — 50
- Maximum size of each inline code blob — 32 KB.
- Maximum size of each external code resource — 1 MB.
The concurrent rate limit for legacy SQL queries that contain UDFs includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent rate limit for interactive queries. This limit does not apply to standard SQL queries.
The following limits apply to persistent user-defined functions.
- Maximum length of a function name — 256 characters
- Maximum number of arguments — 256
- Maximum length of an argument name — 128 characters
- Maximum depth of a user-defined function reference chain — 16
- Maximum depth of argument or output of type STRUCT — 15
- Maximum number of fields in an argument or output of type STRUCT per UDF — 1,024
- Maximum number of unique UDF plus table references per query — 1,000
After full expansion, each UDF can reference up to 1,000 combined unique tables and UDFs.
- Maximum number of JavaScript libraries in CREATE FUNCTION statement — 50
- Maximum length of included JavaScript library paths — 5,000 characters
- Maximum update rate per UDF — 5 per 10 seconds
After function creation, you can update each function up to 5 times per 10 seconds.
- Each inline code blob is limited to a maximum size of 32 KB.
- Each JavaScript code resource is limited to a maximum size of 1 MB.
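For orientation, this sketch runs a standard SQL query with a temporary JavaScript UDF through the Python client. The inline JavaScript body counts toward the 32 KB inline code blob limit; the legacy SQL concurrency limit above does not apply here because the query uses standard SQL.

```python
from google.cloud import bigquery

client = bigquery.Client()

# The JavaScript body below is an inline code blob (32 KB maximum).
sql = """
CREATE TEMP FUNCTION multiply(x FLOAT64, y FLOAT64)
RETURNS FLOAT64
LANGUAGE js AS "return x * y;";
SELECT multiply(3, 4) AS product;
"""

for row in client.query(sql).result():
    print(row.product)
```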
Data manipulation language (DML) statements
BigQuery DML statements have no quota limits.
However, DML statements are counted toward the maximum number of table operations per day and partition modifications per day. DML statements will not fail due to these limits.
Streaming inserts
The following limits apply for streaming data into BigQuery.
If you do not populate the insertId field when you insert rows, the following quotas apply. For more information, see Disabling best effort de-duplication. This is the recommended way to use BigQuery in order to get higher streaming ingest quota limits.
- Maximum bytes per second: 1 GB
If you don't populate the insertId field for each row inserted, you are limited to 1 GB per second, per project. This limit applies at the project level; it does not apply to individual tables. Exceeding this amount will cause quotaExceeded errors.
If you populate the insertId field when you insert rows, the following quotas apply.
- Maximum rows per second per project in the us and eu multi-regions: 500,000
If you populate the insertId field for each row inserted, you are limited to 500,000 rows per second in the us and eu multi-regions, per project. This quota is cumulative within a given multi-region. In other words, the sum of rows per second streamed to all tables for a given project within a multi-region is limited to 500,000. Each table is additionally limited to 100,000 rows per second.
Exceeding either the per-project limit or the per-table limit will cause quotaExceeded errors.
- Maximum rows per second per project in all other locations: 100,000
If you populate the insertId field for each row inserted, you are limited to 100,000 rows per second per project or table in all locations except the us and eu multi-regions. This quota is cumulative within a given region. In other words, the sum of rows per second streamed to all tables for a given project within a region is limited to 100,000.
Exceeding this amount will cause quotaExceeded errors.
- Maximum rows per second per table: 100,000
If you populate the insertId field for each row inserted, you are limited to 100,000 rows per second per table.
Exceeding this amount will cause quotaExceeded errors.
- Maximum bytes per second: 100 MB
If you populate the insertId field for each row inserted, you are limited to 100 MB per second, per table.
Exceeding this amount will cause quotaExceeded errors.
The following additional streaming quotas apply whether or not you populate the insertId field:
- Maximum row size: 10 MB
Exceeding this value will cause invalid errors.
- HTTP request size limit: 10 MB (see Note)
Exceeding this value will cause invalid errors.
- Maximum rows per request: 10,000 rows per request
A maximum of 500 rows per request is recommended. Batching can increase performance and throughput to a point, but at the cost of per-request latency. With too few rows per request, the overhead of each request can make ingestion inefficient; with too many rows per request, throughput may drop. Experiment with representative data (schema and data sizes) to determine the ideal batch size.
- insertId field length: 128
Exceeding this value will cause invalid errors.
If you need more streaming quota for your project, you can disable best effort de-duplication. For quota increases beyond those limits, you can submit a request from the Google Cloud Console. Expect a response to your request within 2 to 3 business days.
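The difference between the two quota tiers comes down to whether rows carry an insertId. The following sketch with the Python client shows both cases; the table ID and rows are hypothetical, and suppressing the insert IDs follows the approach described in Disabling best effort de-duplication.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my_project.my_dataset.my_table"  # hypothetical table ID

rows = [
    {"name": "alpha", "value": 1},
    {"name": "beta", "value": 2},
]

# Default behavior: the client generates an insertId per row, so the lower
# per-table and per-project row/byte quotas above apply.
errors = client.insert_rows_json(table_id, rows)

# Best effort de-duplication disabled: passing None insert IDs makes the
# stream eligible for the higher 1 GB-per-second project-level quota.
errors = client.insert_rows_json(table_id, rows, row_ids=[None] * len(rows))

if errors:
    print(f"Encountered errors: {errors}")
```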
API requests
All API requests
The following limits apply to all BigQuery API requests:
- API requests per second, per user: 100
If you make more than 100 requests per second, throttling might occur. This limit does not apply to streaming inserts.
- Concurrent API requests, per user: 300
If you make more than 300 concurrent requests per user, throttling might occur. This limit does not apply to streaming inserts.
tabledata.list requests
The tabledata.list method retrieves table data from a specified set of rows. Other APIs, including jobs.getQueryResults and fetching results from jobs.query and jobs.insert, may also consume this API's quota. The following limits apply to tabledata.list requests:
- Maximum number of tabledata.list queries per project: 1,000 per second
When you call tabledata.list, you can submit up to 1,000 requests per second, per project.
- Maximum bytes per second per project returned by calls to tabledata.list: 60 MB per second
When you call tabledata.list, you can return a maximum of 60 MB of table row data per second, per project. The limit applies to the project that contains the table being read.
- Maximum rows per second per project returned by calls to tabledata.list: 150,000 per second
When you call tabledata.list, you can return a maximum of 150,000 table rows per second, per project. The limit applies to the project that contains the table being read.
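In the Python client, reading table data through tabledata.list is exposed as list_rows; reads like the following sketch count toward the per-project request, byte, and row rates above (the table ID is hypothetical).

```python
from google.cloud import bigquery

client = bigquery.Client()

# list_rows pages through tabledata.list results; the bytes and rows returned
# count toward the limits of the project that contains the table.
rows = client.list_rows("my_project.my_dataset.my_table", max_results=100)
for row in rows:
    print(row)
```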
tables.insert requests
The tables.insert method creates a new, empty table in a dataset. The following limits apply to tables.insert requests:
- Maximum requests per second per project: 10
When you call tables.insert, you can create a maximum of 10 requests per second per project. This limit includes statements that create tables, such as the CREATE TABLE DDL statement, and queries that write results to destination tables.
projects.list requests
The projects.list method lists all projects to which you have been granted access. The following limits apply to projects.list requests:
- Maximum requests per second per project: 2
When you call projects.list, you can create a maximum of 2 requests per second per project.
jobs.get requests
The jobs.get method returns information about a specific job. The following limits apply to jobs.get requests:
- Maximum requests per second per project: 1,000
When you call jobs.get, you can create a maximum of 1,000 requests per second per project.
jobs.query requests
The jobs.query method runs a SQL query synchronously and returns query results if the query completes within a specified timeout.
- Maximum response size: 10 MB
By default, there is no maximum row count for the number of rows of data to return per page of results. However, you are limited to the 10 MB maximum response size. You can alter the number of rows to return by using the maxResults parameter.
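In the REST API the page size is controlled with maxResults; with the Python client, a rough analogue is limiting rows when fetching results, as in this sketch (the query and row counts are illustrative, and the referenced table is a BigQuery public sample).

```python
from google.cloud import bigquery

client = bigquery.Client()

query_job = client.query(
    "SELECT word FROM `bigquery-public-data.samples.shakespeare`"
)

# Fetch results in smaller pages; each response must still stay under the
# 10 MB maximum response size.
for row in query_job.result(page_size=1000, max_results=5000):
    print(row.word)
```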
IAM API requests
The following limits apply when using Identity and Access Management functionality in BigQuery to retrieve and set IAM policies, and to test IAM permissions.
At the Google Cloud project level, there is a limit of 25 requests per second, per user.
At the Google Cloud project level, there is a limit of 50 requests per second.
If you need more IAM quota for your project, you can submit a request from the Google Cloud Console. Expect a response to your request within 2 to 3 business days.
BigQuery Storage Read API requests
The following limits apply to CreateReadSession calls using the Storage Read API:
- Maximum row restriction/filter length: 1 MB
The following limits apply to ReadRows calls using the Storage Read API:
- ReadRows calls per minute: 5,000
When you read data using the Storage Read API, you are limited to 5,000 ReadRows calls per minute, per user, per project.
The following limits apply to all other method calls using the Storage Read API:
- API calls per minute: 1,000
You are limited to 1,000 Storage Read API calls per minute, per user, per project.
BigQuery Storage Write API requests
The Storage Write API is subject to the following quotas and limits.
- Rate limit for creating write streams: 100 per minute per project
Calls to CreateWriteStream are rate limited. If you reach this limit, retry the operation with exponential backoff. Also, try to space out calls to CreateWriteStream. The default stream is not subject to this quota. If you don't need data de-duplication with committed mode, then consider using the default stream.
- Pending bytes: 100 GB per project.
- This limit is the maximum number of bytes that you can write in pending mode before committing the streams.
- Concurrent connections: 1,000 per project.
- Maximum number of streams passed to BatchCommitWriteStreams: 100
In pending mode, the BatchCommitWriteStreams method takes a list of streams to commit. If you exceed this limit, then the method returns an error. As a workaround, call BatchCommitWriteStreams multiple times.
When are quotas refilled?
Daily quotas are replenished at regular intervals throughout the day, reflecting their intent to guide rate-limiting behavior. Intermittent refresh is also done to avoid long disruptions when quota is exhausted: more quota is typically made available within minutes rather than being globally replenished once per day.
Troubleshooting
For information about troubleshooting errors related to quota limits, see Troubleshooting BigQuery quota errors.
Capping quota usage
To learn how you can limit usage of a particular resource, up to the limit specified by Google, see Capping usage.