Quotas and limits
This document lists the quotas and limits that apply to BigQuery.
A quota restricts how much of a particular shared Google Cloud resource your Cloud project can use, including hardware, software, and network components.
Quotas are part of a system that does the following:
- Monitors your use or consumption of Google Cloud products and services.
- Restricts your consumption of those resources for reasons including ensuring fairness and reducing spikes in usage.
- Maintains configurations that automatically enforce prescribed restrictions.
- Provides a means to make or request changes to the quota.
When a quota is exceeded, in most cases, the system immediately blocks access to the relevant Google resource, and the task that you're trying to perform fails. In most cases, quotas apply to each Cloud project and are shared across all applications and IP addresses that use that Cloud project.
There are also limits on BigQuery resources. These limits are unrelated to the quota system. Limits cannot be changed unless otherwise stated.
By default, BigQuery quotas and limits apply on a per-project basis. Quotas and limits that apply on a different basis are indicated as such; for example, the maximum number of columns per table, or the maximum number of concurrent API requests per user. Specific policies vary depending on resource availability, user profile, Service Usage history, and other factors, and are subject to change without notice.
Quota replenishment
Daily quotas are replenished at regular intervals throughout the day, reflecting their intent to guide rate-limiting behavior. Quota is also refreshed intermittently to avoid long disruptions when it is exhausted; more quota typically becomes available within minutes rather than being replenished globally once a day.
Request a quota increase
To increase or decrease most quotas, use the Google Cloud console. For more information, see Requesting a higher quota.
Cap quota usage
To learn how you can limit usage of a particular resource by specifying a smaller quota than the default, see Capping usage.
Required permissions
To view and update your BigQuery quotas in the Google Cloud console, you need the same permissions as for any Google Cloud quota. For more information, see Google Cloud quota permissions.
Troubleshoot
For information about troubleshooting errors related to quotas and limits, see Troubleshooting BigQuery quota errors.
Jobs
Quotas and limits apply to jobs that BigQuery runs on your behalf, whether they are run by using the Google Cloud console, the bq command-line tool, or programmatically by using the REST API or client libraries.
Query jobs
The following quotas apply to query jobs created automatically by
running interactive queries, scheduled queries, and jobs submitted by using the
jobs.query
and query-type jobs.insert
API methods:
Quota | Default | Notes |
---|---|---|
Query usage per day | Unlimited | There is no limit to the number of bytes that can be processed by queries in a project. View quota in Google Cloud console |
Query usage per day per user | Unlimited | There is no limit to the number of bytes that a user's queries can process each day. View quota in Google Cloud console |
Cloud SQL federated query cross-region bytes per day | 1 TB | If the BigQuery query processing location and the Cloud SQL instance location are different, then your query is a cross-region query. Your project can run up to 1 TB in cross-region queries per day. See Cloud SQL federated queries. View quota in Google Cloud console |
Cross-cloud transferred bytes per day | 1 TB | You can transfer up to 1 TB of data per day from an Amazon S3 bucket or from Azure Blob Storage. For more information, see Cross-cloud transfer from Amazon S3 and Azure. View quota in Google Cloud console |
The following limits apply to query jobs created automatically by
running interactive queries, scheduled queries, and jobs submitted by using the
jobs.query
and query-type jobs.insert
API methods:
Limit | Default | Notes |
---|---|---|
Maximum number of concurrent interactive queries | 100 queries | Your project can run up to 100 concurrent interactive queries. Queries with results that are returned from the query cache count against this limit for the duration it takes for BigQuery to determine that it is a cache hit. Dry-run queries don't count against this limit. You can specify a dry-run query by using the --dry_run flag. For information about strategies to stay within this limit, see Troubleshooting quota errors. One approach to mitigating these errors is to enable query queues (preview). Query queues provide a dynamic concurrency limit and can queue up to 1,000 interactive queries beyond those running. |
Maximum number of queued interactive queries | 1,000 queries | If query queues (preview) are enabled, your project can queue up to 1,000 interactive queries. Additional interactive queries that exceed this limit return a quota error. |
Maximum number of concurrent batch queries | 10 queries | Your project can run up to 10 concurrent batch queries. |
Maximum number of queued batch queries | 20,000 queries | Your project can queue up to 20,000 batch queries. Additional batch queries that exceed this limit return a quota error. |
Maximum number of concurrent interactive queries against Cloud Bigtable external data sources | 16 queries | Your project can run up to sixteen concurrent queries against a Bigtable external data source. |
Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
Maximum number of concurrent multi-statement queries | 1,000 multi-statement queries | Your project can run up to 1,000 concurrent multi-statement queries. |
Maximum number of concurrent legacy SQL queries that contain UDFs | 6 queries | Your project can run up to six concurrent legacy SQL queries with user-defined functions (UDFs). This limit includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent limit for interactive queries. This limit does not apply to GoogleSQL queries. |
Daily query size limit | Unlimited | By default, there is no daily query size limit. However, you can set limits on the amount of data users can query by creating custom quotas to control query usage per day or query usage per day per user. |
Daily destination table update limit | See Maximum number of table operations per day. | Updates to destination tables in a query job count toward the limit on the maximum number of table operations per day for the destination tables. Destination table updates include append and overwrite operations that are performed by queries that you run by using the Google Cloud console, using the bq command-line tool, or calling the jobs.query and query-type jobs.insert API methods. |
Query/multi-statement query execution-time limit | 6 hours | A query or multi-statement query can execute for up to six hours, and then it fails. However, sometimes queries are retried. A query can be tried up to three times, and each attempt can run for up to six hours. As a result, it's possible for a query to have a total runtime of more than six hours. |
Maximum number of resources referenced per query | 1,000 resources | A query can reference a total of up to 1,000 unique tables, unique views, unique user-defined functions (UDFs), and unique table functions after full expansion. |
Maximum unresolved legacy SQL query length | 256 KB | An unresolved legacy SQL query can be up to 256 KB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters. |
Maximum unresolved GoogleSQL query length | 1 MB | An unresolved GoogleSQL query can be up to 1 MB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters. |
Maximum resolved legacy and GoogleSQL query length | 12 MB | The limit on resolved query length includes the length of all views and wildcard tables referenced by the query. |
Maximum number of GoogleSQL query parameters | 10,000 parameters | A GoogleSQL query can have up to 10,000 parameters. |
Maximum request size | 10 MB | The request size can be up to 10 MB, including additional properties like query parameters. |
Maximum response size | 10 GB compressed | Sizes vary depending on compression ratios for the data. The actual response size might be significantly larger than 10 GB. The maximum response size is unlimited when writing large query results to a destination table. |
BigQuery Omni maximum query result size | 20 GiB uncompressed | The maximum result size is 20 GiB logical bytes when querying Azure or AWS data. For more information, see BigQuery Omni Limitations. |
Maximum row size | 100 MB | The maximum row size is approximate, because the limit is based on the internal representation of row data. The maximum row size limit is enforced during certain stages of query job execution. |
Maximum columns in a table, query result, or view definition | 10,000 columns | A table, query result, or view definition can have up to 10,000 columns. |
BigQuery Omni query result size | 1 TB | The total query result size for a project is 1 TB per day. For more information, see BigQuery Omni limitations. |
Maximum concurrent slots for on-demand pricing | 2,000 slots | With on-demand pricing, your project can have up to 2,000 concurrent slots. BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries. To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring. |
Maximum CPU usage per scanned data for on-demand pricing | 256 CPU seconds per MiB scanned | With on-demand pricing, your query can use up to approximately 256 CPU seconds per MiB of scanned data. If your query is too CPU-intensive for the amount of data being processed, the query fails with a billingTierLimitExceeded error. For more information, see billingTierLimitExceeded. |
Multi-statement transaction table mutations | 100 tables | A transaction can mutate data in at most 100 tables. |
Multi-statement transaction partition modifications | 100,000 partition modifications | A transaction can perform at most 100,000 partition modifications. |
Although scheduled queries use features of the BigQuery Data Transfer Service, scheduled queries are not transfers, and are not subject to load job limits.
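For example, here is a minimal sketch using the google-cloud-bigquery Python client (project, dataset, and table names are hypothetical) that combines two strategies mentioned above: a dry run, which doesn't count against the concurrent interactive query limit, to estimate bytes processed before running a query against any custom daily quota, and an array query parameter in place of a large inline list to stay under the 1 MB unresolved GoogleSQL query length.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials

# Replace a large inline IN (...) list with an array query parameter to keep
# the unresolved query text small (1 MB limit for GoogleSQL).
ids = list(range(1000))  # hypothetical ID list
job_config = bigquery.QueryJobConfig(
    dry_run=True,                 # dry-run queries don't count against concurrency limits
    use_query_cache=False,
    query_parameters=[
        bigquery.ArrayQueryParameter("ids", "INT64", ids),
    ],
)

sql = """
    SELECT COUNT(*) AS n
    FROM `my_project.my_dataset.my_table`   -- hypothetical table
    WHERE id IN UNNEST(@ids)
"""

dry_run_job = client.query(sql, job_config=job_config)
print(f"This query would process {dry_run_job.total_bytes_processed} bytes.")

# Re-run for real only if the estimate fits your budget or custom quota.
job_config.dry_run = False
for row in client.query(sql, job_config=job_config).result():
    print(row.n)
```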
Export jobs
The following limits apply to jobs that
export data
from BigQuery by using the bq
command-line tool, Google Cloud console,
or the export-type jobs.insert
API method.
Limit | Default | Notes |
---|---|---|
Maximum number of exported bytes per day | 50 TB | You can export up to 50 TB (tebibytes) of data per day from a project for free by using the shared slot pool. You can set up a Cloud Monitoring alert policy that provides notification of the number of bytes exported. Additional options are available for exporting more than 50 TB of data per day. |
Maximum number of exports per day | 100,000 exports | You can run up to 100,000 exports per day in a project. Additional options are available for running more than 100,000 exports per day. |
Maximum table size exported to a single file | 1 GB | You can export up to 1 GB of table data to a single file. To export more than 1 GB of data, use a wildcard to export the data into multiple files. When you export data to multiple files, the size of the files varies. In some cases, the size of the output files is more than 1 GB. |
Wildcard URIs per export | 500 URIs | An export can have up to 500 wildcard URIs. |
For more information about viewing your current export job usage, see View current quota usage.
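For example, a minimal sketch with the google-cloud-bigquery Python client (the bucket and table names are hypothetical) of exporting to a wildcard URI, which lets BigQuery shard output across multiple files when the table data exceeds the 1 GB single-file limit:

```python
from google.cloud import bigquery

client = bigquery.Client()

# A wildcard URI lets BigQuery shard the output into many files, which is
# required when the table data exceeds the 1 GB single-file limit.
destination_uri = "gs://my-bucket/exports/my_table-*.csv"  # hypothetical bucket
table_ref = "my_project.my_dataset.my_table"               # hypothetical table

extract_job = client.extract_table(
    table_ref,
    destination_uri,
    location="US",  # should match the dataset's location
)
extract_job.result()  # waits for the export job to finish
print(f"Exported {table_ref} to {destination_uri}")
```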
Load jobs
The following limits apply when you
load data
into BigQuery, using the
Google Cloud console, the bq
command-line tool, or the load-type
jobs.insert
API method.
Limit | Default | Notes |
---|---|---|
Load jobs per table per day | See Table operations per day. | Load jobs, including failed load jobs, count toward the limit on the number of table operations per day for the destination table. For information about limits on the number of table operations per day for standard tables and partitioned tables, see Tables. |
Load jobs per day | 100,000 jobs | Your project can run up to 100,000 load jobs per day. Failed load jobs count toward this limit. |
Maximum columns per table | 10,000 columns | A table can have up to 10,000 columns. |
Maximum size per load job | 15 TB | The total size for all of your CSV, JSON, Avro, Parquet, and ORC input files can be up to 15 TB. |
Maximum number of source URIs in job configuration | 10,000 URIs | A job configuration can have up to 10,000 source URIs. |
Maximum number of files per load job | 10,000,000 files | A load job can have up to 10 million total files, including all files matching all wildcard URIs. |
Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | A load job can read from a Cloud Storage bucket containing up to approximately 60,000,000 files. |
Load job execution-time limit | 6 hours | A load job fails if it executes for longer than six hours. |
Avro: Maximum size for file data blocks | 16 MB | The size limit for Avro file data blocks is 16 MB. |
CSV: Maximum cell size | 100 MB | CSV cells can be up to 100 MB in size. |
CSV: Maximum row size | 100 MB | CSV rows can be up to 100 MB in size. |
CSV: Maximum file size - compressed | 4 GB | The size limit for a compressed CSV file is 4 GB. |
CSV: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed CSV file is 5 TB. |
JSON: Maximum row size | 100 MB | JSON rows can be up to 100 MB in size. |
JSON: Maximum file size - compressed | 4 GB | The size limit for a compressed JSON file is 4 GB. |
JSON: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed JSON file is 5 TB. |
If you regularly exceed the load job limits due to frequent updates, consider streaming data into BigQuery instead.
For information on viewing your current load job usage, see View current quota usage.
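For example, a minimal sketch with the google-cloud-bigquery Python client (the bucket, wildcard URI, and table names are hypothetical) that loads many files in a single load job, which helps stay under the 100,000 load jobs per day limit:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Batching many files into one load job keeps you well under the
# 100,000-load-jobs-per-day limit; a single job can reference up to
# 10,000 URIs, and wildcards can match up to 10 million files in total.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)

uris = ["gs://my-bucket/daily/2024-01-01/*.csv"]  # hypothetical wildcard URI
table_id = "my_project.my_dataset.events"         # hypothetical table

load_job = client.load_table_from_uri(uris, table_id, job_config=job_config)
load_job.result()  # raises if the job failed
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```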
BigQuery Data Transfer Service load job quota considerations
Load jobs created by BigQuery Data Transfer Service transfers are included in
BigQuery's quotas on load jobs. It's important to consider
how many transfers you enable in each project to prevent transfers and other
load jobs from producing quotaExceeded
errors.
You can use the following equation to estimate how many load jobs are required by your transfers (a small worked example follows the list below):
Number of daily jobs = Number of transfers x Number of tables x Schedule frequency x Refresh window
Where:
- Number of transfers is the number of transfer configurations you enable in your project.
- Number of tables is the number of tables created by each specific transfer type. The number of tables varies by transfer type:
  - Campaign Manager transfers create approximately 25 tables.
  - Google Ads transfers create approximately 60 tables.
  - Google Ad Manager transfers create approximately 40 tables.
  - Google Play transfers create approximately 25 tables.
  - Search Ads 360 transfers create approximately 50 tables.
  - YouTube transfers create approximately 50 tables.
- Schedule frequency describes how often the transfer runs. Transfer run schedules are provided for each transfer type.
- Refresh window is the number of days to include in the data transfer. If you enter 1, there is no daily backfill.
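For illustration only, a short Python sketch of this estimate using the approximate per-transfer table counts listed above (the transfer counts, schedule frequency, and refresh window are hypothetical inputs):

```python
# Rough estimate of daily load jobs generated by transfers, using the
# approximate per-transfer table counts listed above.
TABLES_PER_TRANSFER = {
    "campaign_manager": 25,
    "google_ads": 60,
    "google_ad_manager": 40,
    "google_play": 25,
    "search_ads_360": 50,
    "youtube": 50,
}

def daily_load_jobs(transfer_type: str, num_transfers: int,
                    runs_per_day: int, refresh_window_days: int) -> int:
    """Number of daily jobs = transfers x tables x schedule frequency x refresh window."""
    return (num_transfers
            * TABLES_PER_TRANSFER[transfer_type]
            * runs_per_day
            * refresh_window_days)

# Example: 3 Google Ads transfers, run once a day, with a 7-day refresh window.
print(daily_load_jobs("google_ads", num_transfers=3, runs_per_day=1,
                      refresh_window_days=7))  # 3 * 60 * 1 * 7 = 1260
```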
Copy jobs
The following limits apply to BigQuery jobs for
copying tables, including jobs
that create a copy, clone, or snapshot of a standard table, table clone, or
table snapshot.
The limits apply to jobs created by using the Google Cloud console, the
bq
command-line tool, or the copy-type
jobs.insert
method.
Copy jobs count toward these limits whether they succeed or fail.
Limit | Default | Notes |
---|---|---|
Copy jobs per destination table per day | See Table operations per day. | |
Copy jobs per day | 100,000 jobs | Your project can run up to 100,000 copy jobs per day. |
Cross-region copy jobs per destination table per day | 100 jobs | Your project can run up to 100 cross-region copy jobs for a destination table per day. |
Cross-region copy jobs per day | 2,000 jobs | Your project can run up to 2,000 cross-region copy jobs per day. |
Number of source tables to copy | 1,200 source tables | You can copy from up to 1,200 source tables per copy job. |
For information on viewing your current copy job usage, see Copy jobs - View current quota usage.
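For example, a minimal sketch with the google-cloud-bigquery Python client (the table names are hypothetical) that consolidates several source tables into one copy job; a single job can reference up to 1,200 source tables and counts once against the daily copy job limit.

```python
from google.cloud import bigquery

client = bigquery.Client()

# A single copy job can combine up to 1,200 source tables, so consolidating
# shards into one job uses one unit of the 100,000-copy-jobs-per-day limit.
sources = [
    "my_project.my_dataset.events_20240101",  # hypothetical sharded tables
    "my_project.my_dataset.events_20240102",
    "my_project.my_dataset.events_20240103",
]
destination = "my_project.my_dataset.events_all"

copy_job = client.copy_table(sources, destination)
copy_job.result()  # waits for completion; raises on failure
print(f"Copied {len(sources)} tables into {destination}")
```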
The following limits apply to copying datasets:
Limit | Default | Notes |
---|---|---|
Maximum number of tables in the source dataset | 20,000 tables | A source dataset can have up to 20,000 tables. |
Maximum number of tables that can be copied per run to a destination dataset in the same region | 20,000 tables | Your project can copy 20,000 tables per run to a destination dataset that is in the same region. |
Maximum number of tables that can be copied per run to a destination dataset in a different region | 1,000 tables | Your project can copy 1,000 tables per run to a destination dataset that is in a different region. For example, if you configure a cross-region copy of a dataset with 8,000 tables in it, then BigQuery Data Transfer Service automatically creates eight runs in a sequential manner. The first run copies 1,000 tables. Twenty-four hours later, the second run copies 1,000 tables. This process continues until all tables in the dataset are copied, up to the maximum of 20,000 tables per dataset. |
Reservations
The following quotas apply to reservations:
Quota | Default | Notes |
---|---|---|
Total number of slots for the EU region | 2,000 slots | The maximum number of BigQuery slots you can purchase in the EU multi-region by using the Google Cloud console. View quotas in Google Cloud console |
Total number of slots for the US region | 4,000 slots | The maximum number of BigQuery slots you can purchase in the US multi-region by using the Google Cloud console. View quotas in Google Cloud console |
Total number of slots for the following regions: asia-northeast1, asia-northeast3, australia-southeast1, europe-west2, and northamerica-northeast1 | 1,000 slots | The maximum number of BigQuery slots you can purchase in each of the listed regions by using the Google Cloud console. View quotas in Google Cloud console |
Total number of slots for Omni regions (aws-us-east-1 and azure-eastus2) | 100 slots | The maximum number of BigQuery slots you can purchase in the Omni regions by using the Google Cloud console. View quotas in Google Cloud console |
Total number of slots for all other regions | 500 slots | The maximum number of BigQuery slots you can purchase in each of the other regions by using the Google Cloud console. View quotas in Google Cloud console |
The following limits apply to reservations:
Limit | Value | Notes |
---|---|---|
Number of administration projects for slot reservations | 5 projects per organization | The maximum number of projects within an organization that can contain an active commitment for slots for a given location / region. |
Datasets
The following limits apply to BigQuery datasets:
Limit | Default | Notes |
---|---|---|
Maximum number of datasets | Unlimited | There is no limit on the number of datasets that a project can have. |
Number of tables per dataset | Unlimited | When you use an API call, enumeration performance slows as you approach 50,000 tables in a dataset. The Google Cloud console can display up to 50,000 tables for each dataset. |
Number of authorized resources in a dataset's access control list | 2,500 resources | A dataset's access control list can have up to 2,500 total authorized resources, including authorized views, authorized datasets, and authorized functions. If you exceed this limit due to a large number of authorized views, consider grouping the views into authorized datasets. |
Number of dataset update operations per dataset per 10 seconds | 5 operations | Your project can make up to five dataset update operations every 10 seconds. This limit includes all metadata update operations on the dataset. |
Maximum length of a dataset description | 16,384 characters | When you add a description to a dataset, the text can be at most 16,384 characters. |
Tables
All tables
The following limits apply to all BigQuery tables.
Limit | Default | Notes |
---|---|---|
Maximum length of a column description | 1,024 characters | When you add a description to a column, the text can be at most 1,024 characters. |
Maximum depth of nested records | 15 levels | Columns of type RECORD can contain nested RECORD types, also called child records. The maximum nested depth limit is 15 levels. This limit is independent of whether the records are scalar or array-based (repeated). |
Standard tables
The following limits apply to BigQuery standard (built-in) tables:
Limit | Default | Notes |
---|---|---|
Table modifications per day | 1,500 modifications | Your project can make up to 1,500 table modifications per table per day, whether the modification appends data, updates data, or truncates the table. This limit includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table or that use a DML statement to write data to the table. DML statements count toward the number of table modifications per day, but are not constrained by this limit. For more information about DML limits, see Data manipulation language statements. |
Maximum rate of table metadata update operations per table | 5 operations per 10 seconds | Your project can make up to five table metadata update operations per 10 seconds per table. This limit applies to all table metadata update operations on the table. If you exceed this limit, you get a rate limit error. To identify the operations that count toward this limit, you can inspect your logs. |
Maximum number of columns per table | 10,000 columns | Each table, query result, or view definition can have up to 10,000 columns. |
External tables
The following limits apply to BigQuery tables with data stored on Cloud Storage in Parquet, ORC, Avro, CSV, or JSON format:
Limit | Default | Notes |
---|---|---|
Maximum number of source URIs per external table | 10,000 URIs | Each external table can have up to 10,000 source URIs. |
Maximum number of files per external table | 10,000,000 files | An external table can have up to 10 million files, including all files matching all wildcard URIs. |
Maximum size of stored data on Cloud Storage per external table | 600 TB | An external table can have up to 600 terabytes across all input files. This limit applies to the file sizes as stored on Cloud Storage; this size is not the same as the size used in the query pricing formula. For externally partitioned tables, the limit is applied after partition pruning. |
Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | An external table can reference a Cloud Storage bucket containing up to approximately 60,000,000 files. For externally partitioned tables, this limit is applied before partition pruning. |
Partitioned tables
The following limits apply to BigQuery partitioned tables.
Partition limits apply to the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination partition, or that use a DML DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statement to affect data in a table.
DML statements count toward partition limits, but aren't limited by them. For more information about DML limits, see Data manipulation language statements.
A single job can affect multiple partitions. For example, a DML statement can update data in multiple partitions (for both ingestion-time and partitioned tables). Query jobs and load jobs can also write to multiple partitions, but only for partitioned tables.
BigQuery uses the number of partitions affected by a job when determining how much of the limit the job consumes. Streaming inserts do not affect this limit.
For information about strategies to stay within the limits for partitioned tables, see Troubleshooting quota errors.
Limit | Default | Notes |
---|---|---|
Number of partitions per partitioned table | 4,000 partitions | Each partitioned table can have up to 4,000 partitions. If you exceed this limit, consider using clustering in addition to, or instead of, partitioning. |
Number of partitions modified by a single job | 4,000 partitions | Each job operation (query or load) can affect up to 4,000 partitions. BigQuery rejects any query or load job that attempts to modify more than 4,000 partitions. |
Number of partition modifications per ingestion-time partitioned table per day | 5,000 modifications | Your project can make up to 5,000 partition modifications per day, whether the modification appends data, updates data, or truncates an ingestion-time partitioned table. DML statements count toward the number of partition modifications per day, but are not constrained by this limit. For more information about DML limits, see Data manipulation language statements. |
Number of partition modifications per column-partitioned table per day | 30,000 modifications | Your project can make up to 30,000 partition modifications per day for a column-partitioned table. |
Number of modifications per 10 seconds per table | 50 modifications | Your project can run up to 50 modifications per partitioned table every 10 seconds. |
Number of possible ranges for range partitioning | 10,000 ranges | A range-partitioned table can have up to 10,000 possible ranges. This limit applies to the partition specification when you create the table. After you create the table, the limit also applies to the actual number of partitions. |
Table snapshots
The following limits apply to BigQuery table snapshots:
Limit | Default | Notes |
---|---|---|
Maximum number of concurrent table snapshot jobs | 100 jobs | Your project can run up to 100 concurrent table snapshot jobs. |
Maximum number of table snapshot jobs per day | 50,000 jobs | Your project can run up to 50,000 table snapshot jobs per day. |
Maximum number of table snapshot jobs per table per day | 50 jobs | Your project can run up to 50 table snapshot jobs per table per day. |
Maximum number of metadata updates per table snapshot per 10 seconds | 5 updates | Your project can update a table snapshot's metadata up to five times every 10 seconds. |
Views
The following quotas and limits apply to views and materialized views.
Logical views
The following limits apply to BigQuery standard views:
Limit | Default | Notes |
---|---|---|
Maximum number of nested view levels | 16 levels | BigQuery supports up to 16 levels of nested views. If there are more than 16 levels, an INVALID_INPUT error is returned. |
Maximum length of a GoogleSQL query used to define a view | 256 K characters | The text of a GoogleSQL query that defines a view can be up to 256 K characters. |
Maximum number of authorized views per dataset | See Datasets. | |
Materialized views
The following limits apply to BigQuery materialized views:
Limit | Default | Notes |
---|---|---|
Base table references (same dataset) | 20 materialized views | Each base table can be referenced by up to 20 materialized views from the same dataset. |
Base table references (same project) | 100 materialized views | Each base table can be referenced by up to 100 materialized views from the same project. |
Base table references (entire organization) | 500 materialized views | Each base table can be referenced by up to 500 materialized views from the entire organization. |
Maximum number of authorized views per dataset | See Datasets. | |
Indexes
The following limits apply to BigQuery indexes:
Limit | Default | Notes |
---|---|---|
Number of CREATE INDEX DDL statements per project per region per day | 500 operations | Your project can issue up to 500 CREATE INDEX DDL operations every day within a region. |
Number of index DDL statements per table per day | 20 operations | Your project can issue up to 20 CREATE INDEX or DROP INDEX DDL operations per table per day. |
Maximum total size of table data per organization allowed for index creation that does not run in a reservation | 100 TB in multi-regions; 20 TB in all other regions | You can create an index for a table if the overall size of tables with indexes in your organization is below your region's limit: 100 TB for the US and EU multi-regions, and 20 TB for all other regions. If your index-management jobs run in your own reservation, then this limit does not apply. |
Routines
The following quotas and limits apply to routines.
User-defined functions
The following limits apply to both temporary and persistent user-defined functions (UDFs) in GoogleSQL queries.
Limit | Default | Notes |
---|---|---|
Maximum output per row | 5 MB | The maximum amount of data that your JavaScript UDF can output when processing a single row is approximately 5 MB. |
Maximum concurrent legacy SQL queries with JavaScript UDFs | 6 queries | Your project can have up to six concurrent legacy SQL queries that contain UDFs in JavaScript. This limit includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent rate limit for interactive queries. This limit does not apply to GoogleSQL queries. |
Maximum JavaScript UDF resources per query | 50 resources | A query job can have up to 50 JavaScript UDF resources, such as inline code blobs or external files. |
Maximum size of inline code blob | 32 KB | An inline code blob in a UDF can be up to 32 KB in size. |
Maximum size of each external code resource | 1 MB | The maximum size of each JavaScript code resource is 1 MB. |
The following limits apply to persistent UDFs:
Limit | Default | Notes |
---|---|---|
Maximum length of a UDF name | 256 characters | A UDF name can be up to 256 characters long. |
Maximum number of arguments | 256 arguments | A UDF can have up to 256 arguments. |
Maximum length of an argument name | 128 characters | A UDF argument name can be up to 128 characters long. |
Maximum depth of a UDF reference chain | 16 references | A UDF reference chain can be up to 16 references deep. |
Maximum depth of a STRUCT type argument or output | 15 levels | A STRUCT type UDF argument or output can be up to 15 levels deep. |
Maximum number of fields in STRUCT type arguments or output per UDF | 1,024 fields | A UDF can have up to 1,024 fields in STRUCT type arguments and output. |
Maximum number of JavaScript libraries in a CREATE FUNCTION statement | 50 libraries | A CREATE FUNCTION statement can have up to 50 JavaScript libraries. |
Maximum length of included JavaScript library paths | 5,000 characters | The path for a JavaScript library included in a UDF can be up to 5,000 characters long. |
Maximum update rate per UDF per 10 seconds | 5 updates | Your project can update a UDF up to five times every 10 seconds. |
Maximum number of authorized UDFs per dataset | See Datasets. | |
Remote functions
The following limits apply to remote functions in BigQuery.
Limit | Default | Notes |
---|---|---|
Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
Maximum input size | 5 MB | The maximum total size of all input arguments from a single row is 5 MB. |
HTTP response size limit (Cloud Functions 1st gen) | 10 MB | The HTTP response body from your 1st gen Cloud Function can be up to 10 MB. Exceeding this value causes query failures. |
HTTP response size limit (Cloud Functions 2nd gen or Cloud Run) | 15 MB | The HTTP response body from your 2nd gen Cloud Function or Cloud Run service can be up to 15 MB. Exceeding this value causes query failures. |
Maximum HTTP invocation time limit (Cloud Functions 1st gen) | 9 minutes | You can set your own time limit for your 1st gen Cloud Function for an individual HTTP invocation, but the maximum time limit is 9 minutes. Exceeding the time limit set for your 1st gen Cloud Function may cause HTTP invocation failures and query failure after a limited number of retries. |
HTTP invocation time limit (Cloud Functions 2nd gen or Cloud Run) | 20 minutes | The time limit for an individual HTTP invocation to your 2nd gen Cloud Function or Cloud Run service. Exceeding this value may cause HTTP invocation failures and query failure after a limited number of retries. |
Table functions
The following limits apply to BigQuery table functions:
Limit | Default | Notes |
---|---|---|
Maximum length of a table function name | 256 characters | The name of a table function can be up to 256 characters in length. |
Maximum length of an argument name | 128 characters | The name of a table function argument can be up to 128 characters in length. |
Maximum number of arguments | 256 arguments | A table function can have up to 256 arguments. |
Maximum depth of a table function reference chain | 16 references | A table function reference chain can be up to 16 references deep. |
Maximum depth of argument or output of type STRUCT | 15 levels | A STRUCT argument for a table function can be up to 15 levels deep. Similarly, a STRUCT record in a table function's output can be up to 15 levels deep. |
Maximum number of fields in argument or return table of type STRUCT per table function | 1,024 fields | A STRUCT argument for a table function can have up to 1,024 fields. Similarly, a STRUCT record in a table function's output can have up to 1,024 fields. |
Maximum number of columns in return table | 1,024 columns | A table returned by a table function can have up to 1,024 columns. |
Maximum length of return table column names | 128 characters | Column names in returned tables can be up to 128 characters long. |
Maximum number of updates per table function per 10 seconds | 5 updates | Your project can update a table function up to five times every 10 seconds. |
Stored procedures for Apache Spark
The following limits apply for BigQuery stored procedures for Apache Spark:
Limit | Default | Notes |
---|---|---|
Maximum number of concurrent stored procedure queries | 50 | You can run up to 50 concurrent stored procedure queries for each project. |
Maximum number of concurrent CPUs | 12,000 | You can use up to 12,000 concurrent CPUs for each project. You can use up to 2,400 concurrent CPUs for each location for each project, except in certain locations, where the limit is 500 concurrent CPUs for each location for each project. If you run concurrent queries in a multi-region location and a single-region location that is in the same geographic area, then your queries might consume the same concurrent CPU quota. |
Maximum total size of standard persistent disks | 204.8 TB | You can use up to 204.8 TB standard persistent disks for each location for each project. If you run concurrent queries in a multi-region location and a single region location that is in the same geographic area, then your queries might consume the same standard persistent disk quota. |
Data manipulation language
The following limits apply for BigQuery data manipulation language (DML) statements:
Limit | Default | Notes |
---|---|---|
DML statements per day | Unlimited | DML statements count toward the number of table modifications per day (or the number of partitioned table modifications per day for partitioned tables). However, the number of DML statements your project can run per day is unlimited and is not constrained by the table modifications per day quota (or partitioned table modifications quota). After the daily limit for table modifications (or partitioned table modifications) is used up, you receive errors for non-DML modifications such as load jobs or copy jobs, but you can continue to execute DML statements without getting errors. For example, assume you have an ingestion-time partitioned table named mytable. If you run 3,000 copy jobs that append data to mytable$20210720 and 2,000 query DML jobs that use INSERT to append data to mytable$20210720, you reach the daily limit for partition modifications. Once you reach this limit, any further copy jobs fail, but query jobs based on DML, such as DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statements, continue to succeed. DML statements have limitations to be aware of. |
Concurrent mutating DML statements per table | 2 statements | BigQuery runs up to two concurrent mutating DML statements (UPDATE, DELETE, and MERGE) for each table. Additional mutating DML statements for a table are queued. |
Queued mutating DML statements per table | 20 statements | A table can have up to 20 mutating DML statements in the queue waiting to run. If you submit additional mutating DML statements for the table, then those statements fail. |
Maximum time in queue for DML statement | 6 hours | An interactive priority DML statement can wait in the queue for up to six hours. If the statement has not run after six hours, it fails. |
For more information about mutating DML statements, see
INSERT
DML concurrency and
UPDATE, DELETE, MERGE
DML concurrency.
Multi-statement queries
The following limits apply to multi-statement queries in BigQuery.
Limit | Default | Notes |
---|---|---|
Cumulative time limit | 24 hours | The cumulative time limit for a multi-statement query is 24 hours. |
Statement time limit | 6 hours | The time limit for an individual statement within a multi-statement query is 6 hours. |
Recursive CTEs in queries
The following limits apply to recursive common table expressions (CTEs) in BigQuery.
Limit | Default | Notes |
---|---|---|
Iteration limit | 500 iterations | The recursive CTE can execute this number of iterations. If this limit is exceeded, an error is produced. To work around iteration limits, see Troubleshoot iteration limit errors. |
Row-level security
The following limits apply for BigQuery row-level access policies:
Limit | Default | Notes |
---|---|---|
Maximum number of row access policies per table | 100 policies | A table can have up to 100 row access policies. |
Maximum number of row access policies per query | 100 policies | A query can access up to a total of 100 row access policies. |
Maximum number of CREATE / DROP DDL statements per policy per 10 seconds | 5 statements | Your project can make up to five CREATE or DROP statements per row access policy resource every 10 seconds. |
DROP ALL ROW ACCESS POLICIES statements per table per 10 seconds | 5 statements | Your project can make up to five DROP ALL ROW ACCESS POLICIES statements per table every 10 seconds. |
Data policies
The following limits apply for column-level dynamic data masking:
Limit | Default | Notes |
---|---|---|
Maximum number of data policies per policy tag | 3 | |
BI Engine
The following limits apply to BigQuery BI Engine.
Limit | Default | Notes |
---|---|---|
Maximum reservation size per project per location (SQL Interface) | 250 GB | Applies when using BI Engine with business intelligence (BI) tools other than Looker Studio. You can request an increase of the maximum reservation capacity for your projects. Reservation increases are available in most regions, and might take from 3 days to one week to process. |
Maximum reservation size per project per location (Looker Studio) | 100 GB | Applies when using BI Engine with Looker Studio. This limit does not affect the size of the tables that you query as BI Engine loads in-memory only the columns used in your queries, not the entire table. |
Maximum data model size per table (Looker Studio) | 10 GB | Applies when using BI Engine with Looker Studio. If you have a 100 GB reservation per project per location, BI Engine limits the reservation per table to 10 GB. The rest of the available reservation is used for other tables in the project. |
Maximum partitions per table (Looker Studio) | 500 partitions | BI Engine for Looker Studio supports a maximum of 500 partitions per table. |
Maximum rows per query (Looker Studio) | 150 million | BI Engine for Looker Studio supports up to 150 million rows of queried data, depending on query complexity. |
Analytics Hub
The following limits apply to Analytics Hub:
Limit | Default | Notes |
---|---|---|
Maximum number of data exchanges per project | 500 exchanges | You can create up to 500 data exchanges in a project. |
Maximum number of listings per data exchange | 1,000 listings | You can create up to 1,000 listings in a data exchange. |
Maximum number of linked datasets per shared dataset | 1,000 linked datasets | All Analytics Hub subscribers, combined, can have a maximum of 1,000 linked datasets per shared dataset. |
API quotas and limits
These quotas and limits apply to BigQuery API requests.
BigQuery API
The following quotas apply to BigQuery API (core) requests:
Quota | Default | Notes |
---|---|---|
Requests per day | Unlimited | Your project can make an unlimited number of BigQuery API requests per day. View quota in Google Cloud console |
Maximum tabledata.list bytes per minute | 7.5 GB in multi-regions; 3.7 GB in all other regions | Your project can return a maximum of 7.5 GB of table row data per minute via tabledata.list in the us and eu multi-regions, and 3.7 GB of table row data per minute in all other regions. This quota applies to the project that contains the table being read. Other APIs, including jobs.getQueryResults and fetching results from jobs.query and jobs.insert, can also consume this quota. View quota in Google Cloud console |
The following limits apply to BigQuery API (core) requests:
Limit | Default | Notes |
---|---|---|
Maximum number of API requests per second per user per method | 100 requests | A user can make up to 100 API requests per second to an API method. If a user makes more than 100 requests per second to a method, then throttling can occur. This limit does not apply to streaming inserts. |
Maximum number of concurrent API requests per user | 300 requests | If a user makes more than 300 concurrent requests, throttling can occur. This limit does not apply to streaming inserts. |
Maximum request header size | 16 KiB | Your BigQuery API request can be up to 16 KiB, including the request URL and all headers. This limit does not apply to the request body, such as in a POST request. |
Maximum jobs.get requests per second | 1,000 requests | Your project can make up to 1,000 jobs.get requests per second. |
Maximum jobs.query response size | 20 MB | By default, there is no maximum row count for the number of rows of data returned by jobs.query per page of results. However, you are limited to the 20-MB maximum response size. You can alter the number of rows to return by using the maxResults parameter. |
Maximum projects.list requests per second | 2 requests | Your project can make up to two projects.list requests per second. |
Maximum number of tabledata.list requests per second | 1,000 requests | Your project can make up to 1,000 tabledata.list requests per second. |
Maximum rows returned by tabledata.list requests per second | 150,000 rows | Your project can return up to 150,000 rows per second by using tabledata.list requests. This limit applies to the project that contains the table being read. |
Maximum rows per tabledata.list response | 100,000 rows | A tabledata.list call can return up to 100,000 table rows. For more information, see Paging through results using the API. |
Maximum tables.insert requests per second | 10 requests | Your project can make up to 10 tables.insert requests per second. The tables.insert method creates a new, empty table in a dataset. The limit includes SQL statements that create tables, such as CREATE TABLE, and queries that write results to destination tables. |
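As an illustration of staying under the tabledata.list row and rate limits, here is a minimal sketch with the google-cloud-bigquery Python client (the table name is hypothetical); list_rows pages through table data by calling tabledata.list under the hood, and a moderate page_size keeps each response under the 100,000-row cap:

```python
from google.cloud import bigquery

client = bigquery.Client()

# list_rows pages through table data (tabledata.list under the hood); a modest
# page_size keeps each response comfortably under the 100,000-row cap.
table_id = "my_project.my_dataset.my_table"  # hypothetical table
rows = client.list_rows(table_id, page_size=10_000)

total = 0
for page in rows.pages:   # one API request per page
    total += page.num_items
print(f"Read {total} rows from {table_id}")
```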
BigQuery Connection API
The following quotas apply to BigQuery Connection API requests:
Quota | Default | Notes |
---|---|---|
Read requests per minute | 1,000 requests per minute | Your project can make up to 1,000 requests per minute to BigQuery Connection API methods that read connection data. View quota in Google Cloud console |
Write requests per minute | 100 requests per minute | Your project can make up to 100 requests per minute to BigQuery Connection API methods that create or update connections. View quota in Google Cloud console |
BigQuery Migration API
The following limits apply to the BigQuery Migration API:
Limit | Default | Notes |
---|---|---|
Individual file size for batch SQL translation | 10 MB | Each individual source and metadata file can be up to 10 MB. This limit does not apply to the metadata zip file produced by the dwh-migration-dumper command-line tool. |
Total size of source files for batch SQL translation | 1 GB | The total size of all input files uploaded to Cloud Storage can be up to 1 GB. This includes all source files, and all metadata files if you choose to include them. |
Input string size for interactive SQL translation | 1 MB | The string that you enter for interactive SQL translation must not exceed 1 MB. |
Maximum configuration file size for interactive SQL translation | 50 MB | Individual metadata files (compressed) and YAML config files in Cloud Storage must not exceed 50 MB. If the file size exceeds 50 MB, the interactive translator skips that configuration file during translation and produces an error message. One method to reduce the metadata file size is to use the --database or --schema flags to filter on databases when you generate the metadata. |
The following quotas apply to the BigQuery Migration API. The following default values apply in most cases. The defaults for your project might be different:
Quota | Default | Notes |
---|---|---|
EDWMigration Service List Requests per minute; EDWMigration Service List Requests per minute per user | 12,000 requests; 2,500 requests | Your project can make up to 12,000 Migration API List requests per minute. Each user can make up to 2,500 Migration API List requests per minute. View quotas in Google Cloud console |
EDWMigration Service Get Requests per minute; EDWMigration Service Get Requests per minute per user | 25,000 requests; 2,500 requests | Your project can make up to 25,000 Migration API Get requests per minute. Each user can make up to 2,500 Migration API Get requests per minute. View quotas in Google Cloud console |
EDWMigration Service Other Requests per minute; EDWMigration Service Other Requests per minute per user | 25 requests; 5 requests | Your project can make up to 25 other Migration API requests per minute. Each user can make up to 5 other Migration API requests per minute. View quotas in Google Cloud console |
Interactive SQL translation requests per minute; Interactive SQL translation requests per minute per user | 200 requests; 50 requests | Your project can make up to 200 SQL translation service requests per minute. Each user can make up to 50 SQL translation service requests per minute. View quotas in Google Cloud console |
BigQuery Reservation API
The following quotas apply to BigQuery Reservation API requests:
Quota | Default | Notes |
---|---|---|
Requests per minute per region | 100 requests | Your project can make a total of up to 100 calls to BigQuery Reservation API methods per minute per region. View quotas in Google Cloud console |
Number of SearchAllAssignments calls per minute per region | 100 requests | Your project can make up to 100 calls to the SearchAllAssignments method per minute per region. View quotas in Google Cloud console |
Requests for SearchAllAssignments per minute per region per user | 10 requests | Each user can make up to 10 calls to the SearchAllAssignments method per minute per region. View quotas in Google Cloud console (In the Google Cloud console search results, search for per user.) |
BigQuery Data Policy API
The following limits apply for the Data Policy API (preview):
Limit | Default | Notes |
---|---|---|
Maximum number of dataPolicy.list calls | 400 requests per minute per project; 600 requests per minute per organization | |
Maximum number of dataPolicy.testIamPermissions calls | 400 requests per minute per project; 600 requests per minute per organization | |
Maximum number of read requests | 1,200 requests per minute per project; 1,800 requests per minute per organization | This includes calls to dataPolicy.get and dataPolicy.getIamPolicy. |
Maximum number of write requests | 600 requests per minute per project; 900 requests per minute per organization | This includes calls to methods that create, update, or delete data policies. |
IAM API
The following quotas apply when you use Identity and Access Management functionality in BigQuery to retrieve and set IAM policies, and to test IAM permissions.
Quota | Default | Notes |
---|---|---|
IamPolicy requests per minute | 3,000 requests | Your project can make up to 3,000 IAM requests per minute. View quota in Google Cloud console |
IamPolicy requests per minute per user | 1,500 requests | Each user can make up to 1,500 IAM requests per minute per project. View quota in Google Cloud console |
Storage Read API
The following quotas apply to BigQuery Storage Read API requests:
Quota | Default | Notes |
---|---|---|
Read data plane requests per minute per user | 25,000 requests | Each user can make up to 25,000 ReadRows calls per minute per project. View quota in Google Cloud console |
Read control plane requests per minute per user | 5,000 requests | Each user can make up to 5,000 Storage Read API metadata operation calls per minute per project. The metadata calls include the CreateReadSession and SplitReadStream methods. View quota in Google Cloud console |
The following limits apply to BigQuery Storage Read API requests:
Limit | Default | Notes |
---|---|---|
Maximum row/filter length | 1 MB | When you use the Storage Read API CreateReadSession call, you are limited to a maximum length of 1 MB for each row or filter. |
Maximum serialized data size | 128 MB | When you use the Storage Read API ReadRows call, the serialized representation of the data in an individual ReadRowsResponse message cannot be larger than 128 MB. |
Maximum concurrent connections | 2,000 in multi-regions; 400 in regions | You can open a maximum of 2,000 concurrent ReadRows connections per project in the us and eu multi-regions, and 400 concurrent ReadRows connections in other regions. In some cases you may be limited to fewer concurrent connections than this limit. |
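Below is a minimal sketch with the google-cloud-bigquery-storage Python client (project, dataset, and table names are hypothetical) showing a CreateReadSession call with a row restriction, which must stay under the 1 MB row/filter limit, followed by a ReadRows stream whose individual responses are each capped at 128 MB serialized:

```python
from google.cloud import bigquery_storage
from google.cloud.bigquery_storage import types

client = bigquery_storage.BigQueryReadClient()

project_id = "my_project"  # hypothetical project
table = "projects/my_project/datasets/my_dataset/tables/my_table"  # hypothetical table

requested_session = types.ReadSession(
    table=table,
    data_format=types.DataFormat.AVRO,
    read_options=types.ReadSession.TableReadOptions(
        # The row restriction (filter) counts toward the 1 MB row/filter limit.
        row_restriction="state = 'CA'",
    ),
)

session = client.create_read_session(
    parent=f"projects/{project_id}",
    read_session=requested_session,
    max_stream_count=1,
)

if session.streams:
    reader = client.read_rows(session.streams[0].name)
    rows_read = 0
    for response in reader:        # each ReadRowsResponse is <= 128 MB serialized
        rows_read += response.row_count
    print(f"Read {rows_read} rows")
```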
Storage Write API
The following quotas apply to Storage Write API requests. These quotas can be applied at the folder level, in which case they are aggregated and shared across all child projects. To enable this configuration, contact Cloud Customer Care.
If you plan to request a higher quota limit, include the quota error message in your request to expedite processing.
Quota | Default | Notes |
---|---|---|
Concurrent connections | 10,000 in multi-regions; 1,000 in regions | The concurrent connections quota is based on the client project that initiates the Storage Write API request, not the project containing the BigQuery dataset resource. The initiating project is the project associated with the API key or the service account. Your project can operate on 10,000 concurrent connections in the us and eu multi-regions, and 1,000 in all other regions. You can view usage quota and limits metrics for your projects in Cloud Monitoring. Select the concurrent connections limit name based on your region. |
Throughput | 3 GB per second throughput in multi-regions; 300 MB per second in regions | You can stream up to 3 GBps in the us and eu multi-regions, and 300 MBps in other regions per project. View quota in Google Cloud console You can view usage quota and limits metrics for your projects in Cloud Monitoring. Select the throughput limit name based on your region. |
CreateWriteStream requests | 30,000 streams every 4 hours, per project | You can call CreateWriteStream up to 30,000 times per 4 hours per project. Consider using the default stream if you don't need exactly-once semantics. View quota in Google Cloud console |
Pending stream bytes | 10 TB in multi-regions; 1 TB in regions | For every commit that you trigger, you can commit up to 10 TB in the us and eu multi-regions, and 1 TB in other regions. View quota in Google Cloud console |
The following limits apply to Storage Write API requests:
Limit | Default | Notes |
---|---|---|
Batch commits | 10,000 streams per table | You can commit up to 10,000 streams in each BatchCommitWriteStream call. |
AppendRows request size | 10 MB | The maximum request size is 10 MB. |
Streaming inserts
The following quotas and limits apply when you stream data into
BigQuery by using the
legacy streaming API.
For information about strategies to stay within these limits, see
Troubleshooting quota errors.
If you exceed these quotas, you get quotaExceeded
errors.
Limit | Default | Notes |
---|---|---|
Maximum bytes per second per project in the us and eu multi-regions | 1 GB per second | Your project can stream up to 1 GB per second. This quota is cumulative within a given multi-region. In other words, the sum of bytes per second streamed to all tables for a given project within a multi-region is limited to 1 GB. Exceeding this limit causes quotaExceeded errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. Quota increase takes time to become available, especially in the case of a significant increase. |
Maximum bytes per second per project in all other locations | 300 MB per second | Your project can stream up to 300 MB per second in all locations except the us and eu multi-regions. Exceeding this limit causes quotaExceeded errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. Quota increase takes time to become available, especially in the case of a significant increase. |
Maximum row size | 10 MB | Exceeding this value causes invalid errors. |
HTTP request size limit | 10 MB | Exceeding this value causes invalid errors. Internally the request is translated from HTTP JSON into an internal data structure. The translated data structure has its own enforced size limit. It's hard to predict the size of the resulting internal data structure, but if you keep your HTTP requests to 10 MB or less, the chance of hitting the internal limit is low. |
Maximum rows per request | 50,000 rows | A maximum of 500 rows is recommended. Batching can increase performance and throughput to a point, but at the cost of per-request latency. Too few rows per request, and the overhead of each request can make ingestion inefficient. Too many rows per request, and the throughput can drop. Experiment with representative data (schema and data sizes) to determine the ideal batch size for your data. |
insertId field length | 128 characters | Exceeding this value causes invalid errors. |
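For example, a minimal sketch of the recommended batching with the google-cloud-bigquery Python client's insert_rows_json method (the table name and event_id field are hypothetical); each request stays near the recommended 500 rows, and row_ids supplies insertId values under the 128-character limit:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my_project.my_dataset.events"  # hypothetical table

def stream_rows(rows, batch_size=500):
    """Send rows with the legacy streaming API in ~500-row batches."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # row_ids supplies insertId values (<= 128 characters each) for
        # best-effort deduplication.
        errors = client.insert_rows_json(
            table_id,
            batch,
            row_ids=[row["event_id"] for row in batch],
        )
        if errors:
            raise RuntimeError(f"Streaming insert failed: {errors}")

stream_rows([{"event_id": str(i), "payload": "x"} for i in range(2000)])
```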
For additional streaming quota, see Request a quota increase.