Pricing

BigQuery offers scalable, flexible pricing options to help fit your project and budget. BigQuery charges for data storage, streaming inserts, and querying data; loading and exporting data are free of charge.

BigQuery pricing is based on a flat rate for storage and a usage rate for queries. Project storage usage and query usage are calculated in gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement is also known as a gibibyte (GiB). Similarly, 1 TB is 2^40 bytes (1,024 GB), a unit also known as a tebibyte (TiB). For more information on storage and query pricing, see Google Cloud Platform SKUs.
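The binary units above can be verified with quick arithmetic; a minimal sketch:

```python
# Binary (base-2) units used for BigQuery storage and query billing.
GB = 2 ** 30   # 1 GB (a gibibyte, GiB) = 1,073,741,824 bytes
TB = 2 ** 40   # 1 TB (a tebibyte, TiB) = 1,024 GB

print(GB)        # -> 1073741824
print(TB // GB)  # -> 1024
```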

Each project you create has a billing account attached to it. Any charges incurred by jobs run in the project are billed to the attached billing account. This is true even if the project is shared with others outside your organization. BigQuery storage costs are also billed to the attached billing account.

BigQuery provides cost control mechanisms that enable you to cap your daily costs to an amount that you choose. For more information, see Creating Custom Cost Controls.

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Pricing summary

The following table summarizes BigQuery pricing. BigQuery's Quotas and Limits apply to these operations.

US and EU multi-region locations

| Operation | Pricing | Details |
|---|---|---|
| Storage | $0.02 per GB, per month | The first 10 GB is free each month. See Storage pricing for details. |
| Long Term Storage | $0.01 per GB, per month | The first 10 GB is free each month. See Storage pricing for details. |
| Streaming Inserts | $0.01 per 200 MB | See Storage pricing for details. |
| Queries (analysis) | $5 per TB | The first 1 TB per month is free; see On-demand pricing for details. Flat-rate pricing is also available for high-volume customers. |
| Loading data | Free | See Loading data into BigQuery. |
| Copying data | Free | See Copying a table. |
| Exporting data | Free | See Exporting data from BigQuery. |
| Metadata operations | Free | List, get, patch, update, and delete calls. |
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Tokyo region

| Operation | Pricing | Details |
|---|---|---|
| Storage | $0.023 per GB, per month | The first 10 GB is free each month. See Storage pricing for details. |
| Long Term Storage | $0.016 per GB, per month | The first 10 GB is free each month. See Storage pricing for details. |
| Streaming Inserts | $0.012 per 200 MB | See Storage pricing for details. |
| Queries (analysis) | $8.55 per TB | The first 1 TB per month is free; see On-demand pricing for details. Flat-rate pricing is also available for high-volume customers. |
| Loading data | Free | See Loading data into BigQuery. |
| Copying data | Free | See Copying a table. |
| Exporting data | Free | See Exporting data from BigQuery. |
| Metadata operations | Free | List, get, patch, update, and delete calls. |
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Free operations

The following table shows BigQuery operations that are free of charge. BigQuery's Quotas and Limits apply to these operations.

| Operation | Details |
|---|---|
| Loading data | Loading Data into BigQuery |
| Copying data | Copying an existing table |
| Exporting data | Exporting Data from BigQuery |
| Metadata operations | List, get, patch, update, and delete calls |

Pricing for loading data

Currently, you can load data from a readable data source (such as your local machine) or from Cloud Storage. When you load data into BigQuery from Cloud Storage, you are not charged for the load operation, but you do incur charges for storing the data in Cloud Storage. For more information, see Data storage on the Cloud Storage pricing page. Once the data is loaded into BigQuery, the data is subject to BigQuery's storage pricing.

Network egress charges

When you create a dataset in BigQuery, you must choose a location for the data. If you choose US, you can load data into tables in the dataset from a Cloud Storage bucket in any other region. When you load data from another region into a US dataset, there is currently no charge for internet egress.

If you choose a location other than US, you must do one of the following:

  • Load data from a Cloud Storage bucket in that region (the bucket can be either a multi-regional bucket or a regional bucket in the same region as the dataset)
  • Copy the data into a bucket in that region

When you copy data from one Cloud Storage region to another, Cloud Storage network pricing applies.

Storage pricing

Loading data into BigQuery is free, with the exception of a small charge for streamed data. Storage pricing is based on the amount of data stored in your tables, which we calculate based on the types of data you store. For a detailed explanation of how we calculate your data size, see data size calculation.

US and EU multi-region locations

Beyond your first 10 GB of storage, you are charged as follows:

| Storage action | Cost |
|---|---|
| Storage | $0.02 per GB, per month |
| Streaming Inserts | $0.01 per 200 MB successfully inserted, with individual rows calculated using a 1 KB minimum size |

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Storage pricing is prorated per MB, per second. For example, if you store:

  • 100 MB for half a month, you pay $0.001 (a tenth of a cent)
  • 500 GB for half a month, you pay $5
  • 1 TB for a full month, you pay $20
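The proration in these examples can be reproduced directly; a sketch using the $0.02 per GB rate from the table above (the 10 GB free tier is ignored here, as in the examples):

```python
RATE_PER_GB_MONTH = 0.02  # US/EU multi-region storage rate

def storage_cost(size_gb, fraction_of_month):
    """Prorated storage cost in USD, ignoring the 10 GB free tier."""
    return size_gb * RATE_PER_GB_MONTH * fraction_of_month

print(round(storage_cost(100 / 1024, 0.5), 4))  # 100 MB, half a month -> 0.001
print(storage_cost(500, 0.5))                   # 500 GB, half a month -> 5.0
print(round(storage_cost(1024, 1.0), 2))        # 1 TB, full month -> 20.48
```

With 1 TB defined as 1,024 GB this works out to $20.48; the $20 figure above is the rounded approximation.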

Tokyo region

Beyond your first 10 GB of storage, you are charged as follows:

| Storage action | Cost |
|---|---|
| Storage | $0.023 per GB, per month |
| Streaming Inserts | $0.012 per 200 MB successfully inserted, with individual rows calculated using a 1 KB minimum size |

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Storage pricing is prorated per MB, per second. For example, if you store:

  • 100 MB for half a month, you pay $0.001 (a tenth of a cent)
  • 500 GB for half a month, you pay $5
  • 1 TB for a full month, you pay $20

Free tier pricing

The first 10 GB of storage is free of charge each month (per billing account).

Long term storage pricing

If a table is not edited for 90 consecutive days, the price of storage for that table automatically drops by approximately 50 percent.

There is no degradation of performance, durability, availability, or any other functionality when a table is considered long term storage.

If the table is edited, the price reverts back to the regular storage pricing, and the 90-day timer starts counting from zero.

Any action that modifies the data in a table resets the 90-day timer, including load jobs, copy jobs that write into the table, and query jobs that write to it as a destination table:

| Action | Notes |
|---|---|
| append | Any job that has a destination table and uses the WRITE_APPEND write disposition. |
| overwrite | Any job that has a destination table and uses the WRITE_TRUNCATE write disposition. |
| streaming | Ingesting data using the Tabledata.insertAll() API call. |

All other actions do not reset the timer, including:

  • Query from
  • Create view
  • Export
  • Copy from
  • Patch

Each partition of a partitioned table is considered separately for long-term storage pricing. If a partition hasn't been modified in the last 90 days, the data in that partition is considered long term storage and is charged at the discounted price.

For tables that reach the 90-day threshold during a billing cycle, the price is prorated accordingly.
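As a hypothetical illustration of that proration, consider a 100 GB table (US/EU multi-region rates from the tables above) that crosses the 90-day threshold 10 days into a 30-day billing cycle; the sizes and dates are illustrative:

```python
ACTIVE_RATE = 0.02      # $/GB per month, regular storage
LONG_TERM_RATE = 0.01   # $/GB per month, long term storage

size_gb, active_days, month_days = 100, 10, 30
# 10 days at the regular rate, the remaining 20 at the long term rate.
cost = size_gb * (ACTIVE_RATE * active_days / month_days
                  + LONG_TERM_RATE * (month_days - active_days) / month_days)
print(round(cost, 4))  # -> 1.3333
```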

Long term storage pricing applies only to BigQuery storage, not to external data sources.

Query pricing

Query pricing refers to the cost of running your SQL commands and user-defined functions. BigQuery charges for queries by using one metric: the number of bytes processed. You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Google Cloud Storage, Google Drive, or Google Cloud Bigtable.

Free Tier pricing

The first 1 TB of data processed per month is free of charge (per billing account).

On-demand pricing

US and EU multi-region locations

Beyond your first 1 TB of data processed in a month, you are charged as follows:

| Resource | Pricing |
|---|---|
| Queries | $5 per TB |

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

  • You aren't charged for queries that return an error, or for queries that retrieve results from the cache.
  • Charges are rounded to the nearest MB, with a minimum 10 MB data processed per table referenced by the query, and with a minimum 10 MB data processed per query.
  • Cancelling a running query job may incur charges up to the full cost of the query as if it had been allowed to run to completion.
  • BigQuery uses a columnar data structure. You're charged according to the total data processed in the columns you select, and the total data per column is calculated based on the types of data in the column. For more information about how your data size is calculated, see data size calculation.
  • If you prefer a stable monthly cost rather than paying the on-demand price, see Flat-rate pricing.
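The rounding and minimum rules in the list above can be sketched as follows; this assumes the 1 TB free tier is already exhausted, and the helper names are illustrative:

```python
MB = 2 ** 20
TB = 2 ** 40
PRICE_PER_TB = 5.0  # US/EU multi-region on-demand rate

def billed_bytes(bytes_per_table):
    """Apply the 10 MB per-table minimum, round the total to the nearest MB,
    then apply the 10 MB per-query minimum."""
    total = sum(max(b, 10 * MB) for b in bytes_per_table)
    total = round(total / MB) * MB  # round to the nearest MB
    return max(total, 10 * MB)

def query_cost(bytes_per_table):
    return billed_bytes(bytes_per_table) / TB * PRICE_PER_TB

# One table with only 1.4 MB scanned is billed at the 10 MB minimum:
print(billed_bytes([int(1.4 * MB)]) // MB)           # -> 10
# Two tables, 25.3 MB and 2 MB scanned: billed as 35 MB (25.3 + 10, rounded):
print(billed_bytes([int(25.3 * MB), 2 * MB]) // MB)  # -> 35
# Scanning exactly 1 TB costs the full on-demand rate:
print(query_cost([TB]))                              # -> 5.0
```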

Tokyo region

Beyond your first 1 TB of data processed in a month, you are charged as follows:

| Resource | Pricing |
|---|---|
| Queries | $8.55 per TB |

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

  • You aren't charged for queries that return an error, or for queries that retrieve results from the cache.
  • Charges are rounded to the nearest MB, with a minimum 10 MB data processed per table referenced by the query, and with a minimum 10 MB data processed per query.
  • Cancelling a running query job may incur charges up to the full cost of the query as if it had been allowed to run to completion.
  • BigQuery uses a columnar data structure. You're charged according to the total data processed in the columns you select, and the total data per column is calculated based on the types of data in the column. For more information about how your data size is calculated, see data size calculation.
  • If you prefer a stable monthly cost rather than paying the on-demand price, see Flat-rate pricing.

Flat-rate pricing

BigQuery offers flat-rate pricing for high-volume or enterprise customers who prefer a stable monthly cost for queries rather than paying the on-demand price per TB of data processed. If you choose flat-rate pricing, the cost of all bytes processed is included in the monthly flat-rate price.

BigQuery automatically manages your slots quota based on customer history, usage, and spend. For customers with at least $40,000 in monthly analytics spend, BigQuery offers several ways to increase the number of allocated slots.

Flat-rate pricing:

  • Applies only to query costs, not to storage. See Storage pricing for storage costs.
  • Applies to all projects that are linked to the billing account where flat-rate pricing is applied.
  • Provides additional BigQuery slots. See the following table for details.
  • Provides additional query concurrency for interactive queries.

| Monthly costs | BigQuery slots |
|---|---|
| $40,000 | 2,000 |
| Each additional $10,000 | 500 additional slots |
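The slot schedule above implies a simple formula; a sketch (the function name is illustrative):

```python
def slots_for_spend(monthly_usd):
    """Slots implied by the table above: 2,000 slots at $40,000 per month,
    plus 500 slots for each additional $10,000 increment."""
    if monthly_usd < 40_000:
        raise ValueError("flat-rate pricing starts at $40,000 per month")
    return 2_000 + (monthly_usd - 40_000) // 10_000 * 500

print(slots_for_spend(40_000))  # -> 2000
print(slots_for_spend(70_000))  # -> 3500
```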

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Contact your sales representative if you are interested in flat-rate pricing.

Data Manipulation Language pricing

BigQuery charges for DML queries based on the number of bytes processed by the query.

DML pricing for non-partitioned tables

For non-partitioned tables, the number of bytes processed is calculated as follows:

| DML statement | Bytes processed |
|---|---|
| INSERT | The sum of bytes processed for all the columns referenced from the tables scanned by the query. |
| UPDATE | The sum of bytes in all the columns referenced from the tables scanned by the query + the sum of bytes for all columns in the updated table at the time the UPDATE starts. |
| DELETE | The sum of bytes in all the columns referenced from the tables scanned by the query + the sum of bytes for all columns in the modified table at the time the DELETE starts. |
| MERGE | If there are only INSERT clauses in the MERGE statement, you are charged for the sum of bytes processed for all the columns referenced in all tables scanned by the query. If there is an UPDATE or DELETE clause in the MERGE statement, you are charged for the sum of the bytes processed for all the columns referenced in the source tables scanned by the query + the sum of bytes for all columns in the target table (at the time the MERGE starts). |

Non-partitioned table DML pricing examples

The following examples demonstrate how BigQuery calculates bytes read for DML statements that modify non-partitioned tables.

Example 1: Non-partitioned table UPDATE

table1 has two columns: col1 of type INTEGER and col2 of type STRING.

UPDATE table1 SET col1 = 1 WHERE col1 = 2;

Bytes processed in this example =

  • sum of the number of bytes in col1 +
  • sum of the number of bytes in col2

Example 2: Non-partitioned table UPDATE

table1 has two columns: col1 of type INTEGER and col2 of type STRING. table2 has one column: field1 of type INTEGER.

UPDATE table1 SET col1 = 1 WHERE col1 IN (SELECT field1 FROM table2)

Bytes processed in this example =

  • sum of the number of bytes in table1.col1 before UPDATE +
  • sum of the number of bytes in table1.col2 before UPDATE +
  • sum of the number of bytes in table2.field1
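With hypothetical column sizes, the Example 2 formula works out as follows; the row counts and string sizes are illustrative, not from the source, and the per-value sizes come from the Data size calculation section:

```python
# table1: 1,000 rows; col1 is INTEGER (8 bytes per value), col2 is STRING
# (2 bytes + encoded size; assume 10-byte strings -> 12 bytes per value).
# table2: 500 rows; field1 is INTEGER.
table1_col1 = 1_000 * 8
table1_col2 = 1_000 * 12
table2_field1 = 500 * 8

bytes_processed = table1_col1 + table1_col2 + table2_field1
print(bytes_processed)  # -> 24000
```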

DML pricing for partitioned tables

For partitioned tables, the number of bytes processed is calculated as follows:

| DML statement | Bytes processed |
|---|---|
| INSERT | The sum of bytes processed for all the columns referenced in all partitions scanned by the query. |
| UPDATE | The sum of bytes processed for all the columns referenced in all partitions for the tables scanned by the query + the sum of bytes for all columns in the updated or scanned partitions for the table being updated (at the time the UPDATE starts). |
| DELETE | The sum of bytes processed for all the columns referenced in all partitions for the tables scanned by the query + the sum of bytes for all columns in the modified or scanned partitions for the table being modified (at the time the DELETE starts). |
| MERGE | If there are only INSERT clauses in the MERGE statement, you are charged for the sum of bytes processed for all the columns referenced in all partitions scanned by the query. If there is an UPDATE or DELETE clause in the MERGE statement, you are charged for the sum of the bytes processed for all the columns referenced in all partitions for the source tables scanned by the query + the sum of bytes for all columns in the updated, deleted, or scanned partitions for the target table (at the time the MERGE starts). |

Partitioned table DML pricing examples

The following examples demonstrate how BigQuery calculates bytes read for DML statements that modify ingestion-time and partitioned tables. To view the JSON schema representations for the tables used in the examples, see Tables used in examples on the Updating Partitioned Table Data Using DML Statements page.

Example 1: Ingestion-time partitioned table INSERT

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mytable has two columns: field1 of type INTEGER and field2 of type STRING.

INSERT INTO mytable (_PARTITIONTIME, field1) SELECT TIMESTAMP(DATE(ts)), id FROM mytable2

Bytes processed in this example =

  • sum of the number of bytes in mytable2.ts +
  • sum of the number of bytes in mytable2.id

The size of the table into which the rows are inserted (mytable) does not affect the cost of the query.

Example 2: Partitioned table INSERT

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mycolumntable has four columns: field1 of type INTEGER, field2 of type STRING, field3 of type BOOLEAN, and ts of type TIMESTAMP.

INSERT INTO mycolumntable (ts, field1) SELECT ts, id FROM mytable2

Bytes processed in this example =

  • sum of the number of bytes in mytable2.ts +
  • sum of the number of bytes in mytable2.id

The size of the table into which the rows are inserted (mycolumntable) does not affect the cost of the query.

Example 3: Ingestion-time partitioned table UPDATE

DML statement 1: Updating a single partition

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mytable has two columns: field1 of type INTEGER and field2 of type STRING.

UPDATE project.mydataset.mytable T SET T.field1 = T.field1 + 100 WHERE T._PARTITIONTIME = TIMESTAMP("2017-05-01") AND EXISTS (SELECT S.id FROM project.mydataset.mytable2 S WHERE S.id = T.field1)

Bytes processed in this example =

  • sum of the number of bytes in mytable2.id +
  • sum of the number of bytes in mytable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mytable.field2 in the "2017-05-01" partition

DML statement 2: Updating a partition based on another partition in the table

UPDATE project.mydataset.mytable T SET T._PARTITIONTIME = TIMESTAMP("2017-06-01"), T.field1 = T.field1 + 100 WHERE T._PARTITIONTIME = TIMESTAMP("2017-05-01") AND EXISTS (SELECT 1 FROM project.mydataset.mytable S WHERE S.field1 = T.field1 AND S._PARTITIONTIME = TIMESTAMP("2017-06-01"))

Bytes processed in this example =

  • sum of the number of bytes in mytable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mytable.field2 in the "2017-05-01" partition +
  • sum of the number of bytes in mytable.field1 in the "2017-06-01" partition +
  • sum of the number of bytes in mytable.field2 in the "2017-06-01" partition

In this case, the cost of the UPDATE statement is the sum of sizes of all fields in the partitions corresponding to "2017-05-01" and "2017-06-01".
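With hypothetical per-partition column sizes, DML statement 2 above is charged as follows (all byte counts are illustrative):

```python
# Hypothetical per-partition sizes, in bytes, of mytable's columns.
partition_bytes = {
    "2017-05-01": {"field1": 8_000, "field2": 20_000},
    "2017-06-01": {"field1": 4_000, "field2": 10_000},
}
# The UPDATE scans every column of both referenced partitions.
bytes_processed = sum(sum(cols.values()) for cols in partition_bytes.values())
print(bytes_processed)  # -> 42000
```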

Example 4: Partitioned table UPDATE

DML statement 1: Updating a single partition

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mycolumntable has four columns: field1 of type INTEGER, field2 of type STRING, field3 of type BOOLEAN, and ts of type TIMESTAMP.

UPDATE project.mydataset.mycolumntable T SET T.field1 = T.field1 + 100 WHERE DATE(T.ts) = "2017-05-01" AND EXISTS (SELECT S.id FROM project.mydataset.mytable2 S WHERE S.id = T.field1)

Bytes processed in this example =

  • sum of the number of bytes in mytable2.id +
  • sum of the number of bytes in mycolumntable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field2 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field3 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.ts in the "2017-05-01" partition

DML statement 2: Updating a partition based on another partition in the table

UPDATE project.mydataset.mycolumntable T SET T.ts = TIMESTAMP("2017-06-01"), T.field1 = T.field1 + 100 WHERE DATE(T.ts) = "2017-05-01" AND EXISTS (SELECT 1 FROM project.mydataset.mycolumntable S WHERE S.field1 = T.field1 AND DATE(S.ts) = "2017-06-01")

Bytes processed in this example =

  • sum of the number of bytes in mycolumntable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field2 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field3 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.ts in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field1 in the "2017-06-01" partition +
  • sum of the number of bytes in mycolumntable.field2 in the "2017-06-01" partition +
  • sum of the number of bytes in mycolumntable.field3 in the "2017-06-01" partition +
  • sum of the number of bytes in mycolumntable.ts in the "2017-06-01" partition

In this case, the cost of the UPDATE statement is the sum of sizes of all fields in the partitions corresponding to "2017-05-01" and "2017-06-01".

Example 5: Ingestion-time partitioned table DELETE

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mytable has two columns: field1 of type INTEGER and field2 of type STRING.

DELETE project.mydataset.mytable T WHERE T._PARTITIONTIME = TIMESTAMP("2017-05-01") AND EXISTS (SELECT S.id FROM project.mydataset.mytable2 S WHERE S.id = T.field1)

Bytes processed in this example =

  • sum of the number of bytes in mytable2.id +
  • sum of the number of bytes in mytable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mytable.field2 in the "2017-05-01" partition

Example 6: Partitioned table DELETE

mytable2 has two columns: id of type INTEGER and ts of type TIMESTAMP. mycolumntable has four columns: field1 of type INTEGER, field2 of type STRING, field3 of type BOOLEAN, and ts of type TIMESTAMP.

DELETE project.mydataset.mycolumntable T WHERE DATE(T.ts) = "2017-05-01" AND EXISTS (SELECT S.id FROM project.mydataset.mytable2 S WHERE S.id = T.field1)

Bytes processed in this example =

  • sum of the number of bytes in mytable2.id +
  • sum of the number of bytes in mycolumntable.field1 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field2 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.field3 in the "2017-05-01" partition +
  • sum of the number of bytes in mycolumntable.ts in the "2017-05-01" partition

Data size calculation

When you load data into BigQuery or query the data, you're charged according to the data size. We calculate your data size based on the size of each data type.

| Data type | Size |
|---|---|
| INT64/INTEGER | 8 bytes |
| FLOAT64/FLOAT | 8 bytes |
| NUMERIC | 16 bytes |
| BOOL/BOOLEAN | 1 byte |
| STRING | 2 bytes + the UTF-8 encoded string size |
| BYTES | 2 bytes + the number of bytes in the value |
| DATE | 8 bytes |
| DATETIME | 8 bytes |
| TIME | 8 bytes |
| TIMESTAMP | 8 bytes |
| STRUCT/RECORD | 0 bytes + the size of the contained fields |

Null values for any data type are calculated as 0 bytes.

A repeated column is stored as an array, and the size is calculated based on the number of values. For example, an integer column (INT64) that is repeated (ARRAY<INT64>) and contains 4 entries is calculated as 32 bytes (4 entries x 8 bytes).
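These rules are straightforward to encode; a sketch covering the INT64, STRING, and repeated cases from the table above (the helper names are illustrative):

```python
def int64_size(value):
    """INT64: 8 bytes; NULL counts as 0 bytes."""
    return 0 if value is None else 8

def string_size(value):
    """STRING: 2 bytes plus the UTF-8 encoded size; NULL counts as 0 bytes."""
    return 0 if value is None else 2 + len(value.encode("utf-8"))

def repeated_int64_size(values):
    """ARRAY<INT64>: 8 bytes per entry."""
    return 8 * len(values)

print(int64_size(42))                     # -> 8
print(string_size("hamlet"))              # -> 8 (2 + 6 encoded bytes)
print(string_size(None))                  # -> 0
print(repeated_int64_size([1, 2, 3, 4]))  # -> 32
```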

Sample query costs for on-demand pricing

When you run a query, you're charged according to the total data processed in the columns you select, even if you set an explicit LIMIT on the results. The total bytes per column is calculated based on the types of data in the column. For more information about how we calculate your data size, see data size calculation.

The following table shows several sample queries and a description of how many bytes are processed for each query.

| Sample query | Bytes processed |
|---|---|
| SELECT corpus, word FROM publicdata:samples.shakespeare LIMIT 1; | Total size of the corpus and word columns. |
| SELECT corpus FROM (SELECT * FROM publicdata:samples.shakespeare); | Total size of the corpus column. |
| SELECT COUNT(*) FROM publicdata:samples.shakespeare; | No bytes processed. |
| SELECT COUNT(corpus) FROM publicdata:samples.shakespeare; | Total size of the corpus column. |
| SELECT COUNT(*) FROM publicdata:samples.shakespeare WHERE corpus = 'hamlet'; | Total size of the corpus column. |
| SELECT shakes.corpus, wiki.language FROM publicdata:samples.shakespeare AS shakes JOIN EACH publicdata:samples.wikipedia AS wiki ON shakes.corpus = wiki.title; | Total size of the shakes.corpus, wiki.language, and wiki.title columns. |

BigQuery Data Transfer Service pricing

The BigQuery Data Transfer Service charges monthly on a prorated basis. You are charged as follows:

| Source application | Monthly charge (prorated) |
|---|---|
| Google AdWords | $2.50 per unique Customer ID (ExternalCustomerIDs in the Customer table, including zero-impression Customer IDs). |
| DoubleClick Campaign Manager | $2.50 per unique Advertiser ID (Advertiser IDs in the impression table). |
| DoubleClick for Publishers | $100 per Network ID. |
| YouTube Channel and YouTube Content Owner | No charge through July 1, 2018. Pricing for YouTube to be announced at a later date. |

After data is transferred to BigQuery, standard BigQuery storage and query pricing applies. For additional pricing details, contact Sales.

Calculating unique IDs

Each transfer you create generates 1 or more runs per day. Each run maintains a record of each unique ID encountered and the date the transfer run completes. IDs are only counted on the day the transfer completes. For example, if a transfer run begins on July 14th but completes on July 15th, the unique IDs are counted on July 15th.

If a unique ID is encountered in more than one transfer run on a particular day, it is counted only once. Unique IDs are counted separately for different transfers. If a unique ID is encountered in runs for two separate transfers, the ID is counted twice.

Examples

Example 1: You have 1 transfer with 3 runs that complete on the same day.

  • The first run records the following unique IDs: A, B, and C
  • The second run records: A
  • The third run records: C and D

Since all runs finish on the same day, you are charged based on 4 unique IDs — A, B, C, D. Because ID A and ID C were recorded in two different runs that completed on the same day, IDs A and C are counted only once. If the 3 transfer runs complete every day for a month, your monthly charge is based on 4 unique IDs. If the transfer runs complete fewer times than the number of days in the month in which they run, the charges are prorated.

Example 2: You have multiple transfers with runs that complete on the same day.

  • Transfer 1 runs and records the following unique IDs: A, B, and C
  • Transfer 2 runs and records: A
  • Transfer 3 runs and records: C and D

Because the unique IDs are counted in runs for different transfers, you are charged based on 6 unique IDs — A, B, and C from transfer 1's run; A from transfer 2's run; and C and D from transfer 3's run. If the transfer runs complete fewer times than the number of days in the month in which they run, the charges are prorated.
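Both examples follow one rule: de-duplicate IDs within a transfer's same-day runs, then sum across transfers. A sketch (the function name is illustrative):

```python
def billable_ids(transfers):
    """transfers: one list per transfer, each containing the lists of IDs
    recorded by that transfer's runs completing on the same day."""
    return sum(len({i for run in runs for i in run})  # de-dupe within a transfer
               for runs in transfers)

# Example 1: one transfer, three runs -> A, B, C, D each counted once.
print(billable_ids([[["A", "B", "C"], ["A"], ["C", "D"]]]))      # -> 4
# Example 2: three separate transfers -> IDs counted per transfer, 6 total.
print(billable_ids([[["A", "B", "C"]], [["A"]], [["C", "D"]]]))  # -> 6
```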

Calculating backfill charges

If you schedule a backfill, a transfer run is scheduled for each day. You are then charged based on the method described in Calculating unique IDs.

Stopping BigQuery Data Transfer Service charges

To stop incurring charges, disable or delete your transfer.