Vertex AI pricing

Prices are listed in US Dollars (USD). If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Vertex AI pricing compared to legacy product pricing

The costs for Vertex AI are the same as for the legacy AI Platform and AutoML products that Vertex AI supersedes, with the following exceptions:

  • Legacy AI Platform Prediction and AutoML Tables predictions supported lower-cost, lower-performance machine types that aren't supported for Vertex AI Prediction and AutoML tabular.

  • Legacy AI Platform Prediction supported scale-to-zero, which isn't supported for Vertex AI Prediction.

Vertex AI also offers additional ways to optimize costs.

Pricing for Generative AI on Vertex AI

For Generative AI on Vertex AI pricing information, see Pricing for Generative AI on Vertex AI.

Pricing for AutoML models

For Vertex AI AutoML models, you pay for three main activities:

  • Training the model
  • Deploying the model to an endpoint
  • Using the model to make predictions

Vertex AI uses predefined machine configurations for Vertex AutoML models, and the hourly rate for these activities reflects the resource usage.

The time required to train your model depends on the size and complexity of your training data. Models must be deployed before they can provide online predictions or online explanations.

You pay for each model deployed to an endpoint, even if no prediction is made. You must undeploy your model to stop incurring further charges. Models that are not deployed or have failed to deploy are not charged.

You pay only for the compute hours used. If training fails for any reason other than a user-initiated cancellation, you are not billed for that time. If you cancel the operation, you are charged for the training time that elapsed before the cancellation.

Select a model type below for pricing information.

Image data

Operation | Price per node hour (classification) | Price per node hour (object detection)
Training | $3.465 | $3.465
Training (Edge on-device model) | $18.00 | $18.00
Deployment and online prediction | $1.375 | $2.002
Batch prediction | $2.222 | $2.222

Video data

Operation | Price per node hour (classification, object tracking) | Price per node hour (action recognition)
Training | $3.234 | $3.300
Training (Edge on-device model) | $10.78 | $11.00
Predictions | $0.462 | $0.550

Tabular data

Operation | Price per node hour (classification/regression) | Price (forecasting)
Training | $21.252 | Refer to Vertex AI Forecast
Prediction | Same price as predictions for custom-trained models. Vertex AI performs batch prediction using 40 n1-highmem-8 machines. | Refer to Vertex AI Forecast

Text data

Operation | Price
Legacy data upload (PDF only) | First 1,000 pages free each month; $1.50 per 1,000 pages; $0.60 per 1,000 pages over 5,000,000
Training | $3.30 per hour
Deployment | $0.05 per hour
Prediction | $5.00 per 1,000 text records; $25.00 per 1,000 document pages, such as PDF files (legacy only)

Prices for Vertex AutoML text prediction requests are computed based on the number of text records you send for analysis. A text record is plain text of up to 1,000 Unicode characters (including whitespace and any markup such as HTML or XML tags).

If the text provided in a prediction request contains more than 1,000 characters, it counts as one text record for each 1,000 characters. For example, if you send three requests that contain 800, 1,500, and 600 characters respectively, you would be charged for four text records: one for the first request (800), two for the second request (1,500), and one for the third request (600).
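
To make the counting rule concrete, here is a minimal Python sketch (a hypothetical helper, not part of any Vertex AI SDK) that reproduces the example above and estimates the cost at the $5.00 per 1,000 text records rate.

```python
import math

PRICE_PER_1000_RECORDS = 5.00  # USD, text prediction rate from the table above

def text_records(text: str) -> int:
    """One billable record per started block of 1,000 Unicode characters (whitespace and markup count)."""
    return max(1, math.ceil(len(text) / 1000))

requests = ["a" * 800, "b" * 1500, "c" * 600]           # 800, 1,500, and 600 characters
total_records = sum(text_records(t) for t in requests)  # 1 + 2 + 1 = 4 records
estimated_cost = total_records / 1000 * PRICE_PER_1000_RECORDS

print(total_records, f"${estimated_cost:.4f}")          # 4 $0.0200
```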

Prediction charges for Vertex Explainable AI

Compute associated with Vertex Explainable AI is charged at the same rate as prediction. However, explanations take longer to process than normal predictions, so heavy usage of Vertex Explainable AI combined with auto-scaling could result in more nodes being started, which would increase prediction charges.

Vertex AI Forecast

AutoML

Stage | Pricing
Prediction | $0.2 per 1K data points* (0-1M points); $0.1 per 1K data points* (1M-50M points); $0.02 per 1K data points* (>50M points)
Training | $21.25 per hour in all regions
Explainable AI | Explainability using Shapley values. Refer to the Vertex AI Prediction and Explanation pricing page.

* A prediction data point is one time point in the forecast horizon. For example, with daily granularity, a 7-day horizon is 7 data points per time series (a worked example follows the list below).

  • Up to 5 prediction quantiles can be included at no additional cost.
  • The number of data points consumed per tier is refreshed monthly.
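
The sketch below is a rough, unofficial estimate of AutoML forecast prediction cost under the tiered rates in the table above; the function name and inputs are illustrative only.

```python
def forecast_prediction_cost(num_series: int, horizon_points: int) -> float:
    """Estimate prediction cost in USD from the tiered per-1,000-data-point rates above."""
    points = num_series * horizon_points          # one data point per time point per series
    tiers = [                                     # (points in tier, USD per 1,000 points)
        (1_000_000, 0.20),
        (49_000_000, 0.10),
        (float("inf"), 0.02),
    ]
    cost, remaining = 0.0, points
    for size, rate in tiers:
        used = min(remaining, size)
        cost += used / 1000 * rate
        remaining -= used
        if remaining == 0:
            break
    return cost

# 10,000 series forecast at daily granularity over a 7-day horizon = 70,000 data points:
print(f"${forecast_prediction_cost(10_000, 7):.2f}")     # $14.00
```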

ARIMA+

Stage | Pricing
Prediction | $5.00 per TB
Training | $250.00 per TB x Number of Candidate Models x Number of Backtesting Windows*
Explainable AI | Explainability with time series decomposition does not add any additional cost. Explainability using Shapley values is not supported.

Refer to the BigQuery ML pricing page for additional details. Each training and prediction job incurs the cost of 1 managed pipeline run, as described in Vertex AI pricing.

* A backtesting window is created for each period in the test set. The AUTO_ARIMA_MAX_ORDER setting determines the number of candidate models, which ranges from 6 to 42 for models with multiple time series.
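
As a rough illustration of the training formula above (not an official calculator; the billed bytes come from BigQuery ML), the sketch below simply multiplies the three factors together with placeholder values.

```python
def arima_plus_training_cost(data_tb: float, candidate_models: int, backtesting_windows: int) -> float:
    """$250.00 per TB x number of candidate models x number of backtesting windows."""
    return 250.00 * data_tb * candidate_models * backtesting_windows

# Illustrative values: 0.01 TB of data, 42 candidate models, 5 backtesting windows.
print(f"${arima_plus_training_cost(0.01, 42, 5):.2f}")   # $525.00
```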

Custom-trained models

Training

The tables below provide the approximate price per hour of various training configurations. You can choose a custom configuration of selected machine types. To calculate pricing, sum the costs of the virtual machines you use.

If you use Compute Engine machine types and attach accelerators, the cost of the accelerators is separate. To calculate this cost, multiply the prices in the table of accelerators below by how many machine hours of each type of accelerator you use.

Machine types

* This amount includes the GPU price, since this instance type always requires a fixed number of GPU accelerators.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Accelerators

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

* The price for training using a Cloud TPU Pod is based on the number of cores in the Pod. The number of cores in a pod is always a multiple of 32. To determine the price of training on a Pod that has more than 32 cores, take the price for a 32-core Pod, and multiply it by the number of cores, divided by 32. For example, for a 128-core Pod, the price is (32-core Pod price) * (128/32). For information about which Cloud TPU Pods are available for a specific region, see System Architecture in the Cloud TPU documentation.
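
A minimal sketch of the Pod pricing rule described above; the 32-core price used here is a placeholder, not an actual rate.

```python
def tpu_pod_price(price_per_32_core_pod: float, num_cores: int) -> float:
    """Pod price scales linearly with core count, in multiples of 32 cores."""
    if num_cores % 32 != 0:
        raise ValueError("Cloud TPU Pod core counts are always a multiple of 32")
    return price_per_32_core_pod * (num_cores / 32)

# A 128-core Pod costs four times the 32-core Pod price:
print(tpu_pod_price(price_per_32_core_pod=100.0, num_cores=128))  # 400.0
```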

Disks

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

You are charged for training your models from the moment when resources are provisioned for a job until the job finishes.

Scale tiers for predefined configurations (AI Platform Training)

You can control the type of processing cluster to use when training your model. The simplest way is to choose from one of the predefined configurations called scale tiers. Read more about scale tiers.

Machine types for custom configurations

If you use Vertex AI or select CUSTOM as your scale tier for AI Platform Training, you have control over the number and type of virtual machines to use for the cluster's master, worker and parameter servers. Read more about machine types for Vertex AI and machine types for AI Platform Training.

The cost of training with a custom processing cluster is the sum of all the machines you specify. You are charged for the total time of the job, not for the active processing time of individual machines.
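
As a rough, unofficial sketch of this rule, the example below sums hourly rates for a small custom cluster and multiplies by the total job duration; the machine names and rates are illustrative placeholders drawn from the approximate N1 prices listed elsewhere on this page.

```python
# Hourly rates below are placeholders, not a quote for any specific region.
cluster_hourly_rates = {
    "master (n1-highmem-8)":            0.5445,
    "worker-0 (n1-highmem-8)":          0.5445,
    "worker-1 (n1-highmem-8)":          0.5445,
    "parameter-server (n1-standard-4)": 0.2186,
}
job_hours = 3.5  # billed from resource provisioning until the job finishes

total_cost = sum(cluster_hourly_rates.values()) * job_hours
print(f"${total_cost:.2f}")  # $6.48
```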

Gen AI Evaluation Service

The Vertex AI Gen AI Evaluation Service charges for string input and output fields per 1,000 characters. One character is defined as one Unicode character, and whitespace is excluded from the count. Failed evaluation requests, including filtered responses, are not charged for input or output. At the end of each billing cycle, fractions of one cent ($0.01) are rounded to one cent.

Gen AI Evaluation Service is generally available (GA). Pricing took effect on September 27, 2024.

Metric | Pricing
Pointwise | Input: $0.005 per 1k characters; Output: $0.015 per 1k characters
Pairwise | Input: $0.005 per 1k characters; Output: $0.015 per 1k characters

Computation-based metrics are charged at $0.00003 per 1k characters for input and $0.00009 per 1k characters for output. They appear as Automatic Metric in the SKU.
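
The following sketch is a rough, unofficial cost estimate for an evaluation request using the per-1,000-character rates above; the whitespace handling mirrors the description in the pricing text.

```python
RATES = {                          # USD per 1,000 characters: (input, output)
    "pointwise":   (0.005, 0.015),
    "pairwise":    (0.005, 0.015),
    "computation": (0.00003, 0.00009),
}

def billable_chars(text: str) -> int:
    """Count Unicode characters, excluding whitespace."""
    return sum(1 for ch in text if not ch.isspace())

def eval_cost(metric_kind: str, input_text: str, output_text: str) -> float:
    in_rate, out_rate = RATES[metric_kind]
    return (billable_chars(input_text) / 1000 * in_rate
            + billable_chars(output_text) / 1000 * out_rate)

# 4,000 non-whitespace input characters and 1,000 output characters with a pointwise metric:
print(f"${eval_cost('pointwise', 'x' * 4000, 'y' * 1000):.4f}")  # $0.0350
```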

Metric name | Type
Exact Match | Computation-based
Bleu | Computation-based
Rouge | Computation-based
Tool Call Valid | Computation-based
Tool Name Match | Computation-based
Tool Parameter Key Match | Computation-based
Tool Parameter KV Match | Computation-based

Prices are listed in US Dollars (USD). If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Legacy model-based metrics are charged at $0.005 per 1k characters for input and $0.015 per 1k characters for output.

Metric name | Type
Coherence | Pointwise
Fluency | Pointwise
Fulfillment | Pointwise
Safety | Pointwise
Groundedness | Pointwise
Summarization Quality | Pointwise
Summarization Helpfulness | Pointwise
Summarization Verbosity | Pointwise
Question Answering Quality | Pointwise
Question Answering Relevance | Pointwise
Question Answering Helpfulness | Pointwise
Question Answering Correctness | Pointwise
Pairwise Summarization Quality | Pairwise
Pairwise Question Answering Quality | Pairwise

Prices are listed in US Dollars (USD). If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Ray on Vertex AI

Training

The tables below provide the approximate price per hour of various training configurations. You can choose a custom configuration of selected machine types. To calculate pricing, sum the costs of the virtual machines you use.

If you use Compute Engine machine types and attach accelerators, the cost of the accelerators is separate. To calculate this cost, multiply the prices in the table of accelerators below by how many machine hours of each type of accelerator you use.

Machine types

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Accelerators

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

* The price for training using a Cloud TPU Pod is based on the number of cores in the Pod. The number of cores in a pod is always a multiple of 32. To determine the price of training on a Pod that has more than 32 cores, take the price for a 32-core Pod, and multiply it by the number of cores, divided by 32. For example, for a 128-core Pod, the price is (32-core Pod price) * (128/32). For information about which Cloud TPU Pods are available for a specific region, see System Architecture in the Cloud TPU documentation.

Disks

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

You are charged for training your models from the moment when resources are provisioned for a job until the job finishes.

Prediction and explanation

The following tables provide the prices of batch prediction, online prediction, and online explanation per node hour. A node hour represents the time a virtual machine spends running your prediction job or waiting in an active state (an endpoint with one or more models deployed) to handle prediction or explanation requests.

Choose a region to view its pricing table:

Pricing for the Americas

The following tables provide the price per node hour for each machine type.

E2 Series

e2-standard-2approximations:

us-west2$0.0926
us-west4$0.0868
us-east4$0.0868
northamerica-northeast1$0.0848
northamerica-northeast2$0.0848
southamerica-east1$0.1223
Other Americas regions$0.0771
e2-standard-4approximations:
us-west2$0.1851
us-west4$0.1736
us-east4$0.1736
northamerica-northeast1$0.1697
northamerica-northeast2$0.1697
southamerica-east1$0.2446
Other Americas regions$0.1541
e2-standard-8approximations:
us-west2$0.3702
us-west4$0.3471
us-east4$0.3471
northamerica-northeast1$0.3393
northamerica-northeast2$0.3393
southamerica-east1$0.4893
Other Americas regions$0.3082
e2-standard-16approximations:
us-west2$0.7405
us-west4$0.6942
us-east4$0.6942
northamerica-northeast1$0.6787
northamerica-northeast2$0.6787
southamerica-east1$0.9786
Other Americas regions$0.6165
e2-standard-32approximations:
us-west2$1.4809
us-west4$1.3885
us-east4$1.3885
northamerica-northeast1$1.3574
northamerica-northeast2$1.3574
southamerica-east1$1.9572
Other Americas regions$1.2329
e2-highmem-2approximations:
us-west2$0.1249
us-west4$0.1171
us-east4$0.1171
northamerica-northeast1$0.1144
northamerica-northeast2$0.1144
southamerica-east1$0.165
Other Americas regions$0.1039
e2-highmem-4approximations:
us-west2$0.2497
us-west4$0.2341
us-east4$0.2341
northamerica-northeast1$0.2289
northamerica-northeast2$0.2289
southamerica-east1$0.33
Other Americas regions$0.2079
e2-highmem-8approximations:
us-west2$0.4994
us-west4$0.4682
us-east4$0.4682
northamerica-northeast1$0.4578
northamerica-northeast2$0.4578
southamerica-east1$0.66
Other Americas regions$0.4158
e2-highmem-16approximations:
us-west2$0.9989
us-west4$0.9365
us-east4$0.9365
northamerica-northeast1$0.9155
northamerica-northeast2$0.9155
southamerica-east1$1.3201
Other Americas regions$0.8316
e2-highcpu-2approximations:
us-west2$0.0683
us-west4$0.0641
us-east4$0.0641
northamerica-northeast1$0.0626
northamerica-northeast2$0.0626
southamerica-east1$0.0903
Other Americas regions$0.0569
e2-highcpu-4approximations:
us-west2$0.1367
us-west4$0.1281
us-east4$0.1281
northamerica-northeast1$0.1253
northamerica-northeast2$0.1253
southamerica-east1$0.1806
Other Americas regions$0.1138
e2-highcpu-8approximations:
us-west2$0.2733
us-west4$0.2563
us-east4$0.2563
northamerica-northeast1$0.2505
northamerica-northeast2$0.2505
southamerica-east1$0.3612
Other Americas regions$0.2276
e2-highcpu-16approximations:
us-west2$0.5467
us-west4$0.5126
us-east4$0.5126
northamerica-northeast1$0.501
northamerica-northeast2$0.501
southamerica-east1$0.7225
Other Americas regions$0.4551
e2-highcpu-32approximations:
us-west2$1.0933
us-west4$1.0252
us-east4$1.0252
northamerica-northeast1$1.0021
northamerica-northeast2$1.0021
southamerica-east1$1.4449
Other Americas regions$0.9102

N1 Series

n1-standard-2approximations:

us-east4$0.123
northamerica-northeast1$0.1203
Other Americas regions$0.1093
n1-standard-4approximations:
us-east4$0.2461
northamerica-northeast1$0.2405
Other Americas regions$0.2186
n1-standard-8approximations:
us-east4$0.4922
northamerica-northeast1$0.4811
Other Americas regions$0.4372
n1-standard-16approximations:
us-east4$0.9843
northamerica-northeast1$0.9622
Other Americas regions$0.8744
n1-standard-32approximations:
us-east4$1.9687
northamerica-northeast1$1.9243
Other Americas regions$1.7488
n1-highmem-2approximations:
us-east4$0.1532
northamerica-northeast1$0.1498
Other Americas regions$0.1361
n1-highmem-4approximations:
us-east4$0.3064
northamerica-northeast1$0.2995
Other Americas regions$0.2723
n1-highmem-8approximations:
us-east4$0.6129
northamerica-northeast1$0.5991
Other Americas regions$0.5445
n1-highmem-16approximations:
us-east4$1.2257
northamerica-northeast1$1.1982
Other Americas regions$1.089
n1-highcpu-2approximations:
us-east4$0.0918
northamerica-northeast1$0.0897
Other Americas regions$0.0815
n1-highcpu-4approximations:
us-east4$0.1835
northamerica-northeast1$0.1794
Other Americas regions$0.163
n1-highcpu-8approximations:
us-east4$0.3671
northamerica-northeast1$0.3588
Other Americas regions$0.326
n1-highcpu-16approximations:
us-east4$0.7341
northamerica-northeast1$0.7176
Other Americas regions$0.6519
n1-highcpu-32approximations:
us-east4$1.4683
northamerica-northeast1$1.4352
Other Americas regions$1.3039

N2 Series

n2-standard-2approximations:

northamerica_northeast1$0.123
northamerica_northeast2$0.123
southamerica_east1$0.1773
us_central1$0.1117
us_east1$0.1117
us_east4$0.1258
us_south1$0.1318
us_west1$0.1117
us_west2$0.1341
us_west3$0.1341
us_west4$0.1258
n2-standard-4approximations:
northamerica_northeast1$0.2459
northamerica_northeast2$0.2459
southamerica_east1$0.3546
us_central1$0.2234
us_east1$0.2234
us_east4$0.2516
us_south1$0.2636
us_west1$0.2234
us_west2$0.2683
us_west3$0.2683
us_west4$0.2516
n2-standard-8approximations:
northamerica_northeast1$0.4918
northamerica_northeast2$0.4918
southamerica_east1$0.7091
us_central1$0.4467
us_east1$0.4467
us_east4$0.5031
us_south1$0.5272
us_west1$0.4467
us_west2$0.5366
us_west3$0.5366
us_west4$0.5031
n2-standard-16approximations:
northamerica_northeast1$0.9836
northamerica_northeast2$0.9836
southamerica_east1$1.4183
us_central1$0.8935
us_east1$0.8935
us_east4$1.0063
us_south1$1.0543
us_west1$0.8935
us_west2$1.0732
us_west3$1.0732
us_west4$1.0062
n2-standard-32approximations:
northamerica_northeast1$1.9673
northamerica_northeast2$1.9673
southamerica_east1$2.8365
us_central1$1.787
us_east1$1.787
us_east4$2.0126
us_south1$2.1087
us_west1$1.787
us_west2$2.1464
us_west3$2.1464
us_west4$2.0125
n2-highmem-2approximations:
northamerica_northeast1$0.1659
northamerica_northeast2$0.1659
southamerica_east1$0.2392
us_central1$0.1507
us_east1$0.1507
us_east4$0.1697
us_south1$0.1778
us_west1$0.1507
us_west2$0.181
us_west3$0.181
us_west4$0.1697
n2-highmem-4approximations:
northamerica_northeast1$0.3317
northamerica_northeast2$0.3317
southamerica_east1$0.4783
us_central1$0.3013
us_east1$0.3013
us_east4$0.3394
us_south1$0.3556
us_west1$0.3013
us_west2$0.3619
us_west3$0.3619
us_west4$0.3393
n2-highmem-8approximations:
northamerica_northeast1$0.6634
northamerica_northeast2$0.6634
southamerica_east1$0.9566
us_central1$0.6027
us_east1$0.6027
us_east4$0.6787
us_south1$0.7112
us_west1$0.6027
us_west2$0.7239
us_west3$0.7239
us_west4$0.6787
n2-highmem-16approximations:
northamerica_northeast1$1.3269
northamerica_northeast2$1.3269
southamerica_east1$1.9132
us_central1$1.2053
us_east1$1.2053
us_east4$1.3574
us_south1$1.4223
us_west1$1.2053
us_west2$1.4477
us_west3$1.4477
us_west4$1.3574
n2-highcpu-2approximations:
northamerica_northeast1$0.0908
northamerica_northeast2$0.0908
southamerica_east1$0.1309
us_central1$0.0825
us_east1$0.0825
us_east4$0.0929
us_south1$0.0973
us_west1$0.0825
us_west2$0.099
us_west3$0.099
us_west4$0.0929
n2-highcpu-4approximations:
northamerica_northeast1$0.1815
northamerica_northeast2$0.1815
southamerica_east1$0.2618
us_central1$0.1649
us_east1$0.1649
us_east4$0.1857
us_south1$0.1946
us_west1$0.1649
us_west2$0.1981
us_west3$0.1981
us_west4$0.1857
n2-highcpu-8approximations:
northamerica_northeast1$0.3631
northamerica_northeast2$0.3631
southamerica_east1$0.5235
us_central1$0.3298
us_east1$0.3298
us_east4$0.3715
us_south1$0.3892
us_west1$0.3298
us_west2$0.3961
us_west3$0.3961
us_west4$0.3714
n2-highcpu-16approximations:
northamerica_northeast1$0.7262
northamerica_northeast2$0.7262
southamerica_east1$1.0471
us_central1$0.6596
us_east1$0.6596
us_east4$0.7429
us_south1$0.7783
us_west1$0.6596
us_west2$0.7923
us_west3$0.7923
us_west4$0.7429
n2-highcpu-32approximations:
northamerica_northeast1$1.4523
northamerica_northeast2$1.4523
southamerica_east1$2.0941
us_central1$1.3192
us_east1$1.3192
us_east4$1.4858
us_south1$1.5567
us_west1$1.3192
us_west2$1.5846
us_west3$1.5846
us_west4$1.4858

N2D Series

n2d-standard-2approximations:

northamerica_northeast1$0.107
southamerica_east1$0.1542
us_central1$0.0972
us_east1$0.0972
us_east4$0.1094
us_west1$0.0972
us_west2$0.1167
us_west4$0.1094
n2d-standard-4approximations:
northamerica_northeast1$0.2139
southamerica_east1$0.3085
us_central1$0.1943
us_east1$0.1943
us_east4$0.2189
us_west1$0.1943
us_west2$0.2334
us_west4$0.2189
n2d-standard-8approximations:
northamerica_northeast1$0.4279
southamerica_east1$0.617
us_central1$0.3887
us_east1$0.3887
us_east4$0.4377
us_west1$0.3887
us_west2$0.4668
us_west4$0.4377
n2d-standard-16approximations:
northamerica_northeast1$0.8558
southamerica_east1$1.2339
us_central1$0.7773
us_east1$0.7773
us_east4$0.8755
us_west1$0.7773
us_west2$0.9336
us_west4$0.8755
n2d-standard-32approximations:
northamerica_northeast1$1.7116
southamerica_east1$2.4678
us_central1$1.5547
us_east1$1.5547
us_east4$1.7509
us_west1$1.5547
us_west2$1.8673
us_west4$1.7509
n2d-highmem-2approximations:
northamerica_northeast1$0.1443
southamerica_east1$0.2081
us_central1$0.1311
us_east1$0.1311
us_east4$0.1476
us_west1$0.1311
us_west2$0.1574
us_west4$0.1476
n2d-highmem-4approximations:
northamerica_northeast1$0.2886
southamerica_east1$0.4161
us_central1$0.2622
us_east1$0.2622
us_east4$0.2952
us_west1$0.2622
us_west2$0.3149
us_west4$0.2952
n2d-highmem-8approximations:
northamerica_northeast1$0.5772
southamerica_east1$0.8323
us_central1$0.5243
us_east1$0.5243
us_east4$0.5905
us_west1$0.5243
us_west2$0.6297
us_west4$0.5905
n2d-highmem-16approximations:
northamerica_northeast1$1.1545
southamerica_east1$1.6646
us_central1$1.0486
us_east1$1.0486
us_east4$1.181
us_west1$1.0486
us_west2$1.2595
us_west4$1.181
n2d-highcpu-2approximations:
northamerica_northeast1$0.079
southamerica_east1$0.1139
us_central1$0.0717
us_east1$0.0717
us_east4$0.0808
us_west1$0.0717
us_west2$0.0862
us_west4$0.0808
n2d-highcpu-4approximations:
northamerica_northeast1$0.1579
southamerica_east1$0.2277
us_central1$0.1435
us_east1$0.1435
us_east4$0.1616
us_west1$0.1435
us_west2$0.1723
us_west4$0.1616
n2d-highcpu-8approximations:
northamerica_northeast1$0.3159
southamerica_east1$0.4555
us_central1$0.2869
us_east1$0.2869
us_east4$0.3232
us_west1$0.2869
us_west2$0.3446
us_west4$0.3232
n2d-highcpu-16approximations:
northamerica_northeast1$0.6318
southamerica_east1$0.9109
us_central1$0.5739
us_east1$0.5739
us_east4$0.6463
us_west1$0.5739
us_west2$0.6893
us_west4$0.6463
n2d-highcpu-32approximations:
northamerica_northeast1$1.2636
southamerica_east1$1.8219
us_central1$1.1477
us_east1$1.1477
us_east4$1.2927
us_west1$1.1477
us_west2$1.3786
us_west4$1.2927

C2 Series

c2-standard-4approximations:

northamerica_northeast1$0.264
southamerica_east1$0.3812
us_central1$0.24
us_east1$0.24
us_east4$0.2702
us_west1$0.24
us_west2$0.2884
us_west3$0.2889
us_west4$0.2702
c2-standard-8approximations:
northamerica_northeast1$0.5281
southamerica_east1$0.7623
us_central1$0.4801
us_east1$0.4801
us_east4$0.5405
us_west1$0.4801
us_west2$0.5768
us_west3$0.5778
us_west4$0.5405
c2-standard-16approximations:
northamerica_northeast1$1.0562
southamerica_east1$1.5246
us_central1$0.9601
us_east1$0.9601
us_east4$1.081
us_west1$0.9601
us_west2$1.1537
us_west3$1.1555
us_west4$1.081
c2-standard-30approximations:
northamerica_northeast1$1.9803
southamerica_east1$2.8587
us_central1$1.8002
us_east1$1.8002
us_east4$2.0269
us_west1$1.8002
us_west2$2.1631
us_west3$2.1666
us_west4$2.0269
c2-standard-60approximations:
northamerica_northeast1$3.9606
southamerica_east1$5.7173
us_central1$3.6004
us_east1$3.6004
us_east4$4.0537
us_west1$3.6004
us_west2$4.3263
us_west3$4.3332
us_west4$4.0537

C2D Series

c2d-standard-2approximations:

us_central1$0.1044
us_east1$0.1044
us_east4$0.1176
us_west1$0.1044
us_west4$0.1176
c2d-standard-4approximations:
us_central1$0.2088
us_east1$0.2088
us_east4$0.2352
us_west1$0.2088
us_west4$0.2352
c2d-standard-8approximations:
us_central1$0.4177
us_east1$0.4177
us_east4$0.4704
us_west1$0.4177
us_west4$0.4704
c2d-standard-16approximations:
us_central1$0.8353
us_east1$0.8353
us_east4$0.9408
us_west1$0.8353
us_west4$0.9408
c2d-standard-32approximations:
us_central1$1.6707
us_east1$1.6707
us_east4$1.8815
us_west1$1.6707
us_west4$1.8815
c2d-standard-56approximations:
us_central1$2.9237
us_east1$2.9237
us_east4$3.2926
us_west1$2.9237
us_west4$3.2926
c2d-standard-112approximations:
us_central1$5.8474
us_east1$5.8474
us_east4$6.5853
us_west1$5.8474
us_west4$6.5853
c2d-highmem-2approximations:
us_central1$0.1408
us_east1$0.1408
us_east4$0.1586
us_west1$0.1408
us_west4$0.1586
c2d-highmem-4approximations:
us_central1$0.2817
us_east1$0.2817
us_east4$0.3172
us_west1$0.2817
us_west4$0.3172
c2d-highmem-8approximations:
us_central1$0.5634
us_east1$0.5634
us_east4$0.6344
us_west1$0.5634
us_west4$0.6344
c2d-highmem-16approximations:
us_central1$1.1267
us_east1$1.1267
us_east4$1.2689
us_west1$1.1267
us_west4$1.2689
c2d-highmem-32approximations:
us_central1$2.2534
us_east1$2.2534
us_east4$2.5377
us_west1$2.2534
us_west4$2.5377
c2d-highmem-56approximations:
us_central1$3.9435
us_east1$3.9435
us_east4$4.441
us_west1$3.9435
us_west4$4.441
c2d-highmem-112approximations:
us_central1$7.887
us_east1$7.887
us_east4$8.882
us_west1$7.887
us_west4$8.882
c2d-highcpu-2approximations:
us_central1$0.0862
us_east1$0.0862
us_east4$0.0971
us_west1$0.0862
us_west4$0.0971
c2d-highcpu-4approximations:
us_central1$0.1724
us_east1$0.1724
us_east4$0.1942
us_west1$0.1724
us_west4$0.1942
c2d-highcpu-8approximations:
us_central1$0.3448
us_east1$0.3448
us_east4$0.3884
us_west1$0.3448
us_west4$0.3884
c2d-highcpu-16approximations:
us_central1$0.6896
us_east1$0.6896
us_east4$0.7767
us_west1$0.6896
us_west4$0.7767
c2d-highcpu-32approximations:
us_central1$1.3793
us_east1$1.3793
us_east4$1.5534
us_west1$1.3793
us_west4$1.5534
c2d-highcpu-56approximations:
us_central1$2.4138
us_east1$2.4138
us_east4$2.7185
us_west1$2.4138
us_west4$2.7185
c2d-highcpu-112approximations:
us_central1$4.8275
us_east1$4.8275
us_east4$5.4369
us_west1$4.8275
us_west4$5.4369

C3 Series

c3-highcpu-4approximations:

us_central1$0.1982
us_east1$0.1982
us_east4$0.2232
c3-highcpu-8approximations:
us_central1$0.3965
us_east1$0.3965
us_east4$0.4465
c3-highcpu-22approximations:
us_central1$1.0903
us_east1$1.0903
us_east4$1.2278
c3-highcpu-44approximations:
us_central1$2.1806
us_east1$2.1806
us_east4$2.4556
c3-highcpu-88approximations:
us_central1$4.3613
us_east1$4.3613
us_east4$4.9113
c3-highcpu-176approximations:
us_central1$8.7226
us_east1$8.7226
us_east4$9.8226

A2 Series

a2-highgpu-1gapproximations:

us-central1$4.2245
a2-highgpu-2gapproximations:
us-central1$8.449
a2-highgpu-4gapproximations:
us-central1$16.898
a2-highgpu-8gapproximations:
us-central1$33.796
a2-megagpu-16gapproximations:
us-central1$64.1021
a2-ultragpu-1gapproximations:
us-central1$5.7818
us-east4$6.3524
a2-ultragpu-2gapproximations:
us-central1$11.5637
us-east4$12.7048
a2-ultragpu-4gapproximations:
us-central1$23.1274
us-east4$25.4095
a2-ultragpu-8gapproximations:
us-central1$46.2548
us-east4$50.8191

A3 Series

a3-highgpu-8gapproximations:
us-central1$101.0074
us-east4$101.0074

G2 Series

g2-standard-4approximations:

us-central1$0.8129
g2-standard-8approximations:
us-central1$0.9818
g2-standard-12approximations:
us-central1$1.1507
g2-standard-16approximations:
us-central1$1.3196
g2-standard-24approximations:
us-central1$2.3014
g2-standard-32approximations:
us-central1$1.9951
g2-standard-48approximations:
us-central1$4.6028
g2-standard-96approximations:
us-central1$9.2055

TPU v5e
ct5lp-hightpu-1t Approximations:
us-west1 $1.38
ct5lp-hightpu-4t Approximations:
us-west1 $5.52
ct5lp-hightpu-8t Approximations:
us-west1 $11.04

Pricing for Europe

The following tables provide the price per node hour for each machine type.

E2 Series

e2-standard-2approximations:

europe-west1$0.0848
europe-west2$0.0993
europe-west3$0.0993
europe-west4$0.0848
europe-west6$0.1078
europe-west9$0.1079
e2-standard-4approximations:
europe-west1$0.1695
europe-west2$0.1986
europe-west3$0.1986
europe-west4$0.1697
europe-west6$0.2156
europe-west9$0.2158
e2-standard-8approximations:
europe-west1$0.3391
europe-west2$0.3971
europe-west3$0.3971
europe-west4$0.3393
europe-west6$0.4313
europe-west9$0.4316
e2-standard-16approximations:
europe-west1$0.6782
europe-west2$0.7943
europe-west3$0.7943
europe-west4$0.6787
europe-west6$0.8626
europe-west9$0.8631
e2-standard-32approximations:
europe-west1$1.3563
europe-west2$1.5885
europe-west3$1.5885
europe-west4$1.3574
europe-west6$1.7251
europe-west9$1.7262
e2-highmem-2approximations:
europe-west1$0.1144
europe-west2$0.1339
europe-west3$0.1339
europe-west4$0.1144
europe-west6$0.1454
europe-west9$0.1455
e2-highmem-4approximations:
europe-west1$0.2287
europe-west2$0.2679
europe-west3$0.2679
europe-west4$0.2289
europe-west6$0.2909
europe-west9$0.2911
e2-highmem-8approximations:
europe-west1$0.4574
europe-west2$0.5357
europe-west3$0.5357
europe-west4$0.4578
europe-west6$0.5818
europe-west9$0.5822
e2-highmem-16approximations:
europe-west1$0.9149
europe-west2$1.0714
europe-west3$1.0714
europe-west4$0.9155
europe-west6$1.1636
europe-west9$1.1643
e2-highcpu-2approximations:
europe-west1$0.0626
europe-west2$0.0733
europe-west3$0.0733
europe-west4$0.0626
europe-west6$0.0796
europe-west9$0.0796
e2-highcpu-4approximations:
europe-west1$0.1252
europe-west2$0.1466
europe-west3$0.1466
europe-west4$0.1253
europe-west6$0.1592
europe-west9$0.1593
e2-highcpu-8approximations:
europe-west1$0.2503
europe-west2$0.2932
europe-west3$0.2932
europe-west4$0.2505
europe-west6$0.3184
europe-west9$0.3186
e2-highcpu-16approximations:
europe-west1$0.5006
europe-west2$0.5864
europe-west3$0.5864
europe-west4$0.501
europe-west6$0.6368
europe-west9$0.6372
e2-highcpu-32approximations:
europe-west1$1.0013
europe-west2$1.1728
europe-west3$1.1728
europe-west4$1.0021
europe-west6$1.2736
europe-west9$1.2743

N1 Series

n1-standard-2approximations:

europe-west2$0.1408
Other Europe regions$0.1265
n1-standard-4approximations:
europe-west2$0.2815
Other Europe regions$0.2531
n1-standard-8approximations:
europe-west2$0.563
Other Europe regions$0.5061
n1-standard-16approximations:
europe-west2$1.126
Other Europe regions$1.0123
n1-standard-32approximations:
europe-west2$2.2521
Other Europe regions$2.0245
n1-highmem-2approximations:
europe-west2$0.1753
Other Europe regions$0.1575
n1-highmem-4approximations:
europe-west2$0.3506
Other Europe regions$0.3151
n1-highmem-8approximations:
europe-west2$0.7011
Other Europe regions$0.6302
n1-highmem-16approximations:
europe-west2$1.4022
Other Europe regions$1.2603
n1-highcpu-2approximations:
europe-west2$0.105
Other Europe regions$0.0944
n1-highcpu-4approximations:
europe-west2$0.21
Other Europe regions$0.1888
n1-highcpu-8approximations:
europe-west2$0.4199
Other Europe regions$0.3776
n1-highcpu-16approximations:
europe-west2$0.8398
Other Europe regions$0.7552
n1-highcpu-32approximations:
europe-west2$1.6796
Other Europe regions$1.5104

N2 Series

n2-standard-2approximations:

europe_central2$0.1439
europe_west1$0.1229
europe_west2$0.1439
europe_west3$0.1439
europe_west4$0.1229
europe_west6$0.1564
europe_west9$0.1296
n2-standard-4approximations:
europe_central2$0.2878
europe_west1$0.2457
europe_west2$0.2878
europe_west3$0.2878
europe_west4$0.2457
europe_west6$0.3127
europe_west9$0.2591
n2-standard-8approximations:
europe_central2$0.5756
europe_west1$0.4914
europe_west2$0.5756
europe_west3$0.5756
europe_west4$0.4914
europe_west6$0.6254
europe_west9$0.5182
n2-standard-16approximations:
europe_central2$1.1511
europe_west1$0.9829
europe_west2$1.1511
europe_west3$1.1511
europe_west4$0.9828
europe_west6$1.2508
europe_west9$1.0364
n2-standard-32approximations:
europe_central2$2.3023
europe_west1$1.9658
europe_west2$2.3023
europe_west3$2.3023
europe_west4$1.9657
europe_west6$2.5017
europe_west9$2.0729
n2-highmem-2approximations:
europe_central2$0.1941
europe_west1$0.1657
europe_west2$0.1941
europe_west3$0.1941
europe_west4$0.1657
europe_west6$0.2109
europe_west9$0.1748
n2-highmem-4approximations:
europe_central2$0.3882
europe_west1$0.3315
europe_west2$0.3882
europe_west3$0.3882
europe_west4$0.3315
europe_west6$0.4218
europe_west9$0.3495
n2-highmem-8approximations:
europe_central2$0.7764
europe_west1$0.663
europe_west2$0.7764
europe_west3$0.7764
europe_west4$0.6629
europe_west6$0.8436
europe_west9$0.6991
n2-highmem-16approximations:
europe_central2$1.5528
europe_west1$1.3259
europe_west2$1.5528
europe_west3$1.5528
europe_west4$1.3259
europe_west6$1.6873
europe_west9$1.3982
n2-highcpu-2approximations:
europe_central2$0.1062
europe_west1$0.0907
europe_west2$0.1062
europe_west3$0.1062
europe_west4$0.0907
europe_west6$0.1154
europe_west9$0.0956
n2-highcpu-4approximations:
europe_central2$0.2125
europe_west1$0.1814
europe_west2$0.2125
europe_west3$0.2125
europe_west4$0.1814
europe_west6$0.2309
europe_west9$0.1913
n2-highcpu-8approximations:
europe_central2$0.4249
europe_west1$0.3628
europe_west2$0.4249
europe_west3$0.4249
europe_west4$0.3628
europe_west6$0.4617
europe_west9$0.3826
n2-highcpu-16approximations:
europe_central2$0.8499
europe_west1$0.7256
europe_west2$0.8499
europe_west3$0.8499
europe_west4$0.7256
europe_west6$0.9235
europe_west9$0.7651
n2-highcpu-32approximations:
europe_central2$1.6997
europe_west1$1.4512
europe_west2$1.6997
europe_west3$1.6997
europe_west4$1.4511
europe_west6$1.847
europe_west9$1.5303

N2D Series

n2d-standard-2approximations:

europe_west1$0.1069
europe_west2$0.1252
europe_west3$0.1252
europe_west4$0.107
europe_west9$0.1127
n2d-standard-4approximations:
europe_west1$0.2138
europe_west2$0.2504
europe_west3$0.2504
europe_west4$0.2139
europe_west9$0.2254
n2d-standard-8approximations:
europe_west1$0.4275
europe_west2$0.5007
europe_west3$0.5007
europe_west4$0.4279
europe_west9$0.4509
n2d-standard-16approximations:
europe_west1$0.8551
europe_west2$1.0015
europe_west3$1.0015
europe_west4$0.8558
europe_west9$0.9017
n2d-standard-32approximations:
europe_west1$1.7102
europe_west2$2.0029
europe_west3$2.0029
europe_west4$1.7116
europe_west9$1.8034
n2d-highmem-2approximations:
europe_west1$0.1442
europe_west2$0.1689
europe_west3$0.1689
europe_west4$0.1443
europe_west9$0.1521
n2d-highmem-4approximations:
europe_west1$0.2884
europe_west2$0.3377
europe_west3$0.3377
europe_west4$0.2886
europe_west9$0.3041
n2d-highmem-8approximations:
europe_west1$0.5768
europe_west2$0.6755
europe_west3$0.6755
europe_west4$0.5772
europe_west9$0.6082
n2d-highmem-16approximations:
europe_west1$1.1535
europe_west2$1.3509
europe_west3$1.3509
europe_west4$1.1545
europe_west9$1.2164
n2d-highcpu-2approximations:
europe_west1$0.0789
europe_west2$0.0924
europe_west3$0.0924
europe_west4$0.079
europe_west9$0.0832
n2d-highcpu-4approximations:
europe_west1$0.1578
europe_west2$0.1848
europe_west3$0.1848
europe_west4$0.1579
europe_west9$0.1664
n2d-highcpu-8approximations:
europe_west1$0.3156
europe_west2$0.3697
europe_west3$0.3697
europe_west4$0.3159
europe_west9$0.3328
n2d-highcpu-16approximations:
europe_west1$0.6313
europe_west2$0.7394
europe_west3$0.7394
europe_west4$0.6318
europe_west9$0.6657
n2d-highcpu-32approximations:
europe_west1$1.2625
europe_west2$1.4787
europe_west3$1.4787
europe_west4$1.2636
europe_west9$1.3314

C2 Series

c2-standard-4approximations:

europe_west1$0.2641
europe_west2$0.3094
europe_west3$0.3092
europe_west4$0.2643
europe_west6$0.3362
c2-standard-8approximations:
europe_west1$0.5283
europe_west2$0.6187
europe_west3$0.6184
europe_west4$0.5285
europe_west6$0.6724
c2-standard-16approximations:
europe_west1$1.0565
europe_west2$1.2375
europe_west3$1.2368
europe_west4$1.0571
europe_west6$1.3449
c2-standard-30approximations:
europe_west1$1.981
europe_west2$2.3202
europe_west3$2.3191
europe_west4$1.982
europe_west6$2.5216
c2-standard-60approximations:
europe_west1$3.962
europe_west2$4.6404
europe_west3$4.6382
europe_west4$3.964
europe_west6$5.0432

C2D Series

c2d-standard-2approximations:

europe_west1$0.115
europe_west2$0.1345
europe_west3$0.1345
europe_west4$0.115
c2d-standard-4approximations:
europe_west1$0.2299
europe_west2$0.269
europe_west3$0.269
europe_west4$0.2299
c2d-standard-8approximations:
europe_west1$0.4599
europe_west2$0.5381
europe_west3$0.5381
europe_west4$0.4599
c2d-standard-16approximations:
europe_west1$0.9198
europe_west2$1.0762
europe_west3$1.0762
europe_west4$0.9198
c2d-standard-32approximations:
europe_west1$1.8395
europe_west2$2.1524
europe_west3$2.1524
europe_west4$1.8395
c2d-standard-56approximations:
europe_west1$3.2191
europe_west2$3.7666
europe_west3$3.7666
europe_west4$3.2191
c2d-standard-112approximations:
europe_west1$6.4383
europe_west2$7.5333
europe_west3$7.5333
europe_west4$6.4383
c2d-highmem-2approximations:
europe_west1$0.1551
europe_west2$0.1814
europe_west3$0.1814
europe_west4$0.1551
c2d-highmem-4approximations:
europe_west1$0.3101
europe_west2$0.3629
europe_west3$0.3629
europe_west4$0.3101
c2d-highmem-8approximations:
europe_west1$0.6203
europe_west2$0.7258
europe_west3$0.7258
europe_west4$0.6203
c2d-highmem-16approximations:
europe_west1$1.2406
europe_west2$1.4515
europe_west3$1.4515
europe_west4$1.2406
c2d-highmem-32approximations:
europe_west1$2.4812
europe_west2$2.9031
europe_west3$2.9031
europe_west4$2.4812
c2d-highmem-56approximations:
europe_west1$4.342
europe_west2$5.0804
europe_west3$5.0804
europe_west4$4.342
c2d-highmem-112approximations:
europe_west1$8.684
europe_west2$10.1608
europe_west3$10.1608
europe_west4$8.684
c2d-highcpu-2approximations:
europe_west1$0.0949
europe_west2$0.1111
europe_west3$0.1111
europe_west4$0.0949
c2d-highcpu-4approximations:
europe_west1$0.1898
europe_west2$0.2221
europe_west3$0.2221
europe_west4$0.1898
c2d-highcpu-8approximations:
europe_west1$0.3797
europe_west2$0.4442
europe_west3$0.4442
europe_west4$0.3797
c2d-highcpu-16approximations:
europe_west1$0.7593
europe_west2$0.8885
europe_west3$0.8885
europe_west4$0.7593
c2d-highcpu-32approximations:
europe_west1$1.5187
europe_west2$1.777
europe_west3$1.777
europe_west4$1.5187
c2d-highcpu-56approximations:
europe_west1$2.6577
europe_west2$3.1097
europe_west3$3.1097
europe_west4$2.6577
c2d-highcpu-112approximations:
europe_west1$5.3154
europe_west2$6.2195
europe_west3$6.2195
europe_west4$5.3154

C3 Series

c3-highcpu-4approximations:

europe_west1$0.218
europe_west4$0.2182
c3-highcpu-8approximations:
europe_west1$0.4361
europe_west4$0.4365
c3-highcpu-22approximations:
europe_west1$1.1992
europe_west4$1.2003
c3-highcpu-44approximations:
europe_west1$2.3984
europe_west4$2.4006
c3-highcpu-88approximations:
europe_west1$4.7969
europe_west4$4.8013
c3-highcpu-176approximations:
europe_west1$9.5938
europe_west4$9.6026

A2 Series

a2-highgpu-1gapproximations:

europe-west4$4.3103
a2-highgpu-2gapproximations:
europe-west4$8.6205
a2-highgpu-4gapproximations:
europe-west4$17.2411
a2-highgpu-8gapproximations:
europe-west4$34.4822
a2-megagpu-16gapproximations:
europe-west4$65.1222
a2-ultragpu-1gapproximations:
europe-west4$6.3661
a2-ultragpu-2gapproximations:
europe-west4$12.7321
a2-ultragpu-4gapproximations:
europe-west4$25.4643
a2-ultragpu-8gapproximations:
europe-west4$50.9286

G2 Series

g2-standard-4approximations:

europe-west4$0.8951
g2-standard-8approximations:
europe-west4$1.081
g2-standard-12approximations:
europe-west4$1.2669
g2-standard-16approximations:
europe-west4$1.4528
g2-standard-24approximations:
europe-west4$2.5338
g2-standard-32approximations:
europe-west4$2.1965
g2-standard-48approximations:
europe-west4$5.0677
g2-standard-96approximations:
europe-west4$10.1354

Pricing for Asia Pacific

The following tables provide the price per node hour for each machine type.

E2 Series

e2-standard-2approximations:

asia-east1$0.0892
asia-east2$0.1078
asia-northeast1$0.0989
asia-northeast3$0.0989
asia-south1$0.0926
asia-southeast1$0.0951
australia-southeast1$0.1093
e2-standard-4approximations:
asia-east1$0.1785
asia-east2$0.2156
asia-northeast1$0.1977
asia-northeast3$0.1977
asia-south1$0.1851
asia-southeast1$0.1901
australia-southeast1$0.2187
e2-standard-8approximations:
asia-east1$0.3569
asia-east2$0.4313
asia-northeast1$0.3954
asia-northeast3$0.3954
asia-south1$0.3702
asia-southeast1$0.3802
australia-southeast1$0.4373
e2-standard-16approximations:
asia-east1$0.7138
asia-east2$0.8626
asia-northeast1$0.7909
asia-northeast3$0.7909
asia-south1$0.7405
asia-southeast1$0.7605
australia-southeast1$0.8747
e2-standard-32approximations:
asia-east1$1.4276
asia-east2$1.7251
asia-northeast1$1.5817
asia-northeast3$1.5817
asia-south1$1.4809
asia-southeast1$1.5209
australia-southeast1$1.7494
e2-highmem-2approximations:
asia-east1$0.1204
asia-east2$0.1454
asia-northeast1$0.1333
asia-northeast3$0.1333
asia-south1$0.1249
asia-southeast1$0.1282
australia-southeast1$0.1475
e2-highmem-4approximations:
asia-east1$0.2407
asia-east2$0.2909
asia-northeast1$0.2665
asia-northeast3$0.2665
asia-south1$0.2497
asia-southeast1$0.2564
australia-southeast1$0.295
e2-highmem-8approximations:
asia-east1$0.4815
asia-east2$0.5818
asia-northeast1$0.533
asia-northeast3$0.533
asia-south1$0.4994
asia-southeast1$0.5129
australia-southeast1$0.59
e2-highmem-16approximations:
asia-east1$0.963
asia-east2$1.1636
asia-northeast1$1.0661
asia-northeast3$1.0661
asia-south1$0.9989
asia-southeast1$1.0258
australia-southeast1$1.1799
e2-highcpu-2approximations:
asia-east1$0.0659
asia-east2$0.0796
asia-northeast1$0.0731
asia-northeast3$0.0731
asia-south1$0.0683
asia-southeast1$0.0702
australia-southeast1$0.0807
e2-highcpu-4approximations:
asia-east1$0.1317
asia-east2$0.1592
asia-northeast1$0.1461
asia-northeast3$0.1461
asia-south1$0.1367
asia-southeast1$0.1404
australia-southeast1$0.1614
e2-highcpu-8approximations:
asia-east1$0.2635
asia-east2$0.3184
asia-northeast1$0.2922
asia-northeast3$0.2922
asia-south1$0.2733
asia-southeast1$0.2807
australia-southeast1$0.3229
e2-highcpu-16approximations:
asia-east1$0.527
asia-east2$0.6368
asia-northeast1$0.5845
asia-northeast3$0.5845
asia-south1$0.5467
asia-southeast1$0.5615
australia-southeast1$0.6458
e2-highcpu-32approximations:
asia-east1$1.0539
asia-east2$1.2736
asia-northeast1$1.169
asia-northeast3$1.169
asia-south1$1.0933
asia-southeast1$1.1229
australia-southeast1$1.2916

N1 Series

n1-standard-2approximations:

asia-northeast1$0.1402
asia-southeast1$0.1348
australia-southeast1$0.155
Other Asia Pacific regions$0.1265
n1-standard-4approximations:
asia-northeast1$0.2803
asia-southeast1$0.2695
australia-southeast1$0.31
Other Asia Pacific regions$0.2531
n1-standard-8approximations:
asia-northeast1$0.5606
asia-southeast1$0.5391
australia-southeast1$0.6201
Other Asia Pacific regions$0.5061
n1-standard-16approximations:
asia-northeast1$1.1213
asia-southeast1$1.0782
australia-southeast1$1.2401
Other Asia Pacific regions$1.0123
n1-standard-32approximations:
asia-northeast1$2.2426
asia-southeast1$2.1564
australia-southeast1$2.4802
Other Asia Pacific regions$2.0245
n1-highmem-2approximations:
asia-northeast1$0.1744
asia-southeast1$0.1678
australia-southeast1$0.193
Other Asia Pacific regions$0.1575
n1-highmem-4approximations:
asia-northeast1$0.3489
asia-southeast1$0.3357
australia-southeast1$0.3861
Other Asia Pacific regions$0.3151
n1-highmem-8approximations:
asia-northeast1$0.6977
asia-southeast1$0.6713
australia-southeast1$0.7721
Other Asia Pacific regions$0.6302
n1-highmem-16approximations:
asia-northeast1$1.3955
asia-southeast1$1.3426
australia-southeast1$1.5443
Other Asia Pacific regions$1.2603
n1-highcpu-2approximations:
asia-northeast1$0.1046
asia-southeast1$0.1005
australia-southeast1$0.1156
Other Asia Pacific regions$0.0944
n1-highcpu-4approximations:
asia-northeast1$0.2093
asia-southeast1$0.201
australia-southeast1$0.2312
Other Asia Pacific regions$0.1888
n1-highcpu-8approximations:
asia-northeast1$0.4186
asia-southeast1$0.4021
australia-southeast1$0.4624
Other Asia Pacific regions$0.3776
n1-highcpu-16approximations:
asia-northeast1$0.8371
asia-southeast1$0.8041
australia-southeast1$0.9249
Other Asia Pacific regions$0.7552
n1-highcpu-32approximations:
asia-northeast1$1.6742
asia-southeast1$1.6082
australia-southeast1$1.8498
Other Asia Pacific regions$1.5104

N2 Series

n2-standard-2approximations:

asia_east1$0.1293
asia_east2$0.1563
asia_northeast1$0.1433
asia_northeast3$0.1433
asia_south1$0.1341
asia_southeast1$0.1378
asia_southeast2$0.1502
australia_southeast1$0.1585
n2-standard-4approximations:
asia_east1$0.2586
asia_east2$0.3125
asia_northeast1$0.2866
asia_northeast3$0.2866
asia_south1$0.2683
asia_southeast1$0.2756
asia_southeast2$0.3003
australia_southeast1$0.3169
n2-standard-8approximations:
asia_east1$0.5173
asia_east2$0.6251
asia_northeast1$0.5731
asia_northeast3$0.5731
asia_south1$0.5366
asia_southeast1$0.5511
asia_southeast2$0.6007
australia_southeast1$0.6339
n2-standard-16approximations:
asia_east1$1.0346
asia_east2$1.2502
asia_northeast1$1.1462
asia_northeast3$1.1462
asia_south1$1.0731
asia_southeast1$1.1022
asia_southeast2$1.2014
australia_southeast1$1.2678
n2-standard-32approximations:
asia_east1$2.0691
asia_east2$2.5003
asia_northeast1$2.2924
asia_northeast3$2.2924
asia_south1$2.1462
asia_southeast1$2.2044
asia_southeast2$2.4028
australia_southeast1$2.5355
n2-highmem-2approximations:
asia_east1$0.1745
asia_east2$0.2108
asia_northeast1$0.1931
asia_northeast3$0.1931
asia_south1$0.181
asia_southeast1$0.1859
asia_southeast2$0.2026
australia_southeast1$0.2138
n2-highmem-4approximations:
asia_east1$0.3489
asia_east2$0.4216
asia_northeast1$0.3863
asia_northeast3$0.3863
asia_south1$0.3619
asia_southeast1$0.3717
asia_southeast2$0.4052
australia_southeast1$0.4275
n2-highmem-8approximations:
asia_east1$0.6978
asia_east2$0.8432
asia_northeast1$0.7725
asia_northeast3$0.7725
asia_south1$0.7238
asia_southeast1$0.7434
asia_southeast2$0.8103
australia_southeast1$0.8551
n2-highmem-16approximations:
asia_east1$1.3956
asia_east2$1.6865
asia_northeast1$1.545
asia_northeast3$1.545
asia_south1$1.4476
asia_southeast1$1.4868
asia_southeast2$1.6206
australia_southeast1$1.7102
n2-highcpu-2approximations:
asia_east1$0.0955
asia_east2$0.1154
asia_northeast1$0.1059
asia_northeast3$0.1059
asia_south1$0.099
asia_southeast1$0.1017
asia_southeast2$0.1109
australia_southeast1$0.117
n2-highcpu-4approximations:
asia_east1$0.1909
asia_east2$0.2307
asia_northeast1$0.2118
asia_northeast3$0.2118
asia_south1$0.1981
asia_southeast1$0.2034
asia_southeast2$0.2217
australia_southeast1$0.234
n2-highcpu-8approximations:
asia_east1$0.3819
asia_east2$0.4615
asia_northeast1$0.4235
asia_northeast3$0.4235
asia_south1$0.3961
asia_southeast1$0.4069
asia_southeast2$0.4435
australia_southeast1$0.468
n2-highcpu-16approximations:
asia_east1$0.7637
asia_east2$0.9229
asia_northeast1$0.8471
asia_northeast3$0.8471
asia_south1$0.7923
asia_southeast1$0.8137
asia_southeast2$0.887
australia_southeast1$0.936
n2-highcpu-32approximations:
asia_east1$1.5275
asia_east2$1.8458
asia_northeast1$1.6942
asia_northeast3$1.6942
asia_south1$1.5845
asia_southeast1$1.6275
asia_southeast2$1.7739
australia_southeast1$1.8719

N2D Series

n2d-standard-2approximations:

asia_east1$0.1125
asia_east2$0.136
asia_northeast1$0.1247
asia_south1$0.0641
asia_southeast1$0.1199
australia_southeast1$0.1379
n2d-standard-4approximations:
asia_east1$0.225
asia_east2$0.2719
asia_northeast1$0.2493
asia_south1$0.1283
asia_southeast1$0.2397
australia_southeast1$0.2757
n2d-standard-8approximations:
asia_east1$0.45
asia_east2$0.5438
asia_northeast1$0.4986
asia_south1$0.2565
asia_southeast1$0.4795
australia_southeast1$0.5515
n2d-standard-16approximations:
asia_east1$0.9001
asia_east2$1.0876
asia_northeast1$0.9972
asia_south1$0.513
asia_southeast1$0.959
australia_southeast1$1.103
n2d-standard-32approximations:
asia_east1$1.8001
asia_east2$2.1752
asia_northeast1$1.9945
asia_south1$1.0261
asia_southeast1$1.9179
australia_southeast1$2.206
n2d-highmem-2approximations:
asia_east1$0.1518
asia_east2$0.1834
asia_northeast1$0.168
asia_south1$0.0865
asia_southeast1$0.1617
australia_southeast1$0.186
n2d-highmem-4approximations:
asia_east1$0.3035
asia_east2$0.3668
asia_northeast1$0.3361
asia_south1$0.173
asia_southeast1$0.3234
australia_southeast1$0.372
n2d-highmem-8approximations:
asia_east1$0.6071
asia_east2$0.7336
asia_northeast1$0.6721
asia_south1$0.346
asia_southeast1$0.6468
australia_southeast1$0.744
n2d-highmem-16approximations:
asia_east1$1.2142
asia_east2$1.4672
asia_northeast1$1.3443
asia_south1$0.6921
asia_southeast1$1.2936
australia_southeast1$1.4879
n2d-highcpu-2approximations:
asia_east1$0.0831
asia_east2$0.1004
asia_northeast1$0.0921
asia_south1$0.0473
asia_southeast1$0.0885
australia_southeast1$0.1018
n2d-highcpu-4approximations:
asia_east1$0.1661
asia_east2$0.2007
asia_northeast1$0.1842
asia_south1$0.0947
asia_southeast1$0.177
australia_southeast1$0.2036
n2d-highcpu-8approximations:
asia_east1$0.3322
asia_east2$0.4015
asia_northeast1$0.3685
asia_south1$0.1894
asia_southeast1$0.354
australia_southeast1$0.4071
n2d-highcpu-16approximations:
asia_east1$0.6645
asia_east2$0.8029
asia_northeast1$0.737
asia_south1$0.3787
asia_southeast1$0.708
australia_southeast1$0.8143
n2d-highcpu-32approximations:
asia_east1$1.3289
asia_east2$1.6059
asia_northeast1$1.4739
asia_south1$0.7575
asia_southeast1$1.4159
australia_southeast1$1.6286

C2 Series

c2-standard-4approximations:

asia_east1$0.278
asia_east2$0.336
asia_northeast1$0.308
asia_northeast3$0.308
asia_south1$0.2884
asia_southeast1$0.2962
australia_southeast1$0.3407
c2-standard-8approximations:
asia_east1$0.5561
asia_east2$0.672
asia_northeast1$0.6161
asia_northeast3$0.6161
asia_south1$0.5768
asia_southeast1$0.5924
australia_southeast1$0.6814
c2-standard-16approximations:
asia_east1$1.1122
asia_east2$1.3439
asia_northeast1$1.2321
asia_northeast3$1.2321
asia_south1$1.1536
asia_southeast1$1.1849
australia_southeast1$1.3629
c2-standard-30approximations:
asia_east1$2.0853
asia_east2$2.5199
asia_northeast1$2.3103
asia_northeast3$2.3103
asia_south1$2.1631
asia_southeast1$2.2217
australia_southeast1$2.5553
c2-standard-60approximations:
asia_east1$4.1706
asia_east2$5.0397
asia_northeast1$4.6205
asia_northeast3$4.6205
asia_south1$4.3262
asia_southeast1$4.4433
australia_southeast1$5.1107

C2D Series

c2d-standard-2approximations:

asia_east1$0.1209
asia_south1$0.0689
asia_southeast1$0.1288
c2d-standard-4approximations:
asia_east1$0.2418
asia_south1$0.1378
asia_southeast1$0.2576
c2d-standard-8approximations:
asia_east1$0.4836
asia_south1$0.2757
asia_southeast1$0.5153
c2d-standard-16approximations:
asia_east1$0.9672
asia_south1$0.5513
asia_southeast1$1.0305
c2d-standard-32approximations:
asia_east1$1.9345
asia_south1$1.1027
asia_southeast1$2.0611
c2d-standard-56approximations:
asia_east1$3.3853
asia_south1$1.9297
asia_southeast1$3.6069
c2d-standard-112approximations:
asia_east1$6.7706
asia_south1$3.8593
asia_southeast1$7.2137
c2d-highmem-2approximations:
asia_east1$0.1631
asia_south1$0.093
asia_southeast1$0.1737
c2d-highmem-4approximations:
asia_east1$0.3262
asia_south1$0.1859
asia_southeast1$0.3475
c2d-highmem-8approximations:
asia_east1$0.6523
asia_south1$0.3718
asia_southeast1$0.695
c2d-highmem-16approximations:
asia_east1$1.3046
asia_south1$0.7436
asia_southeast1$1.39
c2d-highmem-32approximations:
asia_east1$2.6092
asia_south1$1.4873
asia_southeast1$2.78
c2d-highmem-56approximations:
asia_east1$4.5662
asia_south1$2.6028
asia_southeast1$4.865
c2d-highmem-112approximations:
asia_east1$9.1323
asia_south1$5.2055
asia_southeast1$9.7299
c2d-highcpu-2approximations:
asia_east1$0.0998
asia_south1$0.0569
asia_southeast1$0.1063
c2d-highcpu-4approximations:
asia_east1$0.1996
asia_south1$0.1138
asia_southeast1$0.2127
c2d-highcpu-8approximations:
asia_east1$0.3993
asia_south1$0.2276
asia_southeast1$0.4254
c2d-highcpu-16approximations:
asia_east1$0.7985
asia_south1$0.4552
asia_southeast1$0.8508
c2d-highcpu-32approximations:
asia_east1$1.5971
asia_south1$0.9104
asia_southeast1$1.7016
c2d-highcpu-56approximations:
asia_east1$2.7949
asia_south1$1.5931
asia_southeast1$2.9778
c2d-highcpu-112approximations:
asia_east1$5.5898
asia_south1$3.1862
asia_southeast1$5.9556

C3 Series

c3-highcpu-4approximations:

asia_southeast1$0.2445
c3-highcpu-8approximations:
asia_southeast1$0.489
c3-highcpu-22approximations:
asia_southeast1$1.3449
c3-highcpu-44approximations:
asia_southeast1$2.6897
c3-highcpu-88approximations:
asia_southeast1$5.3794
c3-highcpu-176approximations:
asia_southeast1$10.7589

A2 Series

a2-highgpu-1gapproximations:

asia-northeast1$4.6575
asia-northeast3$4.6575
asia-southeast1$4.6163
a2-highgpu-2gapproximations:
asia-northeast1$9.3151
asia-northeast3$9.3151
asia-southeast1$9.2327
a2-highgpu-4gapproximations:
asia-northeast1$18.6301
asia-northeast3$18.6301
asia-southeast1$18.4653
a2-highgpu-8gapproximations:
asia-northeast1$37.2603
asia-northeast3$37.2603
asia-southeast1$36.9306
a2-megagpu-16gapproximations:
asia-northeast1$70.0363
asia-northeast3$70.0363
asia-southeast1$69.5557
a2-ultragpu-1gapproximations:
asia-southeast1$7.1328
a2-ultragpu-2gapproximations:
asia-southeast1$14.2657
a2-ultragpu-4gapproximations:
asia-southeast1$28.5314
a2-ultragpu-8gapproximations:
asia-southeast1$57.0628

Pricing for the Middle East

N2 Series

n2-standard-2approximations:

me_west1$0.1229
n2-standard-4approximations:
me_west1$0.2457
n2-standard-8approximations:
me_west1$0.4914
n2-standard-16approximations:
me_west1$0.9828
n2-standard-32approximations:
me_west1$1.9657
n2-highmem-2approximations:
me_west1$0.1657
n2-highmem-4approximations:
me_west1$0.3315
n2-highmem-8approximations:
me_west1$0.6629
n2-highmem-16approximations:
me_west1$1.3259
n2-highcpu-2approximations:
me_west1$0.0907
n2-highcpu-4approximations:
me_west1$0.1814
n2-highcpu-8approximations:
me_west1$0.3628
n2-highcpu-16approximations:
me_west1$0.7256
n2-highcpu-32approximations:
me_west1$1.4511

N2D Series

n2d-standard-2approximations:

me_west1$0.1069
n2d-standard-4approximations:
me_west1$0.2138
n2d-standard-8approximations:
me_west1$0.4275
n2d-standard-16approximations:
me_west1$0.8551
n2d-standard-32approximations:
me_west1$1.7101
n2d-highmem-2approximations:
me_west1$0.1442
n2d-highmem-4approximations:
me_west1$0.2884
n2d-highmem-8approximations:
me_west1$0.5767
n2d-highmem-16approximations:
me_west1$1.1535
n2d-highcpu-2approximations:
me_west1$0.0789
n2d-highcpu-4approximations:
me_west1$0.1578
n2d-highcpu-8approximations:
me_west1$0.3156
n2d-highcpu-16approximations:
me_west1$0.6312
n2d-highcpu-32approximations:
me_west1$1.2625

Each machine type is charged as the following SKUs on your Google Cloud bill:

  • vCPU cost: measured in vCPU hours
  • RAM cost: measured in GB hours
  • GPU cost: measured in GPU hours, whether the GPU is built into the machine or optionally configured

The prices for machine types are used to approximate the total hourly cost for each prediction node of a model version using that machine type.

For example, the n1-highcpu-32 machine type includes 32 vCPUs and 28.8 GB of RAM, so its hourly price equals 32 vCPU hours plus 28.8 GB hours.
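
As a minimal sketch (not an official calculator), the snippet below assembles the n1-highcpu-32 node-hour price from the N1 vCPU and RAM SKU rates for other Americas regions shown in the tables that follow; the result matches the $1.3039 listed for that machine type earlier on this page.

```python
N1_VCPU_RATE = 0.03635495   # USD per vCPU hour, N1 series, other Americas regions
N1_RAM_RATE  = 0.0048783    # USD per GB hour,  N1 series, other Americas regions

def node_hour_price(vcpus: int, ram_gb: float) -> float:
    """Approximate node-hour price as the sum of the vCPU SKU and RAM SKU costs."""
    return vcpus * N1_VCPU_RATE + ram_gb * N1_RAM_RATE

# n1-highcpu-32: 32 vCPUs and 28.8 GB of RAM
print(f"${node_hour_price(32, 28.8):.4f} per node hour")  # $1.3039 per node hour
```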

SKU pricing tables are available by region. Each table shows the vCPU, RAM, and built-in GPU pricing for prediction machine types, which more precisely reflects the SKUs you are charged.

Choose a region to view its SKU pricing table:

SKU pricing for the Americas

E2 Series

vCPU

Location Price per hour
Los Angeles (us-west2) $0.0301288 per vCPU hour
Las Vegas (us-west4) $0.028252 per vCPU hour
N. Virginia (us-east4) $0.028252 per vCPU hour
Montréal (northamerica-northeast1) $0.0276149 per vCPU hour
Toronto (northamerica-northeast2) $0.0276149 per vCPU hour
Sao Paulo (southamerica-east1) $0.0398176 per vCPU hour
Other Americas regions $0.0250826 per vCPU hour

RAM

Location Price per hour
Los Angeles (us-west2) $0.0040376 per GB hour
Las Vegas (us-west4) $0.0037846 per GB hour
N. Virginia (us-east4) $0.0037846 per GB hour
Montréal (northamerica-northeast1) $0.0037007 per GB hour
Toronto (northamerica-northeast2) $0.0037007 per GB hour
Sao Paulo (southamerica-east1) $0.005336 per GB hour
Other Americas regions $0.0033614 per GB hour

N1 Series

vCPU

Location Price per hour
N. Virginia (us-east4) $0.04094575 per vCPU hour
Montréal (northamerica-northeast1) $0.0400223 per vCPU hour
Other Americas regions $0.03635495 per vCPU hour

RAM

Location Price per hour
N. Virginia (us-east4) $0.00548665 per GB hour
Montréal (northamerica-northeast1) $0.0053636 per GB hour
Other Americas regions $0.0048783 per GB hour

N2 Series

vCPU

Location Price per hour
Montréal (northamerica_northeast1) $0.0400223 per vCPU hour
Toronto (northamerica_northeast2) $0.0400223 per vCPU hour
São Paulo (southamerica_east1) $0.057707 per vCPU hour
Iowa (us_central1) $0.0363527 per vCPU hour
South Carolina (us_east1) $0.0363527 per vCPU hour
N. Virginia (us_east4) $0.0409457 per vCPU hour
Dallas (us_south1) $0.0428962 per vCPU hour
Oregon (us_west1) $0.0363527 per vCPU hour
Los Angeles (us_west2) $0.0436655 per vCPU hour
Salt Lake City (us_west3) $0.0436655 per vCPU hour
Las Vegas (us_west4) $0.0409434 per vCPU hour

RAM

Location Price per hour
Montréal (northamerica_northeast1) $0.0053636 per GB hour
Toronto (northamerica_northeast2) $0.0053636 per GB hour
São Paulo (southamerica_east1) $0.0077337 per GB hour
Iowa (us_central1) $0.0048725 per GB hour
South Carolina (us_east1) $0.0048725 per GB hour
N. Virginia (us_east4) $0.0054867 per GB hour
Dallas (us_south1) $0.00575 per GB hour
Oregon (us_west1) $0.0048725 per GB hour
Los Angeles (us_west2) $0.0058523 per GB hour
Salt Lake City (us_west3) $0.0058523 per GB hour
Las Vegas (us_west4) $0.0054867 per GB hour

N2D Series

vCPU

Location Price per hour
Montréal (northamerica-northeast1) $0.0348197 per vCPU hour
São Paulo (southamerica-east1) $0.0502055 per vCPU hour
Iowa (us-central1) $0.0316273 per vCPU hour
South Carolina (us-east1) $0.0316273 per vCPU hour
N. Virginia (us-east4) $0.0356224 per vCPU hour
Oregon (us-west1) $0.0316273 per vCPU hour
Los Angeles (us-west2) $0.0379891 per vCPU hour
Las Vegas (us-west4) $0.0356224 per vCPU hour

RAM

Location Price per hour
Montréal (northamerica-northeast1) $0.0046667 per GB hour
São Paulo (southamerica-east1) $0.0067287 per GB hour
Iowa (us-central1) $0.0042389 per GB hour
South Carolina (us-east1) $0.0042389 per GB hour
N. Virginia (us-east4) $0.0047736 per GB hour
Oregon (us-west1) $0.0042389 per GB hour
Los Angeles (us-west2) $0.005091 per GB hour
Las Vegas (us-west4) $0.0047736 per GB hour

C2 Series

vCPU

Location Price per hour
Montréal (northamerica-northeast1) $0.04301 per vCPU hour
São Paulo (southamerica-east1) $0.0620356 per vCPU hour
Iowa (us-central1) $0.039077 per vCPU hour
South Carolina (us-east1) $0.039077 per vCPU hour
N. Virginia (us-east4) $0.0440105 per vCPU hour
Oregon (us-west1) $0.039077 per vCPU hour
Los Angeles (us-west2) $0.046943 per vCPU hour
Salt Lake City (us-west3) $0.04692 per vCPU hour
Las Vegas (us-west4) $0.0440105 per vCPU hour

RAM

Location Price per hour
Montréal (northamerica-northeast1) $0.00575 per GB hour
São Paulo (southamerica-east1) $0.0083133 per GB hour
Iowa (us-central1) $0.0052325 per GB hour
South Carolina (us-east1) $0.0052325 per GB hour
N. Virginia (us-east4) $0.005888 per GB hour
Oregon (us-west1) $0.0052325 per GB hour
Los Angeles (us-west2) $0.0062905 per GB hour
Salt Lake City (us-west3) $0.006325 per GB hour
Las Vegas (us-west4) $0.005888 per GB hour

C2D Series

vCPU

Location Price per hour
Iowa (us-central1) $0.0339974 per vCPU hour
South Carolina (us-east1) $0.0339974 per vCPU hour
N. Virginia (us-east4) $0.0382904 per vCPU hour

RAM

Location Price per hour
Iowa (us-central1) $0.0045528 per GB hour
South Carolina (us-east1) $0.0045528 per GB hour
N. Virginia (us-east4) $0.0051267 per GB hour

C3 Series

vCPU

Location Price per hour
Iowa (us-central1) $0.03908 per vCPU hour
South Carolina (us-east1) $0.03908 per vCPU hour
N. Virginia (us-east4) $0.04401 per vCPU hour

RAM

Location Price per hour
Iowa (us-central1) $0.00524 per GB hour
South Carolina (us-east1) $0.00524 per GB hour
N. Virginia (us-east4) $0.0059 per GB hour

A2 Series

vCPU

Location Price per hour
Iowa (us-central1) $0.0363527 per vCPU hour
N. Virginia (us-east4) $0.0363527 per vCPU hour
Las Vegas (us-west4) $0.0409457 per vCPU hour
Other Americas regions $0.0363527 per vCPU hour

RAM

Location Price per hour
Iowa (us-central1) $0.0048725 per GB hour
N. Virginia (us-east4) $0.0048725 per GB hour
Las Vegas (us-west4) $0.0054867 per GB hour
Other Americas regions $0.0048725 per GB hour

GPU

Location Price per hour
Iowa (us-central1) $4.51729 per GPU hour (A100 80GB)
N. Virginia (us-east4) $5.08783 per GPU hour (A100 80GB)
Las Vegas (us-west4) $3.5673 per GPU hour (A100 40GB)
Other Americas regions $3.3741 per GPU hour (A100 40GB)

A3 Series

vCPU

Location Price per hour
Iowa (us-central1) $0.0293227 per vCPU hour
N. Virginia (us-east4) $0.0293227 per vCPU hour

RAM

Location Price per hour
Iowa (us-central1) $0.0025534 per GB hour
N. Virginia (us-east4) $0.0025534 per GB hour

GPU

Location Price per hour
Iowa (us-central1) $11.2660332 per GPU hour (H100 80GB)
N. Virginia (us-east4) $11.2660336 per GPU hour (H100 80GB)

G2 Series

vCPU

Location Price per hour
Iowa (us-central1) $0.02874 per vCPU hour

RAM

Location Price per hour
Iowa (us-central1) $0.00337 per GB hour

GPU

Location Price per hour
Iowa (us-central1) $0.64405 per GPU hour

SKU pricing for Europe

E2 Series

vCPU

Location Price per hour
Belgium (europe-west1) $0.0275919 per vCPU hour
London (europe-west2) $0.0323184 per vCPU hour
Frankfurt (europe-west3) $0.0323184 per vCPU hour
Netherlands (europe-west4) $0.0276149 per vCPU hour
Zurich (europe-west6) $0.0350968 per vCPU hour
Paris (europe-west9) $0.0351164 per vCPU hour

RAM

Location Price per hour
Belgium (europe-west1) $0.0036984 per GB hour
London (europe-west2) $0.0043309 per GB hour
Frankfurt (europe-west3) $0.0043309 per GB hour
Netherlands (europe-west4) $0.0037007 per GB hour
Zurich (europe-west6) $0.0047035 per GB hour
Paris (europe-west9) $0.0047069 per GB hour

N1 Series

vCPU

Location Price per hour
London (europe-west2) $0.0468395 per vCPU hour
Other Europe regions $0.0421268 per vCPU hour

RAM

Location Price per hour
London (europe-west2) $0.0062767 per GB hour
Other Europe regions $0.0056373 per GB hour

N2 Series

vCPU

Location Price per hour
Warsaw (europe-central2) $0.0468395 per vCPU hour
Belgium (europe-west1) $0.0399889 per vCPU hour
London (europe-west2) $0.0468395 per vCPU hour
Frankfurt (europe-west3) $0.0468395 per vCPU hour
Netherlands (europe-west4) $0.0399879 per vCPU hour
Zurich (europe-west6) $0.050899 per vCPU hour
Paris (europe-west9) $0.0421693 per vCPU hour

RAM

Location Price per hour
Warsaw (europe-central2) $0.0062767 per GB hour
Belgium (europe-west1) $0.0053602 per GB hour
London (europe-west2) $0.0062767 per GB hour
Frankfurt (europe-west3) $0.0062767 per GB hour
Netherlands (europe-west4) $0.0053598 per GB hour
Zurich (europe-west6) $0.0068195 per GB hour
Paris (europe-west9) $0.0056522 per GB hour

N2D Series

vCPU

Location Price per hour
Belgium (europe-west1) $0.0347909 per vCPU hour
London (europe-west2) $0.0407502 per vCPU hour
Frankfurt (europe-west3) $0.0407502 per vCPU hour
Netherlands (europe-west4) $0.0348197 per vCPU hour
Paris (europe-west9) $0.0366873 per vCPU hour

RAM

Location Price per hour
Belgium (europe-west1) $0.0046632 per GB hour
London (europe-west2) $0.0054602 per GB hour
Frankfurt (europe-west3) $0.0054602 per GB hour
Netherlands (europe-west4) $0.0046667 per GB hour
Paris (europe-west9) $0.0049174 per GB hour

C2 Series

vCPU

Location Price per hour
Belgium (europe-west1) $0.042987 per vCPU hour
London (europe-west2) $0.0503527 per vCPU hour
Frankfurt (europe-west3) $0.050347 per vCPU hour
Netherlands (europe-west4) $0.0430215 per vCPU hour
Zurich (europe-west6) $0.0547055 per vCPU hour

RAM

Location Price per hour
Belgium (europe-west1) $0.0057615 per GB hour
London (europe-west2) $0.006747 per GB hour
Frankfurt (europe-west3) $0.006739 per GB hour
Netherlands (europe-west4) $0.0057615 per GB hour
Zurich (europe-west6) $0.007337 per GB hour

C2D Series

vCPU

Location Price per hour
London (europe-west2) $0.0438012 per vCPU hour
Netherlands (europe-west4) $0.0374336 per vCPU hour

RAM

Location Price per hour
London (europe-west2) $0.005865 per GB hour
Netherlands (europe-west4) $0.0050128 per GB hour

C3 Series

vCPU

Location Price per hour
London (europe-west1) $0.04299 per vCPU hour
Netherlands (europe-west4) $0.04302 per vCPU hour

RAM

Location Price per hour
London (europe-west1) $0.00576 per GB hour
Netherlands (europe-west4) $0.00577 per GB hour

A2 Series

vCPU

Location Price per hour
Netherlands (europe-west4) $0.0400223 per vCPU hour

RAM

Location Price per hour
Netherlands (europe-west4) $0.0053636 per GB hour

GPU

Location Price per hour
Netherlands (europe-west4) $3.3741 per GPU hour (A100 40GB)
Netherlands (europe-west4) $4.97399 per GPU hour (A100 80GB)

G2 Series

vCPU

Location Price per hour
Netherlands (europe-west4) $0.03164 per vCPU hour

RAM

Location Price per hour
Netherlands (europe-west4) $0.00371 per GB hour

GPU

Location Price per hour
Netherlands (europe-west4) $0.70916 per GPU hour

SKU pricing for Asia Pacific

E2 Series

E2 prediction machine type SKUs

vCPU

Location Price per hour
Taiwan (asia-east1) $0.0290432 per vCPU hour
Hong Kong (asia-east2) $0.0350968 per vCPU hour
Tokyo (asia-northeast1) $0.0322299 per vCPU hour
Seoul (asia-northeast3) $0.0322299 per vCPU hour
Mumbai (asia-south1) $0.0301288 per vCPU hour
Singapore (asia-southeast1) $0.0309453 per vCPU hour
Sydney (australia-southeast1) $0.0355925 per vCPU hour

RAM

Location Price per hour
Taiwan (asia-east1) $0.0038927 per GB hour
Hong Kong (asia-east2) $0.0047035 per GB hour
Tokyo (asia-northeast1) $0.0042999 per GB hour
Seoul (asia-northeast3) $0.0042999 per GB hour
Mumbai (asia-south1) $0.0040376 per GB hour
Singapore (asia-southeast1) $0.0041458 per GB hour
Sydney (australia-southeast1) $0.004769 per GB hour

N1 Series

N1 Prediction machine type SKUs

vCPU

Location Price per hour
Tokyo (asia-northeast1) $0.0467107 per vCPU hour
Singapore (asia-southeast1) $0.04484885 per vCPU hour
Sydney (australia-southeast1) $0.0515844 per vCPU hour
Other Asia Pacific regions $0.0421268 per vCPU hour

RAM

Location Price per hour
Tokyo (asia-northeast1) $0.00623185 per GB hour
Singapore (asia-southeast1) $0.0060099 per GB hour
Sydney (australia-southeast1) $0.00691265 per GB hour
Other Asia Pacific regions $0.0056373 per GB hour

N2 Series

vCPU

Location Price per hour
Taiwan (asia-east1) $0.0420923 per vCPU hour
Hong Kong (asia-east2) $0.0508656 per vCPU hour
Tokyo (asia-northeast1) $0.0467107 per vCPU hour
Seoul (asia-northeast3) $0.0467107 per vCPU hour
Mumbai (asia-south1) $0.0436655 per vCPU hour
Singapore (asia-southeast1) $0.0448488 per vCPU hour
Jakarta (asia-southeast2) $0.0488853 per vCPU hour
Sydney (australia-southeast1) $0.0515844 per vCPU hour

RAM

Location Price per hour
Taiwan (asia-east1) $0.0056419 per GB hour
Hong Kong (asia-east2) $0.0068172 per GB hour
Tokyo (asia-northeast1) $0.0062318 per GB hour
Seoul (asia-northeast3) $0.0062318 per GB hour
Mumbai (asia-south1) $0.0058512 per GB hour
Singapore (asia-southeast1) $0.0060099 per GB hour
Jakarta (asia-southeast2) $0.0065504 per GB hour
Sydney (australia-southeast1) $0.0069126 per GB hour

N2D Series

vCPU

Location Price per hour
Taiwan (asia-east1) $0.0366206 per vCPU hour
Hong Kong (asia-east2) $0.0442531 per vCPU hour
Tokyo (asia-northeast1) $0.0406387 per vCPU hour
Mumbai (asia-south1) $0.0208725 per vCPU hour
Singapore (asia-southeast1) $0.0390184 per vCPU hour
Sydney (australia-southeast1) $0.0448787 per vCPU hour

RAM

Location Price per hour
Taiwan (asia-east1) $0.0049082 per GB hour
Hong Kong (asia-east2) $0.0059305 per GB hour
Tokyo (asia-northeast1) $0.0054222 per GB hour
Mumbai (asia-south1) $0.0027979 per GB hour
Singapore (asia-southeast1) $0.005229 per GB hour
Sydney (australia-southeast1) $0.0060145 per GB hour

C2 Series

vCPU

Location Price per hour
Taiwan (asia-east1) $0.045249 per vCPU hour
Hong Kong (asia-east2) $0.0546802 per vCPU hour
Tokyo (asia-northeast1) $0.0502136 per vCPU hour
Seoul (asia-northeast3) $0.0502136 per vCPU hour
Mumbai (asia-south1) $0.0469407 per vCPU hour
Singapore (asia-southeast1) $0.0482126 per vCPU hour
Sydney (australia-southeast1) $0.055453 per vCPU hour

RAM

Location Price per hour
Taiwan (asia-east1) $0.0060651 per GB hour
Hong Kong (asia-east2) $0.0073289 per GB hour
Tokyo (asia-northeast1) $0.0066987 per GB hour
Seoul (asia-northeast3) $0.0066987 per GB hour
Mumbai (asia-south1) $0.0062905 per GB hour
Singapore (asia-southeast1) $0.0064607 per GB hour
Sydney (australia-southeast1) $0.0074313 per GB hour

C2D Series

vCPU

Location Price per hour
Taiwan (asia-east1) $0.0393656 per vCPU hour
Singapore (asia-southeast1) $0.0419417 per vCPU hour

RAM

Location Price per hour
Taiwan (asia-east1) $0.0052716 per GB hour
Singapore (asia-southeast1) $0.0056166 per GB hour

C3 Series

vCPU

Location Price per hour
Singapore (asia-southeast1) $0.04821 per vCPU hour

RAM

Location Price per hour
Singapore (asia-southeast1) $0.00646 per GB hour

A2 Series

A2 Prediction machine type SKUs

vCPU

Location Price per hour
Tokyo (asia-northeast1) $0.0467107 per vCPU hour
Seoul (asia-northeast3) $0.0467107 per vCPU hour
Singapore (asia-southeast1) $0.0448488 per vCPU hour

RAM

Location Price per hour
Tokyo (asia-northeast1) $0.00623185 per GB hour
Seoul (asia-northeast3) $0.0062318 per GB hour
Singapore (asia-southeast1) $0.0060099 per GB hour

GPU

Location Price per hour
Tokyo (asia-northeast1) $3.5673 per GPU hour (A100 40GB)
Seoul (asia-northeast3) $3.5673 per GPU hour (A100 40GB)
Singapore (asia-southeast1) $3.5673 per GPU hour (A100 40GB)
Singapore (asia-southeast1) $5.57298 per GPU hour (A100 80GB)

SKU pricing for Middle East

N2 Series

vCPU

Location Price per hour
Tel Aviv (me-west1) $0.0399879 per vCPU hour

RAM

Location Price per hour
Tel Aviv (me-west1) $0.0053598 per GB hour

N2D Series

vCPU

Location Price per hour
Tel Aviv (me-west1) $0.03479 per vCPU hour

RAM

Location Price per hour
Tel Aviv (me-west1) $0.0046628 per GB hour

Some machine types allow you to add optional GPU accelerators for prediction. Optional GPUs incur an additional charge, separate from the machine type prices described in the preceding tables. The following tables describe the pricing for each type of optional GPU by region.

The Americas

Accelerators - price per hour

NVIDIA_TESLA_P4
Iowa (us-central1) $0.6900
N. Virginia (us-east4) $0.6900
Montréal (northamerica-northeast1) $0.7475
NVIDIA_TESLA_P100
Oregon (us-west1) $1.6790
Iowa (us-central1) $1.6790
South Carolina (us-east1) $1.6790
NVIDIA_TESLA_T4
Oregon (us-west1) $0.4025
Iowa (us-central1) $0.4025
South Carolina (us-east1) $0.4025
NVIDIA_TESLA_V100
Oregon (us-west1) $2.8520
Iowa (us-central1) $2.8520

Europe

Accelerators - price per hour

NVIDIA_TESLA_P4
Netherlands (europe-west4) $0.7475
NVIDIA_TESLA_P100
Belgium (europe-west1) $1.8400
NVIDIA_TESLA_T4
London (europe-west2) $0.4715
Netherlands (europe-west4) $0.4370
NVIDIA_TESLA_V100
Netherlands (europe-west4) $2.9325

Asia Pacific

Accelerators - price per hour

NVIDIA_TESLA_P4
Singapore (asia-southeast1) $0.7475
Sydney (australia-southeast1) $0.7475
NVIDIA_TESLA_P100
Taiwan (asia-east1) $1.8400
NVIDIA_TESLA_T4
Tokyo (asia-northeast1) $0.4255
Singapore (asia-southeast1) $0.4255
Seoul (asia-northeast3) $0.4485
NVIDIA_TESLA_V100
Taiwan (asia-east1) $2.932

Pricing is per GPU. If you use multiple GPUs per prediction node (or if your version scales to use multiple nodes), the costs scale accordingly.

Vertex AI serves predictions from your model by running a number of virtual machines ("nodes"). By default, Vertex AI automatically scales the number of nodes running at any time. For online prediction, the number of nodes scales to meet demand. Each node can respond to multiple prediction requests. For batch prediction, the number of nodes scales to reduce the total time it takes to run a job. You can customize how prediction nodes scale.

You are charged for the time that each node runs for your model, including:

  • When the node is processing a batch prediction job.
  • When the node is processing an online prediction request.
  • When the node is in a ready state for serving online predictions.

The cost of one node running for one hour is a node hour. The table of prediction prices describes the price of a node hour, which varies across regions and between online prediction and batch prediction.

You can consume node hours in fractional increments. For example, one node running for 30 minutes costs 0.5 node hours.

Cost calculations for Compute Engine (N1) machine types

  • The running time of a node is billed in 30-second increments. This means that every 30 seconds, your project is billed for 30 seconds worth of whatever vCPU, RAM, and GPU resources that your node is using at that moment.

More about automatic scaling of prediction nodes

Online prediction:

  • The priority of the scaling is to reduce the latency of individual requests. The service keeps your model in a ready state for a few idle minutes after servicing a request.
  • Scaling affects your total charges each month: the more numerous and frequent your requests, the more nodes will be used.

Batch prediction:

  • The priority of the scaling is to reduce the total elapsed time of the job.
  • Scaling should have little effect on the price of your job, though there is some overhead involved in bringing up a new node.

You can choose to let the service scale in response to traffic (automatic scaling) or you can specify a number of nodes to run constantly to avoid latency (manual scaling).

  • If you choose automatic scaling, the number of nodes scales automatically. For AI Platform Prediction legacy (MLS1) machine type deployments, the number of nodes can scale down to zero when there is no traffic. Vertex AI deployments and other types of AI Platform Prediction deployments cannot scale down to zero nodes.
  • If you choose manual scaling, you specify a number of nodes to keep running all the time. You are charged for all of the time that these nodes are running, starting at the time of deployment and persisting until you delete the model version.
You can affect scaling by setting a maximum number of nodes to use for a batch prediction job, and by setting the number of nodes to keep running for a model when you deploy it.

Batch prediction jobs are charged after job completion

Batch prediction jobs are charged after job completion, not incrementally during the job. Any Cloud Billing budget alerts that you have configured aren't triggered while a job is running. Before starting a large job, consider running some cost benchmark jobs with small input data first.

Example of a prediction calculation

A real-estate company in an Americas region runs a weekly prediction of housing values in areas it serves. In one month, it runs predictions for four weeks in batches of 3920, 4277, 3849, and 3961 instances. Jobs are limited to one node, and each instance takes an average of 0.72 seconds of processing.

First calculate the length of time that each job ran:

3920 instances * (0.72 seconds / 1 instance) * (1 minute / 60 seconds) = 47.04 minutes
4277 instances * (0.72 seconds / 1 instance) * (1 minute / 60 seconds) = 51.324 minutes
3849 instances * (0.72 seconds / 1 instance) * (1 minute / 60 seconds) = 46.188 minutes
3961 instances * (0.72 seconds / 1 instance) * (1 minute / 60 seconds) = 47.532 minutes

Each job ran for more than ten minutes, so it is charged for each minute of processing:

($0.0791205 / 1 node hour) * (1 hour / 60 minutes) * 48 minutes * 1 node = $0.0632964
($0.0791205 / 1 node hour) * (1 hour / 60 minutes) * 52 minutes * 1 node = $0.0685711
($0.0791205 / 1 node hour) * (1 hour / 60 minutes) * 47 minutes * 1 node = $0.061977725
($0.0791205 / 1 node hour) * (1 hour / 60 minutes) * 48 minutes * 1 node = $0.0632964

The total charge for the month is $0.26.

This example assumed jobs ran on a single node and took a consistent amount of time per input instance. In real usage, make sure to account for multiple nodes and use the actual amount of time each node spends running for your calculations.
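The same arithmetic can be scripted. The following is a minimal sketch, assuming the node-hour rate used in the example above, per-minute billing of processing time, and a single node per job; it is an illustration, not an official billing formula.

    import math

    NODE_HOUR_RATE = 0.0791205      # USD per node hour, as in the example above
    SECONDS_PER_INSTANCE = 0.72
    NODES = 1

    batches = [3920, 4277, 3849, 3961]
    total = 0.0
    for instances in batches:
        runtime_minutes = instances * SECONDS_PER_INSTANCE / 60   # job running time
        billed_minutes = math.ceil(runtime_minutes)               # charged per minute of processing
        cost = NODE_HOUR_RATE / 60 * billed_minutes * NODES
        total += cost
        print(f"{instances} instances -> {runtime_minutes:.3f} min -> ${cost:.7f}")
    print(f"Total for the month: ${total:.2f}")                   # ~$0.26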

Charges for Vertex Explainable AI

Feature-based explanations

Feature-based explanations are available at no extra charge beyond prediction prices. However, explanations take longer to process than normal predictions, so heavy usage of Vertex Explainable AI along with autoscaling could result in more nodes being started, which would increase prediction charges.

Example-based explanations

Pricing for example-based explanations consists of the following:

  • When you upload a model or update a model's dataset, you are billed:

    • per node hour for the batch prediction job that is used to generate the latent space representations of examples. This is billed at the same rate as prediction.
    • a cost for building or updating indexes. This cost is the same as the indexing cost for Vector Search, which is number of examples * number of dimensions * 4 bytes per float * $3.00 per GB. For example, if you have 1 million examples and a 1,000-dimension latent space, the cost is $12 (1,000,000 * 1,000 * 4 * 3.00 / 1,000,000,000). A small sketch of this calculation follows this list.
  • When you deploy to an endpoint, you are billed per node hour for each node in your endpoint. All compute associated with the endpoint is charged at the same rate as prediction. However, because example-based explanations require additional compute resources to serve the Vector Search index, this results in more nodes being started, which increases prediction charges.
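The index-building arithmetic above can be expressed as a short helper. This is a sketch only, assuming the $3.00 per GB Vector Search indexing rate and 4 bytes per float dimension quoted above.

    def example_index_build_cost(num_examples: int, num_dimensions: int,
                                 price_per_gb: float = 3.00) -> float:
        """Approximate cost of building or updating the example index."""
        data_bytes = num_examples * num_dimensions * 4      # 4 bytes per float
        data_gb = data_bytes / 1_000_000_000                # GB, as in the example above
        return data_gb * price_per_gb

    print(example_index_build_cost(1_000_000, 1000))        # 12.0 -> $12, matching the example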

Vertex AI Neural Architecture Search

The following tables summarize the pricing in each region where Neural Architecture Search is available.

Prices

The following tables provide the price per hour of various configurations.

You can choose a predefined scale tier or a custom configuration of selected machine types. If you choose a custom configuration, sum the costs of the virtual machines you use.

Accelerator-enabled legacy machine types include the cost of the accelerators in their pricing. If you use Compute Engine machine types and attach accelerators, the cost of the accelerators is separate. To calculate this cost, multiply the prices in the following table of accelerators by the number of each type of accelerator you use.

Machine types

Americas

Europe

Asia Pacific

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Prices for a2-highgpu instances include the charges for the attached NVIDIA_TESLA_A100 Accelerators.

Accelerators

Americas

Europe

Asia Pacific

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Disks

Americas

Europe

Asia Pacific

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Notes:

  1. All use is subject to the Neural Architecture Search quota policy.
  2. You are required to store your data and program files in Cloud Storage buckets during the Neural Architecture Search lifecycle. See more about Cloud Storage usage.
  3. For volume-based discounts, contact the Sales team.
  4. The disk price is only charged when you configure the disk size of each VM to be larger than 100 GB. There is no charge for the first 100 GB (the default disk size) of disk for each VM. For example, if you configure each VM to have 105 GB of disk, then you are charged for 5 GB of disk for each VM.

Required use of Cloud Storage

In addition to the costs described in this document, you are required to store data and program files in Cloud Storage buckets during the Neural Architecture Search lifecycle. This storage is subject to the Cloud Storage pricing policy.

Required use of Cloud Storage includes:

  • Staging your training application package.

  • Storing your training input data.

  • Storing the output of your jobs. Neural Architecture Search doesn't require long-term storage of these items. You can remove the files as soon as the operation is complete.

Free operations for managing your resources

The resource management operations provided by Neural Architecture Search are available free of charge. The Neural Architecture Search quota policy does limit some of these operations.

Resource Free operations
jobs get, list, cancel
operations get, list, cancel, delete

Vertex AI Pipelines

Vertex AI Pipelines charges a run execution fee of $0.03 per Pipeline Run. You are not charged the execution fee during the Preview release. You also pay for Google Cloud resources you use with Vertex AI Pipelines, such as Compute Engine resources consumed by pipeline components (charged at the same rate as for Vertex AI training). Finally, you are responsible for the cost of any services (such as Dataflow) called by your pipeline.

Vertex AI Feature Store

Vertex AI Feature Store has been Generally Available (GA) since November 2023. For information about the previous version of the product, see Vertex AI Feature Store (Legacy).

New Vertex AI Feature Store

The new Vertex AI Feature Store supports functionality across two types of operations:

  • Offline operations are operations to transfer, store, retrieve, and transform data in the offline store (BigQuery).
  • Online operations are operations to transfer data into the online store(s) and operations on data while it is in the online store(s).

Offline Operations Pricing

Because BigQuery is used for offline operations, refer to BigQuery pricing for functionality such as ingestion into the offline store, querying the offline store, and offline storage.

Online Operations Pricing

For online operations, Vertex AI Feature Store charges for any GA features used to transfer data into the online store, serve data, or store data. A node-hour represents the time a virtual machine spends completing an operation, charged to the minute.

Optimized online serving and Bigtable online serving use different architectures, so their nodes are not comparable.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

Online Operations Workload Estimates

Consider the following guidelines when estimating your workloads. The number of nodes required for a given workload can differ between serving approaches.

  • Data processing:
    • Ingestion - One node can ingest a minimum of approximately 100 MB of data per hour into a Bigtable Online Store or an Optimized Online Store if no analytical functions are used.
  • Bigtable online serving: Each node can support approximately 15,000 QPS and up to 5 TB of storage.
  • Optimized online serving: Performance is based on the machine type and replicas, which are automatically configured to minimize costs subject to the workload. Each node can have a minimum of 2 and a maximum of 6 replicas for high availability and autoscaling. You're charged for the number of replicas accordingly. For more details, see the example monthly scenarios.
    • For non-embeddings workloads, each node can support approximately 500 QPS and up to 200 GB of storage.
    • For embeddings workloads, each node can support approximately 500 QPS and up to 4 GB of storage of 512-dimensional data.

You can view the number of nodes (including replicas) in Metrics Explorer.


Example Monthly Scenarios (assuming us-central1)

Data streaming workload - Bigtable online serving with 2.5 TB of data (1 GB refreshed daily) and 1200 QPS

Operations Monthly Usage Monthly Cost
Data processing node (1 GB/day) * (30 days/month) * (1,000 MB/GB) * (1 node-hr / 100 MB) = 300 node-hr 300 node-hr * ($0.08 per node-hr) = $24
Optimized online serving node N/A N/A
Bigtable online serving node (1 node) * (24 hr/day) * (30 days/month) = 720 node-hr 720 node-hr * ($0.94 per node-hr) = $677
Bigtable online serving storage (2.5 TB-month) * (1000 GB/TB) = 2500 GB-month 2500 GB-month * ($0.25 per GB-month) = $625
Total $1,326

High QPS workload - Optimized online serving with 10 GB of non-embedding data (5 GB refreshed daily) and 2000 QPS

Operations Monthly Usage Monthly Cost
Data processing node (5 GB/day) * (30 days/month) * (1,000 MB/GB) * (1 node-hr / 100 MB) = 1500 node-hr 1500 node-hr * ($0.08 per node-hr) = $120
Optimized online serving node Roundup(10 GB * (1 node / 200 GB)) = 1 * max(2 default replicas, 2000 QPS * (1 replica / 500 QPS)) = 4 total nodes * (24 hr/day) * (30 days/month) = 2880 node-hr 2880 node-hr * ($0.30 per node-hr) = $864
Bigtable online serving node N/A N/A
Bigtable online serving storage N/A N/A
Total $984

Embeddings serving workload - Optimized online serving with 20 GB of embeddings data (2 GB refreshed daily) and 800 QPS

Operations Monthly Usage Monthly Cost
Data processing node (2 GB/day) * (30 days/month) * (1,000 MB/GB) * (1 node-hr / 100 MB) = 600 node-hr 600 node-hr * ($0.08 per node-hr) = $48
Optimized online serving node Roundup(20 GB * (1 node / 4 GB)) = 5 * max(2 default replicas, 800 QPS * (1 replica / 500 QPS)) = 10 total nodes * (24 hr/day) * (30 days/month) = 7200 node-hr 7200 node-hr * ($0.30 per node-hr) = $2,160
Bigtable online serving node N/A N/A
Bigtable online serving storage N/A N/A
Total $2,208
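The scenario math above generalizes to a short estimator. This is a minimal sketch, assuming the us-central1 example rates used in these tables ($0.08 per data-processing node-hour, $0.30 per optimized online serving node-hour) and the sizing guidelines from this section; actual node counts depend on your workload, so treat the result as an estimate only.

    import math

    DATA_PROCESSING_RATE = 0.08     # USD per node-hour (rate used in the scenarios above)
    OPTIMIZED_SERVING_RATE = 0.30   # USD per node-hour (rate used in the scenarios above)
    HOURS_PER_MONTH = 24 * 30

    def optimized_serving_monthly_cost(data_gb: float, refreshed_gb_per_day: float,
                                       qps: float, gb_per_node: float = 200) -> float:
        """Estimate monthly cost of optimized online serving plus data processing.
        Use gb_per_node=200 for non-embeddings data, gb_per_node=4 for embeddings."""
        processing_node_hours = refreshed_gb_per_day * 30 * 1000 / 100   # 1 node-hr per 100 MB
        nodes = math.ceil(data_gb / gb_per_node)                         # storage-based node count
        replicas = max(2, math.ceil(qps / 500))                          # min 2 replicas, ~500 QPS each
        serving_node_hours = nodes * replicas * HOURS_PER_MONTH
        return (processing_node_hours * DATA_PROCESSING_RATE
                + serving_node_hours * OPTIMIZED_SERVING_RATE)

    # High QPS scenario above: 10 GB, 5 GB refreshed daily, 2000 QPS -> ~$984
    print(round(optimized_serving_monthly_cost(10, 5, 2000)))
    # Embeddings scenario above: 20 GB, 2 GB refreshed daily, 800 QPS -> ~$2,208
    print(round(optimized_serving_monthly_cost(20, 2, 800, gb_per_node=4)))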

Vertex AI Feature Store (Legacy)

Prices for Vertex AI Feature Store (Legacy) are based on the amount of feature data in online and offline storage as well as the availability of online serving. A node hour represents the time a virtual machine spends serving feature data or waiting in a ready state to handle feature data requests.

If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

When you enable feature value monitoring, billing includes applicable charges above in addition to applicable charges that follow:

  • $3.50 per GB for all data analyzed. With snapshot analysis enabled, snapshots taken for data in Vertex AI Feature Store (Legacy) are included. With import feature analysis enabled, batches of ingested data are included.
  • Additional charges for other Vertex AI Feature Store (Legacy) operations used with feature value monitoring include the following:
    • The snapshot analysis feature periodically takes a snapshot of the feature values based on your configuration for the monitoring interval.
    • The charge for a snapshot export is the same as a regular batch export operation.

Snapshot Analysis Example

A data scientist enables feature value monitoring for their Vertex AI Feature Store (Legacy) and turns on monitoring for a daily snapshot analysis. A pipeline runs daily to monitor the entity types. The pipeline scans 2 GB of data in Vertex AI Feature Store (Legacy) and exports a snapshot containing 0.1 GB of data. The total charge for one day's analysis is:

(0.1 GB * $3.50) + (2 GB * $0.005) = $0.36

Ingestion Analysis Example

A data scientist enables feature value monitoring for their Vertex AI Feature Store (Legacy) and turns on monitoring for ingestion operations. An ingestion operation imports 1 GB of data into Vertex AI Feature Store (Legacy). The total charge for feature value monitoring is:

(1 GB * $3.50) = $3.50
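A minimal sketch that mirrors the two monitoring examples above; the $3.50 per GB analysis rate and the $0.005 per GB export rate are the figures used in this section.

    def snapshot_day_cost(snapshot_gb: float, scanned_gb: float) -> float:
        # Snapshot data analyzed at $3.50/GB plus scanned data exported at $0.005/GB,
        # mirroring the snapshot analysis example above.
        return snapshot_gb * 3.50 + scanned_gb * 0.005

    def ingestion_monitoring_cost(ingested_gb: float) -> float:
        # All ingested data analyzed at $3.50/GB, as in the ingestion example above.
        return ingested_gb * 3.50

    print(snapshot_day_cost(0.1, 2))       # 0.36
    print(ingestion_monitoring_cost(1))    # 3.5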

Vertex ML Metadata

Metadata storage is measured in binary gigabytes (GiB), where 1 GiB is 1,073,741,824 bytes. This unit of measurement is also known as a gibibyte.

Vertex ML Metadata charges $10 per gibibyte (GiB) per month for metadata storage. Prices are pro-rated per megabyte (MB). For example, if you store 10 MB of metadata, you are charged $0.10 per month for that 10 MB of metadata.

Prices are the same in all regions where Vertex ML Metadata is supported.

Vertex AI TensorBoard

To use Vertex AI TensorBoard, request that the IAM administrator of the project assign you to the role "Vertex AI TensorBoard Web App User". The Vertex AI Administrator role also has access.

Beginning in August 2023, Vertex AI TensorBoard pricing changed from a per-user license of $300 per month to $10 per GiB per month for storage of logs and metrics. This means there are no more subscription fees; you pay only for the storage you've used. See the Vertex AI TensorBoard: Delete Outdated TensorBoard Experiments tutorial for how to manage storage.

Vertex AI Vizier

Vertex AI Vizier is a black-box optimization service inside Vertex AI. The Vertex AI Vizier pricing model consists of the following:

  • There is no charge for trials that use RANDOM_SEARCH and GRID_SEARCH. Learn more about the search algorithms.
  • The first 100 Vertex AI Vizier trials per calendar month are available at no charge (trials using RANDOM_SEARCH and GRID_SEARCH do not count against this total).
  • After 100 Vertex AI Vizier trials, subsequent trials during the same calendar month are charged at $1 per trial (trials that use RANDOM_SEARCH or GRID_SEARCH incur no charges). A small sketch of this calculation follows this list.
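The sketch below assumes the 100-trial free allotment and the $1 per-trial rate described above; trials that use RANDOM_SEARCH or GRID_SEARCH are excluded because they are always free.

    def vizier_monthly_charge(billable_trials: int) -> int:
        """Charge in USD for one calendar month of Vertex AI Vizier trials
        (excluding RANDOM_SEARCH and GRID_SEARCH trials, which are free)."""
        free_trials = 100                                  # first 100 trials per month are free
        return max(0, billable_trials - free_trials) * 1   # $1 per additional trial

    print(vizier_monthly_charge(250))  # 150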

Vector Search

Pricing for Vector Search Approximate Nearest Neighbor service consists of:

  • Per node hour pricing for each VM used to host a deployed index.
  • A cost for building new indexes, updating existing indexes, and using streaming index updates.

Data processed during building and updating indexes is measured in binary gigabytes (GiB), where 1 GiB is 1,073,741,824 bytes. This unit of measurement is also known as a gibibyte.

Vector Search charges $3.00 per gibibyte (GiB) of data processed in all regions. Vector Search charges $0.45/GiB ingested for Streaming Update inserts.

The following tables summarize the pricing of index serving in each region where Vector Search is available. The price depends on the machine type and region and is charged per node hour.

The Americas

Region e2-standard-2 e2-standard-16 e2-highmem-16 n2d-standard-32 n1-standard-16 n1-standard-32
us-central1 0.094 0.75 1.012 1.893 1.064 2.128
us-east1 0.094 0.75 1.012 1.893 1.064 2.128
us-east4 0.10 0.845 1.14 2.132 1.198 2.397
us-west1 0.094 0.75 1.012 1.893 1.064 2.128
us-west2 0.113 0.901 1.216 2.273 1.279 2.558
us-west3 0.113 0.901 1.216 N/A 1.279 2.558
us-west4 0.106 0.845 1.14 2.132 1.198 2.397
us-south1 0.111 0.886 1.195 N/A N/A N/A
northamerica-northeast1 0.103 0.826 1.115 2.084 1.172 2.343
northamerica-northeast2 0.103 0.826 1.115 N/A N/A N/A
southamerica-east1 0.149 1.191 1.607 3.004 1.69 3.38

Europe

Region e2-standard-2 e2-standard-16 e2-highmem-16 n2d-standard-32 n1-standard-16 n1-standard-32
europe-central2 0.121 0.967 1.304 N/A N/A N/A
europe-north1 0.103 0.826 1.115 2.084 1.172 2.343
europe-west1 0.103 0.826 1.114 2.082 1.171 2.343
europe-west2 0.121 0.967 1.304 2.438 1.371 2.742
europe-west3 0.121 0.967 1.304 2.438 1.371 2.742
europe-west4 0.103 0.826 1.115 2.084 1.172 2.343
europe-west6 0.131 1.050 1.417 N/A 1.489 2.978
europe-west9 0.131 1.051 1.417 2.195 N/A N/A

Asia Pacific

Region e2-standard-2 e2-standard-16 e2-highmem-16 n2d-standard-32 n1-standard-16 n1-standard-32
asia-east1 0.109 0.869 1.172 2.191 1.232 2.464
asia-east2 0.131 1.050 1.417 2.648 1.489 2.978
asia-south1 0.113 0.901 1.216 1.249 1.278 2.556
asia-southeast1 0.116 0.926 1.249 2.335 1.313 2.625
asia-southeast2 0.126 1.009 1.361 N/A N/A N/A
asia-northeast1 0.12 0.963 1.298 2.428 1.366 2.733
asia-northeast2 0.12 0.963 1.298 2.428 1.366 2.733
asia-northeast3 0.12 0.963 1.298 N/A 1.367 2.733
australia-southeast1 0.133 1.065 1.436 2.686 1.51 3.02

Middle East

Region e2-standard-2 e2-standard-16 e2-highmem-16 n2d-standard-32 n1-standard-16 n1-standard-32
me-west1 0.103 0.826 1.114 2.082 N/A N/A

Vector Search pricing examples

Vector Search pricing is determined by the size of your data, the number of queries per second (QPS) you want to run, and the number of nodes you use. To get your estimated serving cost, you need to calculate your total data size. Your data size is the number of your embeddings/vectors * the number of dimensions you have * 4 bytes per dimension. After you have the size of your data, you can calculate the serving cost and the building cost. The serving cost plus the building cost equals your monthly total cost.

  • Serving cost: # replicas/shard * # shards (~data size/shard size) * hourly cost * 730 hours
  • Building cost: data size (in GiB) * $3/GiB * # of updates/month (see the sketch after this list)
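Here is a minimal sketch of these formulas, assuming the e2-standard-16 rate for us-central1 from the table above and one batch rebuild per month; the shard count is passed in directly because shard capacity depends on the machine type you deploy.

    def vector_search_monthly_cost(num_vectors: int, dims: int, shards: int,
                                   replicas_per_shard: int, node_hour_rate: float,
                                   updates_per_month: int = 1) -> float:
        """Serving cost plus building cost, following the formulas above."""
        data_gib = num_vectors * dims * 4 / 2**30                        # 4 bytes per dimension
        serving = replicas_per_shard * shards * node_hour_rate * 730     # ~730 hours per month
        building = data_gib * 3.00 * updates_per_month                   # $3.00 per GiB processed
        return serving + building

    # Matches one of the example rows further below: 20 million vectors, 256 dimensions
    # on one e2-standard-16 node in us-central1 ($0.75 per node hour):
    # ~$547 serving plus ~$57 for one monthly rebuild.
    print(round(vector_search_monthly_cost(20_000_000, 256, shards=1,
                                           replicas_per_shard=1, node_hour_rate=0.75)))  # ~605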

Streaming update: Vector Search uses heuristics-based metrics to determine when to trigger compaction. If the oldest uncompacted data is five days old, compaction is always triggered. You are billed for the cost of rebuilding the index at the same rate of a batch update, in addition to the streaming update costs.

Number of embeddings/vectors Number of dimensions Queries per second (QPS) Machine Type Nodes Estimated monthly serving cost
2 million 128 100 e2-standard-2 1 $68
20 million 256 1,000 e2-standard-16 1 $547
20 million 256 3,000 e2-standard-16 3 $1,642
100 million 256 500 e2-highmem-16 2 $1,477
1 billion 100 500 e2-highmem-16 8 $5,910

All examples are based on machine types in us-central1. The cost you incur will vary with recall rate and latency requirements. The estimated monthly serving cost is directly related to the number of nodes used in the console. To learn more about configuration parameters that affect cost, see Configuration parameters which affect recall and latency.

If you have high queries per second (QPS), batching these queries can reduce total costs by up to 30%-40%.

Vertex AI Model Registry

The Vertex AI Model Registry is a central repository that tracks and lists your models and model versions. You can import models into Vertex AI, and they appear in the Vertex AI Model Registry. There is no cost associated with having your models in the Model Registry. Cost is incurred only when you deploy the model to an endpoint or perform a batch prediction on the model. This cost is determined by the type of model you are deploying.

To learn more about pricing for deploying custom models from the Vertex AI Model Registry, see Custom-trained models. To learn more about pricing for deploying AutoML models, see Pricing for AutoML models.

Vertex AI Model Monitoring

Vertex AI enables you to monitor the continued effectiveness of your model after you deploy it to production. For more information, see Introduction to Vertex AI Model Monitoring.

When you use Vertex AI Model Monitoring, you are billed for the following:

  • $3.50 per GB for all data analyzed, including the training data provided and prediction data logged in a BigQuery table.
  • Charges for other Google Cloud products that you use with Model Monitoring, such as BigQuery storage or Batch Explain when attribution monitoring is enabled.

Vertex AI Model Monitoring is supported in the following regions: us-central1, europe-west4, asia-east1, and asia-southeast1. Prices are the same for all regions.

Data sizes are measured after they are converted to TFRecord format.

Training datasets incur a one-time charge when you set up a Vertex AI Model Monitoring job.

Prediction Datasets consist of logs collected from the Online Prediction service. As prediction requests arrive during different time windows, the data for each time window is collected and the sum of the data analyzed for each prediction window is used to calculate the charge.

Example: A data scientist runs model monitoring on the prediction traffic belonging to their model.

  • The model is trained from a BigQuery dataset. The data size after converting to TFRecord is 1.5 GB.
  • Prediction data logged between 1:00 - 2:00 p.m. is 0.1 GB, between 3:00 - 4:00 p.m. is 0.2 GB.
  • The total price for setting up the model monitoring job is:

    (1.5 GB * $3.50) + ((0.1 GB + 0.2 GB) * $3.50) = $6.30
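A minimal sketch of the calculation above, using the $3.50 per GB rate from this section:

    MONITORING_RATE = 3.50   # USD per GB of data analyzed

    def model_monitoring_cost(training_gb: float, prediction_window_gbs: list) -> float:
        """One-time training-data charge plus the sum of analyzed prediction windows."""
        return (training_gb + sum(prediction_window_gbs)) * MONITORING_RATE

    print(model_monitoring_cost(1.5, [0.1, 0.2]))   # 6.3, matching the example above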

Vertex AI Workbench

Select instances, managed notebooks, or user-managed notebooks for pricing information.

Instances


The tables below provide the approximate price per hour of various VM configurations. You can choose a custom configuration of selected machine types. To calculate pricing, sum the costs of the virtual machines you use.

If you use Compute Engine machine types and attach accelerators, the cost of the accelerators is separate. To calculate this cost, multiply the prices in the table of accelerators below by how many machine hours of each type of accelerator you use.

CPUs

Memory

Accelerators

Disks

Managed notebooks

User-managed notebooks

Additional Google Cloud resources

In addition to the costs mentioned previously, you also pay for any Google Cloud resources that you use. For example:

  • Data analysis services: You incur BigQuery costs when you issue SQL queries within a notebook (see BigQuery pricing).

  • Customer-managed encryption keys: You incur costs when you use customer-managed encryption keys. Each time your managed notebooks or user-managed notebooks instance uses a Cloud Key Management Service key, that operation is billed at the rate of Cloud KMS key operations (see Cloud Key Management Service pricing).

Deep Learning Containers, Deep Learning VM, and AI Platform Pipelines

For Deep Learning Containers, Deep Learning VM Images, and AI Platform Pipelines, pricing is calculated based on the compute and storage resources that you use. These resources are charged at the same rate you currently pay for Compute Engine and Cloud Storage.

In addition to the compute and storage costs, you also pay for any Google Cloud resources that you use. For example:

  • Data analysis services: You incur BigQuery costs when you issue SQL queries within a notebook (see BigQuery pricing).

  • Customer-managed encryption keys: You incur costs when you use customer-managed encryption keys. Each time your managed notebooks or user-managed notebooks instance uses a Cloud Key Management Service key, that operation is billed at the rate of Cloud KMS key operations (see Cloud Key Management Service pricing).

Data labeling

Vertex AI enables you to request human labeling for a collection of data that you plan to use to train a custom machine learning model. Prices for the service are computed based on the type of labeling task.

  • For regular labeling tasks, the prices are determined by the number of annotation units.
    • For an image classification task, units are determined by the number of images and the number of human labelers. For example, an image with 3 human labelers counts for 1 * 3 = 3 units. The price for single-label and multi-label classification is the same.
    • For an image bounding box task, units are determined by the number of bounding boxes identified in the images and the number of human labelers. For example, an image with 2 bounding boxes and 3 human labelers counts for 2 * 3 = 6 units. Images without bounding boxes are not charged.
    • For an image segmentation/rotated box/polyline/polygon task, units are determined in the same way as for an image bounding box task.
    • For a video classification task, units are determined by the video length (every 5 seconds is a price unit) and the number of human labelers. For example, a 25-second video with 3 human labelers counts for 25 / 5 * 3 = 15 units. The price for single-label and multi-label classification is the same.
    • For a video object tracking task, units are determined by the number of objects identified in the video and the number of human labelers. For example, a video with 2 objects and 3 human labelers counts for 2 * 3 = 6 units. Videos without objects are not charged.
    • For a video action recognition task, units are determined in the same way as for a video object tracking task.
    • For a text classification task, units are determined by text length (every 50 words is a price unit) and the number of human labelers. For example, one piece of text with 100 words and 3 human labelers counts for 100 / 50 * 3 = 6 units. The price for single-label and multi-label classification is the same.
    • For a text sentiment task, units are determined in the same way as a text classification task.
    • For a text entity extraction task, units are determined by text length (every 50 words is a price unit), the number of entities identified, and the number of human labelers. For example, a piece of text with 100 words, 2 entities identified, and 3 human labelers counts for 100 / 50 * 2 * 3 = 12 units. Text without entities is not charged.
  • For image/video/text classification and text sentiment tasks, human labelers may lose track of classes if the label set is too large. As a result, we send at most 20 classes to the human labelers at a time. For example, if the label set of a labeling task has 40 classes, each data item is sent for human review 40 / 20 = 2 times, and we charge 2 times the price (calculated above) accordingly. (A worked sketch of the unit and price calculation follows the pricing table below.)

  • For a labeling task that enables the custom labeler feature, each data item is counted as 1 custom labeler unit.

  • For an active learning labeling task for data items with annotations that are generated by models (without a human labeler's help), each data item is counted as 1 active learning unit.

  • For an active learning labeling task for data items with annotations that are generated by human labelers, each data item is counted as a regular labeling task as described above.

The table below provides the price per 1,000 units per human labeler, based on the unit listed for each objective. Tier 1 pricing applies to the first 50,000 units per month in each Google Cloud project; Tier 2 pricing applies to the next 950,000 units per month in the project, up to 1,000,000 units. Contact us for pricing above 1,000,000 units per month.

Data type Objective Unit Tier 1 Tier 2
Image Classification Image $35 $25
Bounding box Bounding box $63 $49
Segmentation Segment $870 $850
Rotated box Bounding box $86 $60
Polygon/polyline Polygon/Polyline $257 $180
Video Classification 5sec video $86 $60
Object tracking Bounding box $86 $60
Action recognition Event in 30sec video $214 $150
Text Classification 50 words $129 $90
Sentiment 50 words $200 $140
Entity extraction Entity $86 $60
Active Learning All Data item $80 $56
Custom Labeler All Data item $80 $56
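As a rough sketch of how the unit rules and tier prices above combine into a bill; the helper names are illustrative, and the 50,000-unit Tier 1 threshold applies per Google Cloud project per month as described above.

    def image_classification_units(num_images: int, labelers: int) -> int:
        return num_images * labelers                      # 1 unit per image per labeler

    def bounding_box_units(boxes_per_image: list, labelers: int) -> int:
        return sum(boxes_per_image) * labelers            # 1 unit per box per labeler

    def labeling_price(units: int, tier1_price: float, tier2_price: float) -> float:
        """Price from the table above (per 1,000 units): Tier 1 covers the first
        50,000 units in a month, Tier 2 the next 950,000 units."""
        tier1_units = min(units, 50_000)
        tier2_units = min(max(units - 50_000, 0), 950_000)
        return tier1_units / 1000 * tier1_price + tier2_units / 1000 * tier2_price

    # 60,000 image classification units at $35 (Tier 1) / $25 (Tier 2) per 1,000 units:
    print(labeling_price(60_000, 35, 25))   # 1750.0 + 250.0 = 2000.0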

Required use of Cloud Storage

In addition to the costs described in this document, you are required to store data and program files in Cloud Storage buckets during the Vertex AI lifecycle. This storage is subject to the Cloud Storage pricing policy.

Required use of Cloud Storage includes:

  • Staging your training application package for custom-trained models.

  • Storing your training input data.

  • Storing the output of your training jobs. Vertex AI does not require long-term storage of these items. You can remove the files as soon as the operation is complete.

Free operations for managing your resources

The resource management operations provided by AI Platform are available free of charge. The AI Platform quota policy does limit some of these operations.

Resource Free operations
models create, get, list, delete
versions create, get, list, delete, setDefault
jobs get, list, cancel
operations get, list, cancel, delete

Google Cloud costs

If you store images to be analyzed in Cloud Storage or use other Google Cloud resources in tandem with Vertex AI, then you will also be billed for the use of those services.

To view your current billing status in the Google Cloud console, including usage and your current bill, see the Billing page. For more details about managing your account, see the Cloud Billing Documentation or Billing and Payments Support.

What's next