Pricing

AI Platform offers scalable, flexible pricing options to fit your project and budget. AI Platform charges you for training your models and for getting predictions, but managing your machine learning resources in the cloud is free of charge.

Pricing overview

The following tables summarize the pricing for training and prediction in each region where AI Platform is available.

Training prices

The tables below provide the price per hour of various training configurations, as well as the number of training units (see note 2) used by each configuration. Training units measure the resource usage of your job: the price per hour of a machine configuration is the number of training units it uses multiplied by the region's price per training unit.

You can choose a predefined scale tier or a custom configuration of selected machine types. If you choose a custom configuration, sum the costs of the virtual machines you use.

Accelerator-enabled AI Platform machine types include the cost of the accelerators in their pricing. If you use Compute Engine machine types and attach accelerators, the cost of the accelerators is separate. To calculate this cost, multiply the prices in the table of accelerators below by how many of each type of accelerator you use.
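
As an illustration of how a custom configuration adds up, here is a minimal Python sketch that sums hypothetical virtual machine and accelerator choices using the Americas prices from the tables below; the specific machine types and counts are assumptions chosen only for this example.

# Illustrative only: summing the hourly price of a hypothetical custom
# configuration that uses Compute Engine machine types with attached
# accelerators (Americas prices taken from the tables below).

VM_PRICE_PER_HOUR = {"n1-standard-8": 0.38, "n1-highmem-8": 0.4736}
ACCELERATOR_PRICE_PER_HOUR = {"NVIDIA_TESLA_V100": 2.48}

def custom_config_price_per_hour(vms, accelerators):
    """vms and accelerators map a type name to how many of it are used."""
    vm_cost = sum(VM_PRICE_PER_HOUR[name] * n for name, n in vms.items())
    accelerator_cost = sum(ACCELERATOR_PRICE_PER_HOUR[name] * n
                           for name, n in accelerators.items())
    return vm_cost + accelerator_cost

# Example: three n1-standard-8 VMs, two of them with a V100 attached.
price = custom_config_price_per_hour(
    vms={"n1-standard-8": 3}, accelerators={"NVIDIA_TESLA_V100": 2})
print(f"${price:.2f} per hour")  # 3 * $0.38 + 2 * $2.48 = $6.10 per hour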

Americas

The cost of a training job in all available Americas regions is $0.49 per hour, per training unit.

Predefined scale tiers - price per hour (training units)
BASIC $0.1900 (0.3878)
STANDARD_1 $1.9880 (4.0571)
PREMIUM_1 $16.5536 (33.7829)
BASIC_GPU $0.8300 (1.6939)
BASIC_TPU $4.6900 (9.5714)
CUSTOM If you select CUSTOM as your scale tier, you have control over the number and type of virtual machines used for your training job. See the table of machine types.
AI Platform machine types - price per hour (training units)
standard $0.1900 (0.3878)
large_model $0.4736 (0.9665)
complex_model_s $0.2836 (0.5788)
complex_model_m $0.5672 (1.1576)
complex_model_l $1.1344 (2.3151)
standard_gpu $0.8300 (1.6939)
complex_model_m_gpu $2.5600 (5.2245)
complex_model_l_gpu $3.3200 (6.7755)
standard_p100 $1.8400 (3.7551)
complex_model_m_p100 $6.6000 (13.4694)
standard_v100 $2.8600 (5.8367)
large_model_v100 $2.9536 (6.0278)
complex_model_m_v100 $10.6800 (21.7959)
complex_model_l_v100 $21.3600 (43.5918)
cloud_tpu (see note 6) $4.5000 (9.1840), or N/A if accelerators are attached explicitly
Compute Engine machine types (Beta) - price per hour (training units)
n1-standard-4 $0.1900 (0.3878)
n1-standard-8 $0.3800 (0.7755)
n1-standard-16 $0.7600 (1.5510)
n1-standard-32 $1.5200 (3.1020)
n1-standard-64 $3.0400 (6.2041)
n1-standard-96 $4.5600 (9.3061)
n1-highmem-2 $0.1184 (0.2416)
n1-highmem-4 $0.2368 (0.4833)
n1-highmem-8 $0.4736 (0.9665)
n1-highmem-16 $0.9472 (1.9331)
n1-highmem-32 $1.8944 (3.8661)
n1-highmem-64 $3.7888 (7.7322)
n1-highmem-96 $5.6832 (11.5984)
n1-highcpu-16 $0.5672 (1.1576)
n1-highcpu-32 $1.1344 (2.3151)
n1-highcpu-64 $2.2688 (4.6302)
n1-highcpu-96 $3.4020 (6.9429)
Accelerators - price per hour (training units)
NVIDIA_TESLA_K80 $0.4500 (0.9184)
NVIDIA_TESLA_P4 (Beta) $0.6000 (1.2245)
NVIDIA_TESLA_P100 $1.4600 (2.9796)
NVIDIA_TESLA_V100 $2.4800 (5.0612)
Eight TPU_V2 cores (see note 6) $4.5000 (9.1840)

Europe

The cost of a training job in all available Europe regions is $0.54 per hour, per training unit.

Predefined scale tiers - price per hour (training units)
BASIC $0.2200 (0.4074)
STANDARD_1 $2.3020 (4.2630)
PREMIUM_1 $19.1640 (35.4889)
BASIC_GPU $0.9300 (1.7222)
BASIC_TPU (Not available)
CUSTOM If you select CUSTOM as your scale tier, you have control over the number and type of virtual machines used for your training job. See the table of machine types.
AI Platform machine types - price per hour (training units)
standard $0.2200 (0.4074)
large_model $0.5480 (1.0148)
complex_model_s $0.3284 (0.6081)
complex_model_m $0.6568 (1.2163)
complex_model_l $1.3136 (2.4326)
standard_gpu $0.9300 (1.7222)
complex_model_m_gpu $2.8400 (5.2593)
complex_model_l_gpu $3.7200 (6.8889)
standard_p100 $2.0400 (3.7778)
complex_model_m_p100 $7.2800 (13.4815)
standard_v100 $2.9684 (5.4970)
large_model_v100 $3.0708 (5.6867)
complex_model_m_v100 $11.0368 (20.4385)
complex_model_l_v100 $22.0736 (40.8770)
cloud_tpu (see note 6) (Not available)
Compute Engine machine types (Beta) - price per hour (training units)
n1-standard-4 $0.2200 (0.4074)
n1-standard-8 $0.4400 (0.8148)
n1-standard-16 $0.8800 (1.6296)
n1-standard-32 $1.7600 (3.2593)
n1-standard-64 $3.5200 (6.5185)
n1-standard-96 $5.2800 (9.7778)
n1-highmem-2 $0.1370 (0.2537)
n1-highmem-4 $0.2740 (0.5074)
n1-highmem-8 $0.5480 (1.0148)
n1-highmem-16 $1.0960 (2.0296)
n1-highmem-32 $2.1920 (4.0593)
n1-highmem-64 $4.3840 (8.1185)
n1-highmem-96 $6.5760 (12.1778)
n1-highcpu-16 $0.6568 (1.2163)
n1-highcpu-32 $1.3136 (2.4326)
n1-highcpu-64 $2.6272 (4.8652)
n1-highcpu-96 $3.9408 (7.2978)
Accelerators - price per hour (training units)
NVIDIA_TESLA_K80 $0.4900 (0.9074)
NVIDIA_TESLA_P4 (Beta) $0.6500 (1.2037)
NVIDIA_TESLA_P100 $1.6000 (2.9630)
NVIDIA_TESLA_V100 $2.5500 (4.7222)
Eight TPU_V2 cores (see note 6) (Not available)

Asia Pacific

The cost of a training job in all available Asia Pacific regions is $0.54 per hour, per training unit.

Predefined scale tiers - price per hour (training units)
BASIC $0.2200 (0.4074)
STANDARD_1 $2.3020 (4.2630)
PREMIUM_1 $19.1640 (35.4889)
BASIC_GPU $0.9300 (1.7222)
BASIC_TPU (Not available)
CUSTOM If you select CUSTOM as your scale tier, you have control over the number and type of virtual machines used for your training job. See the table of machine types.
AI Platform machine types - price per hour (training units)
standard $0.2200 (0.4074)
large_model $0.5480 (1.0148)
complex_model_s $0.3284 (0.6081)
complex_model_m $0.6568 (1.2163)
complex_model_l $1.3136 (2.4326)
standard_gpu $0.9300 (1.7222)
complex_model_m_gpu $2.8400 (5.2593)
complex_model_l_gpu $3.7200 (6.8889)
standard_p100 $2.0400 (3.7778)
complex_model_m_p100 $7.2800 (13.4815)
standard_v100 $2.9684 (5.4970)
large_model_v100 $3.0708 (5.6867)
complex_model_m_v100 $11.0368 (20.4385)
complex_model_l_v100 $22.0736 (40.8770)
cloud_tpu (see note 6) (Not available)
Compute Engine machine types (Beta) - price per hour (training units)
n1-standard-4 $0.2200 (0.4074)
n1-standard-8 $0.4400 (0.8148)
n1-standard-16 $0.8800 (1.6296)
n1-standard-32 $1.7600 (3.2593)
n1-standard-64 $3.5200 (6.5185)
n1-standard-96 $5.2800 (9.7778)
n1-highmem-2 $0.1370 (0.2537)
n1-highmem-4 $0.2740 (0.5074)
n1-highmem-8 $0.5480 (1.0148)
n1-highmem-16 $1.0960 (2.0296)
n1-highmem-32 $2.1920 (4.0593)
n1-highmem-64 $4.3840 (8.1185)
n1-highmem-96 $6.5760 (12.1778)
n1-highcpu-16 $0.6568 (1.2163)
n1-highcpu-32 $1.3136 (2.4326)
n1-highcpu-64 $2.6272 (4.8652)
n1-highcpu-96 $3.9408 (7.2978)
Accelerators - price per hour (training units)
NVIDIA_TESLA_K80 $0.4900 (0.9074)
NVIDIA_TESLA_P4 (Beta) (Not available)
NVIDIA_TESLA_P100 $1.6000 (2.9630)
NVIDIA_TESLA_V100 $2.5500 (4.7222)
Eight TPU_V2 cores (see note 6) (Not available)

Prediction prices

This table provides the prices of batch prediction and online prediction per node hour. A node hour represents the time your prediction job spends running on a virtual machine. Read more about node hours.

Americas

Prediction - price per node hour
Batch prediction $0.0791
Online prediction machine types - price per node hour
mls1-c1-m2 (default) $0.0401
mls1-c4-m2 (Beta) $0.1349

Europe

Prediction - price per node hour
Batch prediction $0.0861
Online prediction machine types - price per node hour
mls1-c1-m2 (default) $0.0441
mls1-c4-m2 (Beta) $0.1484

Asia Pacific

Prediction - price per node hour
Batch prediction $0.0861
Online prediction machine types - price per node hour
mls1-c1-m2 (default) $0.0515
mls1-c4-m2 (Beta) $0.1733

Notes:

  1. All use is subject to the AI Platform quota policy.
  2. Note the difference between the training units used on this page and the Consumed ML units shown on your Job Details page: Consumed ML units already have the job duration factored in. See the details below.
  3. You are required to store your data and program files in Google Cloud Storage buckets during the AI Platform lifecycle. See more about Cloud Storage usage.
  4. For volume-based discounts, contact the Sales team.
  5. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
  6. The cloud_tpu machine type currently provides a TPU v2 device with 8 cores, regardless of whether you explicitly attach accelerators to your configuration. The price is the same either way.

The pricing calculator

Use the pricing calculator to estimate your training and prediction costs.

More about training costs

You are charged for training your models in the cloud:

  • In one-minute increments.
  • At the price per hour shown in the tables above. The price per hour is the region's base price multiplied by the number of training units, which is determined by the processing configuration you choose when you start your training job.
  • With a minimum of 10 minutes per training job.
  • From the moment when resources are provisioned for a job until the job finishes.
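
A minimal sketch of how the rules above combine, assuming partial minutes are rounded up to the next whole minute (as in the batch prediction example later on this page):

import math

def billed_training_cost(price_per_hour, duration_minutes):
    # Billed in one-minute increments with a 10-minute minimum per job;
    # partial minutes are assumed to be rounded up.
    billed_minutes = max(10, math.ceil(duration_minutes))
    return (price_per_hour / 60) * billed_minutes

# STANDARD_1 in an Americas region ($1.9880 per hour) running for 15 minutes:
print(round(billed_training_cost(1.9880, 15), 2))  # 0.5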

Scale tiers for predefined configurations

You can control the type of processing cluster to use when training your model. The simplest way is to choose from one of the predefined configurations called scale tiers. Read more about scale tiers.

Machine types for custom configurations

If you select CUSTOM as your scale tier, you have control over the number and type of virtual machines to use for the cluster's master, workers, and parameter servers. Read more about machine types.

The cost of training with a custom processing cluster is the sum of the costs of all the machines you specify. You are charged for the total running time of the job, not for the active processing time of individual machines.

Examples: Calculate training cost using training units

Use training units to calculate the cost of your training job, with the following formula:

(training units * base price / 60) * job duration in minutes

Examples:

  • A data scientist in an Americas region runs a training job and selects the STANDARD_1 scale tier, which uses 4.0571 training units. Their job takes 15 minutes:

    (4.0571 training units * $0.49 per hour / 60) * 15 minutes
    

    For a total of $0.50 for the job.

  • A computer science professor in an Americas region runs a training job using the CUSTOM scale tier. They have a very large model, so they want to take advantage of the large model VMs for their parameter servers. They configure their processing cluster like this:

    • A complex_model_s machine for their master (0.5788 training units).
    • 5 parameter servers on large_model VMs (5 @ 0.9665 = 4.8325 training units).
    • 8 workers on complex_model_s VMs (8 @ 0.5788 = 4.6304 training units).

    Their job runs for 2 hours and 26 minutes:

    (10.0417 training units * $0.49 per hour / 60) * 146 minutes
    

    For a total of $11.97 for the job.
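
The same calculation can be expressed as a short Python sketch; the figures below reproduce the CUSTOM-tier example above using the Americas base price.

AMERICAS_BASE_PRICE = 0.49  # dollars per training unit, per hour

def training_cost(training_units, duration_minutes, base_price=AMERICAS_BASE_PRICE):
    # (training units * base price / 60) * job duration in minutes
    return (training_units * base_price / 60) * duration_minutes

# complex_model_s master + 5 large_model parameter servers + 8 complex_model_s workers
units = 0.5788 + 5 * 0.9665 + 8 * 0.5788  # 10.0417 training units
print(round(training_cost(units, 146), 2))  # 11.97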

Examples: Calculate training cost using price per hour

Instead of training units, you can use the price per hour shown in the above table. The formula is as follows:

(Price per hour / 60) * job duration in minutes

Examples:

  • A data scientist in an Americas region runs a training job and selects the STANDARD_1 scale tier. Their job takes 15 minutes:

    ($1.9880 per hour / 60) * 15 minutes
    

    For a total of $0.50 for the job.

  • A computer science professor in an Americas region runs a training job using the CUSTOM scale tier. They have a very large model, so they want to take advantage of the large model VMs for their parameter servers. They configure their processing cluster like this:

    • A complex_model_s machine for their master ($0.2836).
    • 5 parameter servers on large_model VMs (5 @ $0.4736 = $2.3680).
    • 8 workers on complex_model_s VMs (8 @ $0.2836 = $2.2688).

    Their job runs for 2 hours and 26 minutes:

    (($0.2836 + $2.3680 + $2.2688) per hour / 60) * 146 minutes
    

    For a total of $11.97 for the job.
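
The equivalent sketch using the price-per-hour column arrives at the same total.

# Same CUSTOM-tier example, using the price-per-hour column instead of training units.
price_per_hour = 0.2836 + 5 * 0.4736 + 8 * 0.2836  # $4.9204 per hour
print(round((price_per_hour / 60) * 146, 2))  # 11.97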

Examples: Calculate training cost using "Consumed ML units"

The Consumed ML units (Consumed Machine Learning units) shown on your Job details page are equivalent to training units with the duration of the job factored in. When using Consumed ML units in your calculations, use the following formula:

Consumed ML units * price per training unit hour ($0.49 in Americas regions)

Example:

  • A data scientist in an Americas region runs a training job. The field Consumed ML units on their Job details page shows 55.75. The calculation is as follows:

    55.75 consumed ML units * $0.49
    

    For a total of $27.32 for the job.

To find your Job details page, go to the Jobs list and click the link for a specific job.

More about prediction costs

Prediction pricing applies to requests made to trained model versions hosted by AI Platform.

You are charged:

  • For the time used on each node in the processing cluster that performs the predictions.
  • In one-minute increments.
  • Based on a price per node hour as shown in the above table.
  • With a minimum of 10 minutes per prediction job.

Node hours

The online processing resources that AI Platform uses to run your model for prediction are called nodes. You can think of a node as a virtual machine. AI Platform scales the number of nodes it uses to accommodate the work for both online and batch prediction.

You are charged for the time that your model is running on a node, including:

  • When processing a batch prediction job.
  • When processing an online prediction request.
  • When your model is in a ready state for serving online predictions.

For batch prediction:

  • The priority of the scaling is to reduce the total elapsed time of the job.
  • Scaling should have little effect on the price of your job, though there is some overhead involved in bringing up a new node.
  • You can affect scaling by setting a maximum number of nodes to use for a batch prediction job, and by setting the number of nodes to keep running for a model when you deploy it.

For online prediction:

  • The priority of the scaling is to reduce the latency of individual requests.
  • The service keeps your model in a ready state for a few idle minutes after servicing a request.
  • Scaling affects your total charges each month: the more numerous and frequent your requests, the more nodes will be used.
  • You can choose to let the service scale in response to traffic (automatic scaling) or you can specify a number of nodes to run constantly to avoid latency.
  • If you choose automatic scaling, the number of nodes scales with traffic and can scale down to zero during periods with no traffic.
  • If you choose to specify a number of nodes rather than automatic scaling, you are charged for all of the time that the nodes are running, starting at the time of deployment and persisting until you delete the model version.

Note that online prediction uses single-core machines with no GPUs or other accelerators.

You can learn more about node allocation and scaling.
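
If you specify a fixed number of nodes rather than automatic scaling, a rough monthly estimate can be sketched as follows; the 30-day month and the node count are assumptions chosen only for illustration.

def fixed_node_monthly_cost(num_nodes, price_per_node_hour, hours_in_month=30 * 24):
    # Nodes you specify manually are billed for the whole time they run,
    # from deployment until you delete the model version.
    return num_nodes * price_per_node_hour * hours_in_month

# Two mls1-c1-m2 nodes in an Americas region ($0.0401 per node hour):
print(round(fixed_node_monthly_cost(2, 0.0401), 2))  # 57.74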

Examples of prediction calculations

Use the formula below to calculate your prediction cost for a month:

(Price per hour / 60) * job duration in node minutes

Example:

  • A real-estate company in an Americas region runs a weekly prediction of housing values in the areas they serve. In one month, they run four weekly jobs with batches of 3920, 4277, 3849, and 3961 predictions. Each prediction takes an average of 0.72 node seconds of processing.

    Processing is charged per job (this example uses the average time per prediction; actual charges are based on the exact processing time of each job):

    3920 * 0.72 node seconds / 60 = 47.04 node minutes
    4277 * 0.72 node seconds / 60 = 51.324 node minutes
    3849 * 0.72 node seconds / 60 = 46.188 node minutes
    3961 * 0.72 node seconds / 60 = 47.532 node minutes
    

    Each job exceeds the ten-minute minimum, so it is charged per minute of processing, with partial minutes rounded up to the next whole minute:

    ($0.0791 / 60) * 48 = $0.06328
    ($0.0791 / 60) * 52 = $0.06855
    ($0.0791 / 60) * 47 = $0.06196
    ($0.0791 / 60) * 48 = $0.06328
    

    For a total charge for the month of $0.26.
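
A short Python sketch that reproduces this monthly calculation, rounding each job's node time up to whole minutes:

import math

BATCH_PRICE_PER_NODE_HOUR = 0.0791  # Americas batch prediction rate
AVG_NODE_SECONDS_PER_PREDICTION = 0.72

weekly_batch_sizes = [3920, 4277, 3849, 3961]

total = 0.0
for batch_size in weekly_batch_sizes:
    node_minutes = math.ceil(batch_size * AVG_NODE_SECONDS_PER_PREDICTION / 60)
    total += (BATCH_PRICE_PER_NODE_HOUR / 60) * node_minutes  # charged per minute

print(round(total, 2))  # 0.26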

The number of minutes shown in the examples does not correspond to actual elapsed time. Both batch and online prediction use one or more machines to process the data. Therefore, the actual elapsed time is typically shorter than the time expressed in node hours or minutes.

Required use of Google Cloud Storage

In addition to the costs described in this document, you are required to store data and program files in Google Cloud Storage buckets during the AI Platform lifecycle. This storage is subject to the Cloud Storage pricing policy.

Required use of Cloud Storage includes:

  • Staging your training application package.

  • Storing your training input data.

  • Staging your model files when you are ready to deploy a model version.

  • Storing your input data for batch prediction.

  • Storing the output of your batch prediction jobs. AI Platform does not require long-term storage of these items. You can remove the files as soon as the operation is complete.

  • Storing the output of your training jobs. AI Platform does not require long-term storage of these items. You can remove the files as soon as the operation is complete.

Free operations for managing your resources

The resource management operations provided by AI Platform are available free of charge. The AI Platform quota policy does limit some of these operations.

Resource - free operations
models: create, get, list, delete
versions: create, get, list, delete, setDefault
jobs: get, list, cancel
operations: get, list, cancel, delete
