Archived release notes

You can see the latest product updates for all of Google Cloud on the Google Cloud page, browse and filter all release notes in the Google Cloud console, or programmatically access release notes in BigQuery.

On April 10, 2019, Cloud Machine Learning Engine became AI Platform Training and AI Platform Prediction. This page documents historical updates to Cloud ML Engine.

See the current release notes for AI Platform Training and AI Platform Prediction.

April 1, 2019

Cloud ML Engine now offers reduced pricing for training, online prediction, and batch prediction.

Learn more about Cloud ML Engine pricing.

March 28, 2019

Cloud ML Engine now offers training with built-in algorithms. You can submit your data for automatic preprocessing and train a model using the TensorFlow linear learner, TensorFlow wide and deep, or XGBoost algorithms without writing any training code.

Learn more about training with built-in algorithms.

March 25, 2019

Cloud ML Engine runtime version 1.13 now supports TensorFlow 1.13.1. View the runtime version list for the full list of packages included in runtime version 1.13.

March 8, 2019

Support for training with TPUs in Cloud ML Engine runtime version 1.9 ended on March 8, 2019. See the currently supported versions in the runtime version list.

March 6, 2019

Cloud ML Engine runtime version 1.13 is now available for training and prediction. This version supports TensorFlow 1.13 and includes other packages as listed in the runtime version list.

Training with TPUs is not supported in runtime version 1.13 at this time.

March 1, 2019

Notebooks is now available in beta. Notebooks enables you to create and manage virtual machine (VM) instances that are pre-packaged with JupyterLab and a suite of deep learning software.

Visit the introduction to user-managed notebooks and the guide to creating a new instance to learn more.

February 13, 2019

Cloud TPU is now generally available for training TensorFlow models. Tensor Processing Units (TPUs) are Google's custom-developed accelerators for machine-learning workloads.

See how to use TPUs to train your models on Cloud ML Engine, and read more about their pricing.
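
For illustration, here is a minimal sketch of submitting a TPU training job through the Cloud ML Engine REST API using the Google API Python client. The project ID, bucket paths, and trainer module are placeholders, and the runtime version must be one that supports TPUs (see the runtime version list).

    from googleapiclient import discovery

    # Build a client for the Cloud ML Engine (ml, v1) API.
    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'tpu_training_example',  # hypothetical job name
        'trainingInput': {
            'scaleTier': 'BASIC_TPU',  # one master VM plus one Cloud TPU
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],  # placeholder package
            'pythonModule': 'trainer.task',  # placeholder module
            'region': 'us-central1',
            'runtimeVersion': '1.12',  # a runtime version with TPU support
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()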

February 7, 2019

Training with custom containers is now available in Beta. This feature allows you to run your training application on Cloud ML Engine using a custom Docker image. You can build your custom container with the ML frameworks of your choice. Get started with training a PyTorch model by using custom containers.
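
As a rough sketch (the project, image URI, and job name below are placeholders, and the exact Beta surface may differ slightly), a custom-container job points the master replica at your Docker image instead of a Python package:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'custom_container_example',  # hypothetical job name
        'trainingInput': {
            'scaleTier': 'BASIC',
            'masterConfig': {
                # Placeholder image built with the ML framework of your choice.
                'imageUri': 'gcr.io/my-project/pytorch-trainer:latest',
            },
            'region': 'us-central1',
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()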

You can now configure training jobs with certain Compute Engine machine types. This provides additional flexibility for allocating computing resources to your training jobs. This feature is available in Beta.

When you configure your job with Compute Engine machine types, you can attach a custom set of GPUs.

Read more about Compute Engine machine types, GPU attachments, and their pricing.
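
For example, a sketch of a training job that requests a Compute Engine machine type for the master replica and attaches GPUs to it might look like the following; the machine type, GPU type and count, and paths are illustrative assumptions.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'gce_machine_type_example',  # hypothetical job name
        'trainingInput': {
            'scaleTier': 'CUSTOM',
            'masterType': 'n1-standard-8',  # a Compute Engine machine type
            'masterConfig': {
                # Attach a custom set of GPUs to the master replica.
                # The count is an int64 field, so it is passed as a string.
                'acceleratorConfig': {'count': '2', 'type': 'NVIDIA_TESLA_P4'},
            },
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],  # placeholder package
            'pythonModule': 'trainer.task',  # placeholder module
            'region': 'us-central1',
            'runtimeVersion': '1.12',
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()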

P4 GPUs are now in Beta for training. For more information, see the guides to using GPUs, their regional availability, and their pricing.

February 1, 2019

Quad-core CPUs are now available in Beta for online prediction. The machine type names have changed, and pricing has been updated.

  • Set machineType on projects.models.versions.create to specify the machine type to use for serving. Use mls1-c4-m2 for quad-core CPUs. The default is the single-core CPU, mls1-c1-m2. See the example after this list.
  • The following machine type names used in Alpha are deprecated: mls1-highmem-1 and mls1-highcpu-4.
  • For more information, see the guide to online prediction.
  • See the updated pricing for serving machine types.
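
A minimal sketch of creating a version that serves on the quad-core machine type, using the Google API Python client; the project, model, version name, and deployment path are placeholders.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    version = {
        'name': 'v_quad_core',  # hypothetical version name
        'deploymentUri': 'gs://my-bucket/model/',  # placeholder SavedModel location
        'runtimeVersion': '1.12',
        'machineType': 'mls1-c4-m2',  # quad-core CPU; the default is mls1-c1-m2
    }
    ml.projects().models().versions().create(
        parent='projects/my-project/models/my_model', body=version).execute()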

January 25, 2019

Online prediction is now available in the us-east4 region. See the guide to region availability.

January 10, 2019

V100 GPUs are now generally available for training. For more information, see the guides to using GPUs and pricing.

December 19, 2018

The Cloud ML Engine runtime versions 1.11 and 1.12 are now available for training and prediction. These versions support TensorFlow 1.11 and 1.12 respectively, and other packages as listed in the runtime version list.

TPU training support has been added for Cloud ML Engine runtime versions 1.11 and 1.12. Version 1.10 is not supported. See the currently supported versions in the runtime version list.

Cloud ML Engine runtime versions 1.4 and later now include joblib.

October 26, 2018

Support for training with TPUs in Cloud ML Engine runtime version 1.8 ended on October 26, 2018. See the currently supported versions in the runtime version list.

October 11, 2018

Cloud ML Engine runtime version 1.11 has been rolled back due to errors caused by a cuDNN version mismatch during GPU training. The current workaround is to use runtime version 1.10. For more details, see the runtime version list.

October 5, 2018

The Cloud ML Engine runtime version 1.11 is now available for training and prediction. This version supports TensorFlow 1.11 and other packages as listed in the runtime version list.

August 31, 2018

The Cloud ML Engine runtime version 1.10 is now available for training and prediction. This version supports TensorFlow 1.10 and other packages as listed in the runtime version list.

August 27, 2018

V100 GPUs are now in Beta for training. Using V100 GPUs now incurs charges. For more information, see the guides to using GPUs and pricing.

P100 GPUs are now generally available for training. For more information, see the guides to using GPUs and pricing.

Two new regions, us-west1 and europe-west4, are now available for training. See the regions page for more information.

August 24, 2018

Support for training with TPUs in Cloud ML Engine runtime version 1.7 ended on August 24, 2018. See the currently supported versions in the runtime version list.

August 9, 2018

We're delighted to announce significant price reductions for online prediction with Cloud ML Engine.

The following table shows the previous pricing and the new pricing:

Region            Previous price per node per hour    New price per node per hour
US                $0.30 USD                           $0.056 USD
Europe            $0.348 USD                          $0.061 USD
Asia Pacific      $0.348 USD                          $0.071 USD

See the pricing guide for details.

August 8, 2018

We're delighted to announce promotional pricing for Cloud TPU with Cloud ML Engine, resulting in significant price reductions.

The following table shows the previous pricing and the new pricing:

Configuration (US region)                  Previous price per TPU per hour    New price per TPU per hour
Scale tier: BASIC_TPU (Beta)               $9.7674 USD                        $6.8474 USD
Custom machine type: cloud_tpu (Beta)      $9.4900 USD                        $6.5700 USD

Note that the table shows pricing in the US region only. There is no change in Cloud TPU availability on Cloud ML Engine. See the pricing guide for details.

August 6, 2018

The Cloud ML Engine runtime version 1.9 is now available for training and prediction. This version supports TensorFlow 1.9 and other packages as listed in the runtime version list.

July 23, 2018

Cloud ML Engine now supports scikit-learn and XGBoost for training. This feature is generally available. See the guide to training with scikit-learn and XGBoost on Cloud ML Engine.

Online prediction support for scikit-learn and XGBoost is now generally available.

July 12, 2018

You can add labels to your Cloud ML Engine resources (jobs, models, and model versions) and then use those labels to organize the resources into categories. Labels are also available on operations; in this case, the labels are derived from the resource to which the operation applies. Read more about adding and using labels.
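
As a short sketch, labels are set as key-value pairs on the resource when you create it; the project, package, and label keys and values below are examples only.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'labeled_job_example',  # hypothetical job name
        'labels': {'team': 'research', 'phase': 'experiment'},  # example labels
        'trainingInput': {
            'scaleTier': 'BASIC',
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],  # placeholder package
            'pythonModule': 'trainer.task',  # placeholder module
            'region': 'us-central1',
            'runtimeVersion': '1.8',
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()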

June 26, 2018

The following additional regions are now fully available:

  • us-east1
  • asia-northeast1

See more details about region availability.

June 13, 2018

Support for training with TPUs in Cloud ML Engine runtime version 1.6 ended on June 13, 2018. See the currently supported versions in the runtime version list.

May 29, 2018

You can now use Cloud TPU (Beta) with TensorFlow 1.8 and Cloud ML Engine runtime version 1.8.

Background information: Cloud TPU became available in Cloud ML Engine on May 14 with runtime versions 1.6 and 1.7. Runtime version 1.8 was released last week, but Cloud TPU was not yet available with TensorFlow 1.8 at that time. Now it is. See how to use TPUs to train your models on Cloud ML Engine.

May 16, 2018

The Cloud ML Engine runtime version 1.8 is now available for training and prediction. This version supports TensorFlow 1.8 and other packages as listed in the runtime version list.

May 15, 2018

You can now update the minimum number of nodes for autoscaling on an existing model version, as well as specify the attribute when creating a new version.
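
A sketch of updating the autoscaling minimum on an existing version with projects.models.versions.patch; the resource name is a placeholder, and this assumes autoScaling.minNodes is the field exposed through the update mask.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    # Keep at least two nodes allocated for the existing version.
    ml.projects().models().versions().patch(
        name='projects/my-project/models/my_model/versions/v1',  # placeholder
        updateMask='autoScaling.minNodes',
        body={'autoScaling': {'minNodes': 2}},
    ).execute()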

May 14, 2018

Cloud ML Engine now offers Cloud TPU (Beta) for training TensorFlow models. Tensor Processing Units (TPUs) are Google's custom-developed ASICs, used to accelerate machine-learning workloads. See how to use TPUs to train your models on Cloud ML Engine.

April 26, 2018

The Cloud ML Engine runtime version 1.7 is now available for training and prediction. This version supports TensorFlow 1.7 and other packages as listed in the runtime version list.

April 16, 2018

Hyperparameter algorithms: When tuning the hyperparameters in your training job, you can now specify a search algorithm in the HyperparameterSpec. Available values are:

  • GRID_SEARCH: A simple grid search within the feasible space. This option is particularly useful if you want to specify a number of trials that is more than the number of points in the feasible space. In such cases, if you do not specify a grid search, the Cloud ML Engine default algorithm may generate duplicate suggestions. To use grid search, all parameters must be of type INTEGER, CATEGORICAL, or DISCRETE.
  • RANDOM_SEARCH: A simple random search within the feasible space.

If you do not specify an algorithm, your job uses the default Cloud ML Engine algorithm, which drives the parameter search to arrive at the optimal solution with a more effective search over the parameter space. For more about hyperparameter tuning, see the hyperparameter tuning overview.
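
As an illustration, a HyperparameterSpec that requests grid search over two parameters could be embedded in the training input as follows; the project, package, metric tag, and parameter names and values are placeholders.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'grid_search_example',  # hypothetical job name
        'trainingInput': {
            'scaleTier': 'BASIC',
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],  # placeholder package
            'pythonModule': 'trainer.task',  # placeholder module
            'region': 'us-central1',
            'runtimeVersion': '1.6',
            'hyperparameters': {
                'algorithm': 'GRID_SEARCH',  # or RANDOM_SEARCH
                'goal': 'MAXIMIZE',
                'hyperparameterMetricTag': 'accuracy',  # placeholder metric name
                'maxTrials': 6,
                'maxParallelTrials': 3,
                # For GRID_SEARCH, all parameters must be INTEGER, CATEGORICAL, or DISCRETE.
                'params': [
                    {'parameterName': 'batch-size', 'type': 'DISCRETE',
                     'discreteValues': [32, 64, 128]},
                    {'parameterName': 'optimizer', 'type': 'CATEGORICAL',
                     'categoricalValues': ['adam', 'sgd']},
                ],
            },
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()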

April 5, 2018

Cloud ML Engine now supports scikit-learn and XGBoost for online prediction. This feature is in Beta.

  • Set framework on projects.models.versions.create to specify your machine learning framework when creating a model version. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. The default is TENSORFLOW. If you specify SCIKIT_LEARN or XGBOOST, you must also set runtimeVersion to 1.4 or greater on the model version. See the example after this entry.
  • See the guide to scikit-learn and XGBoost on Cloud ML Engine.

Python 3.5 is now available for online prediction.
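
A minimal sketch of deploying a scikit-learn model for online prediction under these settings; the project, model, version name, and deployment path are placeholders, and the pythonVersion value shown is an assumption for illustrating Python 3.5 serving.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    version = {
        'name': 'v_sklearn',  # hypothetical version name
        'deploymentUri': 'gs://my-bucket/sklearn-model/',  # placeholder model directory
        'framework': 'SCIKIT_LEARN',  # or XGBOOST; the default is TENSORFLOW
        'runtimeVersion': '1.4',  # must be 1.4 or greater for these frameworks
        'pythonVersion': '3.5',  # assumed field for requesting Python 3.5 serving
    }
    ml.projects().models().versions().create(
        parent='projects/my-project/models/my_model', body=version).execute()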

March 20, 2018

The Cloud ML Engine runtime version 1.6 is now available for training and prediction. This version supports TensorFlow 1.6 and other packages as listed in the runtime version list.

March 13, 2018

The Cloud ML Engine runtime version for TensorFlow 1.5 is now available for training and prediction. For more information, see the Runtime Version List.

February 8, 2018

Added new features for hyperparameter tuning: automated early stopping of trials, resuming a previous hyperparameter tuning job, and additional efficiency optimizations when you run similar jobs. For more information, see the hyperparameter tuning overview.
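
As a hedged sketch, these options correspond to fields on the HyperparameterSpec; the field names enableTrialEarlyStopping and resumePreviousJobId reflect our understanding of the API surface, and the job ID and parameter values are placeholders.

    # Sketch of the 'hyperparameters' block inside a job's trainingInput.
    hyperparameters = {
        'goal': 'MAXIMIZE',
        'maxTrials': 20,
        'maxParallelTrials': 2,
        'enableTrialEarlyStopping': True,  # automatically stop unpromising trials
        'resumePreviousJobId': 'previous_tuning_job',  # placeholder: job to resume from
        'params': [
            {'parameterName': 'learning-rate', 'type': 'DOUBLE',
             'minValue': 0.0001, 'maxValue': 0.1, 'scaleType': 'UNIT_LOG_SCALE'},
        ],
    }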

December 14, 2017

The Cloud ML Engine runtime version for TensorFlow 1.4 is now available for training and prediction. For more information, see the Runtime Version List.

Python 3 is now available for training as part of the Cloud ML Engine runtime version for TensorFlow 1.4. For more information, see the Runtime Version List.

Online prediction is now generally available for single core serving. See the guide to online prediction and the blog post.

Pricing has been reduced and simplified for both training and prediction. See the pricing details, the blog post, and the comparison of old and current prices in the pricing FAQ.

P100 GPUs are now in Beta. Using P100 GPUs now incurs charges. For more information, see Using GPUs and Pricing.

October 26, 2017

Audit logging for Cloud ML Engine is now in Beta. For more information, see Viewing Audit Logs.

September 25, 2017

Predefined IAM roles for Cloud ML Engine are available for general use. For more information, see Access Control.

June 27, 2017

The Cloud ML Engine runtime version for TensorFlow 1.2 is now available for training and prediction. For more information, see the Runtime Version List.

The older runtime versions with TensorFlow 0.11 and 0.12 are no longer supported on Cloud ML Engine. For more information, see the Runtime Version List and the support timelines for older runtime versions.

May 9, 2017

Announced general availability of GPU-enabled machines. For more information, see Using GPUs for Training Models in the Cloud.

April 27, 2017

GPUs are now available in the us-central1 region. For the full list of regions that support GPUs, see Using GPUs for Training Models in the Cloud.

v1 (March 8th, 2017)

Announced general availability of Cloud ML Engine. Version 1 of Cloud ML Engine is available for general use for training models, deploying models, and generating batch predictions. The hyperparameter tuning feature is also available for general use, but online prediction and GPU-enabled machines remain in Beta.

Online prediction is now in the Beta launch stage. Its use is now subject to the Cloud ML Engine pricing policy, and follows the same pricing formula as batch prediction. While it remains in Beta, online prediction is not intended for use in critical applications.

The environments that Cloud ML Engine uses to train models and get predictions have been defined as Cloud ML Engine runtime versions. You can specify a supported runtime version to use when training, defining a model resource, or requesting batch predictions. The primary difference in runtime versions at this time is the version of TensorFlow supported by each, but more differences may arise over time. You can find the details in the runtime version list.

You can now run batch prediction jobs against TensorFlow SavedModels that are stored in Google Cloud Storage rather than hosted as a model version in Cloud ML Engine. Instead of supplying a model or version ID when you create your job, you can use the URI of your SavedModel.
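
A minimal sketch of such a batch prediction job, where the SavedModel is referenced by its Cloud Storage URI rather than a model or version name; all paths and the job name are placeholders.

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'batch_predict_from_uri',  # hypothetical job name
        'predictionInput': {
            'uri': 'gs://my-bucket/saved_model_dir/',  # SavedModel in Cloud Storage
            'dataFormat': 'TEXT',  # newline-delimited JSON instances
            'inputPaths': ['gs://my-bucket/inputs/*'],
            'outputPath': 'gs://my-bucket/predictions/',
            'region': 'us-central1',
            'runtimeVersion': '1.0',
        },
    }
    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()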

The Google Cloud Machine Learning SDK, formerly released as Alpha, is deprecated, and will no longer be supported effective May 7, 2017. Most of the functionality exposed by the SDK has moved to the new TensorFlow package, tf.Transform. You can use whatever technology or tool you like to preprocess your input data. However, we recommend tf.Transform as well as services that are available on Google Cloud Platform, including Google Cloud Dataflow, Google Cloud Dataproc, and Google BigQuery.

v1beta1 (September 29th, 2016)

Online prediction is an Alpha feature. Though Cloud ML Engine overall is in its Beta phase, online prediction is still undergoing significant changes to improve performance. You will not be charged for online prediction while it remains in Alpha.

Preprocessing and the rest of the Cloud ML Engine SDK are Alpha features. The SDK is undergoing active development to better integrate Cloud ML Engine with Apache Beam.