Release Notes

This page documents production updates to Cloud ML Engine. You can periodically check this page for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

Subscribe to the Cloud ML Engine release notes.

October 26, 2018

TPU training support for Cloud ML Engine runtime version 1.8 ended on October 26, 2018. See the currently supported versions in the runtime version list.

October 11, 2018

The Cloud ML Engine runtime version 1.11 has been rolled back due to errors caused by a cuDNN version mismatch during GPU training. The current workaround is to use runtime version 1.10. For more details, see the runtime version list.

October 5, 2018

The Cloud ML Engine runtime version 1.11 is now available for training and prediction. This version supports TensorFlow 1.11 and other packages as listed in the runtime version list.

August 31, 2018

The Cloud ML Engine runtime version 1.10 is now available for training and prediction. This version supports TensorFlow 1.10 and other packages as listed in the runtime version list.

August 27, 2018

V100 GPUs are now in Beta for training. Using V100 GPUs now incurs charges. For more information, see the guides to using GPUs and pricing.

P100 GPUs are now generally available for training. For more information, see the guides to using GPUs and pricing.

Two new regions, us-west1 and europe-west4, are now available for training. For more information, see the regions page.
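
For orientation, here is a hedged sketch of a training job request that pairs a V100 machine with one of the new regions. It assumes the legacy machine type name standard_v100 described in the GPU guide; the project, bucket, and trainer names are placeholders.

```python
from googleapiclient import discovery

# Placeholder project, bucket, and trainer names -- replace with your own.
job_body = {
    'jobId': 'v100_training_example',
    'trainingInput': {
        'scaleTier': 'CUSTOM',
        'masterType': 'standard_v100',  # machine type with a single V100 GPU
        'region': 'us-west1',           # one of the newly supported regions
        'runtimeVersion': '1.9',
        'pythonModule': 'trainer.task',
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
    },
}

ml = discovery.build('ml', 'v1')
ml.projects().jobs().create(parent='projects/my-project', body=job_body).execute()
```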

August 24, 2018

TPU training support for Cloud ML Engine runtime version 1.7 ended on August 24, 2018. See the currently supported versions in the runtime version list.

August 9, 2018

We're delighted to announce significant price reductions for online prediction with Cloud Machine Learning Engine.

The following table shows the previous pricing and the new pricing:

Region          Previous price per node per hour    New price per node per hour
US              $0.30 USD                           $0.056 USD
Europe          $0.348 USD                          $0.061 USD
Asia Pacific    $0.348 USD                          $0.071 USD

See the pricing guide for details.

August 8, 2018

We're delighted to announce promotional pricing for Cloud TPU with Cloud Machine Learning Engine, resulting in significant price reductions.

The following table shows the previous pricing and the new pricing:

Region: US                               Previous price per TPU per hour    New price per TPU per hour
Scale tier: BASIC_TPU (Beta)             $9.7674 USD                        $6.8474 USD
Custom machine type: cloud_tpu (Beta)    $9.4900 USD                        $6.5700 USD

Note that the table shows pricing in the US region only. There is no change in Cloud TPU availability on Cloud ML Engine. See the pricing guide for details.

August 6, 2018

The Cloud ML Engine runtime version 1.9 is now available for training and prediction. This version supports TensorFlow 1.9 and other packages as listed in the runtime version list.

July 23, 2018

Cloud ML Engine now supports scikit-learn and XGBoost for training. This feature is generally available. See the guide to training with scikit-learn and XGBoost on Cloud ML Engine.

Online prediction support for scikit-learn and XGBoost is now generally available.

Quad-core CPUs are now available in Beta for online prediction. The machine type names have changed, and pricing has been updated.

  • Set machineType on projects.models.versions.create to specify the machine type to use for serving. Use mls1-c4-m2 for quad-core CPUs. The default is the single-core CPU, mls1-c1-m2 (see the request sketch after this list).
  • The following machine names used in Alpha are deprecated: mls1-highmem-1 and mls1-highcpu-4.
  • For more information, see the guide to online prediction.
  • See the updated pricing for serving machine types.
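
As a minimal sketch of that call, assuming placeholder project, model, and bucket names, a version that serves on the quad-core machine type could be created like this:

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

# Placeholder names -- replace with your own project, model, and bucket.
version_body = {
    'name': 'v_quad_core',
    'deploymentUri': 'gs://my-bucket/saved_model_dir',
    'runtimeVersion': '1.8',
    'machineType': 'mls1-c4-m2',  # quad-core serving; omit for the default mls1-c1-m2
}

ml.projects().models().versions().create(
    parent='projects/my-project/models/my_model',
    body=version_body,
).execute()
```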

July 12, 2018

You can add labels to your Cloud Machine Learning Engine resources—jobs, models, and model versions—then use those labels to organize the resources into categories. Labels are also available on operations—in this case the labels are derived from the resource to which the operation applies. Read more about adding and using labels.
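
For example, here is a hedged sketch of creating a model with labels through the REST API; the project, model, and label values are hypothetical.

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

# Hypothetical project, model, and label values -- replace with your own.
model_body = {
    'name': 'churn_model',
    'regions': ['us-central1'],
    'labels': {
        'team': 'research',
        'phase': 'experimental',
    },
}

ml.projects().models().create(parent='projects/my-project', body=model_body).execute()
```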

June 26, 2018

The following additional regions are now fully available:

  • us-east1
  • asia-northeast1

See more details about region availability.

June 13, 2018

TPU training support for Cloud ML Engine runtime version 1.6 ended on June 13, 2018. See the currently supported versions in the runtime version list.

May 29, 2018

You can now use Cloud TPU (Beta) with TensorFlow 1.8 and Cloud ML Engine runtime version 1.8.

Background information: Cloud TPU became available in Cloud ML Engine on May 14th in runtime versions 1.6 and 1.7. Last week saw the release of runtime version 1.8, but at that time Cloud TPU was not yet available with TensorFlow 1.8. Now it is. See how to use TPUs to train your models on Cloud ML Engine.

May 16, 2018

The Cloud ML Engine runtime version 1.8 is now available for training and prediction. This version supports TensorFlow 1.8 and other packages as listed in the runtime version list.

May 15, 2018

You can now update the minimum number of nodes for autoscaling on an existing model version, in addition to specifying the attribute when you create a new version.
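
A rough sketch of such an update, assuming the versions.patch method with an update mask limited to autoScaling.minNodes and a placeholder resource name:

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

# Placeholder resource name -- replace with your own project, model, and version.
version_name = 'projects/my-project/models/my_model/versions/v1'

ml.projects().models().versions().patch(
    name=version_name,
    body={'autoScaling': {'minNodes': 2}},   # keep at least two nodes running
    updateMask='autoScaling.minNodes',
).execute()
```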

May 14, 2018

Cloud ML Engine now offers Cloud TPU (Beta) for training TensorFlow models. Tensor Processing Units (TPUs) are Google’s custom-developed ASICs, used to accelerate machine-learning workloads. See how to use TPUs to train your models on Cloud ML Engine.
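
As a hedged illustration, the trainingInput below requests a Cloud TPU through the BASIC_TPU scale tier; the bucket and trainer names are placeholders.

```python
# Placeholder bucket and trainer names -- replace with your own.
# This dict would be submitted as the trainingInput of a projects.jobs.create request.
training_input = {
    'scaleTier': 'BASIC_TPU',   # a master VM plus a Cloud TPU
    'region': 'us-central1',
    'runtimeVersion': '1.6',
    'pythonModule': 'trainer.task',
    'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
}
```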

April 26, 2018

The Cloud ML Engine runtime version 1.7 is now available for training and prediction. This version supports TensorFlow 1.7 and other packages as listed in the runtime version list.

April 16, 2018

Hyperparameter algorithms: When tuning the hyperparameters in your training job, you can now specify a search algorithm in the HyperparameterSpec. Available values are:

  • GRID_SEARCH: A simple grid search within the feasible space. This option is particularly useful if you want to specify a number of trials that is more than the number of points in the feasible space. In such cases, if you do not specify a grid search, the Cloud ML Engine default algorithm may generate duplicate suggestions. To use grid search, all parameters must be of type INTEGER, CATEGORICAL, or DISCRETE.
  • RANDOM_SEARCH: A simple random search within the feasible space.

If you do not specify an algorithm, your job uses the default Cloud ML Engine algorithm, which drives the parameter search to arrive at the optimal solution with a more effective search over the parameter space. For more about hyperparameter tuning, see the hyperparameter tuning overview.
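
To make the new field concrete, here is a hedged sketch of a HyperparameterSpec inside a trainingInput that pins the search algorithm; the parameter names and ranges are hypothetical.

```python
# Hypothetical parameters -- the values here only illustrate the algorithm field.
# This dict would be submitted as the trainingInput of a projects.jobs.create request.
training_input = {
    'pythonModule': 'trainer.task',
    'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
    'region': 'us-central1',
    'hyperparameters': {
        'algorithm': 'GRID_SEARCH',        # or RANDOM_SEARCH; omit to use the default
        'goal': 'MAXIMIZE',
        'hyperparameterMetricTag': 'accuracy',
        'maxTrials': 12,
        'maxParallelTrials': 3,
        'params': [
            # Grid search requires INTEGER, CATEGORICAL, or DISCRETE parameters.
            {'parameterName': 'hidden_units', 'type': 'DISCRETE',
             'discreteValues': [32, 64, 128]},
            {'parameterName': 'optimizer', 'type': 'CATEGORICAL',
             'categoricalValues': ['adam', 'sgd']},
        ],
    },
}
```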

April 5, 2018

Cloud ML Engine now supports scikit-learn and XGBoost for online prediction. This feature is in Beta.

  • Set framework on projects.models.versions.create to specify your machine learning framework when creating a model version. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. The default is TENSORFLOW. If you specify SCIKIT_LEARN or XGBOOST, you must also set runtimeVersion to 1.4 or greater on the model version (see the request sketch after this list).
  • See the guide to scikit-learn and XGBoost on Cloud ML Engine.
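
A minimal sketch of such a request, assuming placeholder project, model, and bucket names:

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

# Placeholder names -- replace with your own project, model, and bucket.
version_body = {
    'name': 'v_sklearn',
    'deploymentUri': 'gs://my-bucket/sklearn_model_dir',
    'runtimeVersion': '1.4',       # must be 1.4 or greater for SCIKIT_LEARN or XGBOOST
    'framework': 'SCIKIT_LEARN',   # or XGBOOST; the default is TENSORFLOW
}

ml.projects().models().versions().create(
    parent='projects/my-project/models/my_sklearn_model',
    body=version_body,
).execute()
```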

Python 3.5 is available for online prediction.

March 20, 2018

The Cloud ML Engine runtime version 1.6 is now available for training and prediction. This version supports TensorFlow 1.6 and other packages as listed in the runtime version list.

March 13, 2018

The Cloud ML Engine runtime version for TensorFlow 1.5 is now available for training and prediction. For more information, see the Runtime Version List.

February 8, 2018

Added new features for hyperparameter tuning: automated early stopping of trials, resuming a previous hyperparameter tuning job, and additional efficiency optimizations when you run similar jobs. For more information, see the hyperparameter tuning overview.
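
Here is a hedged sketch of how these features surface in the HyperparameterSpec, assuming the field names enableTrialEarlyStopping and resumePreviousJobId; the job ID and parameter shown are hypothetical.

```python
# Hypothetical values -- replace the job ID and parameter with your own.
hyperparameters = {
    'goal': 'MAXIMIZE',
    'hyperparameterMetricTag': 'accuracy',
    'maxTrials': 40,
    'enableTrialEarlyStopping': True,                 # stop unpromising trials early
    'resumePreviousJobId': 'hp_tuning_job_20180101',  # continue a previous tuning job
    'params': [
        {'parameterName': 'learning_rate', 'type': 'DOUBLE',
         'minValue': 0.0001, 'maxValue': 0.1, 'scaleType': 'UNIT_LOG_SCALE'},
    ],
}
```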

December 14, 2017

The Cloud ML Engine runtime version for TensorFlow 1.4 is now available for training and prediction. For more information, see the Runtime Version List.

Python 3 is now available for training as part of the Cloud ML Engine runtime version for TensorFlow 1.4. For more information, see the Runtime Version List.

Online prediction is now generally available for single core serving. See the guide to online prediction and the blog post.

Pricing has been reduced and simplified for both training and prediction. See the pricing details, the blog post, and the comparison of old and current prices in the pricing FAQ.

P100 GPUs are now in Beta. Using P100 GPUs now incurs charges. For more information, see Using GPUs and Pricing.

October 26, 2017

Audit logging for Cloud ML Engine is now in Beta. For more information, see Viewing Audit Logs.

September 25, 2017

Predefined IAM roles for Cloud ML Engine are available for general use. For more information, see Access Control.

June 27, 2017

The Cloud ML Engine runtime version for TensorFlow 1.2 is now available for training and prediction. For more information, see the Runtime Version List.

The older runtime versions with TensorFlow 0.11 and 0.12 are no longer supported on Cloud ML Engine. For more information, see the Runtime Version List and the support timelines for older runtime versions.

May 9, 2017

Announced general availability of GPU-enabled machines. For more information, see Using GPUs for Training Models in the Cloud.

April 27, 2017

GPUs are now available in the us-central1 region. For the full list of regions that support GPUs, see Using GPUs for Training Models in the Cloud.

v1 (March 8, 2017)

Announced general availability of Cloud Machine Learning Engine. Version 1 of Cloud ML Engine is available for general use for training models, deploying models, and generating batch predictions. The hyperparameter tuning feature is also available for general use, but online prediction and GPU-enabled machines remain in beta.

Online prediction is now in the Beta launch stage. Its use is now subject to the Cloud ML Engine pricing policy, and follows the same pricing formula as batch prediction. While it remains in Beta, online prediction is not intended for use in critical applications.

The environments that Cloud ML Engine uses to train models and get predictions have been defined as Cloud ML Engine runtime versions. You can specify a supported runtime version to use when training, defining a model resource, or requesting batch predictions. The primary difference in runtime versions at this time is the version of TensorFlow supported by each, but more differences may arise over time. You can find the details in the runtime version list.

You can now run batch prediction jobs against TensorFlow SavedModels that are stored in Google Cloud Storage, not hosted as a model version in Cloud ML Engine. Instead of supplying a model or version ID when you create your job, you can use the URI of your SavedModel.
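
A hedged sketch of such a job, assuming placeholder bucket paths for the SavedModel, input data, and output location, and the JSON data format:

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

# Placeholder bucket paths -- replace with your own SavedModel and data locations.
job_body = {
    'jobId': 'batch_prediction_from_uri',
    'predictionInput': {
        'uri': 'gs://my-bucket/saved_model_dir',   # used instead of a model or version ID
        'dataFormat': 'JSON',                      # newline-delimited JSON instances
        'inputPaths': ['gs://my-bucket/inputs/*'],
        'outputPath': 'gs://my-bucket/predictions/',
        'region': 'us-central1',
        'runtimeVersion': '1.0',
    },
}

ml.projects().jobs().create(parent='projects/my-project', body=job_body).execute()
```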

The Google Cloud Machine Learning SDK, formerly released as Alpha, is deprecated and will no longer be supported as of May 7, 2017. Most of the functionality exposed by the SDK has moved to the new TensorFlow package, tf.Transform. You can use whatever technology or tool you like to preprocess your input data; however, we recommend tf.Transform, along with services available on Google Cloud Platform, including Google Cloud Dataflow, Google Cloud Dataproc, and Google BigQuery.

v1beta1 (September 29, 2016)

Online prediction is an Alpha feature. Though Cloud Machine Learning Engine overall is in its Beta phase, online prediction is still undergoing significant changes to improve performance. You will not be charged for online prediction while it remains in Alpha.

Preprocessing and the rest of the Cloud ML Engine SDK are Alpha features. The SDK is undergoing active development to better integrate Cloud ML Engine with Apache Beam.
