Release Notes

This page documents production updates to Cloud ML Engine. Check this page periodically for announcements about new or updated features, bug fixes, known issues, and deprecated functionality.

June 13, 2018

TPU training support for Cloud ML Engine runtime version 1.6 ended on June 13, 2018. See the currently supported versions in the Runtime Version List.

May 29, 2018

You can now use Cloud TPU (Beta) with TensorFlow 1.8 and Cloud ML Engine runtime version 1.8.

Background information: Cloud TPU became available in Cloud ML Engine on May 14th in runtime versions 1.6 and 1.7. Runtime version 1.8 was released last week, but at that time Cloud TPU was not yet available with TensorFlow 1.8. Now it is. See how to use TPUs to train your models on Cloud ML Engine.

May 16, 2018

The Cloud ML Engine runtime version 1.8 is now available for training and prediction. This version supports TensorFlow 1.8 and other packages as listed in the runtime version list.

May 15, 2018

You can now update the minimum number of nodes for autoscaling on an existing model version, in addition to specifying that attribute when creating a new version.
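
For illustration only, here is a minimal Python sketch of updating the minimum number of autoscaling nodes on an existing version, assuming the google-api-python-client library; the project, model, and version names are placeholders:

    from googleapiclient import discovery

    # Build a client for the Cloud ML Engine v1 API.
    ml = discovery.build('ml', 'v1')

    # Placeholder resource name for an existing model version.
    version_name = 'projects/my-project/models/my_model/versions/v1'

    # Update the minimum number of autoscaling nodes on the existing version.
    # updateMask limits the patch to the autoscaling setting.
    request = ml.projects().models().versions().patch(
        name=version_name,
        body={'autoScaling': {'minNodes': 2}},
        updateMask='autoScaling.minNodes')
    print(request.execute())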

May 14, 2018

Cloud ML Engine now offers Cloud TPU (Beta) for training TensorFlow models. Tensor Processing Units (TPUs) are Google’s custom-developed ASICs, used to accelerate machine-learning workloads. See how to use TPUs to train your models on Cloud ML Engine.
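
As an illustrative sketch only (not the documented procedure), a TPU training job can be submitted through the Cloud ML Engine v1 API with the Python client; the BASIC_TPU scale tier, project, bucket, and package names here are assumptions and placeholders:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    # Placeholder job spec; BASIC_TPU is assumed to request a TPU-backed training cluster.
    job = {
        'jobId': 'tpu_training_example',
        'trainingInput': {
            'scaleTier': 'BASIC_TPU',
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
            'pythonModule': 'trainer.task',
            'region': 'us-central1',
            'runtimeVersion': '1.6',  # a TPU-enabled runtime version per this note
        },
    }

    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()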

April 26, 2018

The Cloud ML Engine runtime version 1.7 is now available for training and prediction. This version supports TensorFlow 1.7 and other packages as listed in the runtime version list.

April 16, 2018

Hyperparameter algorithms: When tuning the hyperparameters in your training job, you can now specify a search algorithm in the HyperparameterSpec. Available values are:

  • GRID_SEARCH: A simple grid search within the feasible space. This option is particularly useful if you want to specify a number of trials that is more than the number of points in the feasible space. In such cases, if you do not specify a grid search, the Cloud ML Engine default algorithm may generate duplicate suggestions. To use grid search, all parameters must be of type INTEGER, CATEGORICAL, or DISCRETE.
  • RANDOM_SEARCH: A simple random search within the feasible space.

If you do not specify an algorithm, your job uses the default Cloud ML Engine algorithm, which searches the parameter space more effectively to arrive at an optimal solution. For more about hyperparameter tuning, see the hyperparameter tuning overview.
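
For example, here is a minimal sketch of a training job request that sets the algorithm field in HyperparameterSpec to GRID_SEARCH, assuming the google-api-python-client library; the project, bucket, and parameter names are placeholders:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'grid_search_example',
        'trainingInput': {
            'scaleTier': 'BASIC',
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
            'pythonModule': 'trainer.task',
            'region': 'us-central1',
            'hyperparameters': {
                'goal': 'MAXIMIZE',
                'hyperparameterMetricTag': 'accuracy',
                'maxTrials': 4,
                'maxParallelTrials': 2,
                'algorithm': 'GRID_SEARCH',  # or RANDOM_SEARCH; omit to use the default
                'params': [
                    {
                        # Grid search requires INTEGER, CATEGORICAL, or DISCRETE parameters.
                        'parameterName': 'hidden_units',
                        'type': 'DISCRETE',
                        'discreteValues': [64, 128, 256, 512],
                    },
                ],
            },
        },
    }

    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()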

April 5, 2018

Cloud ML Engine now supports scikit-learn and XGBoost for online prediction. This feature is in Beta.

  • Set framework on projects.models.versions.create to specify your machine learning framework when creating a model version. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. The default is TENSORFLOW. If you specify SCIKIT_LEARN or XGBOOST, you must also set runtimeVersion to 1.4 or greater on the model version. A sketch follows this list.
  • See the guide to scikit-learn and XGBoost on Cloud ML Engine.
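
As a rough sketch only, here is how a model version serving an XGBoost model might be created with the Google API Python client; the project, model, and bucket names are placeholders, and the body uses the framework and runtimeVersion fields described above:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    # Placeholder parent model; the model resource must already exist.
    parent = 'projects/my-project/models/my_xgboost_model'

    version = {
        'name': 'v1',
        'deploymentUri': 'gs://my-bucket/xgboost_model/',  # directory containing the exported model file
        'framework': 'XGBOOST',   # or SCIKIT_LEARN / TENSORFLOW
        'runtimeVersion': '1.4',  # must be 1.4 or greater for XGBOOST or SCIKIT_LEARN
    }

    ml.projects().models().versions().create(parent=parent, body=version).execute()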

Python 3.5 is available for online prediction.

March 20, 2018

The Cloud ML Engine runtime version 1.6 is now available for training and prediction. This version supports TensorFlow 1.6 and other packages as listed in the runtime version list.

March 13, 2018

The Cloud ML Engine runtime version for TensorFlow 1.5 is now available for training and prediction. For more information, see the Runtime Version List.

February 8, 2018

Added new features for hyperparameter tuning: automated early stopping of trials, resuming a previous hyperparameter tuning job, and additional efficiency optimizations when you run similar jobs. For more information, see the hyperparameter tuning overview.
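
As an unofficial sketch, these options correspond to fields on HyperparameterSpec; the field names enableTrialEarlyStopping and resumePreviousJobId shown here are assumptions, and the other values are placeholders:

    # Fragment of a trainingInput.hyperparameters block illustrating the new options.
    hyperparameters = {
        'goal': 'MAXIMIZE',
        'hyperparameterMetricTag': 'accuracy',
        'maxTrials': 20,
        'maxParallelTrials': 2,
        # Assumption: stop unpromising trials automatically.
        'enableTrialEarlyStopping': True,
        # Assumption: to continue the search from a completed tuning job instead,
        # set 'resumePreviousJobId': 'previous_tuning_job_id'.
        'params': [
            {'parameterName': 'learning_rate', 'type': 'DOUBLE',
             'minValue': 0.0001, 'maxValue': 0.1, 'scaleType': 'UNIT_LOG_SCALE'},
        ],
    }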

December 14, 2017

The Cloud ML Engine runtime version for TensorFlow 1.4 is now available for training and prediction. For more information, see the Runtime Version List.

Python 3 is now available for training as part of the Cloud ML Engine runtime version for TensorFlow 1.4. For more information, see the Runtime Version List.
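
For illustration, here is a minimal fragment of a training input that selects Python 3; the pythonVersion field name is an assumption based on the API of that era, and the bucket and module names are placeholders:

    # Fragment of a job's trainingInput selecting the Python 3 interpreter.
    training_input = {
        'scaleTier': 'BASIC',
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-central1',
        'runtimeVersion': '1.4',  # runtime version that adds Python 3 support
        'pythonVersion': '3.5',   # assumption: field used to request Python 3
    }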

Online prediction is now generally available for single core serving. See the guide to online prediction and the blog post.

Pricing has been reduced and simplified for both training and prediction. See the pricing details, the blog post, and the comparison of old and current prices in the pricing FAQ.

P100 GPUs are now in Beta. Using P100 GPUs now incurs charges. For more information, see Using GPUs and Pricing.

October 26, 2017

Audit logging for Cloud ML Engine is now in Beta. For more information, see Viewing Audit Logs.

September 25, 2017

Predefined IAM roles for Cloud ML Engine are available for general use. For more information, see Access Control.

June 27, 2017

The Cloud ML Engine runtime version for TensorFlow 1.2 is now available for training and prediction. For more information, see the Runtime Version List.

The older runtime versions with TensorFlow 0.11 and 0.12 are no longer supported on Cloud ML Engine. For more information, see the Runtime Version List and the support timelines for older runtime versions.

May 9, 2017

Announced general availability of GPU-enabled machines. For more information, see Using GPUs for Training Models in the Cloud.

April 27, 2017

GPUs are now available in the us-central1 region. For the full list of regions that support GPUs, see Using GPUs for Training Models in the Cloud.

v1 (March 8th, 2017)

Announced general availability of Cloud Machine Learning Engine. Version 1 of Cloud ML Engine is available for general use for training models, deploying models, and generating batch predictions. The hyperparameter tuning feature is also available for general use, but online prediction and GPU-enabled machines remain in Beta.

Online prediction is now in the Beta launch stage. Its use is now subject to the Cloud ML Engine pricing policy, and follows the same pricing formula as batch prediction. While it remains in Beta, online prediction is not intended for use in critical applications.

The environments that Cloud ML Engine uses to train models and get predictions have been defined as Cloud ML Engine runtime versions. You can specify a supported runtime version to use when training, defining a model resource, or requesting batch predictions. The primary difference in runtime versions at this time is the version of TensorFlow supported by each, but more differences may arise over time. You can find the details in the runtime version list.

You can now run batch prediction jobs against TensorFlow SavedModels stored in Google Cloud Storage, rather than hosted as a model version in Cloud ML Engine. Instead of supplying a model or version ID when you create your job, supply the URI of your SavedModel.
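
As a rough sketch, assuming the google-api-python-client library; the bucket paths are placeholders, the dataFormat value is an assumption, and the uri field of the prediction input is the one referenced above:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')

    job = {
        'jobId': 'batch_prediction_example',
        'predictionInput': {
            # Point directly at a SavedModel in Cloud Storage instead of a
            # deployed model or version ID.
            'uri': 'gs://my-bucket/saved_model_dir/',
            'dataFormat': 'TEXT',  # assumption: newline-delimited JSON instances
            'inputPaths': ['gs://my-bucket/inputs/*'],
            'outputPath': 'gs://my-bucket/outputs/',
            'region': 'us-central1',
        },
    }

    ml.projects().jobs().create(parent='projects/my-project', body=job).execute()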

The Google Cloud Machine Learning SDK, formerly released as Alpha, is deprecated and will no longer be supported as of May 7, 2017. Most of the functionality exposed by the SDK has moved to the new TensorFlow package, tf.Transform. You can use whatever technology or tool you like to preprocess your input data, but we recommend tf.Transform along with the services available on Google Cloud Platform, including Google Cloud Dataflow, Google Cloud Dataproc, and Google BigQuery.

v1beta1 (September 29th, 2016)

Online prediction is an Alpha feature. Though Cloud Machine Learning Engine overall is in its Beta phase, online prediction is still undergoing significant changes to improve performance. You will not be charged for online prediction while it remains in Alpha.

Preprocessing and the rest of the Cloud ML Engine SDK are Alpha features. The SDK is undergoing active development to better integrate Cloud ML Engine with Apache Beam.
