December 14, 2017
The Cloud ML Engine runtime version for TensorFlow 1.4 is now available for training and prediction. For more information, see the Runtime Version List.
Python 3 is now available for training as part of the Cloud ML Engine runtime version for TensorFlow 1.4. For more information, see the Runtime Version List.
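As a sketch of how these two options come together, the request body below selects the TensorFlow 1.4 runtime and Python 3 for a training job, assuming the field names of the `projects.jobs.create` REST API; the bucket, package, and module names are hypothetical placeholders, and `"3.5"` assumes that is the supported Python 3 release.

```python
# Sketch (not an official sample): a training job request body that pins the
# TensorFlow 1.4 runtime and opts in to Python 3. All paths are placeholders.
training_job = {
    "jobId": "my_training_job",                # hypothetical job ID
    "trainingInput": {
        "runtimeVersion": "1.4",               # TensorFlow 1.4 runtime
        "pythonVersion": "3.5",                # Python 3 for training
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
    },
}
```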
October 26, 2017
Audit logging for Cloud ML Engine is now in Beta. For more information, see Viewing Audit Logs.
September 25, 2017
Predefined IAM roles for Cloud ML Engine are available for general use. For more information, see Access Control.
June 27, 2017
The Cloud ML Engine runtime version for TensorFlow 1.2 is now available for training and prediction. For more information, see the Runtime Version List.
The older runtime versions with TensorFlow 0.11 and 0.12 are no longer supported on Cloud ML Engine. For more information, see the Runtime Version List and the support timelines for older runtime versions.
May 9, 2017
Announced general availability of GPU-enabled machines. For more information, see Using GPUs for Training Models in the Cloud.
April 27, 2017
GPUs are now available in the us-central1 region. For the full list of regions that support GPUs, see Using GPUs for Training Models in the Cloud.
v1 (March 8th, 2017)
Announced general availability of Cloud Machine Learning Engine. Version 1 of Cloud ML Engine is available for general use for training models, deploying models, and generating batch predictions. The hyperparameter tuning feature is also available for general use, but online prediction and GPU-enabled machines remain in beta.
Online prediction is now in the Beta launch stage. Its use is now subject to the Cloud ML Engine pricing policy, and follows the same pricing formula as batch prediction. While it remains in Beta, online prediction is not intended for use in critical applications.
The environments that Cloud ML Engine uses to train models and get predictions have been defined as Cloud ML Engine runtime versions. You can specify a supported runtime version to use when training, defining a model resource, or requesting batch predictions. The primary difference in runtime versions at this time is the version of TensorFlow supported by each, but more differences may arise over time. You can find the details in the runtime version list.
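One place a runtime version is specified is when defining a model version. The sketch below assumes the field names of the `projects.models.versions.create` REST API; the deployment path is a hypothetical placeholder.

```python
# Sketch: pinning a runtime version when creating a model version.
# The Cloud Storage path is a placeholder, not a real bucket.
version_body = {
    "name": "v1",
    "deploymentUri": "gs://my-bucket/saved_model_dir/",
    "runtimeVersion": "1.2",   # serve predictions with the TensorFlow 1.2 runtime
}
```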
You can now run batch prediction jobs against TensorFlow SavedModels that are stored in Google Cloud Storage rather than hosted as model versions in Cloud ML Engine. Instead of supplying a model or version ID when you create your job, you can use the URI of your SavedModel.
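A minimal sketch of such a job, assuming the `predictionInput` field names of the `projects.jobs.create` REST API: the `uri` field points directly at the SavedModel in Cloud Storage in place of a model or version ID. All bucket paths are hypothetical placeholders.

```python
# Sketch: a batch prediction job against a SavedModel stored in Google Cloud
# Storage, with no deployed model version. Paths are placeholders.
prediction_job = {
    "jobId": "my_batch_prediction",
    "predictionInput": {
        "uri": "gs://my-bucket/saved_model_dir/",  # SavedModel location
        "dataFormat": "TEXT",
        "inputPaths": ["gs://my-bucket/inputs/*"],
        "outputPath": "gs://my-bucket/predictions/",
        "region": "us-central1",
    },
}
```

Note that `uri` replaces the `modelName`/`versionName` fields a deployed-model job would use.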
The Google Cloud Machine Learning SDK, formerly released as Alpha, is deprecated and will no longer be supported effective May 7, 2017. Most of the functionality exposed by the SDK has moved to the new TensorFlow package, tf.Transform.
You can use whatever technology or tool you like to preprocess your input data. However, we recommend tf.Transform, as well as services available on Google Cloud Platform, including Google Cloud Dataflow, Google Cloud Dataproc, and Google BigQuery.
v1beta1 (September 29th, 2016)
Online prediction is an Alpha feature. Though Cloud Machine Learning Engine overall is in its Beta phase, online prediction is still undergoing significant changes to improve performance. You will not be charged for online prediction while it remains in Alpha.
Preprocessing and the rest of the Cloud ML Engine SDK are Alpha features. The SDK is undergoing active development to better integrate Cloud ML Engine with Apache Beam.