Managing Runtime Versions

Cloud Machine Learning Engine uses images to configure the VMs that service your training and prediction requests in the cloud. These images contain the base operating system, core technology packages, pip packages (Python libraries), and operating system packages. Images are upgraded periodically to include new improvements and features. Cloud ML Engine versioning enables you to select the right configuration to work with your model.

Important notes about versioning

  • The default Cloud ML Engine runtime version used by the Cloud Machine Learning Engine API is version 1.0. If you do not specify a runtime version, Cloud ML Engine uses version 1.0.
  • You should always test your training jobs and models thoroughly when switching to a new runtime version, regardless of whether it's a major or minor update.

Understanding version numbers

The images that Cloud ML Engine uses correspond to the Cloud ML Engine runtime version. The runtime version uses the following format:

major_version.minor_version

Major and minor versions

New major and minor versions are created periodically to incorporate one or more of the following:

  • Releases for:
    • Operating system
    • TensorFlow, scikit-learn, and XGBoost
  • Changes or updates to Cloud ML Engine functionality.

A new major version may include breaking changes that require updates to code written against previous versions. A new minor version should not include breaking changes, and should be backward-compatible with all variations of the same major version.

Selecting runtime versions

Make sure to select the runtime version that supports the latest versions of your machine learning framework and other packages you are using. If you don't specify a runtime version for the tasks below, Cloud ML Engine uses the default version 1.0 to complete your request.

You can see the details of each version in the Cloud ML Engine version list.

To specify the runtime version for a training job

Make sure to set the runtime version when you submit a training job request. Otherwise, Cloud ML Engine uses the default version 1.0 for your training job.

gcloud

Use the --runtime-version flag when you run the gcloud ml-engine jobs submit training command.

gcloud ml-engine jobs submit training my_job \
    --module-name trainer.task \
    --job-dir gs://my/training/job/directory \
    --package-path /path/to/my/project/trainer \
    --region us-central1 \
    --runtime-version 1.10

Python

Set the runtimeVersion when you define your training job request:

training_inputs = {'scaleTier': 'BASIC',
    'packageUris': ['gs://my/trainer/path/package-0.0.0.tar.gz'],
    'pythonModule': 'trainer.task',
    'args': ['--arg1', 'value1', '--arg2', 'value2'],
    'region': 'us-central1',
    'jobDir': 'gs://my/training/job/directory',
    'runtimeVersion': '1.10',
    'pythonVersion': '3.5'}

job_spec = {'jobId': my_job_name, 'trainingInput': training_inputs}

See more details about submitting a training job in the TrainingInput API.
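A minimal sketch of how you might then submit job_spec using the Google API Client Library for Python follows; the 'my_project' project ID is a placeholder:

from googleapiclient import discovery

# Build a client for the Cloud ML Engine v1 API (uses Application Default
# Credentials) and submit the training job request defined above.
ml = discovery.build('ml', 'v1')
request = ml.projects().jobs().create(parent='projects/my_project',
                                      body=job_spec)
response = request.execute()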

To specify the Python version for a training job

Python 3.5 is available when you use Cloud ML Engine runtime version 1.4 or greater. To submit a training job with Python 3.5, set the Python version to "3.5" and the runtime version to 1.4 or greater.

If the Python version is not specified, it defaults to "2.7".

gcloud

Use the --python-version flag to specify Python version 3.5, and make sure to set the runtime version to 1.4 or greater:

gcloud ml-engine jobs submit training my_job \
    --module-name trainer.task \
    --job-dir gs://my/training/job/directory \
    --package-path /path/to/my/project/trainer \
    --python-version 3.5 \
    --region us-central1 \
    --runtime-version 1.10

Python

Set the runtimeVersion to '1.4' or greater, and set pythonVersion to '3.5':

training_inputs = {'scaleTier': 'BASIC',
    'packageUris': ['gs://my/trainer/path/package-0.0.0.tar.gz'],
    'pythonModule': 'trainer.task',
    'args': ['--arg1', 'value1', '--arg2', 'value2'],
    'region': 'us-central1',
    'jobDir': 'gs://my/training/job/directory',
    'runtimeVersion': '1.10',
    'pythonVersion': '3.5'}

job_spec = {'jobId': my_job_name, 'trainingInput': training_inputs}

See more details about submitting a training job in the TrainingInput API.

To specify the runtime version for a model version

Make sure to specify a runtime version when you create a deployed model version from a trained model. This sets the default runtime version for online and batch prediction requests. If you do not specify a runtime version, Cloud ML Engine uses version 1.0.

gcloud

Use the --runtime-version flag when you run the gcloud ml-engine versions create command:

gcloud ml-engine versions create version_name \
    --model model_name \
    --origin gs://my/trained/model/path \
    --runtime-version 1.10

Python

Set the runtimeVersion when you define your Version resource:

versionDef = {'name': 'v1',
    'description': 'The first iteration of the completely_made_up model',
    'deploymentUri': 'gs://my/model/output/directory',
    'runtimeVersion': '1.10'}
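
A minimal sketch of how you might send this Version resource with the Google API Client Library for Python; the project and model names are placeholders:

from googleapiclient import discovery

# Build a client for the Cloud ML Engine v1 API and create the version
# under an existing model. Creating a version returns a long-running
# operation that you can poll for completion.
ml = discovery.build('ml', 'v1')
request = ml.projects().models().versions().create(
    parent='projects/my_project/models/model_name',
    body=versionDef)
response = request.execute()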
 

To specify a runtime version to use for batch prediction

You can specify a runtime version to use when you create a batch prediction job. If you don't, Cloud ML Engine uses the default runtime version set in the model version.

gcloud

Use the --runtime-version flag when you run the gcloud ml-engine jobs submit prediction command:

gcloud ml-engine jobs submit prediction my_batch_job_333 \
    --model my_model \
    --input-paths gs://my/cloud/storage/data/path/* \
    --output-path gs://my/cloud/storage/data/output/path \
    --region us-central1 \
    --data-format JSON \
    --runtime-version 1.10

Python

Set the runtimeVersion in PredictionInput:

body = {
    'jobId': 'my_batch_job_333',
    'predictionInput': {
        'dataFormat': 'JSON',
        'inputPaths': ['gs://my/cloud/storage/data/path/*'],
        'outputPath': 'gs://my/cloud/storage/data/output/path',
        'region': 'us-central1',
        'modelName': 'projects/my_project/models/my_model',
        'runtimeVersion': '1.10'}}
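
As with training jobs, a minimal sketch of how you might submit this request body and then confirm the runtime version it recorded, using the Google API Client Library for Python; the project ID is a placeholder:

from googleapiclient import discovery

# Submit the batch prediction job, then read the job back to confirm the
# runtime version recorded in its predictionInput.
ml = discovery.build('ml', 'v1')
ml.projects().jobs().create(parent='projects/my_project', body=body).execute()

job = ml.projects().jobs().get(
    name='projects/my_project/jobs/my_batch_job_333').execute()
print(job['state'], job['predictionInput']['runtimeVersion'])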

Runtime versions for online prediction

When you create your model version, make sure to specify the runtime version you want to use for online prediction requests. If your model version's default runtime version is incorrect, create a new model version with the correct runtime version.

Online prediction requests always use the model version's default runtime version. You cannot specify a runtime version to override this in your online prediction request.
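
A minimal sketch of how you might check a deployed model version's default runtime version with the Google API Client Library for Python; the project, model, and version names are placeholders:

from googleapiclient import discovery

# Fetch the Version resource and inspect its runtimeVersion field.
ml = discovery.build('ml', 'v1')
version = ml.projects().models().versions().get(
    name='projects/my_project/models/my_model/versions/v1').execute()
print(version['runtimeVersion'])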

Specifying variant packages (training only)

There are two ways for you to change the packages on your training instances: manually uploading package files (tarballs) and including their paths as training input, or specifying PyPI packages as dependencies of your trainer package.

To provide package files

You can include extra package files as part of your training job request. These will be installed on each training instance. Cloud ML Engine installs all packages with pip. Packages designed for other package managers are not supported.

gcloud

Use the --packages flag when you run the gcloud ml-engine jobs submit training command. Set the value to a comma-separated list of the paths to all additional packages, with no whitespace between entries.

gcloud ml-engine jobs submit training my_job \
    --staging-bucket gs://my-bucket \
    --package-path /path/to/my/project/trainer \
    --module-name trainer.task \
    --packages dep1.tar.gz,dep2.whl

Python

Add all additional packages to the list you use for the value of packageUris in the TrainingInput object.

training_inputs = {'scaleTier': 'BASIC',
    'packageUris': ['gs://my/trainer/path/package-0.0.0.tar.gz',
                    'gs://my/dependencies/path/dep1.tar.gz',
                    'gs://my/dependencies/path/dep2.whl'],
    'pythonModule': 'trainer.task',
    'args': ['--arg1', 'value1', '--arg2', 'value2'],
    'region': 'us-central1',
    'jobDir': 'gs://my/training/job/directory',
    'runtimeVersion': '1.10'}

job_spec = {'jobId': my_job_name, 'trainingInput': training_inputs}

To include PyPI package dependencies

You can specify PyPI packages and their versions as dependencies to your trainer package using the normal setup tools process:

  1. In the top-level directory of your trainer application, include a setup.py file.
  2. When you call setuptools.setup in setup.py, pass a list of dependencies and optionally their versions as the install_requires parameter. This example setup.py file demonstrates the procedure:

    from setuptools import find_packages
    from setuptools import setup
    
    REQUIRED_PACKAGES = ['some_PyPI_package>=1.5',
                         'another_package==2.6']
    
    setup(
        name='trainer',
        version='0.1',
        install_requires=REQUIRED_PACKAGES,
        packages=find_packages(),
        include_package_data=True,
        description='Generic example trainer package with dependencies.')
    

Cloud ML Engine forces reinstallation of packages, so you can override packages that are part of the runtime version's image with newer or older versions.

Specifying custom versions of TensorFlow for training

Using a more recent version of TensorFlow than the latest supported runtime version on Cloud ML Engine is possible for training, but not for prediction.

To use a version of TensorFlow that is not yet supported as a full Cloud ML Engine runtime version, include it as a custom dependency for your trainer using one of the following approaches:

  1. Specify the TensorFlow version in your setup.py file as a PyPI dependency. Include it in your list of required packages as follows:

     REQUIRED_PACKAGES = ['tensorflow>=1.10']
    
  2. Build a TensorFlow binary from sources, making sure to follow the instructions for TensorFlow with CPU support only. This process yields a pip package (.whl file) that you can include in your training job request by adding it to your list of packages.

Building a TensorFlow binary to include as a custom package is a more complex approach, but the advantage is that you can use the most recent TensorFlow updates when training your model.
