Creating a custom prediction routine with scikit-learn

Run this tutorial as a notebook in Colab, or view the notebook on GitHub.

Overview

This tutorial shows how to deploy a trained scikit-learn model to AI Platform Prediction and serve predictions using a custom prediction routine. This lets you customize how AI Platform Prediction responds to each prediction request.

In this example, you will use a custom prediction routine to preprocess prediction input by scaling it, and to postprocess prediction output by converting class numbers to label strings.

The tutorial walks through several steps:

  • Training a simple scikit-learn model locally (in this notebook)
  • Creating and deploying a custom prediction routine to AI Platform Prediction
  • Serving prediction requests from that deployment

Dataset

This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.

This tutorial uses the copy of the Iris dataset included in the scikit-learn library.

Objective

The goal is to train a model that uses a flower's measurements as input to predict what type of iris it is.

This tutorial focuses more on using this model with AI Platform Prediction than on the design of the model itself.

Costs

This tutorial uses billable components of Google Cloud:

  • AI Platform Prediction
  • Cloud Storage

Learn about AI Platform Prediction pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

You must do several things before you can train and deploy a model in AI Platform Prediction:

  • Set up your local development environment.
  • Set up a Google Cloud project with billing and the necessary APIs enabled.
  • Create a Cloud Storage bucket to store your training package and your trained model.

Set up your local development environment

You need the following to complete this tutorial:

  • Python 3
  • virtualenv
  • The Google Cloud SDK

The Google Cloud guide to Setting up a Python development environment provides detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

  1. Install Python 3.

  2. Install virtualenv and create a virtual environment that uses Python 3 (see the example after this list).

  3. Activate that environment.

  4. Complete the steps in the following section to install the Google Cloud SDK.
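
For example, steps 2 and 3 might look like the following in a bash shell. The environment name env is just a placeholder:

virtualenv --python python3 env
source env/bin/activate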

Set up your Google Cloud project

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the AI Platform Training & Prediction and Compute Engine APIs.

    Enable the APIs

  5. Install the Google Cloud CLI.
  6. To initialize the gcloud CLI, run the following command:

    gcloud init

Authenticate your GCP account

To set up authentication, you need to create a service account key and set an environment variable for the file path to the service account key.

  1. Create a service account:

    1. In the Google Cloud console, go to the Create service account page.

      Go to Create service account

    2. In the Service account name field, enter a name.
    3. Optional: In the Service account description field, enter a description.
    4. Click Create.
    5. Click the Select a role field. Under All roles, select AI Platform > AI Platform Admin.
    6. Click Add another role.
    7. Click the Select a role field. Under All roles, select Storage > Storage Object Admin.

    8. Click Done to create the service account.

      Do not close your browser window. You will use it in the next step.

  2. Create a service account key for authentication:

    1. In the Google Cloud console, click the email address for the service account that you created.
    2. Click Keys.
    3. Click Add key, then Create new key.
    4. Click Create. A JSON key file is downloaded to your computer.
    5. Click Close.
  3. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. This variable only applies to your current shell session, so if you open a new session, set the variable again.
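
For example, in a bash shell, replacing the placeholder path with the location of your downloaded JSON key file:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your-service-account-key.json"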

Create a Cloud Storage bucket

To deploy a custom prediction routine, you must upload your trained model artifacts and your custom code to Cloud Storage.

Set the name of your Cloud Storage bucket as an environment variable. It must be unique across all Cloud Storage buckets:

BUCKET_NAME="your-bucket-name"

Select a region where AI Platform Prediction is available and create another environment variable:

REGION="us-central1"

Create your Cloud Storage bucket in this region and, later, use the same region for training and prediction. Run the following command to create the bucket if it doesn't already exist:

gcloud storage buckets create gs://$BUCKET_NAME --location=$REGION

Building and training a scikit-learn model

Often, you can't use your data in its raw form to train a machine learning model. Even when you can, preprocessing the data before using it for training can sometimes improve your model.

Assuming that you expect the input for prediction to have the same format as your training data, you must apply identical preprocessing during training and prediction to ensure that your model makes consistent predictions.

In this section, create a preprocessing module and use it as part of training. Then export a preprocessor with characteristics learned during training to use later in your custom prediction routine.

Install dependencies for local training

Training locally requires several dependencies:

pip install "numpy>=1.16.0" scikit-learn==0.20.2

Write your preprocessor

Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.

Create preprocess.py, which contains a class to do this scaling:

import numpy as np

class MySimpleScaler(object):
  def __init__(self):
    self._means = None
    self._stds = None

  def preprocess(self, data):
    if self._means is None: # during training only
      self._means = np.mean(data, axis=0)

    if self._stds is None: # during training only
      self._stds = np.std(data, axis=0)
      if not self._stds.all():
        raise ValueError('At least one column has standard deviation of 0.')

    return (data - self._means) / self._stds

Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward.

This lets you store characteristics of the training distribution and use them for identical preprocessing at prediction time.
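
As a quick illustration of that behavior, the following sketch (the sample values are arbitrary) fits the scaler on two training rows and then applies the same stored statistics to a new row:

import numpy as np

from preprocess import MySimpleScaler

scaler = MySimpleScaler()
train = np.array([[1.0, 10.0], [3.0, 30.0]])
print(scaler.preprocess(train))  # first call computes and stores the means and stds: [[-1. -1.], [1. 1.]]
print(scaler.preprocess(np.array([[2.0, 20.0]])))  # later calls reuse the stored statistics: [[0. 0.]]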

Train your model

Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.

At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:

import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib

from preprocess import MySimpleScaler

iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target

model = RandomForestClassifier()
model.fit(X, y)

joblib.dump(model, 'model.joblib')
with open('preprocessor.pkl', 'wb') as f:
  pickle.dump(scaler, f)

Deploying a custom prediction routine

To deploy a custom prediction routine to serve predictions from your trained model, do the following:

  • Create a custom predictor to handle requests
  • Package your predictor and your preprocessing module
  • Upload your model artifacts and your custom code to Cloud Storage
  • Deploy your custom prediction routine to AI Platform Prediction

Create a custom predictor

To deploy a custom prediction routine, you must create a class that implements the Predictor interface. This tells AI Platform Prediction how to load your model and how to handle prediction requests.

Write the following code to predictor.py:

import os
import pickle

import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib

class MyPredictor(object):
  def __init__(self, model, preprocessor):
    self._model = model
    self._preprocessor = preprocessor
    self._class_names = load_iris().target_names

  def predict(self, instances, **kwargs):
    inputs = np.asarray(instances)
    preprocessed_inputs = self._preprocessor.preprocess(inputs)
    if kwargs.get('probabilities'):
      probabilities = self._model.predict_proba(preprocessed_inputs)
      return probabilities.tolist()
    else:
      outputs = self._model.predict(preprocessed_inputs)
      return [self._class_names[class_num] for class_num in outputs]

  @classmethod
  def from_path(cls, model_dir):
    model_path = os.path.join(model_dir, 'model.joblib')
    model = joblib.load(model_path)

    preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
    with open(preprocessor_path, 'rb') as f:
      preprocessor = pickle.load(f)

    return cls(model, preprocessor)

Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the prediction output from class indexes (0, 1, or 2) into label strings (the name of the flower type).

However, if the predictor receives a probabilities keyword argument with the value True, it returns a probability array instead, denoting the probability that each of the three classes is the correct label (according to the model). The last part of this tutorial shows how to provide a keyword argument during prediction.
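
Before packaging this code, you can sanity-check the predictor locally. A minimal sketch, assuming model.joblib and preprocessor.pkl from the previous section are in the current directory:

from predictor import MyPredictor

predictor = MyPredictor.from_path('.')
instances = [[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]]
print(predictor.predict(instances))                      # for example: ['versicolor', 'setosa']
print(predictor.predict(instances, probabilities=True))  # for example: [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]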

Package your custom code

You must package predictor.py and preprocess.py as a .tar.gz source distribution package and provide the package to AI Platform Prediction so it can use your custom code to serve predictions.

Write the following setup.py to define your package:

from setuptools import setup

setup(
    name='my_custom_code',
    version='0.1',
    scripts=['predictor.py', 'preprocess.py'])

Then run the following command to create dist/my_custom_code-0.1.tar.gz:

python setup.py sdist --formats=gztar

Upload model artifacts and custom code to Cloud Storage

Before you can deploy your model for serving, AI Platform Prediction needs access to the following files in Cloud Storage:

  • model.joblib (model artifact)
  • preprocessor.pkl (model artifact)
  • my_custom_code-0.1.tar.gz (custom code)

Model artifacts must be stored together in a model directory, which your Predictor can access as the model_dir argument in its from_path class method. The custom code does not need to be in the same directory. Run the following commands to upload your files:

gcloud storage cp ./dist/my_custom_code-0.1.tar.gz gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
gcloud storage cp model.joblib preprocessor.pkl gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/
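
Optionally, confirm that the files are in place by listing the directory:

gcloud storage ls --recursive gs://$BUCKET_NAME/custom_prediction_routine_tutorial/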

Deploy your custom prediction routine

Create a model resource and a version resource to deploy your custom prediction routine. First define environment variables with your resource names:

MODEL_NAME='IrisPredictor'
VERSION_NAME='v1'

Then create your model:

gcloud ai-platform models create $MODEL_NAME \
  --regions $REGION

Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage:

gcloud components install beta

gcloud beta ai-platform versions create $VERSION_NAME \
  --model $MODEL_NAME \
  --runtime-version 1.13 \
  --python-version 3.5 \
  --origin gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/ \
  --package-uris gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
  --prediction-class predictor.MyPredictor

Learn more about the options you must specify when you deploy a custom prediction routine.
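
To check the state of the new version after it is created, you can describe it:

gcloud ai-platform versions describe $VERSION_NAME --model $MODEL_NAME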

Serving online predictions

Try out your deployment by sending an online prediction request. First, install the Google API Client Library for Python:

pip install --upgrade google-api-python-client

Then send two instances of iris data to your deployed version by running the following Python code:

import googleapiclient.discovery

# Set these to match your project and the model and version you deployed above.
PROJECT_ID = 'your-project-id'
MODEL_NAME = 'IrisPredictor'
VERSION_NAME = 'v1'

instances = [
  [6.7, 3.1, 4.7, 1.5],
  [4.6, 3.1, 1.5, 0.2],
]

service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)

response = service.projects().predict(
    name=name,
    body={'instances': instances}
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
else:
    print(response['predictions'])
['versicolor', 'setosa']

Sending keyword arguments

When you send a prediction request to a custom prediction routine, you can provide additional fields on your request body. The Predictor's predict method receives these as fields of the **kwargs dictionary.

The following code sends the same request as before, but this time it adds a probabilities field to the request body:

response = service.projects().predict(
    name=name,
    body={'instances': instances, 'probabilities': True}
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
else:
    print(response['predictions'])
[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]

Cleaning up

To clean up all GCP resources used in this tutorial, you can delete the GCP project that you used.

Alternatively, you can clean up individual resources by running the following commands:

# Delete version resource
gcloud ai-platform versions delete $VERSION_NAME --quiet --model $MODEL_NAME

# Delete model resource
gcloud ai-platform models delete $MODEL_NAME --quiet

# Delete Cloud Storage objects that were created
gcloud storage rm gs://$BUCKET_NAME/custom_prediction_routine_tutorial --recursive

What's next