Using the What-If Tool

You can use the What-If Tool (WIT) within notebook environments to inspect AI Platform models through an interactive dashboard. The What-If Tool integrates with TensorBoard, Jupyter notebooks, Colab notebooks, and JupyterHub. It is also pre-installed on AI Platform Notebooks TensorFlow instances.

This page explains how to use the What-If Tool with a trained model that is already deployed on AI Platform. If you have not deployed a model yet, first learn how to deploy models.

Install the What-If Tool

The following sections show how to install the witwidget package in a variety of notebook environments:

AI Platform Notebooks

witwidget is already installed in AI Platform Notebooks.

You can load the What-If Tool tutorials into your notebook instance by cloning the WIT tutorial repo into your instance. See how to clone a GitHub repo into a notebook instance.

Jupyter notebook

Install and enable WIT for Jupyter through the following commands:

pip install witwidget
jupyter nbextension install --py --symlink --sys-prefix witwidget
jupyter nbextension enable --py --sys-prefix witwidget

To use TensorFlow with GPU support (tensorflow-gpu), install the GPU-compatible version of witwidget:

pip install witwidget-gpu
jupyter nbextension install --py --symlink --sys-prefix witwidget
jupyter nbextension enable --py --sys-prefix witwidget

You only need to run these installation commands once in your Jupyter environment. After that, the What-If Tool is available every time you start a Jupyter kernel.

Colab notebook

Install the widget into the runtime of the notebook kernel by running a cell containing:

!pip install witwidget

To use TensorFlow with GPU support (tensorflow-gpu), install the GPU-compatible version of witwidget:

!pip install witwidget-gpu

JupyterLab

Follow these instructions if you are running JupyterLab outside of AI Platform Notebooks.

Install and enable WIT for JupyterLab by running a cell containing:

!pip install witwidget
!jupyter labextension install wit-widget
!jupyter labextension install @jupyter-widgets/jupyterlab-manager

For TensorFlow GPU support, use the witwidget-gpu package:

!pip install witwidget-gpu
!jupyter labextension install wit-widget
!jupyter labextension install @jupyter-widgets/jupyterlab-manager

Note that, depending on your notebook setup, you may need to run the !jupyter labextension ... commands with sudo.

Adjust prediction outputs for the What-If Tool

If your model's output doesn't match the format that the What-If Tool requires, define a prediction adjustment function in your code.

The What-If Tool requires predictions in the following formats:

  • Classification model: a list of class scores for each example
  • Regression model: a single regression score for each example

For example, an XGBoost binary classification model returns scores only for the positive class. Because the What-If Tool expects scores for each class, you'd use the following prediction adjustment function to get a list with both the negative and positive scores:

def adjust_prediction(pred):
  # Convert the single positive-class score into [negative, positive] scores.
  return [1 - pred, pred]

Define the prediction adjustment function within a notebook code cell before you configure the What-If Tool.
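As a quick check, here is the adjustment applied to a small batch of positive-class scores (the score values below are made up for illustration):

```python
def adjust_prediction(pred):
    # Convert the single positive-class score into [negative, positive] scores.
    return [1 - pred, pred]

# Hypothetical positive-class scores returned by a binary classifier.
raw_predictions = [0.25, 0.5]

# The What-If Tool expects one score per class for each example.
adjusted = [adjust_prediction(p) for p in raw_predictions]
print(adjusted)  # [[0.75, 0.25], [0.5, 0.5]]
```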

Configure the What-If Tool

You can use the What-If Tool to inspect one model, or to compare two models. In the WitConfigBuilder, fill in the appropriate values for your own Google Cloud project, your AI Platform model and version, and other values as appropriate for your model.

  1. Import the WitConfigBuilder and WitWidget classes from witwidget:

    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
    
  2. Create a WitConfigBuilder to pass various parameters about your model and your AI Platform project to the What-If Tool.

    Inspect a model

    This config builder shows how to use the What-If Tool to inspect one model. See the full example notebook with a deployed XGBoost model.

    See the code for the WitConfigBuilder for more details on each method.

    PROJECT_ID = 'YOUR_PROJECT_ID'
    MODEL_NAME = 'YOUR_MODEL_NAME'
    VERSION_NAME = 'YOUR_VERSION_NAME'
    TARGET_FEATURE = 'mortgage_status'
    LABEL_VOCAB = ['denied', 'approved']
    
    config_builder = (WitConfigBuilder(test_examples.tolist(), features.columns.tolist() + [TARGET_FEATURE])
      .set_ai_platform_model(PROJECT_ID, MODEL_NAME, VERSION_NAME, adjust_prediction=adjust_prediction)
      .set_target_feature(TARGET_FEATURE)
      .set_label_vocab(LABEL_VOCAB))
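
    The config builder assumes that test_examples and features already exist. As a rough sketch of how they might be prepared from a pandas DataFrame (the column names and values here are hypothetical, not from the example notebook):

```python
import pandas as pd

# Hypothetical mortgage dataset; in practice this would be loaded from
# your own data source, with 'mortgage_status' as the label column.
df = pd.DataFrame({
    'loan_amount': [150000, 320000, 87000],
    'income': [54000, 120000, 31000],
    'mortgage_status': [1, 0, 1],  # 1 = approved, 0 = denied
})

# Feature columns only (no label); WitConfigBuilder takes the feature
# names plus the target feature name.
features = df.drop(columns=['mortgage_status'])

# Examples passed to the tool include the label column;
# .tolist() converts the NumPy array for WitConfigBuilder.
test_examples = df.values

print(features.columns.tolist() + ['mortgage_status'])
```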
    

    Compare models

    This config builder shows how to use the What-If Tool to inspect and compare two models. See the full example notebook comparing deployed Keras and scikit-learn models.

    See the code for the WitConfigBuilder for more details on each method.

    PROJECT_ID = 'YOUR_PROJECT_ID'
    KERAS_MODEL_NAME = 'YOUR_KERAS_MODEL_NAME'
    KERAS_VERSION_NAME = 'VERSION_NAME_OF_YOUR_KERAS_MODEL'
    SKLEARN_MODEL_NAME = 'YOUR_SKLEARN_MODEL_NAME'
    SKLEARN_VERSION_NAME = 'VERSION_NAME_OF_YOUR_SKLEARN_MODEL'
    TARGET_FEATURE = 'quality'
    
    config_builder = (WitConfigBuilder(test_examples.tolist(), features.columns.tolist() + [TARGET_FEATURE])
      .set_ai_platform_model(PROJECT_ID, KERAS_MODEL_NAME, KERAS_VERSION_NAME)
      .set_predict_output_tensor('sequential')
      .set_uses_predict_api(True)
      .set_target_feature(TARGET_FEATURE)
      .set_model_type('regression')
      .set_compare_ai_platform_model(PROJECT_ID, SKLEARN_MODEL_NAME, SKLEARN_VERSION_NAME))
    
  3. Pass the config builder to WitWidget, and set a display height in pixels. The What-If Tool displays as an interactive visualization within your notebook.

    WitWidget(config_builder, height=800)
    
