
Introducing the What-If Tool for Cloud AI Platform models

July 18, 2019
Sara Robinson

Developer Advocate, Google Cloud Platform

James Wexler

Software Engineer

Last year our TensorFlow team announced the What-If Tool, an interactive visual interface designed to help you visualize your datasets and better understand the output of your TensorFlow models. Today, we’re announcing a new integration with the What-If Tool to analyze your models deployed on AI Platform. In addition to TensorFlow models, you can also use the What-If Tool for your XGBoost and Scikit Learn models deployed on AI Platform.

As AI models grow in complexity, understanding the inner workings of a model makes it possible to explain and interpret the outcomes driven by AI. As a result, AI explainability has become a critical requirement for most organizations in industries like financial services, healthcare, media and entertainment, and technology. 

With this integration, AI Platform users can develop a deeper understanding of how their models work under different scenarios, and build rich visualizations to explain model performance to business users and other stakeholders of AI within an enterprise.

With just one method call, you can connect your AI Platform model to the What-If Tool:

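For instance, here's a minimal sketch using the witwidget package (the project, model, and version names are placeholders, and test_examples stands in for a list of examples in your model's input format):

from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Point the What-If Tool at a model deployed on Cloud AI Platform.
# 'your-gcp-project', 'your_model', and 'v1' are placeholder names.
config_builder = WitConfigBuilder(test_examples).set_ai_platform_model(
    'your-gcp-project', 'your_model', 'v1')

# Render the interactive What-If Tool widget in the notebook.
WitWidget(config_builder)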

You can use this new integration from AI Platform Notebooks, Colab notebooks, or locally via Jupyter notebooks. In this post, we’ll walk you through an example using an XGBoost model deployed on AI Platform.

Getting started: deploying a model to AI Platform
In order to use this integration, you’ll need a model deployed on Cloud AI Platform. Once you’ve trained a model, you can deploy it to AI Platform using the gcloud CLI. If you don't yet have a Cloud account, we’ve provided a notebook that runs the What-If Tool on a public Cloud AI Platform model so you can easily try out the integration before you deploy your own.

The XGBoost example we’ll be showing here is a binary classification model for predicting whether or not a mortgage application will be approved, trained on this public dataset. In order to deploy this model, we’ve exported it to a .bst model file (the format XGBoost uses) and uploaded this to a Cloud Storage bucket in our project. We can deploy it with this command (make sure to define the environment variables when you run this):

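As a sketch, the deployment looks something like the following; the model name, bucket path, and runtime version are placeholders you’d replace with your own values:

# Placeholders: set these to match your project and Cloud Storage bucket.
MODEL_NAME=xgb_mortgage
VERSION_NAME=v1
MODEL_DIR=gs://your-bucket/path/to/model_dir  # directory containing the exported model.bst

# Create the model resource, then a version that serves the exported XGBoost file.
gcloud ai-platform models create $MODEL_NAME

gcloud ai-platform versions create $VERSION_NAME \
  --model=$MODEL_NAME \
  --origin=$MODEL_DIR \
  --runtime-version=1.14 \
  --framework=xgboost \
  --python-version=3.5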

Connecting your model to the What-If Tool
Once your model has been deployed, you can view its performance on a dataset in the What-If Tool by setting up a WitConfigBuilder object as shown in the code snippet above. Provide your test examples in the format expected by the model, whether that be a list of JSON dictionaries, JSON lists, or tf.Example protos. Your test examples should include the ground truth labels so you can explore how different features impact your model’s predictions. 

Point the tool at your model through your project name, model name, and model version, and optionally set the name of the feature in the dataset that the model is trying to predict. 
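Putting those pieces together, a sketch of the configuration for our mortgage model might look like this (the feature name 'mortgage_status', the model details, and the widget height are illustrative placeholders):

# test_examples: your test set, with the ground truth label included in each example.
config_builder = (WitConfigBuilder(test_examples)
                  .set_ai_platform_model('your-gcp-project', 'xgb_mortgage', 'v1')
                  .set_target_feature('mortgage_status'))  # the label the model predicts
WitWidget(config_builder, height=800)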

Additionally, if you want to compare the performance of two models on the same dataset, set the second model using the set_compare_ai_platform_model method. One of our demo notebooks shows you how to use this method to compare tf.keras and Scikit Learn models deployed on Cloud AI Platform.
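For example, here's a sketch that compares two deployed models on the same test set (the model names are placeholders):

# Compare a tf.keras model and a Scikit Learn model deployed on AI Platform.
config_builder = (WitConfigBuilder(test_examples)
                  .set_ai_platform_model('your-gcp-project', 'keras_model', 'v1')
                  .set_compare_ai_platform_model('your-gcp-project', 'sklearn_model', 'v1')
                  .set_target_feature('mortgage_status'))
WitWidget(config_builder)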

Understanding What-If Tool visualizations
The What-If Tool guide provides a full walkthrough of the tool's features.

The initial view in the tool is the Datapoint Editor, which shows all of the examples in the provided dataset along with the model’s prediction for each:

[Image: Datapoint Editor view in the What-If Tool — https://storage.googleapis.com/gweb-cloudblog-publish/images/Datapoint_Editor.max-1500x1500.png]

Click on any example in the main panel to see its details in the left panel. You can change anything about the datapoint and run it through the model again to see how those changes affect the prediction. The main panel can be organized into custom visualizations (confusion matrices, scatter plots, histograms, and more) using the dropdown menus at the top. Click the partial dependence plot option in the left panel to see how changing each feature individually for a datapoint changes the model’s results, or click the "Show nearest counterfactual datapoint" toggle to compare the selected datapoint to the most similar datapoint for which the model predicted a different outcome.

The Performance + Fairness tab shows aggregate model results over the entire dataset:

[Image: Performance + Fairness tab in the What-If Tool — https://storage.googleapis.com/gweb-cloudblog-publish/images/Performance__Fairness.max-1400x1400.png]

Additionally, you can slice your dataset by features and compare performance across those slices, identifying subsets of data on which your model performs best or worst, which can be very helpful for ML fairness investigations.

Using What-If Tool from AI Platform Notebooks
The What-If Tool widget (WitWidget) comes pre-installed in all TensorFlow instances of AI Platform Notebooks.

You can use it in exactly the same way as we’ve described above, by calling set_ai_platform_model to connect the What-If Tool to your deployed AI Platform models.

Start building
Want to start connecting your own AI Platform models to the What-If Tool? Check out these demos and resources:

  • Demo notebooks: these work on Colab, Cloud AI Platform Notebooks, and Jupyter. If you’re running them from AI Platform Notebooks, they will work best if you use one of the TensorFlow instance types.

  • What-If Tool: For a detailed walkthrough of all the What-If Tool features, check out their guide or the documentation.

We’re actively working on introducing more capabilities for model evaluation and understanding within the AI Platform to help you meaningfully interpret how your models make predictions, and build end user trust through model transparency. And if you use our new What-If Tool integration we’d love your feedback. Find us on Twitter at @SRobTweets and @bengiswex.
