Increasing transparency with Google Cloud Explainable AI
Tracy Frey
Director, Product Strategy & Operations, Cloud AI
June marked the first anniversary of Google’s AI Principles, which formally outline our pledge to explore the potential of AI in a respectful, ethical, and socially beneficial way. For Google Cloud, they also serve as an ongoing commitment to our customers, the tens of thousands of businesses worldwide who rely on Google Cloud AI every day, to deliver the transformative capabilities they need to thrive while helping to improve privacy, security, fairness, and the trust of their users.
We strive to build AI aligned with our AI Principles, and we’re excited to introduce Explainable AI, which helps humans understand how a machine learning model reaches its conclusions.
Increasing interpretability of AI with Explainable AI
AI can unlock new ways to make businesses more efficient and create new opportunities to delight customers. That said, as with any new data-driven decision-making tool, it can be a challenge to bring machine learning models into a business.
Machine learning models can identify intricate correlations between enormous numbers of data points. While this capability allows AI models to reach incredible accuracy, inspecting the structure or weights of a model often tells you little about its behavior. This means that for some decision makers, particularly those in industries where confidence is critical, the benefits of AI can be out of reach without interpretability.
This is why we are excited to announce our latest step in improving the interpretability of AI with Google Cloud AI Explanations. AI Explanations quantifies how much each input feature contributed to the output of a machine learning model. These attributions help enterprises understand why the model made the decisions it did, and you can use this information to further improve your models or share useful insights with the model’s consumers.
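To make the idea concrete, here is a minimal sketch of integrated gradients, one attribution technique commonly used to score each feature’s contribution to a prediction; the toy model, baseline, and feature values are hypothetical, and this illustrates the general method rather than the Cloud API itself.

```python
# A minimal sketch of integrated gradients, one way to attribute a prediction
# to individual input features. The model and inputs here are hypothetical
# stand-ins, not the Cloud AI Explanations API.
import tensorflow as tf

def integrated_gradients(model, baseline, instance, steps=50):
    """Approximate each feature's contribution to model(instance)."""
    # Interpolate between a baseline (here, all zeros) and the real instance.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1))
    interpolated = baseline + alphas * (instance - baseline)

    # Gradients of the prediction with respect to each interpolated input.
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        predictions = model(interpolated)
    grads = tape.gradient(predictions, interpolated)

    # Average the gradients along the path (trapezoidal rule) and scale by
    # how far each feature moved away from the baseline.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (instance - baseline) * avg_grads

# Hypothetical two-feature model: one attribution score per feature.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
baseline = tf.zeros((1, 2))
instance = tf.constant([[3.0, -1.5]])
print(integrated_gradients(model, baseline, instance).numpy())
```

A useful property of this approach is that the attributions sum approximately to the difference between the model’s prediction for the instance and for the baseline, so each score can be read as that feature’s share of the prediction.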
Of course, any explanation method has limitations. For one, AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about their limitations.
We have received positive feedback from customers who are looking forward to applying AI Explanations:
Sky
“Understanding how models arrive at their decisions is critical for the use of AI in our industry. We are excited to see the progress made by Google Cloud to solve this industry challenge. With tools like What-If Tool, and feature attributions in AI Platform, our data scientists can build models with confidence, and provide human-understandable explanations.” —Stefan Hoejmose, Head of Data Journeys, Sky
Vivint Smart Home
"Model interpretability is critical to our ability to optimize AI and solve the problem in the best possible way. Google is pushing the envelope in Explainable AI through research and development. And with Google Cloud, we’re getting tried and tested technologies to solve the challenge of model interpretability and uplevel our data science capabilities." —Aaron Davis, Chief Data Scientist, Vivint Smart Home
Wellio
“Introspection of models is essential for both model development and deployment. Oftentimes we tend to focus too much on predictive skill when in reality it's the more explainable model that is usually the most useful, and more importantly, the most trusted. We are excited to see these new tools made by Google Cloud, supporting both our data scientists and also our models’ customers.” —Erik Andrejko, CTO, Wellio
iRobot
“We are leveraging neural networks to develop capabilities for future products. Easy-to-use, high-quality solutions that improve the training of our deep learning models are a prerogative for our efforts. We are excited to see the progress made by Google Cloud to solve the problem of feature attributions and provide human-understandable explanations to what our models are doing.” —Chris Jones, Chief Technology Officer, iRobot
Explainable AI consists of tools and frameworks to deploy interpretable and inclusive machine learning models. AI Explanations is available now for models hosted on AutoML Tables and Cloud AI Platform Predictions. You can even pair AI Explanations with our popular What-If Tool to get a complete picture of your model’s behavior; check out this blog post for more information. To start making your own AI deployments more understandable with Explainable AI, please visit: https://cloud.google.com/explainable-ai.
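If it helps to picture that pairing, the sketch below shows one way to load a deployed AI Platform model into the What-If Tool from a notebook (pip install witwidget); the project, model, version, and feature names are placeholders, and the builder options you need may differ for your own deployment.

```python
# A rough sketch of exploring a deployed AI Platform model with the What-If
# Tool inside a notebook. All names below are hypothetical placeholders.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# A few test records plus the column names the model expects (placeholders).
test_examples = [
    [42, 61000, 18, 1],   # age, income, tenure_months, churned (label)
    [27, 38000, 3, 0],
]
feature_names = ["age", "income", "tenure_months", "churned"]

config_builder = (
    WitConfigBuilder(test_examples, feature_names)
    # Point the tool at a model served on Cloud AI Platform (hypothetical IDs).
    .set_ai_platform_model("my-gcp-project", "my_model", "v1")
    .set_target_feature("churned")
)

# Renders the interactive What-If Tool widget in the notebook cell.
WitWidget(config_builder, height=600)
```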
Expanding our Responsible AI efforts
Alongside tools and frameworks like AI Explanations, we continue to seek new ways to align our work with the AI Principles. This includes efforts focused on increasing transparency, and today we’re introducing model cards, starting with examples for two features of our Cloud Vision API, Face Detection and Object Detection. Inspired by the January 2019 academic paper Model Cards for Model Reporting, these “cards” are “short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions.” Our aim with these first model card examples is to provide practical information about a model’s performance and limitations, helping developers make better decisions about which models to use for which purposes and how to deploy them responsibly. For more information and to share your feedback, we encourage you to visit modelcards.withgoogle.com.
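As a rough picture of what one of these cards covers, the outline below follows the section headings proposed in the Model Cards for Model Reporting paper; the descriptions are generic placeholders rather than the contents of the published Cloud Vision cards.

```python
# An illustrative outline of a model card, following the section headings
# proposed in "Model Cards for Model Reporting." The descriptions are generic
# placeholders, not the contents of the published Cloud Vision cards.
model_card = {
    "model_details": "What the model does, its version, and who built it",
    "intended_use": "In-scope applications, and uses the model is not suited for",
    "factors": "Groups, environments, and instrumentation that affect performance",
    "metrics": "How performance is measured, including decision thresholds",
    "evaluation_data": "Datasets used for benchmarking and why they were chosen",
    "quantitative_analyses": "Results disaggregated across the factors above",
    "ethical_considerations": "Sensitive data, risks, and potential harms",
    "caveats_and_recommendations": "Known limitations and guidance for use",
}

for section, summary in model_card.items():
    print(f"{section}: {summary}")
```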
Additionally, we believe deeply in building our products with responsible use of AI as a core part of our development process. As we’ve shared previously, we’ve developed a process to support aligning our work with the AI Principles, and we’ve now begun working with customers as they seek to create and support such processes for their own organizations. This kind of collective support for each other’s efforts leads to more successful AI deployments, and we’ve been thrilled to work closely with HSBC, one of the world’s largest banks and a continual innovator in financial services, in this effort. By engaging in this joint process, we were able to share relevant expertise across our organizations. HSBC was impressed by the rigor and analysis we brought to the review process, as well as our commitment to ensuring safe, ethical, and fair outcomes, and quickly recognized the value of such an approach. In parallel, HSBC has been developing their own responsible AI review process to ensure their future AI initiatives benefit from the same guidance and reliability as their Google Cloud deployments.
Ongoing commitment
It’s been an exciting year for so many aspects of AI, but the most inspiring breakthroughs aren’t always about technology. Sometimes, it’s the strides we take toward increased fairness and inclusion that make the biggest difference. That’s why, as the power and scope of AI increases, we remain committed to ensuring it serves all of us. Responsible AI isn’t a destination, but a journey we share with our customers and users. We hope you’ll join us.
For more information about our Responsible AI Practices, please visit: https://ai.google/responsibilities/responsible-ai-practices/