Explainable AI

Tools and frameworks to understand and interpret your machine learning models.

Understand AI output and build trust

Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models. With it, you can debug and improve model performance, and help others understand your models' behavior. You can also generate feature attributions for model predictions in AutoML Tables and Vertex AI, and visually investigate model behavior using the What-If Tool.
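As a minimal sketch, assuming the google-cloud-aiplatform Python SDK and a placeholder project, endpoint ID, and feature values, requesting feature attributions from a deployed model might look like this:

```python
# A minimal sketch, assuming the google-cloud-aiplatform SDK; the project,
# endpoint ID, and feature values below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Reference an endpoint serving a model that was deployed with explanations enabled.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

# explain() returns predictions along with per-feature attributions.
response = endpoint.explain(
    instances=[{"age": 39, "hours_per_week": 40, "education": "Bachelors"}]
)

print(response.predictions)
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # Contribution of each input feature to this prediction.
        print(attribution.feature_attributions)
```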

Design interpretable and inclusive AI

Build interpretable and inclusive AI systems from the ground up with tools designed to help detect and resolve bias, drift, and other gaps in data and models. AI Explanations in AutoML Tables, Vertex AI Predictions, and Notebooks provide data scientists with the insight needed to improve datasets or model architecture and debug model performance. The What-If Tool lets you investigate model behavior at a glance.

Deploy AI with confidence

Grow end-user trust and improve transparency with human-interpretable explanations of machine learning models. When deploying a model on AutoML Tables or Vertex AI, you get a prediction in real time along with a score indicating how much each feature affected the result. While explanations don’t reveal any fundamental relationships in your data sample or population, they do reflect the patterns the model found in the data.
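For custom-trained models, the explanation method is configured when the model is uploaded. The following is a hedged sketch assuming a TensorFlow SavedModel served from a prebuilt container; the bucket path, tensor names, and container image are placeholders:

```python
# A sketch of enabling explanations for a custom-trained model at upload time,
# assuming a TensorFlow SavedModel in a prebuilt serving container. The bucket
# path, tensor names, and container image are placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import explain

aiplatform.init(project="my-project", location="us-central1")

# Sampled Shapley attributions; integrated gradients or XRAI are configured
# the same way through ExplanationParameters.
parameters = explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}
)
metadata = explain.ExplanationMetadata(
    inputs={"features": {"input_tensor_name": "dense_input"}},
    outputs={"score": {"output_tensor_name": "dense_output"}},
)

model = aiplatform.Model.upload(
    display_name="tabular-model-with-explanations",
    artifact_uri="gs://my-bucket/model/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)

# Deployed endpoints of this model can now serve explain() requests.
endpoint = model.deploy(machine_type="n1-standard-4")
```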

Streamline model governance

Simplify your organization’s ability to manage and improve machine learning models with streamlined performance monitoring and training. Easily monitor the predictions your models make on Vertex AI. The continuous evaluation feature lets you compare model predictions with ground truth labels to gain continual feedback and optimize model performance.

Features

AI Explanations

Receive a score explaining how much each factor contributed to a model's predictions in AutoML Tables, inside your Notebook, or via the Vertex AI Prediction API. Read the score explanation documentation for details.
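As an illustrative follow-up, and assuming the response object returned by endpoint.explain() in the earlier sketch, the per-feature scores can be ranked to see which factors mattered most; the helper below is hypothetical:

```python
# A hypothetical helper that ranks features by how strongly they contributed to
# a single prediction, assuming the `response` returned by endpoint.explain()
# in the sketch above.
def top_features(response, instance_index=0):
    """Return (feature, score) pairs sorted by absolute contribution."""
    explanation = response.explanations[instance_index]
    # feature_attributions maps feature names to contribution scores; depending
    # on the SDK version it may arrive as a dict-like protobuf structure, so we
    # normalize it with dict().
    scores = dict(explanation.attributions[0].feature_attributions)
    return sorted(scores.items(), key=lambda item: abs(item[1]), reverse=True)

for feature, score in top_features(response):
    print(f"{feature}: {score:+.4f}")
```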

What-If Tool

Investigate model performance for a range of features in your dataset, optimization strategies, and even manipulations to individual datapoint values using the What-If Tool integrated with Vertex AI.
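A rough notebook sketch of wiring the What-If Tool to a deployed endpoint through a custom predict function is shown below; the witwidget package is assumed, and the endpoint ID and example rows are placeholders:

```python
# A notebook sketch wiring the What-If Tool to a deployed endpoint through a
# custom predict function; the witwidget package is assumed, and the endpoint
# ID and example rows are placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

test_rows = [
    {"age": 39.0, "hours_per_week": 40.0},
    {"age": 52.0, "hours_per_week": 30.0},
]

def build_example(row):
    """Convert a dict of numeric features into a tf.train.Example for the tool."""
    feature = {
        key: tf.train.Feature(float_list=tf.train.FloatList(value=[float(value)]))
        for key, value in row.items()
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

def predict_fn(examples):
    """Score tf.Examples by calling the endpoint (instance format is model-specific)."""
    instances = [
        {key: feat.float_list.value[0] for key, feat in ex.features.feature.items()}
        for ex in examples
    ]
    return endpoint.predict(instances=instances).predictions

config = WitConfigBuilder(
    [build_example(row) for row in test_rows]
).set_custom_predict_fn(predict_fn)
WitWidget(config, height=800)
```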

Continuous evaluation

Sample predictions from trained machine learning models deployed to Vertex AI, and provide ground truth labels for the prediction inputs using the continuous evaluation capability. Data Labeling Service compares model predictions with ground truth labels to help you improve model performance.
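Conceptually, the comparison step boils down to joining sampled predictions with their ground truth labels and tracking metrics over time. The records in this sketch are placeholders standing in for what the evaluation pipeline collects:

```python
# An illustrative sketch of the comparison step: sampled predictions are joined
# with ground truth labels to track ongoing accuracy. These records are
# placeholders standing in for what the evaluation pipeline collects.
sampled_predictions = [
    {"input_id": "a1", "predicted_label": "approved", "confidence": 0.91},
    {"input_id": "a2", "predicted_label": "denied", "confidence": 0.77},
    {"input_id": "a3", "predicted_label": "approved", "confidence": 0.64},
]
ground_truth = {"a1": "approved", "a2": "approved", "a3": "approved"}

matches = sum(
    1
    for record in sampled_predictions
    if ground_truth.get(record["input_id"]) == record["predicted_label"]
)
accuracy = matches / len(sampled_predictions)
print(f"Sampled accuracy: {accuracy:.2%}")  # 66.67% for these placeholder records
```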

Pricing

Explainable AI tools are provided at no extra charge to users of AutoML Tables or Vertex AI. Note that Cloud AI is billed by node-hour usage, and running AI Explanations on model predictions requires additional compute and storage, so users of Explainable AI may see their node-hour usage increase.

Cloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.