Explainable AI
Tools and frameworks to understand and interpret your machine learning models.
Understand AI output and build trust
Design interpretable and inclusive AI
Build interpretable and inclusive AI systems from the ground up with tools designed to help detect and resolve bias, drift, and other gaps in data and models. AI Explanations in AutoML Tables, Vertex AI Predictions, and Notebooks provide data scientists with the insight needed to improve datasets or model architecture and debug model performance. The What-If Tool lets you investigate model behavior at a glance.
Deploy AI with confidence
Grow end-user trust and improve transparency with human-interpretable explanations of machine learning models. When deploying a model on AutoML Tables or Vertex AI, you get a prediction in real time along with a score indicating how much each factor contributed to the final result. While explanations don’t reveal any fundamental relationships in your data sample or population, they do reflect the patterns the model found in the data.
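As a minimal sketch of what this looks like in practice, the snippet below requests a prediction plus per-feature attribution scores from a model already deployed to a Vertex AI endpoint with explanations enabled, using the google-cloud-aiplatform SDK. The project, region, endpoint ID, and feature names are hypothetical placeholders, and the exact instance format depends on your model's input schema.

```python
# Minimal sketch: get a prediction and per-feature attribution scores from a
# Vertex AI endpoint that was deployed with explanations enabled.
# Project, region, endpoint ID, and feature values are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

# Example instance; the required format depends on your model's schema.
instance = {"age": 42, "income": 55000, "tenure_months": 18}

response = endpoint.explain(instances=[instance])

print("Prediction:", response.predictions[0])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # Attribution scores describe how much each feature pushed the
        # prediction away from the model's baseline output value.
        print("Baseline output:", attribution.baseline_output_value)
        print("Feature attributions:", attribution.feature_attributions)
```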
Streamline model governance
Simplify your organization’s ability to manage and improve machine learning models with streamlined performance monitoring and training. Easily monitor the predictions your models make on Vertex AI. The continuous evaluation feature lets you compare model predictions with ground truth labels to gain continual feedback and optimize model performance.
Features
AI Explanations
Receive a score explaining how much each factor contributed to the model’s predictions in AutoML Tables, BigQuery ML, in your notebook, or via the Vertex AI Prediction API. Read the score explanation documentation.
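For the BigQuery ML path, the same kind of per-feature scores can be retrieved with ML.EXPLAIN_PREDICT. A minimal sketch using the BigQuery Python client follows; the dataset, model, and table names are hypothetical placeholders.

```python
# Minimal sketch: fetch per-feature attribution scores for BigQuery ML
# predictions with ML.EXPLAIN_PREDICT. Dataset, model, and table names
# are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT *
FROM ML.EXPLAIN_PREDICT(
  MODEL `my_dataset.churn_model`,
  TABLE `my_dataset.new_customers`,
  STRUCT(3 AS top_k_features))  -- return the 3 most influential features per row
"""

for row in client.query(sql).result():
    # Each row carries the prediction plus its top feature attributions.
    print(dict(row))
```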
What-If Tool
Investigate model performance for a range of features in your dataset, optimization strategies, and even manipulations to individual datapoint values using the What-If Tool integrated with Vertex AI.
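The sketch below shows one way this could look in a Vertex AI Workbench notebook, loading a handful of examples into the What-If Tool and routing predictions through a deployed endpoint via a custom predict function. The endpoint ID, feature names, and values are hypothetical, and the predict function must return scores in the format the tool expects for your model type.

```python
# Minimal sketch: explore a few examples in the What-If Tool inside a notebook,
# scoring them against a deployed Vertex AI endpoint. Endpoint ID, features,
# and values are hypothetical placeholders.
import tensorflow as tf
from google.cloud import aiplatform
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

def make_example(features):
    # Pack a dict of numeric features into a tf.Example for the tool.
    ex = tf.train.Example()
    for name, value in features.items():
        ex.features.feature[name].float_list.value.append(float(value))
    return ex

examples = [make_example({"age": 42, "income": 55000}),
            make_example({"age": 29, "income": 38000})]

def predict_fn(examples_to_score):
    # Convert tf.Examples back to plain dicts and score them on the endpoint.
    instances = [
        {name: feat.float_list.value[0]
         for name, feat in ex.features.feature.items()}
        for ex in examples_to_score
    ]
    # Must be reshaped to whatever format WIT expects for your model type
    # (e.g., per-class score lists for classification).
    return endpoint.predict(instances=instances).predictions

config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
WitWidget(config, height=600)
```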
Continuous evaluation
Sample predictions from trained machine learning models deployed to Vertex AI. Provide ground truth labels for prediction inputs using the continuous evaluation capability. Data Labeling Service compares model predictions with ground truth labels to help you improve model performance.
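The managed feature runs this comparison for you; conceptually, the feedback loop amounts to something like the local sketch below, which uses scikit-learn on hypothetical sampled predictions and ground-truth labels and is not the continuous evaluation API itself.

```python
# Illustrative only: the core idea behind continuous evaluation is comparing
# sampled model predictions against ground-truth labels supplied later.
# This local sketch uses scikit-learn and hypothetical data; the managed
# feature on Vertex AI performs this comparison for you.
from sklearn.metrics import classification_report

# Predictions sampled from the deployed model (hypothetical).
sampled_predictions = ["churn", "stay", "stay", "churn", "stay"]

# Ground-truth labels provided afterwards, e.g. via Data Labeling Service.
ground_truth_labels = ["churn", "stay", "churn", "churn", "stay"]

# Ongoing metrics highlight where the model drifts from reality.
print(classification_report(ground_truth_labels, sampled_predictions))
```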
Resources
- Increasing transparency with Google Cloud AI Explanations
- AI Explanations for Vertex AI
- BigQuery ML features and capabilities
- AutoML Tables features and capabilities
- Explaining model predictions on image data
- Explaining model predictions on structured data
- Code samples for Explainable AI
- AI Explainability Whitepaper
- Putting AI principles into action
Pricing
Explainable AI tools are provided at no extra charge to users of AutoML Tables or Vertex AI. Note that Cloud AI is billed by node-hour usage, and running AI Explanations on model predictions requires additional compute and storage, so users of Explainable AI may see their node-hour usage increase.
Cloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.