Vertex AI V1 API - Class Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig (v0.37.0)

Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig.

The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.
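A minimal construction sketch (the surrounding ModelDeploymentMonitoringJob wiring and the Model's explanation_spec are assumed to be configured elsewhere):

require "google/cloud/ai_platform/v1"

# Sketch: build an ExplanationConfig that turns on feature-attribution
# monitoring. Only meaningful if the monitored Model has explanation_spec.
explanation_config =
  Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig.new(
    enable_feature_attributes: true
  )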

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#enable_feature_attributes

def enable_feature_attributes() -> ::Boolean
Returns
  • (::Boolean) — Whether to analyze the Vertex Explainable AI feature attribute scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.

#enable_feature_attributes=

def enable_feature_attributes=(value) -> ::Boolean
Parameter
  • value (::Boolean) — Whether to analyze the Vertex Explainable AI feature attribute scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
Returns
  • (::Boolean) — Whether to analyze the Vertex Explainable AI feature attribute scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
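
For example (a sketch; explanation_config is assumed to be an existing instance of this class):

# Enable attribution-based skew/drift analysis and read the flag back.
explanation_config.enable_feature_attributes = true
explanation_config.enable_feature_attributes # => true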

#explanation_baseline

def explanation_baseline() -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline
Returns
  • (::Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline) — Predictions generated by the BatchPredictionJob using baseline dataset.

#explanation_baseline=

def explanation_baseline=(value) -> ::Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline
Parameter
  • value (::Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline) — Predictions generated by the BatchPredictionJob using baseline dataset.
Returns
  • (::Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline) — Predictions generated by the BatchPredictionJob using baseline dataset.
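
A sketch of attaching a baseline (the gcs and prediction_format fields and the :JSONL value are assumptions about the ExplanationBaseline and GcsDestination messages; check their own reference pages):

# Sketch: point the attribution baseline at batch predictions in Cloud Storage.
# Field names here are assumptions, not confirmed by this page.
baseline =
  Google::Cloud::AIPlatform::V1::ModelMonitoringObjectiveConfig::ExplanationConfig::ExplanationBaseline.new(
    gcs: Google::Cloud::AIPlatform::V1::GcsDestination.new(
      output_uri_prefix: "gs://my-bucket/explanation-baseline/"
    ),
    prediction_format: :JSONL
  )
explanation_config.explanation_baseline = baseline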