Class Google::Cloud::AIPlatform::V1::ExplanationParameters (v0.1.0)


Parameters that configure explanations for a Model's predictions.

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#integrated_gradients_attribution

def integrated_gradients_attribution() -> ::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution
Returns
  • (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
#integrated_gradients_attribution=

def integrated_gradients_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution
Parameter
  • value (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
Returns
  • (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
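
The path-integral idea behind integrated gradients can be sketched in plain Ruby for a toy model whose gradient is known in closed form. This is illustrative only (the `integrated_gradients` function below is not part of the API; the actual attribution is computed by the Vertex AI service):

```ruby
# Integrated gradients for a toy model f(x) = sum(x_i^2), whose gradient is
# 2*x. Attributions integrate the gradient along the straight-line path from
# a baseline to the input (midpoint Riemann sum). Illustrative only.
def integrated_gradients(input, baseline, steps: 50)
  n = input.length
  attributions = Array.new(n, 0.0)
  steps.times do |k|
    alpha = (k + 0.5) / steps
    point = Array.new(n) { |i| baseline[i] + alpha * (input[i] - baseline[i]) }
    gradient = point.map { |v| 2.0 * v } # closed-form gradient of f
    n.times { |i| attributions[i] += gradient[i] * (input[i] - baseline[i]) / steps }
  end
  attributions
end

# Completeness property: attributions sum to f(input) - f(baseline).
attrs = integrated_gradients([3.0, -2.0], [0.0, 0.0])
```

For this quadratic toy model, `attrs` sums (up to floating-point error) to `f([3.0, -2.0]) - f([0.0, 0.0]) = 13.0`, which is the completeness property integrated gradients guarantees.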

#output_indices

def output_indices() -> ::Google::Protobuf::ListValue
Returns
  • (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

    If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

    Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

#output_indices=

def output_indices=(value) -> ::Google::Protobuf::ListValue
Parameter
  • value (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

    If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

    Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

Returns
  • (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

    If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

    Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
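
The restriction output_indices describes can be sketched in plain Ruby. The function and field names below are illustrative, not part of the API; the service applies this filtering server-side:

```ruby
# From per-output attributions, keep only those whose output index appears in
# the given list — the behavior output_indices configures. Illustrative only.
def filter_attributions(attributions, output_indices)
  attributions.select { |a| output_indices.include?(a[:output_index]) }
end

all_outputs = [
  { output_index: 0, value: 0.12 },
  { output_index: 1, value: 0.55 },
  { output_index: 2, value: 0.33 }
]
kept = filter_attributions(all_outputs, [0, 2])
```

Here only the attributions for outputs 0 and 2 survive; output 1 is dropped even though it has the highest score.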

#sampled_shapley_attribution

def sampled_shapley_attribution() -> ::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution
Returns
  • (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.

#sampled_shapley_attribution=

def sampled_shapley_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution
Parameter
  • value (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
Returns
  • (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
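
The sampling strategy the description refers to can be sketched in plain Ruby: average each feature's marginal contribution over random feature orderings instead of enumerating all subsets. The `sampled_shapley` function below is illustrative only; the service computes the real values:

```ruby
# Monte Carlo estimate of Shapley values: for each sampled ordering, switch
# features from baseline to input one at a time and credit each feature with
# the resulting change in the model's score. Illustrative only.
def sampled_shapley(model, input, baseline, paths: 50, rng: Random.new(1))
  n = input.length
  attributions = Array.new(n, 0.0)
  paths.times do
    order = (0...n).to_a.shuffle(random: rng)
    current = baseline.dup
    previous = model.call(current)
    order.each do |i|
      current[i] = input[i] # switch feature i from baseline to input value
      score = model.call(current)
      attributions[i] += (score - previous) / paths
      previous = score
    end
  end
  attributions
end

# For an additive model the estimate is exact regardless of ordering.
additive = ->(x) { x.sum }
attrs = sampled_shapley(additive, [3.0, 5.0], [0.0, 0.0], paths: 20)
```

More sample paths (configured via SampledShapleyAttribution#path_count in the real API) trade latency for a lower-variance estimate.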

#top_k

def top_k() -> ::Integer
Returns
  • (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.

#top_k=

def top_k=(value) -> ::Integer
Parameter
  • value (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
Returns
  • (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
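
The selection rule this field controls can be sketched in plain Ruby. The `indices_to_explain` function is illustrative only; the service applies the rule server-side:

```ruby
# Which output indices get explained: with top_k set, the K highest-scoring
# outputs (-1 means all of them); with nothing set, the argmax output.
def indices_to_explain(scores, top_k: nil)
  return [scores.each_with_index.max_by { |s, _| s }.last] if top_k.nil?
  k = top_k == -1 ? scores.length : top_k
  scores.each_with_index.max_by(k) { |s, _| s }.map(&:last)
end

indices_to_explain([0.1, 0.7, 0.2])            # argmax only
indices_to_explain([0.1, 0.7, 0.2], top_k: 2)  # two best outputs
indices_to_explain([0.1, 0.7, 0.2], top_k: -1) # every output
```

Note that output_indices, when populated, takes precedence over this rule, as described under #output_indices.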

#xrai_attribution

def xrai_attribution() -> ::Google::Cloud::AIPlatform::V1::XraiAttribution
Returns
  • (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

    XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

#xrai_attribution=

def xrai_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::XraiAttribution
Parameter
  • value (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

    XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

Returns
  • (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

    XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
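
The redistribution step the description mentions can be sketched in plain Ruby: per-pixel Integrated Gradients attributions are summed over precomputed image segments, so salience is reported per region rather than per pixel. The function below is illustrative only; segmentation and attribution are performed by the service:

```ruby
# Aggregate per-pixel attributions into per-region totals, given a segment id
# for each pixel — the core of XRAI's region-level redistribution.
def region_attributions(pixel_attributions, segments)
  totals = Hash.new(0.0)
  pixel_attributions.each_with_index { |a, i| totals[segments[i]] += a }
  totals
end

# Four pixels in two regions: each region accumulates its pixels' attributions.
regions = region_attributions([0.2, 0.3, 0.1, 0.4], [0, 0, 1, 1])
```

Reporting salience per region is what makes XRAI effective on natural images, where coherent objects span many pixels.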