Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ExplanationParameters.
Parameters that configure how a Model's predictions are explained.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#integrated_gradients_attribution
def integrated_gradients_attribution() -> ::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution
- (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
#integrated_gradients_attribution=
def integrated_gradients_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution
- value (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- (::Google::Cloud::AIPlatform::V1::IntegratedGradientsAttribution) — An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
#output_indices
def output_indices() -> ::Google::Protobuf::ListValue
- (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers with the same shape as the output it's explaining. If not populated, returns attributions for the top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
#output_indices=
def output_indices=(value) -> ::Google::Protobuf::ListValue
- value (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers with the same shape as the output it's explaining. If not populated, returns attributions for the top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
- (::Google::Protobuf::ListValue) — If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers with the same shape as the output it's explaining. If not populated, returns attributions for the top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
#sampled_shapley_attribution
def sampled_shapley_attribution() -> ::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution
- (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
#sampled_shapley_attribution=
def sampled_shapley_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution
- value (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
- (::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution) — An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
#top_k
def top_k() -> ::Integer
- (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
#top_k=
def top_k=(value) -> ::Integer
- value (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
- (::Integer) — If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
#xrai_attribution
def xrai_attribution() -> ::Google::Cloud::AIPlatform::V1::XraiAttribution
- (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
#xrai_attribution=
def xrai_attribution=(value) -> ::Google::Cloud::AIPlatform::V1::XraiAttribution
- value (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
- (::Google::Cloud::AIPlatform::V1::XraiAttribution) — An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.