Interface ExplanationParametersOrBuilder (3.41.0)

public interface ExplanationParametersOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods

getExamples()

public abstract Examples getExamples()

Example-based explanations that return the nearest neighbors from the provided dataset.

.google.cloud.aiplatform.v1.Examples examples = 7;

Returns
Type: Examples
Description: The examples.
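
For orientation, here is a minimal sketch of reading this field through the OrBuilder view. It uses only accessors documented on this page plus the standard com.google.cloud.aiplatform.v1 imports.

import com.google.cloud.aiplatform.v1.Examples;
import com.google.cloud.aiplatform.v1.ExplanationParametersOrBuilder;

// Sketch only: print the example-based explanation config if it is set.
static void printExamplesConfig(ExplanationParametersOrBuilder params) {
  if (params.hasExamples()) {
    Examples examples = params.getExamples();
    System.out.println("Example-based explanations configured: " + examples);
  }
}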

getExamplesOrBuilder()

public abstract ExamplesOrBuilder getExamplesOrBuilder()

Example-based explanations that return the nearest neighbors from the provided dataset.

.google.cloud.aiplatform.v1.Examples examples = 7;

Returns
Type: ExamplesOrBuilder
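
A hedged sketch of why the OrBuilder accessor exists: on ExplanationParameters.Builder it returns the live nested builder when one is active, so the field can be inspected without an intermediate build(). The neighbor_count field on Examples is an assumption used purely for illustration.

// Sketch only: reading a nested field through the OrBuilder view of a builder.
ExplanationParameters.Builder builder = ExplanationParameters.newBuilder();
builder.getExamplesBuilder().setNeighborCount(10); // neighbor_count assumed for illustration
ExamplesOrBuilder view = builder.getExamplesOrBuilder(); // no build() required
System.out.println(view.getNeighborCount());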

getIntegratedGradientsAttribution()

public abstract IntegratedGradientsAttribution getIntegratedGradientsAttribution()

An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;

Returns
Type: IntegratedGradientsAttribution
Description: The integratedGradientsAttribution.
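
As a hedged illustration, the sketch below selects this method on the concrete builder (which implements this interface); setStepCount is assumed from the IntegratedGradientsAttribution message and is not documented on this page.

// Sketch: choose integrated gradients; step_count is assumed to control the approximation.
ExplanationParameters params =
    ExplanationParameters.newBuilder()
        .setIntegratedGradientsAttribution(
            IntegratedGradientsAttribution.newBuilder().setStepCount(50))
        .build();
// params.hasIntegratedGradientsAttribution() is now true.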

getIntegratedGradientsAttributionOrBuilder()

public abstract IntegratedGradientsAttributionOrBuilder getIntegratedGradientsAttributionOrBuilder()

An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;

Returns
Type: IntegratedGradientsAttributionOrBuilder

getMethodCase()

public abstract ExplanationParameters.MethodCase getMethodCase()
Returns
Type: ExplanationParameters.MethodCase
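
The attribution methods on this page share a single method oneof, and this accessor reports which member is set. A sketch, assuming the generated MethodCase constants follow the usual UPPER_SNAKE field names:

// Sketch: dispatch on the oneof that holds the explanation method.
switch (params.getMethodCase()) {
  case SAMPLED_SHAPLEY_ATTRIBUTION:
    System.out.println("Sampled Shapley attribution");
    break;
  case INTEGRATED_GRADIENTS_ATTRIBUTION:
    System.out.println("Integrated gradients attribution");
    break;
  case XRAI_ATTRIBUTION:
    System.out.println("XRAI attribution");
    break;
  case EXAMPLES:
    System.out.println("Example-based explanations");
    break;
  default:
    System.out.println("No explanation method set");
}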

getOutputIndices()

public abstract ListValue getOutputIndices()

If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

.google.protobuf.ListValue output_indices = 5;

Returns
Type: ListValue
Description: The outputIndices.
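
Because the field is a google.protobuf.ListValue, indices are supplied as number values. A minimal sketch restricting attributions to output indices 0 and 3:

import com.google.protobuf.ListValue;
import com.google.protobuf.Value;

// Sketch: explain only the outputs at indices 0 and 3.
ListValue outputIndices =
    ListValue.newBuilder()
        .addValues(Value.newBuilder().setNumberValue(0))
        .addValues(Value.newBuilder().setNumberValue(3))
        .build();
ExplanationParameters params =
    ExplanationParameters.newBuilder().setOutputIndices(outputIndices).build();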

getOutputIndicesOrBuilder()

public abstract ListValueOrBuilder getOutputIndicesOrBuilder()

If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

.google.protobuf.ListValue output_indices = 5;

Returns
Type: ListValueOrBuilder

getSampledShapleyAttribution()

public abstract SampledShapleyAttribution getSampledShapleyAttribution()

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.

.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;

Returns
Type: SampledShapleyAttribution
Description: The sampledShapleyAttribution.
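
A hedged sketch of selecting this method; setPathCount is assumed from the SampledShapleyAttribution message (controlling how many feature permutations are sampled) and is not documented on this page.

// Sketch: sampled Shapley with a modest number of sampled paths.
ExplanationParameters params =
    ExplanationParameters.newBuilder()
        .setSampledShapleyAttribution(
            SampledShapleyAttribution.newBuilder().setPathCount(10)) // path_count assumed
        .build();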

getSampledShapleyAttributionOrBuilder()

public abstract SampledShapleyAttributionOrBuilder getSampledShapleyAttributionOrBuilder()

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.

.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;

Returns
Type: SampledShapleyAttributionOrBuilder

getTopK()

public abstract int getTopK()

If populated, returns attributions for the top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.

int32 top_k = 4;

Returns
Type: int
Description: The topK.
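
For example, the sketch below (reusing the assumed SampledShapleyAttribution configuration from above) requests attributions for the three highest-scoring outputs of a multi-class Model:

ExplanationParameters params =
    ExplanationParameters.newBuilder()
        .setSampledShapleyAttribution(
            SampledShapleyAttribution.newBuilder().setPathCount(10)) // path_count assumed
        .setTopK(3) // explain the top three outputs instead of only the argmax
        .build();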

getXraiAttribution()

public abstract XraiAttribution getXraiAttribution()

An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;

Returns
Type: XraiAttribution
Description: The xraiAttribution.
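
A hedged sketch of selecting XRAI for an image Model; setStepCount is assumed from the XraiAttribution message and is not documented on this page.

// Sketch: choose XRAI for a natural-image Model.
ExplanationParameters params =
    ExplanationParameters.newBuilder()
        .setXraiAttribution(
            XraiAttribution.newBuilder().setStepCount(50)) // step_count assumed
        .build();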

getXraiAttributionOrBuilder()

public abstract XraiAttributionOrBuilder getXraiAttributionOrBuilder()

An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;

Returns
Type: XraiAttributionOrBuilder

hasExamples()

public abstract boolean hasExamples()

Example-based explanations that return the nearest neighbors from the provided dataset.

.google.cloud.aiplatform.v1.Examples examples = 7;

Returns
Type: boolean
Description: Whether the examples field is set.
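
Since the methods live in a oneof, the has* accessors are the reliable way to test what is populated before reading, as in this sketch that uses only accessors from this interface:

// Sketch: guard reads with the has* accessors rather than relying on default instances.
static void describeMethod(ExplanationParametersOrBuilder params) {
  if (params.hasExamples()) {
    System.out.println("Nearest-neighbor examples: " + params.getExamples());
  } else if (params.hasIntegratedGradientsAttribution()
      || params.hasSampledShapleyAttribution()
      || params.hasXraiAttribution()) {
    System.out.println("Feature attribution is configured");
  } else {
    System.out.println("No explanation method set");
  }
}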

hasIntegratedGradientsAttribution()

public abstract boolean hasIntegratedGradientsAttribution()

An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;

Returns
Type: boolean
Description: Whether the integratedGradientsAttribution field is set.

hasOutputIndices()

public abstract boolean hasOutputIndices()

If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining.

If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs.

Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).

.google.protobuf.ListValue output_indices = 5;

Returns
Type: boolean
Description: Whether the outputIndices field is set.

hasSampledShapleyAttribution()

public abstract boolean hasSampledShapleyAttribution()

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.

.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;

Returns
Type: boolean
Description: Whether the sampledShapleyAttribution field is set.

hasXraiAttribution()

public abstract boolean hasXraiAttribution()

An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825

XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.

.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;

Returns
Type: boolean
Description: Whether the xraiAttribution field is set.