public static final class ExplanationParameters.Builder extends GeneratedMessageV3.Builder<ExplanationParameters.Builder> implements ExplanationParametersOrBuilder
Parameters to configure explaining for Model's predictions.
Protobuf type google.cloud.aiplatform.v1.ExplanationParameters
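For orientation, the sketch below builds an ExplanationParameters message using the builder documented on this page. The setPathCount call is assumed to exist on SampledShapleyAttribution.Builder (that class is documented separately), and the values shown are purely illustrative.

```java
import com.google.cloud.aiplatform.v1.ExplanationParameters;
import com.google.cloud.aiplatform.v1.SampledShapleyAttribution;

public class ExplanationParametersExample {
  public static void main(String[] args) {
    // Choose an attribution method; setPathCount is assumed from the
    // SampledShapleyAttribution message and 10 is only an illustrative value.
    SampledShapleyAttribution shapley =
        SampledShapleyAttribution.newBuilder().setPathCount(10).build();

    // Assemble the parameters with the builder methods documented below.
    ExplanationParameters parameters =
        ExplanationParameters.newBuilder()
            .setSampledShapleyAttribution(shapley)
            .setTopK(2) // return attributions for the top 2 outputs
            .build();

    System.out.println(parameters);
  }
}
```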
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > ExplanationParameters.Builder
Implements
ExplanationParametersOrBuilder
Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
| Type | Description |
| --- | --- |
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public ExplanationParameters.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
build()
public ExplanationParameters build()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters | |
buildPartial()
public ExplanationParameters buildPartial()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters | |
clear()
public ExplanationParameters.Builder clear()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearField(Descriptors.FieldDescriptor field)
public ExplanationParameters.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearIntegratedGradientsAttribution()
public ExplanationParameters.Builder clearIntegratedGradientsAttribution()
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearMethod()
public ExplanationParameters.Builder clearMethod()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearOneof(Descriptors.OneofDescriptor oneof)
public ExplanationParameters.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
| Name | Description |
| --- | --- |
| oneof | OneofDescriptor |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearOutputIndices()
public ExplanationParameters.Builder clearOutputIndices()
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearSampledShapleyAttribution()
public ExplanationParameters.Builder clearSampledShapleyAttribution()
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clearTopK()
public ExplanationParameters.Builder clearTopK()
If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
int32 top_k = 4;
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | This builder for chaining. |
clearXraiAttribution()
public ExplanationParameters.Builder clearXraiAttribution()
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
clone()
public ExplanationParameters.Builder clone()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
getDefaultInstanceForType()
public ExplanationParameters getDefaultInstanceForType()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns
| Type | Description |
| --- | --- |
| Descriptor | |
getIntegratedGradientsAttribution()
public IntegratedGradientsAttribution getIntegratedGradientsAttribution()
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Returns
| Type | Description |
| --- | --- |
| IntegratedGradientsAttribution | The integratedGradientsAttribution. |
getIntegratedGradientsAttributionBuilder()
public IntegratedGradientsAttribution.Builder getIntegratedGradientsAttributionBuilder()
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Returns
| Type | Description |
| --- | --- |
| IntegratedGradientsAttribution.Builder | |
getIntegratedGradientsAttributionOrBuilder()
public IntegratedGradientsAttributionOrBuilder getIntegratedGradientsAttributionOrBuilder()
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Returns
| Type | Description |
| --- | --- |
| IntegratedGradientsAttributionOrBuilder | |
getMethodCase()
public ExplanationParameters.MethodCase getMethodCase()
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.MethodCase | |
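Because the attribution methods live in a oneof, getMethodCase() reports which one is currently set. A minimal sketch, assuming the standard protobuf-generated MethodCase constants (SAMPLED_SHAPLEY_ATTRIBUTION, INTEGRATED_GRADIENTS_ATTRIBUTION, XRAI_ATTRIBUTION, METHOD_NOT_SET), which are not listed on this page:

```java
// Hypothetical helper: report which attribution method a builder currently carries.
static String describeMethod(ExplanationParameters.Builder builder) {
  switch (builder.getMethodCase()) {
    case SAMPLED_SHAPLEY_ATTRIBUTION:
      return "Sampled Shapley";
    case INTEGRATED_GRADIENTS_ATTRIBUTION:
      return "Integrated Gradients";
    case XRAI_ATTRIBUTION:
      return "XRAI";
    case METHOD_NOT_SET:
    default:
      return "no attribution method set";
  }
}
```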
getOutputIndices()
public ListValue getOutputIndices()
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Returns
| Type | Description |
| --- | --- |
| ListValue | The outputIndices. |
getOutputIndicesBuilder()
public ListValue.Builder getOutputIndicesBuilder()
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Returns
| Type | Description |
| --- | --- |
| Builder | |
getOutputIndicesOrBuilder()
public ListValueOrBuilder getOutputIndicesOrBuilder()
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Returns
| Type | Description |
| --- | --- |
| ListValueOrBuilder | |
getSampledShapleyAttribution()
public SampledShapleyAttribution getSampledShapleyAttribution()
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Returns
| Type | Description |
| --- | --- |
| SampledShapleyAttribution | The sampledShapleyAttribution. |
getSampledShapleyAttributionBuilder()
public SampledShapleyAttribution.Builder getSampledShapleyAttributionBuilder()
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Returns
| Type | Description |
| --- | --- |
| SampledShapleyAttribution.Builder | |
getSampledShapleyAttributionOrBuilder()
public SampledShapleyAttributionOrBuilder getSampledShapleyAttributionOrBuilder()
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Returns
| Type | Description |
| --- | --- |
| SampledShapleyAttributionOrBuilder | |
getTopK()
public int getTopK()
If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
int32 top_k = 4;
Returns
| Type | Description |
| --- | --- |
| int | The topK. |
getXraiAttribution()
public XraiAttribution getXraiAttribution()
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Returns
| Type | Description |
| --- | --- |
| XraiAttribution | The xraiAttribution. |
getXraiAttributionBuilder()
public XraiAttribution.Builder getXraiAttributionBuilder()
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Returns
| Type | Description |
| --- | --- |
| XraiAttribution.Builder | |
getXraiAttributionOrBuilder()
public XraiAttributionOrBuilder getXraiAttributionOrBuilder()
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Returns
| Type | Description |
| --- | --- |
| XraiAttributionOrBuilder | |
hasIntegratedGradientsAttribution()
public boolean hasIntegratedGradientsAttribution()
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Returns
| Type | Description |
| --- | --- |
| boolean | Whether the integratedGradientsAttribution field is set. |
hasOutputIndices()
public boolean hasOutputIndices()
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Returns
| Type | Description |
| --- | --- |
| boolean | Whether the outputIndices field is set. |
hasSampledShapleyAttribution()
public boolean hasSampledShapleyAttribution()
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Returns
| Type | Description |
| --- | --- |
| boolean | Whether the sampledShapleyAttribution field is set. |
hasXraiAttribution()
public boolean hasXraiAttribution()
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Returns
| Type | Description |
| --- | --- |
| boolean | Whether the xraiAttribution field is set. |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
| Type | Description |
| --- | --- |
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()
Returns
| Type | Description |
| --- | --- |
| boolean | |
mergeFrom(ExplanationParameters other)
public ExplanationParameters.Builder mergeFrom(ExplanationParameters other)
Parameter
| Name | Description |
| --- | --- |
| other | ExplanationParameters |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public ExplanationParameters.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
| Name | Description |
| --- | --- |
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
Exceptions
| Type | Description |
| --- | --- |
| IOException | |
mergeFrom(Message other)
public ExplanationParameters.Builder mergeFrom(Message other)
Parameter
| Name | Description |
| --- | --- |
| other | Message |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeIntegratedGradientsAttribution(IntegratedGradientsAttribution value)
public ExplanationParameters.Builder mergeIntegratedGradientsAttribution(IntegratedGradientsAttribution value)
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Parameter
| Name | Description |
| --- | --- |
| value | IntegratedGradientsAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeOutputIndices(ListValue value)
public ExplanationParameters.Builder mergeOutputIndices(ListValue value)
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Parameter
| Name | Description |
| --- | --- |
| value | ListValue |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeSampledShapleyAttribution(SampledShapleyAttribution value)
public ExplanationParameters.Builder mergeSampledShapleyAttribution(SampledShapleyAttribution value)
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Parameter
| Name | Description |
| --- | --- |
| value | SampledShapleyAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final ExplanationParameters.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
mergeXraiAttribution(XraiAttribution value)
public ExplanationParameters.Builder mergeXraiAttribution(XraiAttribution value)
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Parameter
| Name | Description |
| --- | --- |
| value | XraiAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setField(Descriptors.FieldDescriptor field, Object value)
public ExplanationParameters.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setIntegratedGradientsAttribution(IntegratedGradientsAttribution value)
public ExplanationParameters.Builder setIntegratedGradientsAttribution(IntegratedGradientsAttribution value)
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Parameter
| Name | Description |
| --- | --- |
| value | IntegratedGradientsAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
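A minimal sketch of configuring this oneof field. It assumes IntegratedGradientsAttribution.Builder exposes a setStepCount method (that class is documented separately), and the step count shown is illustrative only.

```java
// Assumes: import com.google.cloud.aiplatform.v1.ExplanationParameters;
//          import com.google.cloud.aiplatform.v1.IntegratedGradientsAttribution;
static ExplanationParameters integratedGradientsParams() {
  // step_count is assumed from the IntegratedGradientsAttribution message; 50 is illustrative.
  IntegratedGradientsAttribution ig =
      IntegratedGradientsAttribution.newBuilder().setStepCount(50).build();
  return ExplanationParameters.newBuilder()
      .setIntegratedGradientsAttribution(ig)
      .build();
}
```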
setIntegratedGradientsAttribution(IntegratedGradientsAttribution.Builder builderForValue)
public ExplanationParameters.Builder setIntegratedGradientsAttribution(IntegratedGradientsAttribution.Builder builderForValue)
An attribution method that computes Aumann-Shapley values taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
.google.cloud.aiplatform.v1.IntegratedGradientsAttribution integrated_gradients_attribution = 2;
Parameter
| Name | Description |
| --- | --- |
| builderForValue | IntegratedGradientsAttribution.Builder |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setOutputIndices(ListValue value)
public ExplanationParameters.Builder setOutputIndices(ListValue value)
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Parameter
| Name | Description |
| --- | --- |
| value | ListValue |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
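A sketch of supplying explicit output indices. ListValue and Value come from com.google.protobuf, and the indices chosen here are purely illustrative.

```java
// Assumes: import com.google.protobuf.ListValue;
//          import com.google.protobuf.Value;
//          import com.google.cloud.aiplatform.v1.ExplanationParameters;
static ExplanationParameters.Builder withOutputIndices(ExplanationParameters.Builder builder) {
  // Request attributions only for output indices 0 and 3 (illustrative values).
  ListValue indices = ListValue.newBuilder()
      .addValues(Value.newBuilder().setNumberValue(0).build())
      .addValues(Value.newBuilder().setNumberValue(3).build())
      .build();
  return builder.setOutputIndices(indices);
}
```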
setOutputIndices(ListValue.Builder builderForValue)
public ExplanationParameters.Builder setOutputIndices(ListValue.Builder builderForValue)
If populated, only returns attributions that have output_index contained in output_indices. It must be an ndarray of integers, with the same shape as the output it's explaining. If not populated, returns attributions for top_k indices of outputs. If neither top_k nor output_indices is populated, returns the argmax index of the outputs. Only applicable to Models that predict multiple outputs (e.g., multi-class Models that predict multiple classes).
.google.protobuf.ListValue output_indices = 5;
Parameter
| Name | Description |
| --- | --- |
| builderForValue | Builder |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public ExplanationParameters.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| index | int |
| value | Object |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setSampledShapleyAttribution(SampledShapleyAttribution value)
public ExplanationParameters.Builder setSampledShapleyAttribution(SampledShapleyAttribution value)
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Parameter
| Name | Description |
| --- | --- |
| value | SampledShapleyAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setSampledShapleyAttribution(SampledShapleyAttribution.Builder builderForValue)
public ExplanationParameters.Builder setSampledShapleyAttribution(SampledShapleyAttribution.Builder builderForValue)
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. Refer to this paper for model details: https://arxiv.org/abs/1306.4265.
.google.cloud.aiplatform.v1.SampledShapleyAttribution sampled_shapley_attribution = 1;
Parameter
| Name | Description |
| --- | --- |
| builderForValue | SampledShapleyAttribution.Builder |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
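A sketch of the overload that accepts a nested builder directly, so the attribution message does not need to be built first. The setPathCount call is assumed from SampledShapleyAttribution.Builder and the value is illustrative.

```java
// Assumes: import com.google.cloud.aiplatform.v1.ExplanationParameters;
//          import com.google.cloud.aiplatform.v1.SampledShapleyAttribution;
static ExplanationParameters sampledShapleyParams() {
  // Pass the nested builder directly; path_count is assumed from
  // SampledShapleyAttribution and 10 is an illustrative value.
  return ExplanationParameters.newBuilder()
      .setSampledShapleyAttribution(
          SampledShapleyAttribution.newBuilder().setPathCount(10))
      .build();
}
```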
setTopK(int value)
public ExplanationParameters.Builder setTopK(int value)
If populated, returns attributions for top K indices of outputs (defaults to 1). Only applies to Models that predict more than one output (e.g., multi-class Models). When set to -1, returns explanations for all outputs.
int32 top_k = 4;
Parameter
| Name | Description |
| --- | --- |
| value | int The topK to set. |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | This builder for chaining. |
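Both calls in the sketch below appear on this page; the specific top-K value is illustrative.

```java
// Request attributions for the top 3 outputs; pass -1 to explain all outputs.
static ExplanationParameters.Builder explainTopThree(ExplanationParameters.Builder builder) {
  return builder.setTopK(3);
}
```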
setUnknownFields(UnknownFieldSet unknownFields)
public final ExplanationParameters.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
setXraiAttribution(XraiAttribution value)
public ExplanationParameters.Builder setXraiAttribution(XraiAttribution value)
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Parameter
| Name | Description |
| --- | --- |
| value | XraiAttribution |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |
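A sketch of selecting XRAI for image models. It assumes XraiAttribution.Builder exposes a setStepCount method (documented on the XraiAttribution class, not here), and the step count is illustrative.

```java
// Assumes: import com.google.cloud.aiplatform.v1.ExplanationParameters;
//          import com.google.cloud.aiplatform.v1.XraiAttribution;
static ExplanationParameters xraiParams() {
  // step_count is assumed from the XraiAttribution message; 50 is illustrative.
  XraiAttribution xrai = XraiAttribution.newBuilder().setStepCount(50).build();
  return ExplanationParameters.newBuilder()
      .setXraiAttribution(xrai)
      .build();
}
```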
setXraiAttribution(XraiAttribution.Builder builderForValue)
public ExplanationParameters.Builder setXraiAttribution(XraiAttribution.Builder builderForValue)
An attribution method that redistributes Integrated Gradients attribution to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 XRAI currently performs better on natural images, like a picture of a house or an animal. If the images are taken in artificial environments, like a lab or manufacturing line, or from diagnostic equipment, like x-rays or quality-control cameras, use Integrated Gradients instead.
.google.cloud.aiplatform.v1.XraiAttribution xrai_attribution = 3;
Parameter
| Name | Description |
| --- | --- |
| builderForValue | XraiAttribution.Builder |
Returns
| Type | Description |
| --- | --- |
| ExplanationParameters.Builder | |