Class Explanation.Builder (3.43.0)

public static final class Explanation.Builder extends GeneratedMessageV3.Builder<Explanation.Builder> implements ExplanationOrBuilder

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.

Protobuf type google.cloud.aiplatform.v1beta1.Explanation
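
The builder follows the standard protobuf builder pattern. Below is a minimal sketch of assembling an Explanation by hand, e.g. in a unit test; in normal use the service populates these output-only fields. The Attribution setters are assumed from protobuf naming conventions for its output_display_name and instance_output_value fields.

import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;

public class ExplanationBuilderSketch {
  public static void main(String[] args) {
    Explanation explanation =
        Explanation.newBuilder()
            .addAttributions(
                Attribution.newBuilder()
                    .setOutputDisplayName("approve") // hypothetical class label
                    .setInstanceOutputValue(0.4)     // assumed setter, per proto naming
                    .build())
            .build();
    System.out.println(explanation);
  }
}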

Static Methods

getDescriptor()

public static final Descriptors.Descriptor getDescriptor()
Returns
Type Description
Descriptor

Methods

addAllAttributions(Iterable<? extends Attribution> values)

public Explanation.Builder addAllAttributions(Iterable<? extends Attribution> values)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
values Iterable<? extends com.google.cloud.aiplatform.v1beta1.Attribution>
Returns
Type Description
Explanation.Builder
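
A short sketch of bulk-adding attributions, one per predicted class, e.g. when stubbing a response in a test. addOutputIndex and setInstanceOutputValue are assumed from Attribution's output_index and instance_output_value fields.

import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;
import java.util.Arrays;
import java.util.List;

class AddAllAttributionsSketch {
  static Explanation explain() {
    // One Attribution per class; output_index says which output it explains.
    List<Attribution> attributions = Arrays.asList(
        Attribution.newBuilder().addOutputIndex(0).setInstanceOutputValue(0.6).build(),
        Attribution.newBuilder().addOutputIndex(1).setInstanceOutputValue(0.4).build());
    return Explanation.newBuilder().addAllAttributions(attributions).build();
  }
}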

addAllNeighbors(Iterable<? extends Neighbor> values)

public Explanation.Builder addAllNeighbors(Iterable<? extends Neighbor> values)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
values Iterable<? extends com.google.cloud.aiplatform.v1beta1.Neighbor>
Returns
Type Description
Explanation.Builder
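
For example, assuming a prebuilt list of Neighbor messages (again, mainly useful when faking a response locally):

import com.google.cloud.aiplatform.v1beta1.Explanation;
import com.google.cloud.aiplatform.v1beta1.Neighbor;
import java.util.List;

class AddAllNeighborsSketch {
  static Explanation withNeighbors(List<Neighbor> neighbors) {
    // Appends every element of the list to the repeated neighbors field.
    return Explanation.newBuilder().addAllNeighbors(neighbors).build();
  }
}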

addAttributions(Attribution value)

public Explanation.Builder addAttributions(Attribution value)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
value Attribution
Returns
Type Description
Explanation.Builder

addAttributions(Attribution.Builder builderForValue)

public Explanation.Builder addAttributions(Attribution.Builder builderForValue)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
builderForValue Attribution.Builder
Returns
Type Description
Explanation.Builder

addAttributions(int index, Attribution value)

public Explanation.Builder addAttributions(int index, Attribution value)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
value Attribution
Returns
Type Description
Explanation.Builder

addAttributions(int index, Attribution.Builder builderForValue)

public Explanation.Builder addAttributions(int index, Attribution.Builder builderForValue)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
builderForValue Attribution.Builder
Returns
Type Description
Explanation.Builder

addAttributionsBuilder()

public Attribution.Builder addAttributionsBuilder()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
Attribution.Builder
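
addAttributionsBuilder() appends a new, empty Attribution and returns its builder, so the nested message can be edited in place without a separate build() call. A sketch; setInstanceOutputValue is assumed from the instance_output_value field.

import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;

class NestedBuilderSketch {
  static Explanation build() {
    Explanation.Builder explanation = Explanation.newBuilder();
    // The returned sub-builder writes through to the parent builder.
    Attribution.Builder attribution = explanation.addAttributionsBuilder();
    attribution.setInstanceOutputValue(0.9);
    return explanation.build();
  }
}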

addAttributionsBuilder(int index)

public Attribution.Builder addAttributionsBuilder(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Attribution.Builder

addNeighbors(Neighbor value)

public Explanation.Builder addNeighbors(Neighbor value)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
value Neighbor
Returns
Type Description
Explanation.Builder

addNeighbors(Neighbor.Builder builderForValue)

public Explanation.Builder addNeighbors(Neighbor.Builder builderForValue)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
builderForValue Neighbor.Builder
Returns
Type Description
Explanation.Builder

addNeighbors(int index, Neighbor value)

public Explanation.Builder addNeighbors(int index, Neighbor value)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
value Neighbor
Returns
Type Description
Explanation.Builder

addNeighbors(int index, Neighbor.Builder builderForValue)

public Explanation.Builder addNeighbors(int index, Neighbor.Builder builderForValue)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
builderForValue Neighbor.Builder
Returns
Type Description
Explanation.Builder

addNeighborsBuilder()

public Neighbor.Builder addNeighborsBuilder()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
Neighbor.Builder

addNeighborsBuilder(int index)

public Neighbor.Builder addNeighborsBuilder(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Neighbor.Builder

addRepeatedField(Descriptors.FieldDescriptor field, Object value)

public Explanation.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
Explanation.Builder
Overrides

build()

public Explanation build()
Returns
Type Description
Explanation

buildPartial()

public Explanation buildPartial()
Returns
Type Description
Explanation
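
build() verifies that all required fields are set and fails if any are missing, while buildPartial() skips that check. Because proto3 messages such as Explanation declare no required fields, both calls succeed here:

import com.google.cloud.aiplatform.v1beta1.Explanation;

class BuildSketch {
  static void demo() {
    Explanation checked = Explanation.newBuilder().build();         // initialization check
    Explanation unchecked = Explanation.newBuilder().buildPartial(); // no check
    System.out.println(checked.equals(unchecked)); // true: both are the empty message
  }
}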

clear()

public Explanation.Builder clear()
Returns
Type Description
Explanation.Builder
Overrides

clearAttributions()

public Explanation.Builder clearAttributions()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
Explanation.Builder

clearField(Descriptors.FieldDescriptor field)

public Explanation.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
Name Description
field FieldDescriptor
Returns
Type Description
Explanation.Builder
Overrides

clearNeighbors()

public Explanation.Builder clearNeighbors()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
Explanation.Builder

clearOneof(Descriptors.OneofDescriptor oneof)

public Explanation.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
Name Description
oneof OneofDescriptor
Returns
Type Description
Explanation.Builder
Overrides

clone()

public Explanation.Builder clone()
Returns
Type Description
Explanation.Builder
Overrides

getAttributions(int index)

public Attribution getAttributions(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Attribution

getAttributionsBuilder(int index)

public Attribution.Builder getAttributionsBuilder(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Attribution.Builder

getAttributionsBuilderList()

public List<Attribution.Builder> getAttributionsBuilderList()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Builder>

getAttributionsCount()

public int getAttributionsCount()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
int

getAttributionsList()

public List<Attribution> getAttributionsList()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Attribution>
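
A reading sketch: iterate the returned list and report which output each attribution explains. The Attribution getters are assumed from its output_index and instance_output_value fields.

import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;

class ReadAttributionsSketch {
  static void print(Explanation explanation) {
    for (Attribution attribution : explanation.getAttributionsList()) {
      // output_index identifies the explained output; instance_output_value
      // is the model's score for it on this instance.
      System.out.println(attribution.getOutputIndexList()
          + " -> " + attribution.getInstanceOutputValue());
    }
  }
}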

getAttributionsOrBuilder(int index)

public AttributionOrBuilder getAttributionsOrBuilder(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
AttributionOrBuilder

getAttributionsOrBuilderList()

public List<? extends AttributionOrBuilder> getAttributionsOrBuilderList()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<? extends com.google.cloud.aiplatform.v1beta1.AttributionOrBuilder>

getDefaultInstanceForType()

public Explanation getDefaultInstanceForType()
Returns
Type Description
Explanation

getDescriptorForType()

public Descriptors.Descriptor getDescriptorForType()
Returns
Type Description
Descriptor
Overrides

getNeighbors(int index)

public Neighbor getNeighbors(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Neighbor

getNeighborsBuilder(int index)

public Neighbor.Builder getNeighborsBuilder(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Neighbor.Builder

getNeighborsBuilderList()

public List<Neighbor.Builder> getNeighborsBuilderList()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Builder>

getNeighborsCount()

public int getNeighborsCount()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
int

getNeighborsList()

public List<Neighbor> getNeighborsList()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Neighbor>
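
A reading sketch for example-based explanations. getNeighborId and getNeighborDistance are assumed from Neighbor's neighbor_id and neighbor_distance fields.

import com.google.cloud.aiplatform.v1beta1.Explanation;
import com.google.cloud.aiplatform.v1beta1.Neighbor;

class ReadNeighborsSketch {
  static void print(Explanation explanation) {
    for (Neighbor neighbor : explanation.getNeighborsList()) {
      System.out.println(neighbor.getNeighborId()
          + " at distance " + neighbor.getNeighborDistance());
    }
  }
}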

getNeighborsOrBuilder(int index)

public NeighborOrBuilder getNeighborsOrBuilder(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
NeighborOrBuilder

getNeighborsOrBuilderList()

public List<? extends NeighborOrBuilder> getNeighborsOrBuilderList()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<? extends com.google.cloud.aiplatform.v1beta1.NeighborOrBuilder>

internalGetFieldAccessorTable()

protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
Type Description
FieldAccessorTable
Overrides

isInitialized()

public final boolean isInitialized()
Returns
Type Description
boolean
Overrides

mergeFrom(Explanation other)

public Explanation.Builder mergeFrom(Explanation other)
Parameter
Name Description
other Explanation
Returns
Type Description
Explanation.Builder
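
mergeFrom follows standard protobuf merge semantics: repeated fields are concatenated, so the result carries the attributions and neighbors of both messages, with the receiver's elements first. A sketch:

import com.google.cloud.aiplatform.v1beta1.Explanation;

class MergeSketch {
  static Explanation merge(Explanation base, Explanation other) {
    // toBuilder() copies base; mergeFrom appends other's repeated elements.
    return base.toBuilder().mergeFrom(other).build();
  }
}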

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

public Explanation.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
Name Description
input CodedInputStream
extensionRegistry ExtensionRegistryLite
Returns
Type Description
Explanation.Builder
Overrides
Exceptions
Type Description
IOException
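
A sketch of parsing serialized bytes through this overload; the static Explanation.parseFrom(bytes) is the usual shortcut for the same round trip.

import com.google.cloud.aiplatform.v1beta1.Explanation;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

class ParseSketch {
  static Explanation parse(byte[] bytes) throws IOException {
    Explanation.Builder builder = Explanation.newBuilder();
    // No extensions are defined on this message, so the empty registry suffices.
    builder.mergeFrom(CodedInputStream.newInstance(bytes),
        ExtensionRegistryLite.getEmptyRegistry());
    return builder.build();
  }
}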

mergeFrom(Message other)

public Explanation.Builder mergeFrom(Message other)
Parameter
Name Description
other Message
Returns
Type Description
Explanation.Builder
Overrides

mergeUnknownFields(UnknownFieldSet unknownFields)

public final Explanation.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
Explanation.Builder
Overrides

removeAttributions(int index)

public Explanation.Builder removeAttributions(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Explanation.Builder

removeNeighbors(int index)

public Explanation.Builder removeNeighbors(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int
Returns
Type Description
Explanation.Builder

setAttributions(int index, Attribution value)

public Explanation.Builder setAttributions(int index, Attribution value)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
value Attribution
Returns
Type Description
Explanation.Builder

setAttributions(int index, Attribution.Builder builderForValue)

public Explanation.Builder setAttributions(int index, Attribution.Builder builderForValue)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution explains.

By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, so the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index, in the same order as they appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
builderForValue Attribution.Builder
Returns
Type Description
Explanation.Builder

setField(Descriptors.FieldDescriptor field, Object value)

public Explanation.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
Name Description
field FieldDescriptor
value Object
Returns
Type Description
Explanation.Builder
Overrides

setNeighbors(int index, Neighbor value)

public Explanation.Builder setNeighbors(int index, Neighbor value)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
value Neighbor
Returns
Type Description
Explanation.Builder

setNeighbors(int index, Neighbor.Builder builderForValue)

public Explanation.Builder setNeighbors(int index, Neighbor.Builder builderForValue)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the examples explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameters
Name Description
index int
builderForValue Neighbor.Builder
Returns
Type Description
Explanation.Builder

setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

public Explanation.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
Name Description
field FieldDescriptor
index int
value Object
Returns
Type Description
Explanation.Builder
Overrides

setUnknownFields(UnknownFieldSet unknownFields)

public final Explanation.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
Name Description
unknownFields UnknownFieldSet
Returns
Type Description
Explanation.Builder
Overrides