Interface ExplanationOrBuilder (3.54.0)

public interface ExplanationOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder
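
Explanation and Explanation.Builder both implement this interface. As a hedged sketch of how an instance typically reaches user code (the project, location, endpoint, and instance payload below are placeholders, not values from this page), an Explanation is read from the ExplainResponse returned by PredictionServiceClient.explain:

import com.google.cloud.aiplatform.v1beta1.EndpointName;
import com.google.cloud.aiplatform.v1beta1.ExplainRequest;
import com.google.cloud.aiplatform.v1beta1.ExplainResponse;
import com.google.cloud.aiplatform.v1beta1.Explanation;
import com.google.cloud.aiplatform.v1beta1.PredictionServiceClient;
import com.google.protobuf.Value;

public class ExplainExample {
  public static void main(String[] args) throws Exception {
    // Placeholder resource name; substitute a real project, location, and endpoint.
    String endpoint = EndpointName.of("my-project", "us-central1", "my-endpoint").toString();
    try (PredictionServiceClient client = PredictionServiceClient.create()) {
      ExplainRequest request = ExplainRequest.newBuilder()
          .setEndpoint(endpoint)
          .addInstances(Value.newBuilder().setStringValue("encoded instance").build())
          .build();
      ExplainResponse response = client.explain(request);
      // Each Explanation implements ExplanationOrBuilder, so the accessors
      // documented on this page are available on every element.
      for (Explanation explanation : response.getExplanationsList()) {
        System.out.println("attribution count: " + explanation.getAttributionsCount());
      }
    }
  }
}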

Methods

getAttributions(int index)

public abstract Attribution getAttributions(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output a given attribution explains.

By default, Shapley values are provided for the predicted class, but the explanation request can be configured to generate Shapley values for other classes as well. For example, if a model predicts a probability of 0.4 for approving a loan application, its decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4. The default Shapley values are then computed for the rejection decision, not for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are ordered by Attribution.output_index, in the same order as the indices appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int The index of the element to return.
Returns
Type Description
Attribution
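
A small usage sketch (hedged: explanation stands for any ExplanationOrBuilder obtained elsewhere, and the Attribution accessors follow the standard protobuf naming for the fields in that message):

// output_index is a repeated int32 on Attribution, so the generated
// accessor returns a List<Integer>.
if (explanation.getAttributionsCount() > 0) {
  Attribution first = explanation.getAttributions(0);
  System.out.println("explains outputs: " + first.getOutputIndexList());
  System.out.println("instance output value: " + first.getInstanceOutputValue());
}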

getAttributionsCount()

public abstract int getAttributionsCount()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output a given attribution explains.

By default, Shapley values are provided for the predicted class, but the explanation request can be configured to generate Shapley values for other classes as well. For example, if a model predicts a probability of 0.4 for approving a loan application, its decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4. The default Shapley values are then computed for the rejection decision, not for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are ordered by Attribution.output_index, in the same order as the indices appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
int
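
The count pairs with the indexed getter above for a classic index loop (a sketch, with the same explanation assumption as above):

for (int i = 0; i < explanation.getAttributionsCount(); i++) {
  // With ExplanationParameters.top_k set, this iterates in descending
  // instance_output_value order, per the field documentation above.
  System.out.printf("attribution %d: %f%n",
      i, explanation.getAttributions(i).getInstanceOutputValue());
}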

getAttributionsList()

public abstract List<Attribution> getAttributionsList()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output a given attribution explains.

By default, Shapley values are provided for the predicted class, but the explanation request can be configured to generate Shapley values for other classes as well. For example, if a model predicts a probability of 0.4 for approving a loan application, its decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4. The default Shapley values are then computed for the rejection decision, not for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are ordered by Attribution.output_index, in the same order as the indices appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Attribution>
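
The list accessor suits an enhanced for loop; as with all generated protobuf list getters, the returned list is read-only (sketch; output_display_name is a string field on Attribution):

for (Attribution attribution : explanation.getAttributionsList()) {
  System.out.println(attribution.getOutputDisplayName());
}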

getAttributionsOrBuilder(int index)

public abstract AttributionOrBuilder getAttributionsOrBuilder(int index)

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output a given attribution explains.

By default, Shapley values are provided for the predicted class, but the explanation request can be configured to generate Shapley values for other classes as well. For example, if a model predicts a probability of 0.4 for approving a loan application, its decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4. The default Shapley values are then computed for the rejection decision, not for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are ordered by Attribution.output_index, in the same order as the indices appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int The index of the element to return.
Returns
Type Description
AttributionOrBuilder
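
Explanation.Builder also implements ExplanationOrBuilder, so the OrBuilder accessor can read a repeated element without materializing the message (a sketch with made-up values):

Explanation.Builder builder = Explanation.newBuilder()
    .addAttributions(Attribution.newBuilder().setInstanceOutputValue(0.4));
AttributionOrBuilder view = builder.getAttributionsOrBuilder(0);
System.out.println(view.getInstanceOutputValue()); // 0.4, no Explanation built yet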

getAttributionsOrBuilderList()

public abstract List<? extends AttributionOrBuilder> getAttributionsOrBuilderList()

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output a given attribution explains.

By default, Shapley values are provided for the predicted class, but the explanation request can be configured to generate Shapley values for other classes as well. For example, if a model predicts a probability of 0.4 for approving a loan application, its decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4. The default Shapley values are then computed for the rejection decision, not for approval, even though the latter might be the positive class.

If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are ordered by Attribution.output_index, in the same order as the indices appear in output_indices.

repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<? extends com.google.cloud.aiplatform.v1beta1.AttributionOrBuilder>
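
When only reads are needed, iterating the OrBuilder view works uniformly over messages and builders (sketch; baseline_output_value and instance_output_value are double fields on Attribution):

double totalShift = 0;
for (AttributionOrBuilder a : explanation.getAttributionsOrBuilderList()) {
  totalShift += a.getInstanceOutputValue() - a.getBaselineOutputValue();
}
System.out.println("summed output shift: " + totalShift);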

getNeighbors(int index)

public abstract Neighbor getNeighbors(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int The index of the element to return.
Returns
Type Description
Neighbor
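
A usage sketch (hedged: neighbor_id and neighbor_distance are the string and double fields generated on Neighbor in this package):

if (explanation.getNeighborsCount() > 0) {
  Neighbor nearest = explanation.getNeighbors(0);
  System.out.println(nearest.getNeighborId() + " at distance " + nearest.getNeighborDistance());
}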

getNeighborsCount()

public abstract int getNeighborsCount()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
int

getNeighborsList()

public abstract List<Neighbor> getNeighborsList()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<Neighbor>
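
Since attributions and neighbors are populated mutually exclusively (per the note above), the list accessors make it easy to branch on the explanation style (sketch):

if (explanation.getNeighborsCount() > 0) {
  // Example-based explanation: attributions is empty.
  for (Neighbor neighbor : explanation.getNeighborsList()) {
    System.out.println("neighbor: " + neighbor.getNeighborId());
  }
} else {
  // Feature-attribution explanation.
  System.out.println("attribution count: " + explanation.getAttributionsCount());
}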

getNeighborsOrBuilder(int index)

public abstract NeighborOrBuilder getNeighborsOrBuilder(int index)

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Parameter
Name Description
index int The index of the element to return.
Returns
Type Description
NeighborOrBuilder

getNeighborsOrBuilderList()

public abstract List<? extends NeighborOrBuilder> getNeighborsOrBuilderList()

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty; the neighbors field is populated instead.

repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];

Returns
Type Description
List<? extends com.google.cloud.aiplatform.v1beta1.NeighborOrBuilder>
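
As with attributions, the OrBuilder list supports read-only scans; this sketch (same Neighbor field assumptions as above) finds the closest neighbor:

double best = Double.POSITIVE_INFINITY;
String bestId = null;
for (NeighborOrBuilder n : explanation.getNeighborsOrBuilderList()) {
  if (n.getNeighborDistance() < best) {
    best = n.getNeighborDistance();
    bestId = n.getNeighborId();
  }
}
System.out.println("nearest neighbor: " + bestId);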