public static final class Explanation.Builder extends GeneratedMessageV3.Builder<Explanation.Builder> implements ExplanationOrBuilder
Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
Protobuf type google.cloud.aiplatform.v1beta1.Explanation
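A minimal usage sketch of the builder pattern this class follows. Explanation messages are output-only and normally read from a service response, so constructing one by hand is mainly useful for stubbing responses in tests; the Attribution field values below are illustrative.

```java
import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;

public class ExplanationBuilderSketch {
  public static void main(String[] args) {
    // Build an Explanation with a single hand-written attribution,
    // e.g. to stub a prediction response in a unit test.
    Explanation explanation =
        Explanation.newBuilder()
            .addAttributions(
                Attribution.newBuilder()
                    .setOutputName("approve")    // illustrative output name
                    .setInstanceOutputValue(0.4) // illustrative score
                    .build())
            .build();
    System.out.println(explanation);
  }
}
```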
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > Explanation.Builder

Implements
ExplanationOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns
| Type | Description |
| --- | --- |
| Descriptor | |
Methods
addAllAttributions(Iterable<? extends Attribution> values)
public Explanation.Builder addAllAttributions(Iterable<? extends Attribution> values)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| values | Iterable<? extends com.google.cloud.aiplatform.v1beta1.Attribution> |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
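A minimal sketch of bulk-adding attributions with this method, one Attribution per predicted output as in the loan example above; the output names are illustrative.

```java
import com.google.cloud.aiplatform.v1beta1.Attribution;
import com.google.cloud.aiplatform.v1beta1.Explanation;
import java.util.Arrays;
import java.util.List;

// One Attribution per predicted output (names are illustrative).
List<Attribution> attributions =
    Arrays.asList(
        Attribution.newBuilder().setOutputName("reject").build(),
        Attribution.newBuilder().setOutputName("approve").build());
Explanation explanation =
    Explanation.newBuilder().addAllAttributions(attributions).build();
```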
addAllNeighbors(Iterable<? extends Neighbor> values)
public Explanation.Builder addAllNeighbors(Iterable<? extends Neighbor> values)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| values | Iterable<? extends com.google.cloud.aiplatform.v1beta1.Neighbor> |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addAttributions(Attribution value)
public Explanation.Builder addAttributions(Attribution value)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| value | Attribution |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addAttributions(Attribution.Builder builderForValue)
public Explanation.Builder addAttributions(Attribution.Builder builderForValue)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| builderForValue | Attribution.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addAttributions(int index, Attribution value)
public Explanation.Builder addAttributions(int index, Attribution value)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| value | Attribution |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addAttributions(int index, Attribution.Builder builderForValue)
public Explanation.Builder addAttributions(int index, Attribution.Builder builderForValue)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| builderForValue | Attribution.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addAttributionsBuilder()
public Attribution.Builder addAttributionsBuilder()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| Attribution.Builder | |
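Unlike the add* overloads that take a finished message, this method appends an empty element and returns its builder, so the nested Attribution can be populated in place. A short sketch, with illustrative field values:

```java
import com.google.cloud.aiplatform.v1beta1.Explanation;

Explanation.Builder builder = Explanation.newBuilder();
// The returned Attribution.Builder writes directly into the new list
// element; no separate build()/addAttributions() round trip is needed.
builder.addAttributionsBuilder()
    .setOutputName("reject")      // illustrative output name
    .setInstanceOutputValue(0.6); // illustrative score
Explanation explanation = builder.build();
```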
addAttributionsBuilder(int index)
public Attribution.Builder addAttributionsBuilder(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Attribution.Builder | |
addNeighbors(Neighbor value)
public Explanation.Builder addNeighbors(Neighbor value)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| value | Neighbor |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addNeighbors(Neighbor.Builder builderForValue)
public Explanation.Builder addNeighbors(Neighbor.Builder builderForValue)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| builderForValue | Neighbor.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addNeighbors(int index, Neighbor value)
public Explanation.Builder addNeighbors(int index, Neighbor value)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| value | Neighbor |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addNeighbors(int index, Neighbor.Builder builderForValue)
public Explanation.Builder addNeighbors(int index, Neighbor.Builder builderForValue)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| builderForValue | Neighbor.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
addNeighborsBuilder()
public Neighbor.Builder addNeighborsBuilder()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| Neighbor.Builder | |
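The same in-place pattern shown for attributions works for neighbors. A sketch, assuming the Neighbor message exposes setters for its neighbor_id and neighbor_distance fields; the values are illustrative:

```java
import com.google.cloud.aiplatform.v1beta1.Explanation;

Explanation.Builder builder = Explanation.newBuilder();
builder.addNeighborsBuilder()
    .setNeighborId("neighbor-42") // illustrative ID
    .setNeighborDistance(0.13);   // illustrative distance
Explanation explanation = builder.build();
```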
addNeighborsBuilder(int index)
public Neighbor.Builder addNeighborsBuilder(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Neighbor.Builder | |
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public Explanation.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
build()
public Explanation build()
Returns
| Type | Description |
| --- | --- |
| Explanation | |
buildPartial()
public Explanation buildPartial()
Returns
| Type | Description |
| --- | --- |
| Explanation | |
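The difference between build() and buildPartial() is the initialization check: build() verifies isInitialized() and throws if required fields are unset, while buildPartial() skips the check. Explanation is a proto3 message with no required fields, so the two behave the same here, but the distinction matters for messages that do have them:

```java
Explanation.Builder builder = Explanation.newBuilder();
Explanation a = builder.build();        // checks isInitialized()
Explanation b = builder.buildPartial(); // skips the check
```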
clear()
public Explanation.Builder clear()
Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
clearAttributions()
public Explanation.Builder clearAttributions()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
clearField(Descriptors.FieldDescriptor field)
public Explanation.Builder clearField(Descriptors.FieldDescriptor field)
Parameter
| Name | Description |
| --- | --- |
| field | FieldDescriptor |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
clearNeighbors()
public Explanation.Builder clearNeighbors()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
clearOneof(Descriptors.OneofDescriptor oneof)
public Explanation.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter
| Name | Description |
| --- | --- |
| oneof | OneofDescriptor |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
clone()
public Explanation.Builder clone()
Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
getAttributions(int index)
public Attribution getAttributions(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Attribution | |
getAttributionsBuilder(int index)
public Attribution.Builder getAttributionsBuilder(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Attribution.Builder | |
getAttributionsBuilderList()
public List<Attribution.Builder> getAttributionsBuilderList()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<Builder> | |
getAttributionsCount()
public int getAttributionsCount()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| int | |
getAttributionsList()
public List<Attribution> getAttributionsList()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<Attribution> | |
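A short sketch of iterating the returned list; the getters used correspond to the Attribution fields output_name and instance_output_value, which this example assumes are populated:

```java
import com.google.cloud.aiplatform.v1beta1.Attribution;

for (Attribution attribution : builder.getAttributionsList()) {
  System.out.printf(
      "output=%s value=%.3f%n",
      attribution.getOutputName(), attribution.getInstanceOutputValue());
}
```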
getAttributionsOrBuilder(int index)
public AttributionOrBuilder getAttributionsOrBuilder(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| AttributionOrBuilder | |
getAttributionsOrBuilderList()
public List<? extends AttributionOrBuilder> getAttributionsOrBuilderList()
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<? extends com.google.cloud.aiplatform.v1beta1.AttributionOrBuilder> | |
getDefaultInstanceForType()
public Explanation getDefaultInstanceForType()
Returns
| Type | Description |
| --- | --- |
| Explanation | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns
| Type | Description |
| --- | --- |
| Descriptor | |
getNeighbors(int index)
public Neighbor getNeighbors(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Neighbor | |
getNeighborsBuilder(int index)
public Neighbor.Builder getNeighborsBuilder(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Neighbor.Builder | |
getNeighborsBuilderList()
public List<Neighbor.Builder> getNeighborsBuilderList()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<Builder> | |
getNeighborsCount()
public int getNeighborsCount()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| int | |
getNeighborsList()
public List<Neighbor> getNeighborsList()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<Neighbor> | |
getNeighborsOrBuilder(int index)
public NeighborOrBuilder getNeighborsOrBuilder(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| NeighborOrBuilder | |
getNeighborsOrBuilderList()
public List<? extends NeighborOrBuilder> getNeighborsOrBuilderList()
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns
| Type | Description |
| --- | --- |
| List<? extends com.google.cloud.aiplatform.v1beta1.NeighborOrBuilder> | |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns
| Type | Description |
| --- | --- |
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()
Returns
| Type | Description |
| --- | --- |
| boolean | |
mergeFrom(Explanation other)
public Explanation.Builder mergeFrom(Explanation other)
Parameter
| Name | Description |
| --- | --- |
| other | Explanation |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public Explanation.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters
| Name | Description |
| --- | --- |
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |

Exceptions
| Type | Description |
| --- | --- |
| IOException | |
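A sketch of round-tripping an Explanation through its wire format with this overload; a truncated or malformed stream surfaces as the IOException listed above:

```java
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

byte[] wire = explanation.toByteArray();
Explanation.Builder target = Explanation.newBuilder();
try {
  // Merge the serialized message into a fresh builder.
  target.mergeFrom(
      CodedInputStream.newInstance(wire),
      ExtensionRegistryLite.getEmptyRegistry());
} catch (IOException e) {
  throw new RuntimeException("malformed Explanation bytes", e);
}
```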
mergeFrom(Message other)
public Explanation.Builder mergeFrom(Message other)
Parameter
| Name | Description |
| --- | --- |
| other | Message |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final Explanation.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
removeAttributions(int index)
public Explanation.Builder removeAttributions(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
removeNeighbors(int index)
public Explanation.Builder removeNeighbors(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter
| Name | Description |
| --- | --- |
| index | int |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setAttributions(int index, Attribution value)
public Explanation.Builder setAttributions(int index, Attribution value)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| value | Attribution |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setAttributions(int index, Attribution.Builder builderForValue)
public Explanation.Builder setAttributions(int index, Attribution.Builder builderForValue)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining.
By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class.
If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
repeated .google.cloud.aiplatform.v1beta1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| builderForValue | Attribution.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setField(Descriptors.FieldDescriptor field, Object value)
public Explanation.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setNeighbors(int index, Neighbor value)
public Explanation.Builder setNeighbors(int index, Neighbor value)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| value | Neighbor |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setNeighbors(int index, Neighbor.Builder builderForValue)
public Explanation.Builder setNeighbors(int index, Neighbor.Builder builderForValue)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and instead the neighbors field is populated.
repeated .google.cloud.aiplatform.v1beta1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameters
| Name | Description |
| --- | --- |
| index | int |
| builderForValue | Neighbor.Builder |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public Explanation.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters
| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| index | int |
| value | Object |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |
setUnknownFields(UnknownFieldSet unknownFields)
public final Explanation.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter
| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
| --- | --- |
| Explanation.Builder | |