public final class Explanation extends GeneratedMessageV3 implements ExplanationOrBuilder
Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
Protobuf type google.cloud.aiplatform.v1.Explanation
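As a rough orientation, the sketch below shows where Explanation messages typically come from: an explain call against a deployed endpoint. The project, location, and endpoint names are placeholders, and the string instance stands in for whatever payload your deployed model actually expects.

import com.google.cloud.aiplatform.v1.EndpointName;
import com.google.cloud.aiplatform.v1.ExplainRequest;
import com.google.cloud.aiplatform.v1.ExplainResponse;
import com.google.cloud.aiplatform.v1.Explanation;
import com.google.cloud.aiplatform.v1.PredictionServiceClient;
import com.google.protobuf.Value;

public class ExplainSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder resource names; substitute your own project, location, and endpoint.
    EndpointName endpoint = EndpointName.of("my-project", "us-central1", "my-endpoint");
    try (PredictionServiceClient client = PredictionServiceClient.create()) {
      ExplainRequest request =
          ExplainRequest.newBuilder()
              .setEndpoint(endpoint.toString())
              // Placeholder instance; real payloads depend on the deployed model.
              .addInstances(Value.newBuilder().setStringValue("instance payload").build())
              .build();
      ExplainResponse response = client.explain(request);
      // The response carries one Explanation per instance in the request.
      for (Explanation explanation : response.getExplanationsList()) {
        System.out.println("attributions: " + explanation.getAttributionsCount());
      }
    }
  }
}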
Inherited Members
com.google.protobuf.GeneratedMessageV3.<ListT>makeMutableCopy(ListT)
com.google.protobuf.GeneratedMessageV3.<ListT>makeMutableCopy(ListT,int)
com.google.protobuf.GeneratedMessageV3.<T>emptyList(java.lang.Class<T>)
com.google.protobuf.GeneratedMessageV3.internalGetMapFieldReflection(int)
Static Fields
public static final int ATTRIBUTIONS_FIELD_NUMBER
Field Value: int
public static final int NEIGHBORS_FIELD_NUMBER
Field Value: int
Static Methods
public static Explanation getDefaultInstance()
public static final Descriptors.Descriptor getDescriptor()
public static Explanation.Builder newBuilder()
public static Explanation.Builder newBuilder(Explanation prototype)
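Although the message's fields are output-only in the API, the builder is still useful for constructing instances by hand, for example as test fixtures. A minimal sketch:

import com.google.cloud.aiplatform.v1.Attribution;
import com.google.cloud.aiplatform.v1.Explanation;

// Build an Explanation by hand, e.g. as a test fixture.
Explanation explanation =
    Explanation.newBuilder()
        .addAttributions(
            Attribution.newBuilder()
                .setBaselineOutputValue(0.1)
                .setInstanceOutputValue(0.4)
                .addOutputIndex(0)
                .build())
        .build();

// newBuilder(prototype) seeds a new builder with an existing message's fields.
Explanation copy = Explanation.newBuilder(explanation).build();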
public static Explanation parseDelimitedFrom(InputStream input)
public static Explanation parseDelimitedFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
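The delimited variants prefix each message with its varint-encoded length, so several messages can share one stream. A sketch of the round trip:

import com.google.cloud.aiplatform.v1.Explanation;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

ByteArrayOutputStream out = new ByteArrayOutputStream();
Explanation message = Explanation.getDefaultInstance();
// writeDelimitedTo prefixes each message with its size.
message.writeDelimitedTo(out);
message.writeDelimitedTo(out);

ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
// parseDelimitedFrom reads one length-prefixed message per call
// and returns null once the stream is exhausted.
Explanation first = Explanation.parseDelimitedFrom(in);
Explanation second = Explanation.parseDelimitedFrom(in);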
public static Explanation parseFrom(byte[] data)
Parameter: data (byte[])
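For example, a byte-array round trip; parseFrom throws InvalidProtocolBufferException when the bytes are not a valid serialized Explanation:

import com.google.cloud.aiplatform.v1.Explanation;
import com.google.protobuf.InvalidProtocolBufferException;

Explanation original = Explanation.getDefaultInstance();
byte[] bytes = original.toByteArray();
try {
  Explanation parsed = Explanation.parseFrom(bytes);
  // Protobuf messages compare by field values, so the round trip is lossless.
  assert parsed.equals(original);
} catch (InvalidProtocolBufferException e) {
  throw new IllegalStateException("malformed Explanation bytes", e);
}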
public static Explanation parseFrom(byte[] data, ExtensionRegistryLite extensionRegistry)
public static Explanation parseFrom(ByteString data)
public static Explanation parseFrom(ByteString data, ExtensionRegistryLite extensionRegistry)
public static Explanation parseFrom(CodedInputStream input)
public static Explanation parseFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public static Explanation parseFrom(InputStream input)
public static Explanation parseFrom(InputStream input, ExtensionRegistryLite extensionRegistry)
public static Explanation parseFrom(ByteBuffer data)
public static Explanation parseFrom(ByteBuffer data, ExtensionRegistryLite extensionRegistry)
public static Parser<Explanation> parser()
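parser() exposes the same parsing logic as a reusable Parser<Explanation>, which is convenient for generic code that handles messages by type. A sketch:

import com.google.cloud.aiplatform.v1.Explanation;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Parser;

Parser<Explanation> parser = Explanation.parser();
byte[] bytes = Explanation.getDefaultInstance().toByteArray();
try {
  // Equivalent to Explanation.parseFrom(bytes), but usable where only
  // a Parser<T> is in scope.
  Explanation parsed = parser.parseFrom(bytes);
} catch (InvalidProtocolBufferException e) {
  throw new IllegalStateException(e);
}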
Methods
public boolean equals(Object obj)
Parameter: obj (Object)
Overrides
public Attribution getAttributions(int index)
Output only. Feature attributions grouped by predicted outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output an attribution is explaining.
By default, Shapley values are provided for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of 0.4 for approving a loan application, the model's decision is to reject the application, since p(reject) = 0.6 > p(approve) = 0.4, and the default Shapley values would be computed for the rejection decision rather than for approval, even though the latter might be the positive class.
If ExplanationParameters.top_k is set, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in output_indices.
repeated .google.cloud.aiplatform.v1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter: index (int)
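A minimal sketch of walking the attributions of a parsed Explanation; the getters mirror the Attribution fields named above:

import com.google.cloud.aiplatform.v1.Attribution;
import com.google.cloud.aiplatform.v1.Explanation;

void printAttributions(Explanation explanation) {
  for (int i = 0; i < explanation.getAttributionsCount(); i++) {
    Attribution attribution = explanation.getAttributions(i);
    // output_index identifies which predicted output this attribution explains.
    System.out.printf(
        "output %s: instance=%f baseline=%f%n",
        attribution.getOutputIndexList(),
        attribution.getInstanceOutputValue(),
        attribution.getBaselineOutputValue());
  }
}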
public int getAttributionsCount()
Output only. Feature attributions grouped by predicted outputs. See getAttributions(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns: int
public List<Attribution> getAttributionsList()
Output only. Feature attributions grouped by predicted outputs. See getAttributions(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
public AttributionOrBuilder getAttributionsOrBuilder(int index)
Output only. Feature attributions grouped by predicted outputs. See getAttributions(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter: index (int)
public List<? extends AttributionOrBuilder> getAttributionsOrBuilderList()
Output only. Feature attributions grouped by predicted outputs. See getAttributions(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Attribution attributions = 1 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns: List<? extends com.google.cloud.aiplatform.v1.AttributionOrBuilder>
public Explanation getDefaultInstanceForType()
public Neighbor getNeighbors(int index)
Output only. List of the nearest neighbors for example-based explanations.
For models deployed with the examples explanations feature enabled, the attributions field is empty and the neighbors field is populated instead.
repeated .google.cloud.aiplatform.v1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter: index (int)
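A minimal sketch of reading the neighbors from an example-based explanation; each Neighbor carries an ID and a distance:

import com.google.cloud.aiplatform.v1.Explanation;
import com.google.cloud.aiplatform.v1.Neighbor;

void printNeighbors(Explanation explanation) {
  for (Neighbor neighbor : explanation.getNeighborsList()) {
    System.out.printf(
        "neighbor %s at distance %f%n",
        neighbor.getNeighborId(), neighbor.getNeighborDistance());
  }
}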
public int getNeighborsCount()
Output only. List of the nearest neighbors for example-based explanations. See getNeighbors(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns: int
public List<Neighbor> getNeighborsList()
Output only. List of the nearest neighbors for example-based explanations. See getNeighbors(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
public NeighborOrBuilder getNeighborsOrBuilder(int index)
Output only. List of the nearest neighbors for example-based explanations. See getNeighbors(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Parameter: index (int)
public List<? extends NeighborOrBuilder> getNeighborsOrBuilderList()
Output only. List of the nearest neighbors for example-based explanations. See getNeighbors(int index) above for the full field documentation.
repeated .google.cloud.aiplatform.v1.Neighbor neighbors = 2 [(.google.api.field_behavior) = OUTPUT_ONLY];
Returns: List<? extends com.google.cloud.aiplatform.v1.NeighborOrBuilder>
public Parser<Explanation> getParserForType()
Overrides
public int getSerializedSize()
Returns: int
Overrides
public int hashCode()
Returns: int
Overrides
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Overrides
public final boolean isInitialized()
Overrides
public Explanation.Builder newBuilderForType()
protected Explanation.Builder newBuilderForType(GeneratedMessageV3.BuilderParent parent)
Overrides
protected Object newInstance(GeneratedMessageV3.UnusedPrivateParameter unused)
Returns: Object
Overrides
public Explanation.Builder toBuilder()
public void writeTo(CodedOutputStream output)
Overrides
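writeTo pairs with getSerializedSize: the reported size is exactly the number of bytes writeTo emits, so a buffer can be sized up front. A sketch:

import com.google.cloud.aiplatform.v1.Explanation;
import com.google.protobuf.CodedOutputStream;
import java.io.IOException;

byte[] serialize(Explanation explanation) throws IOException {
  byte[] buffer = new byte[explanation.getSerializedSize()];
  CodedOutputStream output = CodedOutputStream.newInstance(buffer);
  explanation.writeTo(output);
  output.checkNoSpaceLeft(); // confirms the buffer was filled exactly
  return buffer;
}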