Google Cloud AI Platform V1 Client - Class Explanation (0.19.0)

Reference documentation and code samples for the Google Cloud AI Platform V1 Client class Explanation.

Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.

Generated from protobuf message google.cloud.aiplatform.v1.Explanation

Namespace

Google \ Cloud \ AIPlatform \ V1

Methods

__construct

Constructor.

Parameters
Name | Description
data | array

Optional. Data for populating the Message object.

↳ attributions | array<Google\Cloud\AIPlatform\V1\Attribution>

Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.

↳ neighbors | array<Google\Cloud\AIPlatform\V1\Neighbor>

Output only. List of the nearest neighbors for example-based explanations. For models deployed with the example-based explanations feature enabled, the attributions field is empty and the neighbors field is populated instead.
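Example

A minimal sketch of building an Explanation message by hand. Both fields are output only, so in practice Explanation instances are read from an ExplainResponse; the output index and display name below are illustrative placeholders, not real model output.

use Google\Cloud\AIPlatform\V1\Attribution;
use Google\Cloud\AIPlatform\V1\Explanation;

// Populate the message from an array of field values.
$explanation = new Explanation([
    'attributions' => [
        new Attribution([
            'output_index' => [0],          // which predicted output is explained (placeholder)
            'output_display_name' => 'cat', // placeholder class label
        ]),
    ],
]);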

getAttributions

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.

Returns
Type | Description
Google\Protobuf\Internal\RepeatedField
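
Example

A sketch of iterating the returned RepeatedField, assuming $explanation was obtained from an ExplainResponse. The Attribution getters used below (getOutputDisplayName, getInstanceOutputValue) correspond to fields of google.cloud.aiplatform.v1.Attribution.

// Each element is a Google\Cloud\AIPlatform\V1\Attribution instance.
foreach ($explanation->getAttributions() as $attribution) {
    printf(
        "%s: %f\n",
        $attribution->getOutputDisplayName(),
        $attribution->getInstanceOutputValue()
    );
}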

setAttributions

Output only. Feature attributions grouped by predicted outputs.

For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.

Parameter
Name | Description
var | array<Google\Cloud\AIPlatform\V1\Attribution>
Returns
Type | Description
$this
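
Example

A minimal sketch of replacing the attributions list. Because the field is output only, setting it manually is mainly useful for tests or fixtures; the Attribution below is a placeholder.

use Google\Cloud\AIPlatform\V1\Attribution;

// setAttributions() replaces the entire list and returns $this, so it can be chained.
$explanation->setAttributions([
    new Attribution(['output_index' => [0]]), // placeholder attribution
]);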

getNeighbors

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty and the neighbors field is populated instead.

Returns
Type | Description
Google\Protobuf\Internal\RepeatedField
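
Example

A sketch of reading the neighbors of an example-based explanation. The Neighbor getters used below (getNeighborId, getNeighborDistance) correspond to the fields of google.cloud.aiplatform.v1.Neighbor.

// Each element is a Google\Cloud\AIPlatform\V1\Neighbor instance.
foreach ($explanation->getNeighbors() as $neighbor) {
    printf("%s (distance %f)\n", $neighbor->getNeighborId(), $neighbor->getNeighborDistance());
}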

setNeighbors

Output only. List of the nearest neighbors for example-based explanations.

For models deployed with the example-based explanations feature enabled, the attributions field is empty and the neighbors field is populated instead.

Parameter
Name | Description
var | array<Google\Cloud\AIPlatform\V1\Neighbor>
Returns
Type | Description
$this
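
Example

As with setAttributions(), a sketch intended for tests or fixtures; the neighbor ID and distance are made-up values.

use Google\Cloud\AIPlatform\V1\Neighbor;

// setNeighbors() replaces the entire list and returns $this.
$explanation->setNeighbors([
    new Neighbor([
        'neighbor_id' => 'example-1', // made-up neighbor ID
        'neighbor_distance' => 0.42,  // made-up distance
    ]),
]);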