Reference documentation and code samples for the Google Cloud AI Platform V1 Client class ExplainRequest.
Request message for PredictionService.Explain.
Generated from protobuf message google.cloud.aiplatform.v1.ExplainRequest
Methods
__construct
Constructor.
Parameters

| Name | Type | Description |
|---|---|---|
| data | array | Optional. Data for populating the Message object. |
| ↳ endpoint | string | Required. The name of the Endpoint requested to serve the explanation. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` |
| ↳ instances | array<Google\Protobuf\Value> | Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call fails for AutoML Models, and for custom-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri. |
| ↳ parameters | Google\Protobuf\Value | The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri. |
| ↳ explanation_spec_override | Google\Cloud\AIPlatform\V1\ExplanationSpecOverride | If specified, overrides the explanation_spec of the DeployedModel. Can be used to explain prediction results with different configurations, such as: explaining the top-5 prediction results as opposed to top-1; increasing the path count or step count of the attribution methods to reduce approximation error; or using different baselines for explaining the prediction results. |
| ↳ deployed_model_id | string | If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split. |
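A minimal usage sketch of the constructor's `$data` array, assuming a placeholder endpoint name and a single struct-valued instance (the real instance schema is defined by the Model's instance_schema_uri):

```php
use Google\Cloud\AIPlatform\V1\ExplainRequest;
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

// Build one instance as a struct-valued protobuf Value.
// "feature_1" is a placeholder field name.
$feature = new Value();
$feature->setNumberValue(3.14);

$fields = new Struct();
$fields->setFields(['feature_1' => $feature]);

$instance = new Value();
$instance->setStructValue($fields);

// Placeholder project, location, and endpoint IDs.
$request = new ExplainRequest([
    'endpoint'  => 'projects/my-project/locations/us-central1/endpoints/1234567890',
    'instances' => [$instance],
]);
```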
getEndpoint
Required. The name of the Endpoint requested to serve the explanation.
Format:
projects/{project}/locations/{location}/endpoints/{endpoint}
Generated from protobuf field string endpoint = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = {
Returns

| Type | Description |
|---|---|
| string | |
setEndpoint
Required. The name of the Endpoint requested to serve the explanation.
Format:
projects/{project}/locations/{location}/endpoints/{endpoint}
Generated from protobuf field string endpoint = 1 [(.google.api.field_behavior) = REQUIRED, (.google.api.resource_reference) = {
Parameter

| Name | Type | Description |
|---|---|---|
| var | string | |

Returns

| Type | Description |
|---|---|
| $this | |
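As a hedged sketch, the endpoint name for setEndpoint can be built with the library's generated EndpointName resource helper, assuming that helper is available in your installed version (otherwise format the string manually):

```php
use Google\Cloud\AIPlatform\V1\EndpointName;

// Placeholder project, location, and endpoint IDs.
$endpoint = EndpointName::format('my-project', 'us-central1', '1234567890');

$request->setEndpoint($endpoint);
echo $request->getEndpoint();
// projects/my-project/locations/us-central1/endpoints/1234567890
```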
getInstances
Required. The instances that are the input to the explanation call.
A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call fails for AutoML Models, and for custom-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
Generated from protobuf field repeated .google.protobuf.Value instances = 2 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
|---|---|
| Google\Protobuf\Internal\RepeatedField | |
setInstances
Required. The instances that are the input to the explanation call.
A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call fails for AutoML Models, and for custom-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
Generated from protobuf field repeated .google.protobuf.Value instances = 2 [(.google.api.field_behavior) = REQUIRED];
Parameter

| Name | Type | Description |
|---|---|---|
| var | array<Google\Protobuf\Value> | |

Returns

| Type | Description |
|---|---|
| $this | |
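A sketch of populating setInstances with list-valued instances, continuing from the `$request` built above; the rows of numbers are placeholder data, since the real layout is defined by the Model's instance_schema_uri:

```php
use Google\Protobuf\ListValue;
use Google\Protobuf\Value;

$instances = [];
foreach ([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]] as $row) {
    // Wrap each number in a protobuf Value.
    $values = [];
    foreach ($row as $number) {
        $value = new Value();
        $value->setNumberValue($number);
        $values[] = $value;
    }

    $list = new ListValue();
    $list->setValues($values);

    // Each instance is a list-valued Value.
    $instance = new Value();
    $instance->setListValue($list);
    $instances[] = $instance;
}

$request->setInstances($instances);
echo count($request->getInstances()); // 2
```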
getParameters
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
Generated from protobuf field .google.protobuf.Value parameters = 4;
Returns

| Type | Description |
|---|---|
| Google\Protobuf\Value\|null | |
hasParameters
clearParameters
setParameters
The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
Generated from protobuf field .google.protobuf.Value parameters = 4;
Parameter

| Name | Type | Description |
|---|---|---|
| var | Google\Protobuf\Value | |

Returns

| Type | Description |
|---|---|
| $this | |
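A sketch of setParameters under an assumed parameter schema; "confidenceThreshold" is a hypothetical key, since the real parameter names are defined by the Model's parameters_schema_uri:

```php
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

// Hypothetical prediction parameter.
$threshold = new Value();
$threshold->setNumberValue(0.5);

$struct = new Struct();
$struct->setFields(['confidenceThreshold' => $threshold]);

$parameters = new Value();
$parameters->setStructValue($struct);

$request->setParameters($parameters);

if ($request->hasParameters()) {
    $request->getParameters(); // returns the Value set above
}
```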
getExplanationSpecOverride
If specified, overrides the explanation_spec of the DeployedModel.
Can be used to explain prediction results with different configurations, such as:
- Explaining the top-5 prediction results as opposed to top-1;
- Increasing the path count or step count of the attribution methods to reduce approximation error;
- Using different baselines for explaining the prediction results.
Generated from protobuf field .google.cloud.aiplatform.v1.ExplanationSpecOverride explanation_spec_override = 5;
Returns

| Type | Description |
|---|---|
| Google\Cloud\AIPlatform\V1\ExplanationSpecOverride\|null | |
hasExplanationSpecOverride
clearExplanationSpecOverride
setExplanationSpecOverride
If specified, overrides the explanation_spec of the DeployedModel.
Can be used to explain prediction results with different configurations, such as:
- Explaining the top-5 prediction results as opposed to top-1;
- Increasing the path count or step count of the attribution methods to reduce approximation error;
- Using different baselines for explaining the prediction results.
Generated from protobuf field .google.cloud.aiplatform.v1.ExplanationSpecOverride explanation_spec_override = 5;
Parameter

| Name | Type | Description |
|---|---|---|
| var | Google\Cloud\AIPlatform\V1\ExplanationSpecOverride | |

Returns

| Type | Description |
|---|---|
| $this | |
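A sketch of overriding the explanation configuration for a single request, assuming the library's ExplanationParameters message and its top_k field:

```php
use Google\Cloud\AIPlatform\V1\ExplanationParameters;
use Google\Cloud\AIPlatform\V1\ExplanationSpecOverride;

// Only the fields set here override the DeployedModel's explanation_spec;
// unset fields keep their originally configured values.
$override = new ExplanationSpecOverride([
    'parameters' => new ExplanationParameters([
        'top_k' => 5, // explain the top-5 predictions instead of top-1
    ]),
]);

$request->setExplanationSpecOverride($override);
```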
getDeployedModelId
If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
Generated from protobuf field string deployed_model_id = 3;
Returns

| Type | Description |
|---|---|
| string | |
setDeployedModelId
If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
Generated from protobuf field string deployed_model_id = 3;
Parameter

| Name | Type | Description |
|---|---|---|
| var | string | |

Returns

| Type | Description |
|---|---|
| $this | |
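A short sketch of pinning the request to one DeployedModel; the ID shown is a placeholder:

```php
// Pin this request to one DeployedModel instead of relying on
// Endpoint.traffic_split.
$request->setDeployedModelId('1234567890123456789');

echo $request->getDeployedModelId(); // 1234567890123456789
```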