Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ExplainRequest.
Request message for PredictionService.Explain.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
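The following is a minimal sketch of constructing an ExplainRequest and sending it with PredictionService::Client#explain. The project, location, and endpoint IDs are placeholders, and the instance payload is illustrative only; the model's instance_schema_uri defines the real shape.

```ruby
require "google/cloud/ai_platform/v1"
require "google/protobuf/well_known_types"

client = ::Google::Cloud::AIPlatform::V1::PredictionService::Client.new

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  # Placeholder resource name; substitute your own project, location, and endpoint ID.
  endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
  instances: [
    ::Google::Protobuf::Value.new(
      struct_value: ::Google::Protobuf::Struct.from_hash("feature_a" => 1.0, "feature_b" => "blue")
    )
  ]
)

response = client.explain request
response.explanations.each do |explanation|
  puts explanation.attributions.inspect
end
```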
Methods
#deployed_model_id
def deployed_model_id() -> ::String
Returns
- (::String) — If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
#deployed_model_id=
def deployed_model_id=(value) -> ::String
Parameter
- value (::String) — If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
Returns
- (::String) — If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.
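As a small sketch, pinning a request to one DeployedModel (bypassing the Endpoint's traffic split) only requires setting this string field; the ID below is a placeholder you would normally read from the Endpoint's deployed_models.

```ruby
require "google/cloud/ai_platform/v1"

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890"
)

# Placeholder DeployedModel ID; the call is served by this model regardless of traffic_split.
request.deployed_model_id = "9876543210"
```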
#endpoint
def endpoint() -> ::String
Returns
- (::String) — Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
#endpoint=
def endpoint=(value) -> ::String
Parameter
- value (::String) — Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Returns
- (::String) — Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
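A sketch of filling in endpoint. It assumes the generated #endpoint_path helper on PredictionService::Client (building the same projects/{project}/locations/{location}/endpoints/{endpoint} string by hand works too); all IDs are placeholders.

```ruby
require "google/cloud/ai_platform/v1"

client = ::Google::Cloud::AIPlatform::V1::PredictionService::Client.new

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  # Expands to "projects/my-project/locations/us-central1/endpoints/1234567890".
  endpoint: client.endpoint_path(project: "my-project", location: "us-central1", endpoint: "1234567890")
)
```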
#explanation_spec_override
def explanation_spec_override() -> ::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride
Returns
- (::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride) — If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:
  - Explaining the top-5 prediction results as opposed to the top-1;
  - Increasing the path count or step count of the attribution methods to reduce approximation error;
  - Using different baselines for explaining the prediction results.
#explanation_spec_override=
def explanation_spec_override=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride
Parameter
- value (::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride) — If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:
  - Explaining the top-5 prediction results as opposed to the top-1;
  - Increasing the path count or step count of the attribution methods to reduce approximation error;
  - Using different baselines for explaining the prediction results.
Returns
- (::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride) — If specified, overrides the explanation_spec of the DeployedModel. Can be used for explaining prediction results with different configurations, such as:
  - Explaining the top-5 prediction results as opposed to the top-1;
  - Increasing the path count or step count of the attribution methods to reduce approximation error;
  - Using different baselines for explaining the prediction results.
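The sketch below overrides the deployed explanation configuration for a single call. It assumes the v1 ExplanationParameters fields top_k and sampled_shapley_attribution (with its path_count), so verify those names against your client version; the values and IDs are illustrative.

```ruby
require "google/cloud/ai_platform/v1"

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
  explanation_spec_override: ::Google::Cloud::AIPlatform::V1::ExplanationSpecOverride.new(
    parameters: ::Google::Cloud::AIPlatform::V1::ExplanationParameters.new(
      # Explain the top 5 predicted outputs instead of only the top 1.
      top_k: 5,
      # More Sampled Shapley paths: slower, but a tighter attribution approximation.
      sampled_shapley_attribution:
        ::Google::Cloud::AIPlatform::V1::SampledShapleyAttribution.new(path_count: 25)
    )
  )
)
```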
#instances
def instances() -> ::Array<::Google::Protobuf::Value>
Returns
- (::Array<::Google::Protobuf::Value>) — Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] instance_schema_uri.
#instances=
def instances=(value) -> ::Array<::Google::Protobuf::Value>
Parameter
- value (::Array<::Google::Protobuf::Value>) — Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] instance_schema_uri.
Returns
- (::Array<::Google::Protobuf::Value>) — Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] instance_schema_uri.
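Each instance is a Google::Protobuf::Value whose shape is dictated by the model's instance_schema_uri. A minimal sketch of wrapping plain Ruby hashes, assuming the Struct.from_hash helper from the protobuf well-known types and illustrative feature names:

```ruby
require "google/cloud/ai_platform/v1"
require "google/protobuf/well_known_types"

raw_instances = [
  { "feature_a" => 1.0, "feature_b" => "blue" },
  { "feature_a" => 2.5, "feature_b" => "red"  }
]

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
  # Wrap each hash in a Protobuf Value; keys and values here are examples only.
  instances: raw_instances.map { |hash|
    ::Google::Protobuf::Value.new(struct_value: ::Google::Protobuf::Struct.from_hash(hash))
  }
)
```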
#parameters
def parameters() -> ::Google::Protobuf::Value
Returns
- (::Google::Protobuf::Value) — The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] parameters_schema_uri.
#parameters=
def parameters=(value) -> ::Google::Protobuf::Value
Parameter
- value (::Google::Protobuf::Value) — The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] parameters_schema_uri.
Returns
- (::Google::Protobuf::Value) — The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1.Model.predict_schemata] parameters_schema_uri.
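parameters is a single Google::Protobuf::Value; its schema comes from the model's parameters_schema_uri. A hedged sketch with a made-up key, again assuming Struct.from_hash from the protobuf well-known types:

```ruby
require "google/cloud/ai_platform/v1"
require "google/protobuf/well_known_types"

request = ::Google::Cloud::AIPlatform::V1::ExplainRequest.new(
  endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
  # "confidence_threshold" is a hypothetical key; the real ones come from parameters_schema_uri.
  parameters: ::Google::Protobuf::Value.new(
    struct_value: ::Google::Protobuf::Struct.from_hash("confidence_threshold" => 0.5)
  )
)
```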