Index
- PredictionService (interface)
- PredictRequest (message)
- PredictResponse (message)
- RawPredictRequest (message)
PredictionService
A service for online predictions and explanations.
Methods | |
---|---|
Predict | Perform an online prediction. |
RawPredict | Perform an online prediction with an arbitrary HTTP payload. The response includes the following HTTP headers: X-Vertex-AI-Endpoint-Id, the ID of the Endpoint that served this prediction, and X-Vertex-AI-Deployed-Model-Id, the ID of the Endpoint's DeployedModel that served this prediction. |
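Both methods are available through the generated client libraries as well as REST. As a point of reference, here is a minimal sketch of constructing a client with the google-cloud-aiplatform Python package; the project ID, region, and endpoint ID are placeholder assumptions, not values from this reference.

```python
# A minimal sketch, assuming the google-cloud-aiplatform package is installed;
# the project ID, region, and endpoint ID below are placeholders.
from google.cloud import aiplatform_v1

# PredictionService is regional, so point the client at the region that
# hosts the Endpoint.
client = aiplatform_v1.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

# Build the fully qualified Endpoint resource name used by both Predict
# and RawPredict.
endpoint = client.endpoint_path(
    project="my-project", location="us-central1", endpoint="1234567890"
)
```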
PredictRequest
Request message for PredictionService.Predict.
Fields | |
---|---|
endpoint | Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint} |
instances[] | Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request. In case of customer-created Models, the behaviour is as documented by that Model. The schema of any single instance may be specified on the Endpoint's DeployedModels. |
parameters | The parameters that govern the prediction. The schema of the parameters may be specified on the Endpoint's DeployedModels. |
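To show how these fields map onto a call, here is a hedged sketch of Predict using the Python client constructed above; the feature names and the confidence_threshold parameter are placeholders for whatever schema the Endpoint's DeployedModels define, not part of the API itself.

```python
# Instances and parameters are protobuf Value objects; ParseDict converts
# plain Python dicts into them. The contents below are placeholder assumptions.
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

instances = [
    json_format.ParseDict({"feature_a": 1.0, "feature_b": "x"}, Value()),
]
parameters = json_format.ParseDict({"confidence_threshold": 0.5}, Value())

response = client.predict(
    endpoint=endpoint,    # Required: full Endpoint resource name.
    instances=instances,  # Required: subject to the DeployedModel's per-request limit.
    parameters=parameters,
)
```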
PredictResponse
Response message for PredictionService.Predict.
Fields | |
---|---|
predictions[] | The predictions that are the output of the predictions call. The schema of any single prediction may be specified on the Endpoint's DeployedModels. |
deployed_model_id | ID of the Endpoint's DeployedModel that served this prediction. |
model | Output only. The resource name of the Model which is deployed as the DeployedModel that this prediction hits. |
model_version_id | Output only. The version ID of the Model which is deployed as the DeployedModel that this prediction hits. |
model_display_name | Output only. The display name of the Model which is deployed as the DeployedModel that this prediction hits. |
metadata | Output only. Request-level metadata returned by the model. The metadata type will be dependent upon the model implementation. |
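Continuing the sketch above, these fields can be read directly off the returned PredictResponse; the shape of each prediction itself is model-specific.

```python
# Each prediction is a protobuf Value whose shape is defined by the
# DeployedModel; MessageToDict converts it back to plain Python types.
for prediction in response.predictions:
    print(json_format.MessageToDict(prediction))

print(response.deployed_model_id)   # which DeployedModel served the call
print(response.model)               # resource name of the underlying Model
print(response.model_version_id)    # version ID of that Model
print(response.model_display_name)  # display name of that Model
print(response.metadata)            # request-level metadata, model-specific
```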
RawPredictRequest
Request message for PredictionService.RawPredict.
Fields | |
---|---|
endpoint | Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint} |
http_body | The prediction input. Supports HTTP headers and an arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error; when it is exceeded for a custom-trained model, the behavior varies among models. You can specify the schema for each instance in the PredictSchemata field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method. |
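For comparison with Predict, here is a hedged sketch of RawPredict using the same client; the JSON payload layout is an assumption, since RawPredict forwards the body verbatim and the expected format is defined by the deployed model or serving container.

```python
import json

from google.api import httpbody_pb2

# The whole request body is passed through as an HttpBody; the content type
# and payload layout are whatever the deployed model or container expects.
http_body = httpbody_pb2.HttpBody(
    content_type="application/json",
    data=json.dumps({"instances": [{"feature_a": 1.0}]}).encode("utf-8"),
)

raw_response = client.raw_predict(endpoint=endpoint, http_body=http_body)

# The response is also an HttpBody: raw bytes plus a content type.
print(raw_response.content_type)
print(raw_response.data.decode("utf-8"))
```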