Reference documentation and code samples for the Google Cloud AI Platform V1 Client class RawPredictRequest.
Request message for PredictionService.RawPredict.
Generated from protobuf message google.cloud.aiplatform.v1.RawPredictRequest
Namespace

Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.
Parameters

| Name | Description |
|---|---|
| data | array. Optional. Data for populating the Message object. |
| ↳ endpoint | string. Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` |
| ↳ http_body | Google\Api\HttpBody. The prediction input. Supports HTTP headers and arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method. |
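The example below is a minimal sketch of constructing the request through the `data` array. The project, location, endpoint ID, content type, and JSON payload are placeholder values; substitute the resource name of your own Endpoint and whatever payload your deployed model expects.

```php
use Google\Api\HttpBody;
use Google\Cloud\AIPlatform\V1\RawPredictRequest;

// Placeholder resource name; replace with your own project, location, and endpoint ID.
$endpointName = 'projects/my-project/locations/us-central1/endpoints/1234567890123456789';

// Arbitrary payload; the content type and bytes are whatever the deployed model expects.
$httpBody = (new HttpBody())
    ->setContentType('application/json')
    ->setData('{"instances": [{"feature": 1.0}]}');

$request = new RawPredictRequest([
    'endpoint'  => $endpointName,
    'http_body' => $httpBody,
]);
```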
getEndpoint

Required. The name of the Endpoint requested to serve the prediction.
Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`

Returns

| Type | Description |
|---|---|
| string | |
setEndpoint

Required. The name of the Endpoint requested to serve the prediction.
Format: `projects/{project}/locations/{location}/endpoints/{endpoint}`

Parameter

| Name | Description |
|---|---|
| var | string |

Returns

| Type | Description |
|---|---|
| $this | |
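A short sketch of the setter and its matching getter; the resource name below is a placeholder. The setter returns `$this`, so it can be chained with other setters.

```php
use Google\Cloud\AIPlatform\V1\RawPredictRequest;

$request = new RawPredictRequest();

// Placeholder resource name; setEndpoint() returns $this, so calls can be chained.
$request->setEndpoint('projects/my-project/locations/us-central1/endpoints/1234567890123456789');

echo $request->getEndpoint();
// projects/my-project/locations/us-central1/endpoints/1234567890123456789
```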
getHttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.

Returns

| Type | Description |
|---|---|
| Google\Api\HttpBody\|null | |
hasHttpBody
clearHttpBody
setHttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.

Parameter

| Name | Description |
|---|---|
| var | Google\Api\HttpBody |

Returns

| Type | Description |
|---|---|
| $this | |
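A minimal sketch showing setHttpBody together with the hasHttpBody and clearHttpBody accessors; the content type and payload are placeholder values.

```php
use Google\Api\HttpBody;
use Google\Cloud\AIPlatform\V1\RawPredictRequest;

$request = new RawPredictRequest();

// Placeholder payload; use whatever content type and bytes your model expects.
$httpBody = (new HttpBody())
    ->setContentType('application/json')
    ->setData('{"instances": [[1.0, 2.0, 3.0]]}');

$request->setHttpBody($httpBody);
var_dump($request->hasHttpBody());   // bool(true)

$request->clearHttpBody();
var_dump($request->hasHttpBody());   // bool(false)
var_dump($request->getHttpBody());   // NULL
```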
static::build

Parameters

| Name | Description |
|---|---|
| endpoint | string. Required. The name of the Endpoint requested to serve the prediction. Format: `projects/{project}/locations/{location}/endpoints/{endpoint}` |
| httpBody | Google\Api\HttpBody. The prediction input. Supports HTTP headers and arbitrary data payload. A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model. You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method. |

Returns

| Type | Description |
|---|---|
| RawPredictRequest | |
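The sketch below uses build() to assemble the same request in a single call; the resource name and payload are placeholders. The resulting RawPredictRequest is what you would pass to the PredictionService.RawPredict call.

```php
use Google\Api\HttpBody;
use Google\Cloud\AIPlatform\V1\RawPredictRequest;

// Placeholder payload and resource name; substitute values for your own endpoint and model.
$httpBody = (new HttpBody())
    ->setContentType('application/json')
    ->setData('{"instances": [{"feature": 1.0}]}');

$request = RawPredictRequest::build(
    'projects/my-project/locations/us-central1/endpoints/1234567890123456789',
    $httpBody
);
```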