public abstract static class PredictionServiceGrpc.PredictionServiceImplBase implements BindableService
A service for online predictions and explanations.
Implements
io.grpc.BindableService
Constructors
PredictionServiceImplBase()
public PredictionServiceImplBase()
Methods
bindService()
public final ServerServiceDefinition bindService()
Returns

| Type | Description |
| --- | --- |
| io.grpc.ServerServiceDefinition | |
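Because the base class implements io.grpc.BindableService, a subclass can be passed straight to a gRPC server builder; addService calls bindService() to obtain the ServerServiceDefinition. A minimal sketch, assuming a hypothetical subclass MyPredictionService (its method overrides are sketched under explain, predict, and rawPredict below), an arbitrary port, and the com.google.cloud.aiplatform.v1 package:

```java
import com.google.cloud.aiplatform.v1.PredictionServiceGrpc;
import io.grpc.Server;
import io.grpc.ServerBuilder;

public class PredictionServer {
  public static void main(String[] args) throws Exception {
    // Hypothetical subclass of PredictionServiceGrpc.PredictionServiceImplBase.
    PredictionServiceGrpc.PredictionServiceImplBase service = new MyPredictionService();

    // addService accepts any BindableService; it invokes bindService() internally
    // to obtain the ServerServiceDefinition for this service.
    Server server = ServerBuilder.forPort(9000) // arbitrary example port
        .addService(service)
        .build()
        .start();

    server.awaitTermination();
  }
}
```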
explain(ExplainRequest request, StreamObserver<ExplainResponse> responseObserver)
public void explain(ExplainRequest request, StreamObserver<ExplainResponse> responseObserver)
Perform an online explanation. If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. Only deployed AutoML tabular Models have explanation_spec.
Parameters

| Name | Description |
| --- | --- |
| request | ExplainRequest |
| responseObserver | io.grpc.stub.StreamObserver<ExplainResponse> |
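A sketch of one possible server-side override, assuming the com.google.cloud.aiplatform.v1 protobuf classes; MyPredictionService and the 0.42 score are illustrative, and a real implementation would also populate the explanations field from its explanation_spec:

```java
import com.google.cloud.aiplatform.v1.ExplainRequest;
import com.google.cloud.aiplatform.v1.ExplainResponse;
import com.google.cloud.aiplatform.v1.PredictionServiceGrpc;
import com.google.protobuf.Value;
import io.grpc.stub.StreamObserver;

public class MyPredictionService extends PredictionServiceGrpc.PredictionServiceImplBase {

  @Override
  public void explain(ExplainRequest request, StreamObserver<ExplainResponse> responseObserver) {
    // Echo back which DeployedModel handled the request and return a placeholder
    // prediction; explanations would normally be derived from the model's explanation_spec.
    ExplainResponse response = ExplainResponse.newBuilder()
        .setDeployedModelId(request.getDeployedModelId())
        .addPredictions(Value.newBuilder().setNumberValue(0.42).build())
        .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
  }
}
```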
predict(PredictRequest request, StreamObserver<PredictResponse> responseObserver)
public void predict(PredictRequest request, StreamObserver<PredictResponse> responseObserver)
Perform an online prediction.
Parameters

| Name | Description |
| --- | --- |
| request | PredictRequest |
| responseObserver | io.grpc.stub.StreamObserver<PredictResponse> |
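Continuing the hypothetical MyPredictionService sketch above, a predict override could build one protobuf Value per input instance; the fixed 0.5 score is a placeholder:

```java
// Inside MyPredictionService; also needs imports for
// com.google.cloud.aiplatform.v1.PredictRequest and PredictResponse.
@Override
public void predict(PredictRequest request, StreamObserver<PredictResponse> responseObserver) {
  PredictResponse.Builder response = PredictResponse.newBuilder();
  for (int i = 0; i < request.getInstancesCount(); i++) {
    // One placeholder score per instance in the request.
    response.addPredictions(Value.newBuilder().setNumberValue(0.5).build());
  }
  responseObserver.onNext(response.build());
  responseObserver.onCompleted();
}
```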
rawPredict(RawPredictRequest request, StreamObserver<HttpBody> responseObserver)
public void rawPredict(RawPredictRequest request, StreamObserver<HttpBody> responseObserver)
Perform an online prediction with an arbitrary HTTP payload. The response includes the following HTTP headers:

- X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
- X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.
Parameters

| Name | Description |
| --- | --- |
| request | RawPredictRequest |
| responseObserver | io.grpc.stub.StreamObserver<com.google.api.HttpBody> |
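A rawPredict override responds with an arbitrary com.google.api.HttpBody; the content type and JSON payload below are placeholders, and the X-Vertex-AI-* headers listed above belong to the endpoint's HTTP response rather than to this protobuf message. Again continuing the hypothetical MyPredictionService sketch:

```java
// Inside MyPredictionService; also needs imports for com.google.api.HttpBody,
// com.google.cloud.aiplatform.v1.RawPredictRequest, and com.google.protobuf.ByteString.
@Override
public void rawPredict(RawPredictRequest request, StreamObserver<HttpBody> responseObserver) {
  HttpBody body = HttpBody.newBuilder()
      .setContentType("application/json") // placeholder content type
      .setData(ByteString.copyFromUtf8("{\"predictions\": [0.42]}")) // placeholder payload
      .build();
  responseObserver.onNext(body);
  responseObserver.onCompleted();
}
```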