Interface PredictionServiceGrpc.AsyncService (3.35.0)
public static interface PredictionServiceGrpc.AsyncService
A service for online predictions and explanations.
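All methods on this interface have default, unimplemented bodies, so a server only overrides the RPCs it actually serves. The following is a minimal sketch, assuming the generated PredictionServiceGrpc.bindService(AsyncService) helper and the standard io.grpc server APIs; the class name, port, and response values are illustrative.

import com.google.cloud.aiplatform.v1.PredictRequest;
import com.google.cloud.aiplatform.v1.PredictResponse;
import com.google.cloud.aiplatform.v1.PredictionServiceGrpc;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

// Minimal sketch: override only the RPCs you need; the rest keep their
// default unimplemented behavior.
final class MyPredictionService implements PredictionServiceGrpc.AsyncService {

  @Override
  public void predict(PredictRequest request,
                      StreamObserver<PredictResponse> responseObserver) {
    // Build a placeholder prediction response and complete the call.
    PredictResponse response = PredictResponse.newBuilder()
        .setDeployedModelId("example-deployed-model") // illustrative value
        .build();
    responseObserver.onNext(response);
    responseObserver.onCompleted();
  }

  public static void main(String[] args) throws Exception {
    Server server = ServerBuilder.forPort(8500) // illustrative port
        // bindService(AsyncService) is generated alongside this interface.
        .addService(PredictionServiceGrpc.bindService(new MyPredictionService()))
        .build()
        .start();
    server.awaitTermination();
  }
}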
Methods
public default void countTokens(CountTokensRequest request, StreamObserver<CountTokensResponse> responseObserver)
Perform token counting.
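A minimal sketch of a countTokens override, added to the class sketched above; it assumes the CountTokensResponse builder's setTotalTokens and setTotalBillableCharacters setters, and the counts shown are placeholders.

@Override
public void countTokens(CountTokensRequest request,
                        StreamObserver<CountTokensResponse> responseObserver) {
  // Placeholder counts; a real implementation would tokenize request.getInstancesList().
  CountTokensResponse response = CountTokensResponse.newBuilder()
      .setTotalTokens(42)
      .setTotalBillableCharacters(128)
      .build();
  responseObserver.onNext(response);
  responseObserver.onCompleted();
}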
public default void directPredict(DirectPredictRequest request, StreamObserver<DirectPredictResponse> responseObserver)
Perform a unary online prediction request for Vertex first-party products and frameworks.
public default void directRawPredict(DirectRawPredictRequest request, StreamObserver<DirectRawPredictResponse> responseObserver)
Perform an online prediction request through gRPC.
public default void explain(ExplainRequest request, StreamObserver<ExplainResponse> responseObserver)
Perform an online explanation.
If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated.
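A minimal sketch of an explain override for the class sketched above; the DeployedModel resolution and the fallback ID are illustrative, and the explanations themselves are omitted.

@Override
public void explain(ExplainRequest request,
                    StreamObserver<ExplainResponse> responseObserver) {
  // Use the requested DeployedModel if given, otherwise fall back to a default.
  String deployedModelId = request.getDeployedModelId().isEmpty()
      ? "default-deployed-model"           // illustrative fallback
      : request.getDeployedModelId();
  ExplainResponse response = ExplainResponse.newBuilder()
      .setDeployedModelId(deployedModelId) // explanations omitted in this sketch
      .build();
  responseObserver.onNext(response);
  responseObserver.onCompleted();
}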
public default void predict(PredictRequest request, StreamObserver<PredictResponse> responseObserver)
Perform an online prediction.
public default void rawPredict(RawPredictRequest request, StreamObserver<HttpBody> responseObserver)
Perform an online prediction with an arbitrary HTTP payload.
The response includes the following HTTP headers:
X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.
Parameters
request: RawPredictRequest
responseObserver: io.grpc.stub.StreamObserver<com.google.api.HttpBody>
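Because the response type is the generic com.google.api.HttpBody, an implementation picks its own content type and payload bytes. A minimal sketch for the class sketched above, assuming com.google.api.HttpBody and com.google.protobuf.ByteString are imported; the JSON payload is illustrative.

@Override
public void rawPredict(RawPredictRequest request,
                       StreamObserver<HttpBody> responseObserver) {
  // Return an arbitrary HTTP payload; here a small JSON document.
  HttpBody body = HttpBody.newBuilder()
      .setContentType("application/json")
      .setData(ByteString.copyFromUtf8("{\"predictions\": []}"))
      .build();
  responseObserver.onNext(body);
  responseObserver.onCompleted();
}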
public default void serverStreamingPredict(StreamingPredictRequest request, StreamObserver<StreamingPredictResponse> responseObserver)
Perform a server-side streaming online prediction request for Vertex LLM streaming.
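A minimal sketch of a serverStreamingPredict override for the class sketched above: it emits several responses on the same observer before completing. The empty chunks and the chunk count are placeholders.

@Override
public void serverStreamingPredict(StreamingPredictRequest request,
                                   StreamObserver<StreamingPredictResponse> responseObserver) {
  // Emit a fixed number of placeholder chunks, then close the stream.
  for (int i = 0; i < 3; i++) {
    responseObserver.onNext(StreamingPredictResponse.getDefaultInstance());
  }
  responseObserver.onCompleted();
}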
public default void streamGenerateContent(GenerateContentRequest request, StreamObserver<GenerateContentResponse> responseObserver)
Generate content from multimodal inputs, with streaming support.
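A minimal sketch of a streamGenerateContent override for the class sketched above, assuming the Candidate, Content, and Part message types from com.google.cloud.aiplatform.v1 are imported; the streamed text is illustrative.

@Override
public void streamGenerateContent(GenerateContentRequest request,
                                  StreamObserver<GenerateContentResponse> responseObserver) {
  // Stream the reply as a sequence of partial candidates (text is illustrative).
  for (String chunk : new String[] {"Hello", ", ", "world."}) {
    responseObserver.onNext(GenerateContentResponse.newBuilder()
        .addCandidates(Candidate.newBuilder()
            .setContent(Content.newBuilder()
                .setRole("model")
                .addParts(Part.newBuilder().setText(chunk))))
        .build());
  }
  responseObserver.onCompleted();
}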
public default StreamObserver<StreamingPredictRequest> streamingPredict(StreamObserver<StreamingPredictResponse> responseObserver)
Perform a streaming online prediction request for Vertex first-party products and frameworks.
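Unlike the unary methods, this RPC returns the request observer, so the implementation supplies a StreamObserver that reacts to each incoming StreamingPredictRequest. A minimal sketch for the class sketched above; the per-message echo is placeholder logic.

@Override
public StreamObserver<StreamingPredictRequest> streamingPredict(
    StreamObserver<StreamingPredictResponse> responseObserver) {
  // Return an observer that receives client messages; respond per message.
  return new StreamObserver<StreamingPredictRequest>() {
    @Override
    public void onNext(StreamingPredictRequest request) {
      // Echo a placeholder response for each incoming request.
      responseObserver.onNext(StreamingPredictResponse.getDefaultInstance());
    }

    @Override
    public void onError(Throwable t) {
      responseObserver.onError(t);
    }

    @Override
    public void onCompleted() {
      responseObserver.onCompleted();
    }
  };
}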
public default StreamObserver<StreamingRawPredictRequest> streamingRawPredict(StreamObserver<StreamingRawPredictResponse> responseObserver)
Perform a streaming online prediction request through gRPC.