Class PredictionServiceClient (3.0.0)

public class PredictionServiceClient implements BackgroundResource

Service Description: A service for online predictions and explanations.

This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();
   PredictResponse response = predictionServiceClient.predict(endpoint, instances, parameters);
 }
 

Note: close() needs to be called on the PredictionServiceClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().

The surface of this class includes several types of Java methods for each of the API's methods:

  1. A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
  2. A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
  3. A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.

See the individual methods for example code.
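For orientation, the condensed sketch below invokes the same predict operation through each of the three surfaces. It is illustrative only and reuses the endpoint, instances, and parameters values from the sample at the top of this page.

 // Illustrative sketch only; mirrors the samples elsewhere on this page.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();

   // 1. Flattened method: request fields passed as parameters.
   PredictResponse flattened = predictionServiceClient.predict(endpoint, instances, parameters);

   // 2. Request object method: a single, pre-built request message.
   PredictRequest request =
       PredictRequest.newBuilder()
           .setEndpoint(endpoint.toString())
           .addAllInstances(instances)
           .setParameters(parameters)
           .build();
   PredictResponse fromRequestObject = predictionServiceClient.predict(request);

   // 3. Callable method: an immutable callable, usable for asynchronous calls.
   ApiFuture<PredictResponse> future =
       predictionServiceClient.predictCallable().futureCall(request);
   PredictResponse fromCallable = future.get();
 }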

Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
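For example, the generated EndpointName helper follows the standard resource-name pattern. The sketch below is illustrative and assumes only the conventional format, parse, and getter methods of such generated classes.

 // Illustrative sketch: formatting and parsing an endpoint resource name.
 String endpointName = EndpointName.format("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
 // Produces: projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]
 EndpointName parsed = EndpointName.parse(endpointName);
 String project = parsed.getProject();
 String location = parsed.getLocation();
 String endpointId = parsed.getEndpoint();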

This class can be customized by passing in a custom instance of PredictionServiceSettings to create(). For example:

To customize credentials:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 PredictionServiceSettings predictionServiceSettings =
     PredictionServiceSettings.newBuilder()
         .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
         .build();
 PredictionServiceClient predictionServiceClient =
     PredictionServiceClient.create(predictionServiceSettings);
 

To customize the endpoint:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 PredictionServiceSettings predictionServiceSettings =
     PredictionServiceSettings.newBuilder().setEndpoint(myEndpoint).build();
 PredictionServiceClient predictionServiceClient =
     PredictionServiceClient.create(predictionServiceSettings);
 

Please refer to the GitHub repository's samples for more quickstart code snippets.

Inheritance

java.lang.Object > PredictionServiceClient

Implements

BackgroundResource

Static Methods

create()

public static final PredictionServiceClient create()

Constructs an instance of PredictionServiceClient with default settings.

Returns
  Type: PredictionServiceClient

Exceptions
  Type: IOException

create(PredictionServiceSettings settings)

public static final PredictionServiceClient create(PredictionServiceSettings settings)

Constructs an instance of PredictionServiceClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.

Parameter
  settings: PredictionServiceSettings

Returns
  Type: PredictionServiceClient

Exceptions
  Type: IOException

create(PredictionServiceStub stub)

public static final PredictionServiceClient create(PredictionServiceStub stub)

Constructs an instance of PredictionServiceClient, using the given stub for making calls. This is for advanced usage - prefer using create(PredictionServiceSettings).

Parameter
  stub: PredictionServiceStub

Returns
  Type: PredictionServiceClient

Constructors

PredictionServiceClient(PredictionServiceSettings settings)

protected PredictionServiceClient(PredictionServiceSettings settings)

Constructs an instance of PredictionServiceClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.

Parameter
  settings: PredictionServiceSettings

PredictionServiceClient(PredictionServiceStub stub)

protected PredictionServiceClient(PredictionServiceStub stub)
Parameter
  stub: PredictionServiceStub

Methods

awaitTermination(long duration, TimeUnit unit)

public boolean awaitTermination(long duration, TimeUnit unit)
Parameters
  duration: long
  unit: TimeUnit

Returns
  Type: boolean

Exceptions
  Type: InterruptedException

close()

public final void close()
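
As noted in the class description, close() cleans up resources such as threads, and try-with-resources is the recommended way to call it. Where that is not convenient, an explicit shutdown sequence such as the following sketch can be used; the 30-second timeout is an illustrative choice, not a library default.

 // Illustrative sketch: explicit lifecycle management without try-with-resources.
 PredictionServiceClient predictionServiceClient = PredictionServiceClient.create();
 try {
   // ... issue predict/explain calls ...
 } finally {
   predictionServiceClient.shutdown();
   if (!predictionServiceClient.awaitTermination(30, TimeUnit.SECONDS)) {
     // Force termination if background threads are still running (timeout is illustrative).
     predictionServiceClient.shutdownNow();
   }
 }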

explain(EndpointName endpoint, List<Value> instances, Value parameters, String deployedModelId)

public final ExplainResponse explain(EndpointName endpoint, List<Value> instances, Value parameters, String deployedModelId)

Perform an online explanation.

If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. Only deployed AutoML tabular Models have explanation_spec.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();
   String deployedModelId = "deployedModelId-1817547906";
   ExplainResponse response =
       predictionServiceClient.explain(endpoint, instances, parameters, deployedModelId);
 }
 
Parameters
  endpoint: EndpointName
    Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  instances: List<Value>
    Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
  parameters: Value
    The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
  deployedModelId: String
    If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.

Returns
  Type: ExplainResponse

explain(ExplainRequest request)

public final ExplainResponse explain(ExplainRequest request)

Perform an online explanation.

If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. Only deployed AutoML tabular Models have explanation_spec.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   ExplainRequest request =
       ExplainRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .addAllInstances(new ArrayList<Value>())
           .setParameters(Value.newBuilder().setBoolValue(true).build())
           .setExplanationSpecOverride(ExplanationSpecOverride.newBuilder().build())
           .setDeployedModelId("deployedModelId-1817547906")
           .build();
   ExplainResponse response = predictionServiceClient.explain(request);
 }
 
Parameter
  request: ExplainRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: ExplainResponse

explain(String endpoint, List<Value> instances, Value parameters, String deployedModelId)

public final ExplainResponse explain(String endpoint, List<Value> instances, Value parameters, String deployedModelId)

Perform an online explanation.

If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. Only deployed AutoML tabular Models have explanation_spec.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   String endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString();
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();
   String deployedModelId = "deployedModelId-1817547906";
   ExplainResponse response =
       predictionServiceClient.explain(endpoint, instances, parameters, deployedModelId);
 }
 
Parameters
  endpoint: String
    Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  instances: List<Value>
    Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the explanation call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
  parameters: Value
    The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
  deployedModelId: String
    If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding Endpoint.traffic_split.

Returns
  Type: ExplainResponse

explainCallable()

public final UnaryCallable<ExplainRequest,ExplainResponse> explainCallable()

Perform an online explanation.

If deployed_model_id is specified, the corresponding DeployedModel must have explanation_spec populated. If deployed_model_id is not specified, all DeployedModels must have explanation_spec populated. Only deployed AutoML tabular Models have explanation_spec.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   ExplainRequest request =
       ExplainRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .addAllInstances(new ArrayList<Value>())
           .setParameters(Value.newBuilder().setBoolValue(true).build())
           .setExplanationSpecOverride(ExplanationSpecOverride.newBuilder().build())
           .setDeployedModelId("deployedModelId-1817547906")
           .build();
   ApiFuture<ExplainResponse> future =
       predictionServiceClient.explainCallable().futureCall(request);
   // Do something.
   ExplainResponse response = future.get();
 }
 
Returns
  Type: UnaryCallable<ExplainRequest,ExplainResponse>

getIamPolicy(GetIamPolicyRequest request)

public final Policy getIamPolicy(GetIamPolicyRequest request)

Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   GetIamPolicyRequest request =
       GetIamPolicyRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .setOptions(GetPolicyOptions.newBuilder().build())
           .build();
   Policy response = predictionServiceClient.getIamPolicy(request);
 }
 
Parameter
  request: com.google.iam.v1.GetIamPolicyRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: com.google.iam.v1.Policy

getIamPolicyCallable()

public final UnaryCallable<GetIamPolicyRequest,Policy> getIamPolicyCallable()

Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   GetIamPolicyRequest request =
       GetIamPolicyRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .setOptions(GetPolicyOptions.newBuilder().build())
           .build();
   ApiFuture<Policy> future = predictionServiceClient.getIamPolicyCallable().futureCall(request);
   // Do something.
   Policy response = future.get();
 }
 
Returns
  Type: UnaryCallable<com.google.iam.v1.GetIamPolicyRequest,com.google.iam.v1.Policy>

getLocation(GetLocationRequest request)

public final Location getLocation(GetLocationRequest request)

Gets information about a location.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   GetLocationRequest request = GetLocationRequest.newBuilder().setName("name3373707").build();
   Location response = predictionServiceClient.getLocation(request);
 }
 
Parameter
  request: com.google.cloud.location.GetLocationRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: com.google.cloud.location.Location

getLocationCallable()

public final UnaryCallable<GetLocationRequest,Location> getLocationCallable()

Gets information about a location.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   GetLocationRequest request = GetLocationRequest.newBuilder().setName("name3373707").build();
   ApiFuture<Location> future =
       predictionServiceClient.getLocationCallable().futureCall(request);
   // Do something.
   Location response = future.get();
 }
 
Returns
  Type: UnaryCallable<com.google.cloud.location.GetLocationRequest,com.google.cloud.location.Location>

getSettings()

public final PredictionServiceSettings getSettings()
Returns
  Type: PredictionServiceSettings

getStub()

public PredictionServiceStub getStub()
Returns
  Type: PredictionServiceStub

isShutdown()

public boolean isShutdown()
Returns
  Type: boolean

isTerminated()

public boolean isTerminated()
Returns
  Type: boolean

listLocations(ListLocationsRequest request)

public final PredictionServiceClient.ListLocationsPagedResponse listLocations(ListLocationsRequest request)

Lists information about the supported locations for this service.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   ListLocationsRequest request =
       ListLocationsRequest.newBuilder()
           .setName("name3373707")
           .setFilter("filter-1274492040")
           .setPageSize(883849137)
           .setPageToken("pageToken873572522")
           .build();
   for (Location element : predictionServiceClient.listLocations(request).iterateAll()) {
     // doThingsWith(element);
   }
 }
 
Parameter
  request: com.google.cloud.location.ListLocationsRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: PredictionServiceClient.ListLocationsPagedResponse

listLocationsCallable()

public final UnaryCallable<ListLocationsRequest,ListLocationsResponse> listLocationsCallable()

Lists information about the supported locations for this service.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   ListLocationsRequest request =
       ListLocationsRequest.newBuilder()
           .setName("name3373707")
           .setFilter("filter-1274492040")
           .setPageSize(883849137)
           .setPageToken("pageToken873572522")
           .build();
   while (true) {
     ListLocationsResponse response =
         predictionServiceClient.listLocationsCallable().call(request);
     for (Location element : response.getLocationsList()) {
       // doThingsWith(element);
     }
     String nextPageToken = response.getNextPageToken();
     if (!Strings.isNullOrEmpty(nextPageToken)) {
       request = request.toBuilder().setPageToken(nextPageToken).build();
     } else {
       break;
     }
   }
 }
 
Returns
  Type: UnaryCallable<com.google.cloud.location.ListLocationsRequest,com.google.cloud.location.ListLocationsResponse>

listLocationsPagedCallable()

public final UnaryCallable<ListLocationsRequest,PredictionServiceClient.ListLocationsPagedResponse> listLocationsPagedCallable()

Lists information about the supported locations for this service.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   ListLocationsRequest request =
       ListLocationsRequest.newBuilder()
           .setName("name3373707")
           .setFilter("filter-1274492040")
           .setPageSize(883849137)
           .setPageToken("pageToken873572522")
           .build();
   ApiFuture<Location> future =
       predictionServiceClient.listLocationsPagedCallable().futureCall(request);
   // Do something.
   for (Location element : future.get().iterateAll()) {
     // doThingsWith(element);
   }
 }
 
Returns
  Type: UnaryCallable<com.google.cloud.location.ListLocationsRequest,ListLocationsPagedResponse>

predict(EndpointName endpoint, List<Value> instances, Value parameters)

public final PredictResponse predict(EndpointName endpoint, List<Value> instances, Value parameters)

Perform an online prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();
   PredictResponse response = predictionServiceClient.predict(endpoint, instances, parameters);
 }
 
Parameters
  endpoint: EndpointName
    Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  instances: List<Value>
    Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
  parameters: Value
    The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

Returns
  Type: PredictResponse

predict(PredictRequest request)

public final PredictResponse predict(PredictRequest request)

Perform an online prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   PredictRequest request =
       PredictRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .addAllInstances(new ArrayList<Value>())
           .setParameters(Value.newBuilder().setBoolValue(true).build())
           .build();
   PredictResponse response = predictionServiceClient.predict(request);
 }
 
Parameter
  request: PredictRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: PredictResponse

predict(String endpoint, List<Value> instances, Value parameters)

public final PredictResponse predict(String endpoint, List<Value> instances, Value parameters)

Perform an online prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   String endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString();
   List<Value> instances = new ArrayList<>();
   Value parameters = Value.newBuilder().setBoolValue(true).build();
   PredictResponse response = predictionServiceClient.predict(endpoint, instances, parameters);
 }
 
Parameters
  endpoint: String
    Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  instances: List<Value>
    Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
  parameters: Value
    The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

Returns
  Type: PredictResponse

predictCallable()

public final UnaryCallable<PredictRequest,PredictResponse> predictCallable()

Perform an online prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   PredictRequest request =
       PredictRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .addAllInstances(new ArrayList<Value>())
           .setParameters(Value.newBuilder().setBoolValue(true).build())
           .build();
   ApiFuture<PredictResponse> future =
       predictionServiceClient.predictCallable().futureCall(request);
   // Do something.
   PredictResponse response = future.get();
 }
 
Returns
  Type: UnaryCallable<PredictRequest,PredictResponse>

rawPredict(EndpointName endpoint, HttpBody httpBody)

public final HttpBody rawPredict(EndpointName endpoint, HttpBody httpBody)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   HttpBody httpBody = HttpBody.newBuilder().build();
   HttpBody response = predictionServiceClient.rawPredict(endpoint, httpBody);
 }
 
Parameters
  endpoint: EndpointName
    Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  httpBody: com.google.api.HttpBody
    The prediction input. Supports HTTP headers and an arbitrary data payload.
    A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.
    You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.

Returns
  Type: com.google.api.HttpBody
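
The sample above sends an empty HttpBody; in practice the body carries the serialized prediction input. The sketch below is illustrative only: the JSON content type and payload are assumptions for demonstration, not values defined by this API.

 // Illustrative sketch: rawPredict with a JSON payload (content type and payload are assumptions).
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   EndpointName endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
   HttpBody httpBody =
       HttpBody.newBuilder()
           .setContentType("application/json")
           .setData(ByteString.copyFromUtf8("{\"instances\": [{\"feature\": 1.0}]}"))
           .build();
   HttpBody response = predictionServiceClient.rawPredict(endpoint, httpBody);
   // The response data is returned as bytes; decode it according to its content type.
   String responseJson = response.getData().toStringUtf8();
 }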

rawPredict(RawPredictRequest request)

public final HttpBody rawPredict(RawPredictRequest request)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   RawPredictRequest request =
       RawPredictRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .setHttpBody(HttpBody.newBuilder().build())
           .build();
   HttpBody response = predictionServiceClient.rawPredict(request);
 }
 
Parameter
  request: RawPredictRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: com.google.api.HttpBody

rawPredict(String endpoint, HttpBody httpBody)

public final HttpBody rawPredict(String endpoint, HttpBody httpBody)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   String endpoint = EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString();
   HttpBody httpBody = HttpBody.newBuilder().build();
   HttpBody response = predictionServiceClient.rawPredict(endpoint, httpBody);
 }
 
Parameters
  endpoint: String
    Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  httpBody: com.google.api.HttpBody
    The prediction input. Supports HTTP headers and an arbitrary data payload.
    A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the RawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.
    You can specify the schema for each instance in the predict_schemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the RawPredict method.

Returns
  Type: com.google.api.HttpBody

rawPredictCallable()

public final UnaryCallable<RawPredictRequest,HttpBody> rawPredictCallable()

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   RawPredictRequest request =
       RawPredictRequest.newBuilder()
           .setEndpoint(EndpointName.of("[PROJECT]", "[LOCATION]", "[ENDPOINT]").toString())
           .setHttpBody(HttpBody.newBuilder().build())
           .build();
   ApiFuture<HttpBody> future = predictionServiceClient.rawPredictCallable().futureCall(request);
   // Do something.
   HttpBody response = future.get();
 }
 
Returns
  Type: UnaryCallable<RawPredictRequest,com.google.api.HttpBody>

setIamPolicy(SetIamPolicyRequest request)

public final Policy setIamPolicy(SetIamPolicyRequest request)

Sets the access control policy on the specified resource. Replaces any existing policy.

Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   SetIamPolicyRequest request =
       SetIamPolicyRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .setPolicy(Policy.newBuilder().build())
           .setUpdateMask(FieldMask.newBuilder().build())
           .build();
   Policy response = predictionServiceClient.setIamPolicy(request);
 }
 
Parameter
  request: com.google.iam.v1.SetIamPolicyRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: com.google.iam.v1.Policy

setIamPolicyCallable()

public final UnaryCallable<SetIamPolicyRequest,Policy> setIamPolicyCallable()

Sets the access control policy on the specified resource. Replaces any existing policy.

Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   SetIamPolicyRequest request =
       SetIamPolicyRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .setPolicy(Policy.newBuilder().build())
           .setUpdateMask(FieldMask.newBuilder().build())
           .build();
   ApiFuture<Policy> future = predictionServiceClient.setIamPolicyCallable().futureCall(request);
   // Do something.
   Policy response = future.get();
 }
 
Returns
  Type: UnaryCallable<com.google.iam.v1.SetIamPolicyRequest,com.google.iam.v1.Policy>

shutdown()

public void shutdown()

shutdownNow()

public void shutdownNow()

testIamPermissions(TestIamPermissionsRequest request)

public final TestIamPermissionsResponse testIamPermissions(TestIamPermissionsRequest request)

Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.

Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   TestIamPermissionsRequest request =
       TestIamPermissionsRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .addAllPermissions(new ArrayList<String>())
           .build();
   TestIamPermissionsResponse response = predictionServiceClient.testIamPermissions(request);
 }
 
Parameter
  request: com.google.iam.v1.TestIamPermissionsRequest
    The request object containing all of the parameters for the API call.

Returns
  Type: com.google.iam.v1.TestIamPermissionsResponse

testIamPermissionsCallable()

public final UnaryCallable<TestIamPermissionsRequest,TestIamPermissionsResponse> testIamPermissionsCallable()

Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.

Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.

Sample code:


 // This snippet has been automatically generated for illustrative purposes only.
 // It may require modifications to work in your environment.
 try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
   TestIamPermissionsRequest request =
       TestIamPermissionsRequest.newBuilder()
           .setResource(
               EntityTypeName.of("[PROJECT]", "[LOCATION]", "[FEATURESTORE]", "[ENTITY_TYPE]")
                   .toString())
           .addAllPermissions(new ArrayList<String>())
           .build();
   ApiFuture<TestIamPermissionsResponse> future =
       predictionServiceClient.testIamPermissionsCallable().futureCall(request);
   // Do something.
   TestIamPermissionsResponse response = future.get();
 }
 
Returns
  Type: UnaryCallable<com.google.iam.v1.TestIamPermissionsRequest,com.google.iam.v1.TestIamPermissionsResponse>