public class PredictionServiceClient implements BackgroundResource
Service Description: AutoML Prediction API.
On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
ModelName name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = ExamplePayload.newBuilder().build();
Map<String, String> params = new HashMap<>();
PredictResponse response = predictionServiceClient.predict(name, payload, params);
}
Note: close() needs to be called on the PredictionServiceClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
The surface of this class includes several types of Java methods for each of the API's methods:
- A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
- A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
- A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.
See the individual methods for example code.
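For instance, the predict API can be invoked in all three styles. The snippet below is a condensed sketch assembled from the per-method samples later on this page; it uses placeholder values and an empty payload, so it will need real values to run against a model.
// Sketch: the three call styles for the predict API, using placeholder values.
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
  ModelName name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]");
  ExamplePayload payload = ExamplePayload.newBuilder().build();

  // 1. Flattened method: request fields are passed as individual parameters.
  PredictResponse flattened =
      predictionServiceClient.predict(name, payload, new HashMap<String, String>());

  // 2. Request object method: the request is built explicitly before the call.
  PredictRequest request =
      PredictRequest.newBuilder()
          .setName(name.toString())
          .setPayload(payload)
          .putAllParams(new HashMap<String, String>())
          .build();
  PredictResponse fromRequestObject = predictionServiceClient.predict(request);

  // 3. Callable method: returns a callable that can be invoked asynchronously.
  ApiFuture<PredictResponse> future =
      predictionServiceClient.predictCallable().futureCall(request);
  PredictResponse fromCallable = future.get();
}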
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
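For example, the generated ModelName helper used throughout the samples on this page can both build and parse model resource names. A brief sketch of that usage (the accessor names follow the standard pattern for generated resource-name classes):
// Build a full resource name from its individual identifiers.
ModelName name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]");
String formatted = name.toString(); // "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]"

// Parse a name returned by the service back into its identifiers.
ModelName parsed = ModelName.parse(formatted);
String project = parsed.getProject();
String location = parsed.getLocation();
String model = parsed.getModel();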
This class can be customized by passing in a custom instance of PredictionServiceSettings to create(). For example:
To customize credentials:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
PredictionServiceSettings predictionServiceSettings =
PredictionServiceSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
.build();
PredictionServiceClient predictionServiceClient =
PredictionServiceClient.create(predictionServiceSettings);
To customize the endpoint:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
PredictionServiceSettings predictionServiceSettings =
PredictionServiceSettings.newBuilder().setEndpoint(myEndpoint).build();
PredictionServiceClient predictionServiceClient =
PredictionServiceClient.create(predictionServiceSettings);
To use REST (HTTP/1.1 and JSON) transport (instead of gRPC) for sending and receiving requests over the wire:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
PredictionServiceSettings predictionServiceSettings =
PredictionServiceSettings.newBuilder()
.setTransportChannelProvider(
PredictionServiceSettings.defaultHttpJsonTransportProviderBuilder().build())
.build();
PredictionServiceClient predictionServiceClient =
PredictionServiceClient.create(predictionServiceSettings);
Please refer to the GitHub repository's samples for more quickstart code snippets.
Implements
BackgroundResource
Static Methods
create()
public static final PredictionServiceClient create()
Constructs an instance of PredictionServiceClient with default settings.
Returns
Type | Description |
PredictionServiceClient |
Exceptions
Type | Description |
IOException |
create(PredictionServiceSettings settings)
public static final PredictionServiceClient create(PredictionServiceSettings settings)
Constructs an instance of PredictionServiceClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.
Parameter
Name | Description |
settings | PredictionServiceSettings |
Returns
Type | Description |
PredictionServiceClient |
Exceptions
Type | Description |
IOException |
create(PredictionServiceStub stub)
public static final PredictionServiceClient create(PredictionServiceStub stub)
Constructs an instance of PredictionServiceClient, using the given stub for making calls. This is for advanced usage - prefer using create(PredictionServiceSettings).
Parameter
Name | Description |
stub | PredictionServiceStub |
Returns
Type | Description |
PredictionServiceClient |
Constructors
PredictionServiceClient(PredictionServiceSettings settings)
protected PredictionServiceClient(PredictionServiceSettings settings)
Constructs an instance of PredictionServiceClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.
Parameter
Name | Description |
settings | PredictionServiceSettings |
PredictionServiceClient(PredictionServiceStub stub)
protected PredictionServiceClient(PredictionServiceStub stub)
Parameter
Name | Description |
stub | PredictionServiceStub |
Methods
awaitTermination(long duration, TimeUnit unit)
public boolean awaitTermination(long duration, TimeUnit unit)
Parameters
Name | Description |
duration | long |
unit | TimeUnit |
Returns
Type | Description |
boolean |
Exceptions
Type | Description |
InterruptedException |
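awaitTermination is part of the BackgroundResource lifecycle. A minimal sketch of a graceful shutdown using it together with shutdown() and shutdownNow() (both documented below) might look like this, assuming the client is not managed by try-with-resources:
// Sketch: graceful shutdown of a client outside try-with-resources.
PredictionServiceClient predictionServiceClient = PredictionServiceClient.create();
// ... issue predict/batchPredict calls ...
predictionServiceClient.shutdown();
if (!predictionServiceClient.awaitTermination(30, TimeUnit.SECONDS)) {
  // Pending work did not finish in time; force shutdown.
  predictionServiceClient.shutdownNow();
}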
batchPredictAsync(BatchPredictRequest request)
public final OperationFuture<BatchPredictResult,OperationMetadata> batchPredictAsync(BatchPredictRequest request)
Perform a batch prediction. Unlike the online Predict call, the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned; users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
BatchPredictRequest request =
BatchPredictRequest.newBuilder()
.setName(ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString())
.setInputConfig(BatchPredictInputConfig.newBuilder().build())
.setOutputConfig(BatchPredictOutputConfig.newBuilder().build())
.putAllParams(new HashMap<String, String>())
.build();
BatchPredictResult response = predictionServiceClient.batchPredictAsync(request).get();
}
Parameter
Name | Description |
request | BatchPredictRequest The request object containing all of the parameters for the API call. |
Returns
Type | Description |
OperationFuture<BatchPredictResult,OperationMetadata> |
batchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, Map<String,String> params)
public final OperationFuture<BatchPredictResult,OperationMetadata> batchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, Map<String,String> params)
Perform a batch prediction. Unlike the online Predict call, the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned; users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
ModelName name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = BatchPredictInputConfig.newBuilder().build();
BatchPredictOutputConfig outputConfig = BatchPredictOutputConfig.newBuilder().build();
Map<String, String> params = new HashMap<>();
BatchPredictResult response =
predictionServiceClient.batchPredictAsync(name, inputConfig, outputConfig, params).get();
}
Parameters
Name | Description |
name | ModelName Required. Name of the model requested to serve the batch prediction. |
inputConfig | BatchPredictInputConfig Required. The input configuration for batch prediction. |
outputConfig | BatchPredictOutputConfig Required. The configuration specifying where output predictions should be written. |
params | Map<String,String> Required. Additional domain-specific parameters for the predictions; any string must be at most 25,000 characters long.
feature_importance - (boolean) Whether feature importance should be populated in the returned TablesAnnotations. The default is false. |
Returns
Type | Description |
OperationFuture<BatchPredictResult,OperationMetadata> |
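For example, to request feature importance for a Tables model, the parameter described above would be set in the params map passed to batchPredictAsync (a sketch; applicable to Tables models only):
Map<String, String> params = new HashMap<>();
// Ask the service to populate feature importance in the returned TablesAnnotations.
params.put("feature_importance", "true");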
batchPredictAsync(String name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, Map<String,String> params)
public final OperationFuture<BatchPredictResult,OperationMetadata> batchPredictAsync(String name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, Map<String,String> params)
Perform a batch prediction. Unlike the online Predict call, the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned; users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
String name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString();
BatchPredictInputConfig inputConfig = BatchPredictInputConfig.newBuilder().build();
BatchPredictOutputConfig outputConfig = BatchPredictOutputConfig.newBuilder().build();
Map<String, String> params = new HashMap<>();
BatchPredictResult response =
predictionServiceClient.batchPredictAsync(name, inputConfig, outputConfig, params).get();
}
Parameters
Name | Description |
name | String Required. Name of the model requested to serve the batch prediction. |
inputConfig | BatchPredictInputConfig Required. The input configuration for batch prediction. |
outputConfig | BatchPredictOutputConfig Required. The configuration specifying where output predictions should be written. |
params | Map<String,String> Required. Additional domain-specific parameters for the predictions; any string must be at most 25,000 characters long.
feature_importance - (boolean) Whether feature importance should be populated in the returned TablesAnnotations. The default is false. |
Returns
Type | Description |
OperationFuture<BatchPredictResult,OperationMetadata> |
batchPredictCallable()
public final UnaryCallable<BatchPredictRequest,Operation> batchPredictCallable()
Perform a batch prediction. Unlike the online Predict call, the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned; users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
BatchPredictRequest request =
BatchPredictRequest.newBuilder()
.setName(ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString())
.setInputConfig(BatchPredictInputConfig.newBuilder().build())
.setOutputConfig(BatchPredictOutputConfig.newBuilder().build())
.putAllParams(new HashMap<String, String>())
.build();
ApiFuture<Operation> future =
predictionServiceClient.batchPredictCallable().futureCall(request);
// Do something.
Operation response = future.get();
}
Returns
Type | Description |
UnaryCallable<BatchPredictRequest,Operation> |
batchPredictOperationCallable()
public final OperationCallable<BatchPredictRequest,BatchPredictResult,OperationMetadata> batchPredictOperationCallable()
Perform a batch prediction. Unlike the online Predict call, the batch prediction result is not immediately available in the response. Instead, a long-running operation object is returned; users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML problems:
- Image Classification
- Image Object Detection
- Video Classification
- Video Object Tracking
- Text Extraction
- Tables
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
BatchPredictRequest request =
BatchPredictRequest.newBuilder()
.setName(ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString())
.setInputConfig(BatchPredictInputConfig.newBuilder().build())
.setOutputConfig(BatchPredictOutputConfig.newBuilder().build())
.putAllParams(new HashMap<String, String>())
.build();
OperationFuture<BatchPredictResult, OperationMetadata> future =
predictionServiceClient.batchPredictOperationCallable().futureCall(request);
// Do something.
BatchPredictResult response = future.get();
}
Returns
Type | Description |
OperationCallable<BatchPredictRequest,BatchPredictResult,OperationMetadata> |
close()
public final void close()
getHttpJsonOperationsClient()
public final OperationsClient getHttpJsonOperationsClient()
Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call.
Returns
Type | Description |
OperationsClient |
getOperationsClient()
public final OperationsClient getOperationsClient()
Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call.
Returns
Type | Description |
OperationsClient |
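For example, the name of a long-running operation started by a batch predict request can be polled manually through this client. A brief sketch (the operation name string below is a placeholder for a real name returned by the service):
// Sketch: manually polling a long-running operation by name.
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
  OperationsClient operationsClient = predictionServiceClient.getOperationsClient();
  // "[OPERATION_NAME]" is a placeholder for the name of an operation started earlier,
  // e.g. by a batch predict request.
  Operation operation = operationsClient.getOperation("[OPERATION_NAME]");
  if (operation.getDone()) {
    // When done, the BatchPredictResult is available in the operation's response field.
  }
}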
getSettings()
public final PredictionServiceSettings getSettings()
Returns
Type | Description |
PredictionServiceSettings |
getStub()
public PredictionServiceStub getStub()
Returns
Type | Description |
PredictionServiceStub |
isShutdown()
public boolean isShutdown()
Returns
Type | Description |
boolean |
isTerminated()
public boolean isTerminated()
Returns
Type | Description |
boolean |
predict(ModelName name, ExamplePayload payload, Map<String,String> params)
public final PredictResponse predict(ModelName name, ExamplePayload payload, Map<String,String> params)
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems and their expected request payloads:
- Image Classification - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Image Object Detection - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
- Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
- Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
- Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
- Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
ModelName name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = ExamplePayload.newBuilder().build();
Map<String, String> params = new HashMap<>();
PredictResponse response = predictionServiceClient.predict(name, payload, params);
}
Parameters
Name | Description |
name | ModelName Required. Name of the model requested to serve the prediction. |
payload | ExamplePayload Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
params | Map<String,String> Additional domain-specific parameters; any string must be at most 25,000 characters long.
* For Image Object Detection:
|
Returns
Type | Description |
PredictResponse |
predict(PredictRequest request)
public final PredictResponse predict(PredictRequest request)
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems and their expected request payloads:
- Image Classification - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Image Object Detection - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
- Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
- Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
- Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
- Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
PredictRequest request =
PredictRequest.newBuilder()
.setName(ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString())
.setPayload(ExamplePayload.newBuilder().build())
.putAllParams(new HashMap<String, String>())
.build();
PredictResponse response = predictionServiceClient.predict(request);
}
Parameter
Name | Description |
request | PredictRequest The request object containing all of the parameters for the API call. |
Returns
Type | Description |
PredictResponse |
predict(String name, ExamplePayload payload, Map<String,String> params)
public final PredictResponse predict(String name, ExamplePayload payload, Map<String,String> params)
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems and their expected request payloads:
- Image Classification - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Image Object Detection - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
- Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
- Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
- Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
- Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
String name = ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString();
ExamplePayload payload = ExamplePayload.newBuilder().build();
Map<String, String> params = new HashMap<>();
PredictResponse response = predictionServiceClient.predict(name, payload, params);
}
Parameters
Name | Description |
name | String Required. Name of the model requested to serve the prediction. |
payload | ExamplePayload Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve. |
params | Map<String,String> Additional domain-specific parameters; any string must be at most 25,000 characters long.
* For Image Object Detection:
|
Returns
Type | Description |
PredictResponse |
predictCallable()
public final UnaryCallable<PredictRequest,PredictResponse> predictCallable()
Perform an online prediction. The prediction result is returned directly in the response. Available for the following ML problems and their expected request payloads:
- Image Classification - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Image Object Detection - Image in .JPEG, .GIF, or .PNG format, image_bytes up to 30MB.
- Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
- Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
- Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
- Tables - Row, with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
- Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (PredictionServiceClient predictionServiceClient = PredictionServiceClient.create()) {
PredictRequest request =
PredictRequest.newBuilder()
.setName(ModelName.of("[PROJECT]", "[LOCATION]", "[MODEL]").toString())
.setPayload(ExamplePayload.newBuilder().build())
.putAllParams(new HashMap<String, String>())
.build();
ApiFuture<PredictResponse> future =
predictionServiceClient.predictCallable().futureCall(request);
// Do something.
PredictResponse response = future.get();
}
Returns
Type | Description |
UnaryCallable<PredictRequest,PredictResponse> |
shutdown()
public void shutdown()
shutdownNow()
public void shutdownNow()