Class PredictionServiceClient (2.22.0)

AutoML Prediction API.

On any input that is documented to expect a string parameter in snake_case or dash-case, either of those cases is accepted.

Equality

Instances of this class created via copy-construction or copy-assignment always compare equal. Instances created with equal std::shared_ptr<*Connection> objects compare equal. Objects that compare equal share the same underlying resources.
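For illustration, a minimal sketch of these semantics (the google/cloud/automl/v1/prediction_client.h header, the automl_v1 namespace, and the MakePredictionServiceConnection() factory are the usual google-cloud-cpp conventions and are assumptions here, not defined in this section): a copy shares the original's connection and compares equal to it.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <cassert>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      auto client = automl::PredictionServiceClient(
          automl::MakePredictionServiceConnection());
      auto copy = client;       // the copy shares the underlying *Connection
      assert(copy == client);   // copies always compare equal
    }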

Performance

Creating a new instance of this class is a relatively expensive operation; new objects establish new connections to the service. In contrast, copy-construction, move-construction, and the corresponding assignment operations are relatively efficient as the copies share all underlying resources.

Thread Safety

Concurrent access to different instances of this class, even if they compare equal, is guaranteed to work. Two or more threads operating on the same instance of this class are not guaranteed to work. Since copy-construction and move-construction are relatively efficient operations, consider using such a copy when using this class from multiple threads.
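A minimal sketch of that recommendation, assuming the same header, namespace, and factory function as above: one client is constructed, and each worker thread receives its own cheap copy, so no two threads ever operate on the same instance.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <thread>
    #include <vector>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      automl::PredictionServiceClient client(
          automl::MakePredictionServiceConnection());

      std::vector<std::thread> workers;
      for (int i = 0; i != 4; ++i) {
        // Capture a per-thread copy; all copies share one connection.
        workers.emplace_back([copy = client]() mutable {
          // ... issue Predict() calls through `copy` here ...
        });
      }
      for (auto& t : workers) t.join();
    }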

Constructors

PredictionServiceClient(PredictionServiceClient const &)

Copy and move support

Parameter
Name | Description
PredictionServiceClient const &

PredictionServiceClient(PredictionServiceClient &&)

Copy and move support

Parameter
Name | Description
PredictionServiceClient &&

PredictionServiceClient(std::shared_ptr< PredictionServiceConnection >, Options)

Parameters
Name | Description
connection std::shared_ptr< PredictionServiceConnection >
opts Options
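As an illustrative sketch, options passed to the connection factory configure the connection itself (endpoint, credentials, retry and polling policies), while options passed to this constructor become per-call defaults. The endpoint value below is a placeholder, and the factory function and headers are the usual google-cloud-cpp conventions rather than part of this reference.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include "google/cloud/common_options.h"

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      namespace gc = ::google::cloud;

      // Connection-level options configure the channel to the service.
      auto connection = automl::MakePredictionServiceConnection(
          gc::Options{}.set<gc::EndpointOption>("eu-automl.googleapis.com"));

      // Client-level options become the default for every RPC on this client.
      auto client = automl::PredictionServiceClient(connection, gc::Options{});
    }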

Operators

operator=(PredictionServiceClient const &)

Copy and move support

Parameter
Name | Description
PredictionServiceClient const &
Returns
Type | Description
PredictionServiceClient &

operator=(PredictionServiceClient &&)

Copy and move support

Parameter
Name | Description
PredictionServiceClient &&
Returns
Type | Description
PredictionServiceClient &

Functions

Predict(std::string const &, google::cloud::automl::v1::ExamplePayload const &, std::map< std::string, std::string > const &, Options)

Perform an online prediction.

The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name | Description
name std::string const &

Required. Name of the model requested to serve the prediction.

payload google::cloud::automl::v1::ExamplePayload const &

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params std::map< std::string, std::string > const &

Additional domain-specific parameters; any string must be up to 25,000 characters long.
For more information, see PredictRequest.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::automl::v1::PredictResponse >

the result of the RPC. The response message type (google.cloud.automl.v1.PredictResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
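A hypothetical usage sketch for this overload, here against a text classification model; the project, model ID, text content, and the score_threshold parameter are placeholder values, and the header and namespace follow the usual google-cloud-cpp conventions.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <iostream>
    #include <string>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      auto client = automl::PredictionServiceClient(
          automl::MakePredictionServiceConnection());

      // Model names use the form
      //   projects/<project>/locations/<location>/models/<model-id>
      std::string const name =
          "projects/my-project/locations/us-central1/models/TCN0000000000";

      google::cloud::automl::v1::ExamplePayload payload;
      payload.mutable_text_snippet()->set_content("Text to classify.");
      payload.mutable_text_snippet()->set_mime_type("text/plain");

      // `score_threshold` is one example of a domain-specific parameter.
      auto response =
          client.Predict(name, payload, {{"score_threshold", "0.5"}});
      if (!response) {
        std::cerr << "Predict failed: " << response.status() << "\n";
        return 1;
      }
      for (auto const& annotation : response->payload()) {
        std::cout << annotation.display_name() << "\n";
      }
    }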

Predict(google::cloud::automl::v1::PredictRequest const &, Options)

Perform an online prediction.

The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name | Description
request google::cloud::automl::v1::PredictRequest const &

Unary RPCs, such as the one wrapped by this function, receive a single request proto message which includes all the inputs for the RPC. In this case, the proto message is a google.cloud.automl.v1.PredictRequest. Proto messages are converted to C++ classes by Protobuf, using the Protobuf mapping rules.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::automl::v1::PredictResponse >

the result of the RPC. The response message type (google.cloud.automl.v1.PredictResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
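A hypothetical sketch of this request-based overload; the model name and text are placeholders, and DebugString() is used only as the simplest way to inspect the response.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <iostream>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      auto client = automl::PredictionServiceClient(
          automl::MakePredictionServiceConnection());

      // All inputs for the RPC live in the request proto.
      google::cloud::automl::v1::PredictRequest request;
      request.set_name(
          "projects/my-project/locations/us-central1/models/TRL0000000000");
      request.mutable_payload()->mutable_text_snippet()->set_content(
          "Hello world");
      request.mutable_payload()->mutable_text_snippet()->set_mime_type(
          "text/plain");

      auto response = client.Predict(request);
      if (!response) {
        std::cerr << "Predict failed: " << response.status() << "\n";
        return 1;
      }
      std::cout << response->DebugString() << "\n";
    }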

BatchPredict(std::string const &, google::cloud::automl::v1::BatchPredictInputConfig const &, google::cloud::automl::v1::BatchPredictOutputConfig const &, std::map< std::string, std::string > const &, Options)

Perform a batch prediction.

Unlike the online Predict, the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. Users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name | Description
name std::string const &

Required. Name of the model requested to serve the batch prediction.

input_config google::cloud::automl::v1::BatchPredictInputConfig const &

Required. The input configuration for batch prediction.

output_config google::cloud::automl::v1::BatchPredictOutputConfig const &

Required. The configuration specifying where output predictions should be written.

params std::map< std::string, std::string > const &

Additional domain-specific parameters for the predictions; any string must be up to 25,000 characters long.
For more information, see BatchPredictRequest.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
future< StatusOr< google::cloud::automl::v1::BatchPredictResult > >

A future that becomes satisfied when the LRO (Long Running Operation) completes or the polling policy in effect for this call is exhausted. The future is satisfied with an error if the LRO completes with an error or the polling policy is exhausted. In this case the StatusOr returned by the future contains the error. If the LRO completes successfully the value of the future contains the LRO's result. For this RPC the result is a google.cloud.automl.v1.BatchPredictResult proto message. The C++ class representing this message is created by Protobuf, using the Protobuf mapping rules.
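A hypothetical sketch of this overload: the model name and Cloud Storage URIs are placeholders, and calling .get() on the returned future blocks until the long-running operation completes or the polling policy is exhausted.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <iostream>
    #include <string>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      auto client = automl::PredictionServiceClient(
          automl::MakePredictionServiceConnection());

      std::string const name =
          "projects/my-project/locations/us-central1/models/IOD0000000000";

      google::cloud::automl::v1::BatchPredictInputConfig input_config;
      input_config.mutable_gcs_source()->add_input_uris(
          "gs://my-bucket/batch-inputs.csv");

      google::cloud::automl::v1::BatchPredictOutputConfig output_config;
      output_config.mutable_gcs_destination()->set_output_uri_prefix(
          "gs://my-bucket/batch-outputs/");

      // Block until the LRO finishes; the StatusOr carries any error.
      auto result =
          client.BatchPredict(name, input_config, output_config, {}).get();
      if (!result) {
        std::cerr << "BatchPredict failed: " << result.status() << "\n";
        return 1;
      }
      std::cout << "Batch prediction done; results were written to the "
                   "configured GCS destination.\n";
    }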

BatchPredict(google::cloud::automl::v1::BatchPredictRequest const &, Options)

Perform a batch prediction.

Unlike the online Predict, the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. Users can poll the operation result via the GetOperation method. Once the operation is done, BatchPredictResult is returned in the response field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name | Description
request google::cloud::automl::v1::BatchPredictRequest const &

Unary RPCs, such as the one wrapped by this function, receive a single request proto message which includes all the inputs for the RPC. In this case, the proto message is a google.cloud.automl.v1.BatchPredictRequest. Proto messages are converted to C++ classes by Protobuf, using the Protobuf mapping rules.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
future< StatusOr< google::cloud::automl::v1::BatchPredictResult > >

A future that becomes satisfied when the LRO (Long Running Operation) completes or the polling policy in effect for this call is exhausted. The future is satisfied with an error if the LRO completes with an error or the polling policy is exhausted. In this case the StatusOr returned by the future contains the error. If the LRO completes successfully the value of the future contains the LRO's result. For this RPC the result is a google.cloud.automl.v1.BatchPredictResult proto message. The C++ class representing this message is created by Protobuf, using the Protobuf mapping rules.
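A hypothetical sketch of the request-based overload that attaches a continuation with future::then() instead of blocking immediately; the model name and URIs are placeholders.

    #include "google/cloud/automl/v1/prediction_client.h"
    #include <iostream>

    int main() {
      namespace automl = ::google::cloud::automl_v1;
      auto client = automl::PredictionServiceClient(
          automl::MakePredictionServiceConnection());

      google::cloud::automl::v1::BatchPredictRequest request;
      request.set_name(
          "projects/my-project/locations/us-central1/models/TBL0000000000");
      request.mutable_input_config()->mutable_gcs_source()->add_input_uris(
          "gs://my-bucket/rows.csv");
      request.mutable_output_config()
          ->mutable_gcs_destination()
          ->set_output_uri_prefix("gs://my-bucket/results/");

      // The continuation runs when the long-running operation completes,
      // whether it succeeded or failed.
      auto done = client.BatchPredict(request).then([](auto f) {
        auto result = f.get();
        if (!result) {
          std::cerr << "BatchPredict failed: " << result.status() << "\n";
          return;
        }
        std::cout << "Batch prediction completed.\n";
      });
      done.get();  // keep main() alive until the continuation has run
    }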