Class LlmUtilityServiceClient (2.23.0-rc)

Service for LLM related utility functions.

Equality

Instances of this class created via copy-construction or copy-assignment always compare equal. Instances created with equal std::shared_ptr<*Connection> objects compare equal. Objects that compare equal share the same underlying resources.

Performance

Creating a new instance of this class is a relatively expensive operation because new objects establish new connections to the service. In contrast, copy-construction, move-construction, and the corresponding assignment operations are relatively efficient, as the copies share all underlying resources.

Thread Safety

Concurrent access to different instances of this class, even if they compare equal, is guaranteed to work. Operating on the same instance from two or more threads is not guaranteed to work. Since copy-construction and move-construction are relatively efficient operations, consider giving each thread its own copy when using this class from multiple threads.
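
For illustration, a minimal sketch of that pattern, where each worker thread operates on its own cheap copy of the client. The header path and namespace follow the usual google-cloud-cpp conventions and should be treated as assumptions; the CountTokens call shape matches the overload documented below.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include <string>
#include <thread>
#include <vector>

namespace aiplatform = ::google::cloud::aiplatform_v1;

void CountInParallel(aiplatform::LlmUtilityServiceClient client,
                     std::vector<std::string> const& endpoints) {
  std::vector<std::thread> workers;
  for (auto const& endpoint : endpoints) {
    // Give each thread its own copy. Copies share the underlying
    // *Connection, so copying is cheap, and concurrent access to
    // different instances is guaranteed to work.
    workers.emplace_back([copy = client, endpoint]() mutable {
      // Instances elided in this sketch; a real call passes instances
      // matching the model's prediction schema.
      auto response = copy.CountTokens(endpoint, {});
      if (!response) { /* handle the error */ }
    });
  }
  for (auto& t : workers) t.join();
}
```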

Constructors

LlmUtilityServiceClient(LlmUtilityServiceClient const &)

Copy and move support

Parameter
Name | Description
LlmUtilityServiceClient const &

LlmUtilityServiceClient(LlmUtilityServiceClient &&)

Copy and move support

Parameter
Name | Description
LlmUtilityServiceClient &&

LlmUtilityServiceClient(std::shared_ptr< LlmUtilityServiceConnection >, Options)

Parameters
Name | Description
connection std::shared_ptr< LlmUtilityServiceConnection >
opts Options
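
As a sketch only: construction typically goes through a connection factory. The `MakeLlmUtilityServiceConnection()` function, its location argument, and the header path below follow the common google-cloud-cpp pattern and are assumptions, not taken from this page.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include "google/cloud/options.h"

int main() {
  namespace aiplatform = ::google::cloud::aiplatform_v1;
  // Assumed factory: aiplatform services use regional endpoints, so the
  // connection is created for a location ("us-central1" is a placeholder).
  auto connection =
      aiplatform::MakeLlmUtilityServiceConnection("us-central1");
  // Class-level options (e.g. retry and backoff policies) set here can be
  // overridden on each call via the `opts` parameter.
  auto client = aiplatform::LlmUtilityServiceClient(connection,
                                                    google::cloud::Options{});
  (void)client;
  return 0;
}
```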

Operators

operator=(LlmUtilityServiceClient const &)

Copy and move support

Parameter
Name | Description
LlmUtilityServiceClient const &
Returns
Type | Description
LlmUtilityServiceClient &

operator=(LlmUtilityServiceClient &&)

Copy and move support

Parameter
Name | Description
LlmUtilityServiceClient &&
Returns
Type | Description
LlmUtilityServiceClient &

Functions

CountTokens(std::string const &, std::vector< google::protobuf::Value > const &, Options)

Perform token counting.

Parameters
Name | Description
endpoint std::string const &

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances std::vector< google::protobuf::Value > const &

Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::aiplatform::v1::CountTokensResponse >

the result of the RPC. The response message type (google.cloud.aiplatform.v1.CountTokensResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
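
A usage sketch of this overload. The resource name is a placeholder, the instance payload is illustrative only (the real schema depends on the model), and `total_tokens()` is an assumed accessor on the response message.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include <google/protobuf/struct.pb.h>
#include <iostream>
#include <string>

void CountTokensSample(
    google::cloud::aiplatform_v1::LlmUtilityServiceClient& client) {
  // Placeholder resource name in the documented format.
  std::string const endpoint =
      "projects/my-project/locations/us-central1/endpoints/my-endpoint";

  // One instance; it must match the prediction schema of the underlying
  // model (a bare string is used here purely for illustration).
  google::protobuf::Value instance;
  instance.set_string_value("The quick brown fox jumps over the lazy dog.");

  auto response = client.CountTokens(endpoint, {instance});
  if (!response) throw std::move(response).status();
  // `total_tokens()` is an assumed field of CountTokensResponse.
  std::cout << "total tokens: " << response->total_tokens() << "\n";
}
```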

CountTokens(google::cloud::aiplatform::v1::CountTokensRequest const &, Options)

Perform token counting.

Parameters
Name | Description
request google::cloud::aiplatform::v1::CountTokensRequest const &

Unary RPCs, such as the one wrapped by this function, receive a single request proto message which includes all the inputs for the RPC. In this case, the proto message is a google.cloud.aiplatform.v1.CountTokensRequest. Proto messages are converted to C++ classes by Protobuf, using the Protobuf mapping rules.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::aiplatform::v1::CountTokensResponse >

the result of the RPC. The response message type (google.cloud.aiplatform.v1.CountTokensResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
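
And a sketch of the same call built from a request proto. `set_endpoint()` and `add_instances()` are assumed accessors generated from the CountTokensRequest fields named in the other overload.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include <iostream>

void CountTokensWithRequest(
    google::cloud::aiplatform_v1::LlmUtilityServiceClient& client) {
  google::cloud::aiplatform::v1::CountTokensRequest request;
  // Placeholder resource name in the documented format.
  request.set_endpoint(
      "projects/my-project/locations/us-central1/endpoints/my-endpoint");
  // `add_instances()` appends a google::protobuf::Value instance.
  request.add_instances()->set_string_value("Hello, world!");

  auto response = client.CountTokens(request);
  if (!response) throw std::move(response).status();
  std::cout << response->DebugString() << "\n";
}
```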

ComputeTokens(std::string const &, std::vector< google::protobuf::Value > const &, Options)

Return a list of tokens based on the input text.

Parameters
Name | Description
endpoint std::string const &

Required. The name of the Endpoint requested to get lists of tokens and token ids.

instances std::vector< google::protobuf::Value > const &

Required. The instances that are the input to the token computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::aiplatform::v1::ComputeTokensResponse >

the result of the RPC. The response message type (google.cloud.aiplatform.v1.ComputeTokensResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
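
A usage sketch of this overload, mirroring the CountTokens example above. The resource name and instance payload are placeholders, and the response is printed whole rather than assuming specific fields.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include <google/protobuf/struct.pb.h>
#include <iostream>
#include <string>

void ComputeTokensSample(
    google::cloud::aiplatform_v1::LlmUtilityServiceClient& client) {
  // Placeholder resource name.
  std::string const endpoint =
      "projects/my-project/locations/us-central1/endpoints/my-endpoint";

  // The instances follow the prediction schema of the text model.
  google::protobuf::Value instance;
  instance.set_string_value("Compute tokens for this sentence.");

  auto response = client.ComputeTokens(endpoint, {instance});
  if (!response) throw std::move(response).status();
  std::cout << response->DebugString() << "\n";
}
```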

ComputeTokens(google::cloud::aiplatform::v1::ComputeTokensRequest const &, Options)

Return a list of tokens based on the input text.

Parameters
Name | Description
request google::cloud::aiplatform::v1::ComputeTokensRequest const &

Unary RPCs, such as the one wrapped by this function, receive a single request proto message which includes all the inputs for the RPC. In this case, the proto message is a google.cloud.aiplatform.v1.ComputeTokensRequest. Proto messages are converted to C++ classes by Protobuf, using the Protobuf mapping rules.

opts Options

Optional. Override the class-level options, such as retry and backoff policies.

Returns
Type | Description
StatusOr< google::cloud::aiplatform::v1::ComputeTokensResponse >

the result of the RPC. The response message type (google.cloud.aiplatform.v1.ComputeTokensResponse) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the StatusOr contains the error details.
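
Finally, a sketch of the request-proto overload. As above, `set_endpoint()` and `add_instances()` are assumed accessors generated from the ComputeTokensRequest fields.

```cpp
#include "google/cloud/aiplatform/v1/llm_utility_client.h"
#include <iostream>

void ComputeTokensWithRequest(
    google::cloud::aiplatform_v1::LlmUtilityServiceClient& client) {
  google::cloud::aiplatform::v1::ComputeTokensRequest request;
  // Placeholder resource name.
  request.set_endpoint(
      "projects/my-project/locations/us-central1/endpoints/my-endpoint");
  request.add_instances()->set_string_value("Hello, world!");

  auto response = client.ComputeTokens(request);
  if (!response) throw std::move(response).status();
  std::cout << response->DebugString() << "\n";
}
```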