API for using Large Models that generate multimodal content and have additional capabilities beyond text generation.
Equality
Instances of this class created via copy-construction or copy-assignment always compare equal. Instances created with equal `std::shared_ptr<*Connection>` objects compare equal. Objects that compare equal share the same underlying resources.
Performance
Creating a new instance of this class is a relatively expensive operation because new objects establish new connections to the service. In contrast, copy-construction, move-construction, and the corresponding assignment operations are relatively efficient, as the copies share all underlying resources.
Thread Safety
Concurrent access to different instances of this class, even if they compare equal, is guaranteed to work. Two or more threads operating on the same instance of this class are not guaranteed to work. Since copy-construction and move-construction are relatively efficient operations, consider using such a copy when using this class from multiple threads.
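The per-thread-copy pattern described above can be sketched as follows. This is a minimal illustration only: the header path and namespace below are assumed from google-cloud-cpp conventions, not verified names.

```cpp
// Sketch: give each worker thread its own copy of the client. Copies are
// cheap and share the underlying connection, so each thread can issue RPCs
// on its copy without synchronizing with the others.
#include "google/cloud/generativelanguage/v1/generative_client.h"  // assumed path

#include <thread>
#include <vector>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void RunWorkers(gl::GenerativeServiceClient const& client) {
  std::vector<std::thread> workers;
  for (int i = 0; i != 4; ++i) {
    // Capture by value: each lambda holds its own copy of the client.
    workers.emplace_back([client]() mutable {
      // ... issue RPCs on this thread's copy of `client` ...
    });
  }
  for (auto& t : workers) t.join();
}
```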
Constructors
GenerativeServiceClient(GenerativeServiceClient const &)

Copy and move support.

Parameter:

| Name | Description |
|---|---|
| | `GenerativeServiceClient const &` |
GenerativeServiceClient(GenerativeServiceClient &&)

Copy and move support.

Parameter:

| Name | Description |
|---|---|
| | `GenerativeServiceClient &&` |
GenerativeServiceClient(std::shared_ptr< GenerativeServiceConnection >, Options)

Parameters:

| Name | Description |
|---|---|
| `connection` | `std::shared_ptr< GenerativeServiceConnection >` |
| `opts` | `Options` |
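Construction from a connection typically looks like the sketch below. The `MakeGenerativeServiceConnection()` factory name is assumed from the usual google-cloud-cpp `Make*Connection()` convention, as are the header path and namespace.

```cpp
#include "google/cloud/generativelanguage/v1/generative_client.h"  // assumed path
#include "google/cloud/options.h"

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

int main() {
  // Establishing the connection is the expensive step; reuse it.
  auto connection = gl::MakeGenerativeServiceConnection();  // assumed factory

  // Client with default options.
  auto client = gl::GenerativeServiceClient(connection);

  // Client with explicit class-level options (e.g. retry/backoff policies
  // could be set on the Options object).
  auto configured =
      gl::GenerativeServiceClient(connection, google::cloud::Options{});
}
```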
Operators
operator=(GenerativeServiceClient const &)

Copy and move support.

Parameter:

| Name | Description |
|---|---|
| | `GenerativeServiceClient const &` |

Returns:

| Type | Description |
|---|---|
| `GenerativeServiceClient &` | |
operator=(GenerativeServiceClient &&)

Copy and move support.

Parameter:

| Name | Description |
|---|---|
| | `GenerativeServiceClient &&` |

Returns:

| Type | Description |
|---|---|
| `GenerativeServiceClient &` | |
Functions
GenerateContent(std::string const &, std::vector< google::ai::generativelanguage::v1::Content > const &, Options)

Generates a model response given an input `GenerateContentRequest`.

Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.

Parameters:

| Name | Description |
|---|---|
| `model` | `std::string const &`. Required. The name of the `Model` to use. |
| `contents` | `std::vector< google::ai::generativelanguage::v1::Content > const &`. Required. The content of the current conversation with the model. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::GenerateContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.GenerateContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
GenerateContent(google::ai::generativelanguage::v1::GenerateContentRequest const &, Options)

Generates a model response given an input `GenerateContentRequest`.

Refer to the text generation guide for detailed usage information. Input capabilities differ between models, including tuned models. Refer to the model guide and tuning guide for details.

Parameters:

| Name | Description |
|---|---|
| `request` | `google::ai::generativelanguage::v1::GenerateContentRequest const &`. Unary RPCs, such as the one wrapped by this function, receive a single `request` proto message which includes all the inputs for the RPC. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::GenerateContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.GenerateContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
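A hypothetical call to `GenerateContent` is sketched below. The namespace alias follows google-cloud-cpp conventions but is an assumption, and `"models/<model-id>"` is a placeholder, not a real model name.

```cpp
#include <iostream>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void Generate(gl::GenerativeServiceClient& client) {
  // Build one turn of the conversation: a Content with a single text Part.
  google::ai::generativelanguage::v1::Content content;
  content.set_role("user");
  content.add_parts()->set_text("Explain how tides work in one paragraph.");

  // "models/<model-id>" is a placeholder for a real model resource name.
  auto response = client.GenerateContent("models/<model-id>", {content});
  if (!response) {
    std::cerr << "RPC failed: " << response.status() << "\n";
    return;
  }
  // Print the text parts of each candidate in the response.
  for (auto const& candidate : response->candidates()) {
    for (auto const& part : candidate.content().parts()) {
      std::cout << part.text();
    }
  }
}
```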
StreamGenerateContent(std::string const &, std::vector< google::ai::generativelanguage::v1::Content > const &, Options)

Generates a streamed response from the model given an input `GenerateContentRequest`.

Parameters:

| Name | Description |
|---|---|
| `model` | `std::string const &`. Required. The name of the `Model` to use. |
| `contents` | `std::vector< google::ai::generativelanguage::v1::Content > const &`. Required. The content of the current conversation with the model. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StreamRange< google::ai::generativelanguage::v1::GenerateContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.GenerateContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the stream contains the error details. |
StreamGenerateContent(google::ai::generativelanguage::v1::GenerateContentRequest const &, Options)

Generates a streamed response from the model given an input `GenerateContentRequest`.

Parameters:

| Name | Description |
|---|---|
| `request` | `google::ai::generativelanguage::v1::GenerateContentRequest const &`. Unary RPCs, such as the one wrapped by this function, receive a single `request` proto message which includes all the inputs for the RPC. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StreamRange< google::ai::generativelanguage::v1::GenerateContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.GenerateContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the stream contains the error details. |
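Iterating the returned `StreamRange` can be sketched as below; each element is a `StatusOr`-like chunk that must be checked before use. The namespace alias is an assumption and the model name is a placeholder.

```cpp
#include <iostream>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void Stream(gl::GenerativeServiceClient& client) {
  google::ai::generativelanguage::v1::GenerateContentRequest request;
  request.set_model("models/<model-id>");  // placeholder resource name
  request.add_contents()->add_parts()->set_text("Write a short poem.");

  // The range yields partial responses as the model produces them.
  for (auto const& chunk : client.StreamGenerateContent(request)) {
    if (!chunk) {  // the stream delivers errors as a failed element
      std::cerr << "stream error: " << chunk.status() << "\n";
      break;
    }
    for (auto const& candidate : chunk->candidates()) {
      for (auto const& part : candidate.content().parts()) {
        std::cout << part.text() << std::flush;
      }
    }
  }
}
```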
EmbedContent(std::string const &, google::ai::generativelanguage::v1::Content const &, Options)

Generates a text embedding vector from the input `Content` using the specified Gemini Embedding model.

Parameters:

| Name | Description |
|---|---|
| `model` | `std::string const &`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `content` | `google::ai::generativelanguage::v1::Content const &`. Required. The content to embed. Only the `parts.text` fields will be counted. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::EmbedContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.EmbedContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
EmbedContent(google::ai::generativelanguage::v1::EmbedContentRequest const &, Options)

Generates a text embedding vector from the input `Content` using the specified Gemini Embedding model.

Parameters:

| Name | Description |
|---|---|
| `request` | `google::ai::generativelanguage::v1::EmbedContentRequest const &`. Unary RPCs, such as the one wrapped by this function, receive a single `request` proto message which includes all the inputs for the RPC. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::EmbedContentResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.EmbedContentResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
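A hypothetical `EmbedContent` call is sketched below; the namespace alias is assumed and the model name is a placeholder for a real embedding model resource name.

```cpp
#include <iostream>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void Embed(gl::GenerativeServiceClient& client) {
  // Only the text parts of the Content are embedded.
  google::ai::generativelanguage::v1::Content content;
  content.add_parts()->set_text("What is the airspeed of an unladen swallow?");

  auto response =
      client.EmbedContent("models/<embedding-model-id>", content);  // placeholder
  if (!response) {
    std::cerr << "RPC failed: " << response.status() << "\n";
    return;
  }
  // The response carries one embedding vector for the input content.
  std::cout << "dimensions: " << response->embedding().values_size() << "\n";
}
```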
BatchEmbedContents(std::string const &, std::vector< google::ai::generativelanguage::v1::EmbedContentRequest > const &, Options)

Generates multiple embedding vectors from the input `Content`, which consists of a batch of strings represented as `EmbedContentRequest` objects.

Parameters:

| Name | Description |
|---|---|
| `model` | `std::string const &`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `requests` | `std::vector< google::ai::generativelanguage::v1::EmbedContentRequest > const &`. Required. Embed requests for the batch. The model in each of these requests must match the model specified in `BatchEmbedContentsRequest.model`. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::BatchEmbedContentsResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.BatchEmbedContentsResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
BatchEmbedContents(google::ai::generativelanguage::v1::BatchEmbedContentsRequest const &, Options)

Generates multiple embedding vectors from the input `Content`, which consists of a batch of strings represented as `EmbedContentRequest` objects.

Parameters:

| Name | Description |
|---|---|
| `request` | `google::ai::generativelanguage::v1::BatchEmbedContentsRequest const &`. Unary RPCs, such as the one wrapped by this function, receive a single `request` proto message which includes all the inputs for the RPC. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::BatchEmbedContentsResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.BatchEmbedContentsResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
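The batch variant can be sketched as below. Note how each per-item request repeats the same model name, matching the requirement that every request's model agrees with the batch-level model. Namespace and model name are assumptions/placeholders.

```cpp
#include <iostream>
#include <string>
#include <vector>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void BatchEmbed(gl::GenerativeServiceClient& client) {
  std::string const model = "models/<embedding-model-id>";  // placeholder

  std::vector<google::ai::generativelanguage::v1::EmbedContentRequest> requests;
  for (auto const* text : {"first string", "second string"}) {
    google::ai::generativelanguage::v1::EmbedContentRequest r;
    r.set_model(model);  // must match the batch-level model argument
    r.mutable_content()->add_parts()->set_text(text);
    requests.push_back(std::move(r));
  }

  auto response = client.BatchEmbedContents(model, requests);
  if (!response) {
    std::cerr << "RPC failed: " << response.status() << "\n";
    return;
  }
  // One embedding per input request, in the same order.
  std::cout << "embeddings returned: " << response->embeddings_size() << "\n";
}
```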
CountTokens(std::string const &, std::vector< google::ai::generativelanguage::v1::Content > const &, Options)

Runs a model's tokenizer on input `Content` and returns the token count.

Refer to the tokens guide to learn more about tokens.

Parameters:

| Name | Description |
|---|---|
| `model` | `std::string const &`. Required. The model's resource name. This serves as an ID for the Model to use. |
| `contents` | `std::vector< google::ai::generativelanguage::v1::Content > const &`. Optional. The input given to the model as a prompt. This field is ignored when `generate_content_request` is set. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::CountTokensResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.CountTokensResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
CountTokens(google::ai::generativelanguage::v1::CountTokensRequest const &, Options)

Runs a model's tokenizer on input `Content` and returns the token count.

Refer to the tokens guide to learn more about tokens.

Parameters:

| Name | Description |
|---|---|
| `request` | `google::ai::generativelanguage::v1::CountTokensRequest const &`. Unary RPCs, such as the one wrapped by this function, receive a single `request` proto message which includes all the inputs for the RPC. |
| `opts` | `Options`. Optional. Override the class-level options, such as retry and backoff policies. |

Returns:

| Type | Description |
|---|---|
| `StatusOr< google::ai::generativelanguage::v1::CountTokensResponse >` | the result of the RPC. The response message type (`google.ai.generativelanguage.v1.CountTokensResponse`) is mapped to a C++ class using the Protobuf mapping rules. If the request fails, the `StatusOr` contains the error details. |
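Counting tokens before generating can be sketched as below; this is useful for staying under a model's context window. As above, the namespace alias is an assumption and the model name is a placeholder.

```cpp
#include <iostream>

namespace gl = google::cloud::generativelanguage_v1;  // assumed namespace

void Count(gl::GenerativeServiceClient& client) {
  google::ai::generativelanguage::v1::Content content;
  content.add_parts()->set_text("How many tokens is this sentence?");

  auto response = client.CountTokens("models/<model-id>", {content});  // placeholder
  if (!response) {
    std::cerr << "RPC failed: " << response.status() << "\n";
    return;
  }
  std::cout << "total tokens: " << response->total_tokens() << "\n";
}
```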