Cloud AI Platform v1 API - Class LlmUtilityServiceClient (2.26.0)

public abstract class LlmUtilityServiceClient

Reference documentation and code samples for the Cloud AI Platform v1 API class LlmUtilityServiceClient.

LlmUtilityService client wrapper, for convenient use.

Inheritance

object > LlmUtilityServiceClient

Namespace

Google.Cloud.AIPlatform.V1

Assembly

Google.Cloud.AIPlatform.V1.dll

Remarks

Service for LLM-related utility functions.

Properties

DefaultEndpoint

public static string DefaultEndpoint { get; }

The default endpoint for the LlmUtilityService service, which is a host of "aiplatform.googleapis.com" and a port of 443.

Property Value
Type: string

DefaultScopes

public static IReadOnlyList<string> DefaultScopes { get; }

The default LlmUtilityService scopes.

Property Value
Type: IReadOnlyList<string>

Remarks

The default LlmUtilityService scopes are:

- https://www.googleapis.com/auth/cloud-platform

GrpcClient

public virtual LlmUtilityService.LlmUtilityServiceClient GrpcClient { get; }

The underlying gRPC LlmUtilityService client.

Property Value
Type: LlmUtilityService.LlmUtilityServiceClient

IAMPolicyClient

public virtual IAMPolicyClient IAMPolicyClient { get; }

The IAMPolicyClient associated with this client.

Property Value
Type: IAMPolicyClient

LocationsClient

public virtual LocationsClient LocationsClient { get; }

The LocationsClient associated with this client.

Property Value
Type: LocationsClient

ServiceMetadata

public static ServiceMetadata ServiceMetadata { get; }

The service metadata associated with this client type.

Property Value
Type: ServiceMetadata

Methods

ComputeTokens(ComputeTokensRequest, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(ComputeTokensRequest request, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
request (ComputeTokensRequest): The request object containing all of the parameters for the API call.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
ComputeTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
};
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(request);
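The returned ComputeTokensResponse carries the computed tokens for each input instance. A hedged sketch of reading it, assuming the TokensInfo, Tokens, and TokenIds members from the underlying ComputeTokens proto (they are not listed on this page):

```csharp
// Sketch only: TokensInfo, Tokens, and TokenIds are assumed from the
// ComputeTokens proto and are not documented on this page.
foreach (TokensInfo info in response.TokensInfo)
{
    Console.WriteLine($"Tokens: {info.Tokens.Count}, token ids: {info.TokenIds.Count}");
}
```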

ComputeTokens(EndpointName, IEnumerable<Value>, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
ComputeTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(endpoint, instances);

ComputeTokens(string, IEnumerable<Value>, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
endpoint (string): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
ComputeTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(endpoint, instances);

ComputeTokensAsync(ComputeTokensRequest, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(ComputeTokensRequest request, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
request (ComputeTokensRequest): The request object containing all of the parameters for the API call.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
};
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(request);

ComputeTokensAsync(ComputeTokensRequest, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(ComputeTokensRequest request, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
request (ComputeTokensRequest): The request object containing all of the parameters for the API call.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
};
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(request);

ComputeTokensAsync(EndpointName, IEnumerable<Value>, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(EndpointName, IEnumerable<Value>, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(string, IEnumerable<Value>, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
endpoint (string): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(string, IEnumerable<Value>, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(string endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
endpoint (string): Required. The name of the Endpoint requested to get lists of tokens and token IDs.
instances (IEnumerable<Value>): Required. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<ComputeTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

CountTokens(CountTokensRequest, CallSettings)

public virtual CountTokensResponse CountTokens(CountTokensRequest request, CallSettings callSettings = null)

Perform token counting.

Parameters
request (CountTokensRequest): The request object containing all of the parameters for the API call.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
CountTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
CountTokensResponse response = llmUtilityServiceClient.CountTokens(request);
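The returned CountTokensResponse summarizes the token count for the request. A hedged sketch of reading it, assuming the TotalTokens and TotalBillableCharacters members from the underlying CountTokens proto (they are not listed on this page):

```csharp
// Sketch only: TotalTokens and TotalBillableCharacters are assumed from the
// CountTokens proto and are not documented on this page.
Console.WriteLine($"Total tokens: {response.TotalTokens}");
Console.WriteLine($"Billable characters: {response.TotalBillableCharacters}");
```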

CountTokens(EndpointName, IEnumerable<Value>, CallSettings)

public virtual CountTokensResponse CountTokens(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
CountTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = llmUtilityServiceClient.CountTokens(endpoint, instances);

CountTokens(string, IEnumerable<Value>, CallSettings)

public virtual CountTokensResponse CountTokens(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
endpoint (string): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
CountTokensResponse: The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = llmUtilityServiceClient.CountTokens(endpoint, instances);

CountTokensAsync(CountTokensRequest, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(CountTokensRequest request, CallSettings callSettings = null)

Perform token counting.

Parameters
request (CountTokensRequest): The request object containing all of the parameters for the API call.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(request);

CountTokensAsync(CountTokensRequest, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(CountTokensRequest request, CancellationToken cancellationToken)

Perform token counting.

Parameters
request (CountTokensRequest): The request object containing all of the parameters for the API call.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(request);

CountTokensAsync(EndpointName, IEnumerable<Value>, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(EndpointName, IEnumerable<Value>, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Perform token counting.

Parameters
endpoint (EndpointName): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(string, IEnumerable<Value>, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
endpoint (string): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
callSettings (CallSettings): If not null, applies overrides to this RPC call.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(string, IEnumerable<Value>, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(string endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Perform token counting.

Parameters
endpoint (string): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
instances (IEnumerable<Value>): Required. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
cancellationToken (CancellationToken): A CancellationToken to use for this RPC.

Returns
Task<CountTokensResponse>: A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await llmUtilityServiceClient.CountTokensAsync(endpoint, instances);

Create()

public static LlmUtilityServiceClient Create()

Synchronously creates a LlmUtilityServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use LlmUtilityServiceClientBuilder.

Returns
LlmUtilityServiceClient: The created LlmUtilityServiceClient.
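For custom settings, the builder mentioned above can be used in place of Create(). A hedged sketch, assuming the Endpoint and CredentialsPath properties of LlmUtilityServiceClientBuilder from the common Google Cloud client-builder pattern (they are not listed on this page), with a placeholder credentials path:

```csharp
// Sketch only: Endpoint and CredentialsPath are assumed builder properties;
// the file path is a placeholder.
LlmUtilityServiceClient client = new LlmUtilityServiceClientBuilder
{
    Endpoint = "us-central1-aiplatform.googleapis.com",
    CredentialsPath = "/path/to/service-account.json",
}.Build();
```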

CreateAsync(CancellationToken)

public static Task<LlmUtilityServiceClient> CreateAsync(CancellationToken cancellationToken = default)

Asynchronously creates a LlmUtilityServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use LlmUtilityServiceClientBuilder.

Parameter
cancellationToken (CancellationToken): The CancellationToken to use while creating the client.

Returns
Task<LlmUtilityServiceClient>: The task representing the created LlmUtilityServiceClient.

ShutdownDefaultChannelsAsync()

public static Task ShutdownDefaultChannelsAsync()

Shuts down any channels automatically created by Create() and CreateAsync(CancellationToken). Channels which weren't automatically created are not affected.

Returns
Task: A task representing the asynchronous shutdown operation.

Remarks

After calling this method, further calls to Create() and CreateAsync(CancellationToken) will create new channels, which could in turn be shut down by another call to this method.
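For example, an application that obtained clients via Create() or CreateAsync(CancellationToken) might release the shared channels once no further calls will be made:

```csharp
// Release channels created by Create()/CreateAsync(), e.g. at application
// shutdown. Clients built with custom channels are unaffected.
await LlmUtilityServiceClient.ShutdownDefaultChannelsAsync();
```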