Cloud AI Platform v1beta1 API - Class LlmUtilityServiceClient (1.0.0-beta09)

public abstract class LlmUtilityServiceClient

Reference documentation and code samples for the Cloud AI Platform v1beta1 API class LlmUtilityServiceClient.

LlmUtilityService client wrapper, for convenient use.

Inheritance

object > LlmUtilityServiceClient

Namespace

Google.Cloud.AIPlatform.V1Beta1

Assembly

Google.Cloud.AIPlatform.V1Beta1.dll

Remarks

Service for LLM-related utility functions.

Properties

DefaultEndpoint

public static string DefaultEndpoint { get; }

The default endpoint for the LlmUtilityService service, which is host "aiplatform.googleapis.com" on port 443.

Property Value
Type Description
string
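
For example, to target a regional service endpoint rather than the default, the endpoint can be overridden through LlmUtilityServiceClientBuilder. A minimal sketch; the regional endpoint value below is an assumption for illustration:

// Build a client against a regional endpoint instead of the global default.
// The endpoint string here is illustrative, not taken from this reference.
LlmUtilityServiceClient client = new LlmUtilityServiceClientBuilder
{
    Endpoint = "us-central1-aiplatform.googleapis.com"
}.Build();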

DefaultScopes

public static IReadOnlyList<string> DefaultScopes { get; }

The default LlmUtilityService scopes.

Property Value
Type Description
IReadOnlyList<string>
Remarks

The default LlmUtilityService scopes are:

https://www.googleapis.com/auth/cloud-platform

GrpcClient

public virtual LlmUtilityService.LlmUtilityServiceClient GrpcClient { get; }

The underlying gRPC LlmUtilityService client.

Property Value
Type Description
LlmUtilityService.LlmUtilityServiceClient
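
For scenarios the wrapper does not cover, the raw gRPC client can be retrieved and invoked directly. A minimal sketch:

// Access the underlying gRPC client for low-level calls.
LlmUtilityServiceClient client = LlmUtilityServiceClient.Create();
LlmUtilityService.LlmUtilityServiceClient grpcClient = client.GrpcClient;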

IAMPolicyClient

public virtual IAMPolicyClient IAMPolicyClient { get; }

The IAMPolicyClient associated with this client.

Property Value
Type Description
IAMPolicyClient

LocationsClient

public virtual LocationsClient LocationsClient { get; }

The LocationsClient associated with this client.

Property Value
Type Description
LocationsClient

ServiceMetadata

public static ServiceMetadata ServiceMetadata { get; }

The service metadata associated with this client type.

Property Value
Type Description
ServiceMetadata

Methods

ComputeTokens(ComputeTokensRequest, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(ComputeTokensRequest request, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
request ComputeTokensRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ComputeTokensResponse

The RPC response.

Example
// Create client
// ('wkt' in these samples is the conventional alias:
// using wkt = Google.Protobuf.WellKnownTypes;)
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(request);
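
The response can then be inspected. A hedged sketch, assuming the v1beta1 ComputeTokensResponse exposes a TokensInfo collection whose entries carry parallel Tokens and TokenIds lists:

// Inspect the response: each TokensInfo pairs tokens with their token IDs.
foreach (TokensInfo info in response.TokensInfo)
{
    Console.WriteLine($"Tokens: {info.Tokens.Count}, token IDs: {info.TokenIds.Count}");
}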

ComputeTokens(EndpointName, IEnumerable<Value>, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ComputeTokensResponse

The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(endpoint, instances);

ComputeTokens(string, IEnumerable<Value>, CallSettings)

public virtual ComputeTokensResponse ComputeTokens(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ComputeTokensResponse

The RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = llmUtilityServiceClient.ComputeTokens(endpoint, instances);

ComputeTokensAsync(ComputeTokensRequest, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(ComputeTokensRequest request, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
request ComputeTokensRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(request);

ComputeTokensAsync(ComputeTokensRequest, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(ComputeTokensRequest request, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
Name Description
request ComputeTokensRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
ComputeTokensRequest request = new ComputeTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
};
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(request);

ComputeTokensAsync(EndpointName, IEnumerable<Value>, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(EndpointName, IEnumerable<Value>, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(string, IEnumerable<Value>, CallSettings)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

ComputeTokensAsync(string, IEnumerable<Value>, CancellationToken)

public virtual Task<ComputeTokensResponse> ComputeTokensAsync(string endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Return a list of tokens based on the input text.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to get lists of tokens and token IDs.

instances IEnumerable<Value>

Optional. The instances that are the input to the token-computing API call. The schema is identical to the prediction schema of the text model, even for non-text models such as chat models or Codey models.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ComputeTokensResponse>

A Task containing the RPC response.

Example
// Create client
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
ComputeTokensResponse response = await llmUtilityServiceClient.ComputeTokensAsync(endpoint, instances);

Create()

public static LlmUtilityServiceClient Create()

Synchronously creates a LlmUtilityServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use LlmUtilityServiceClientBuilder.

Returns
Type Description
LlmUtilityServiceClient

The created LlmUtilityServiceClient.
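
Example

A minimal sketch of default creation, plus a builder-based variant for custom credentials; the credentials path below is a hypothetical placeholder:

// Create a client using the default credentials, endpoint and settings.
LlmUtilityServiceClient llmUtilityServiceClient = LlmUtilityServiceClient.Create();
// Or, supply custom credentials via the builder (path is hypothetical):
LlmUtilityServiceClient customClient = new LlmUtilityServiceClientBuilder
{
    CredentialsPath = "path/to/service-account.json"
}.Build();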

CreateAsync(CancellationToken)

public static Task<LlmUtilityServiceClient> CreateAsync(CancellationToken cancellationToken = default)

Asynchronously creates a LlmUtilityServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use LlmUtilityServiceClientBuilder.

Parameter
Name Description
cancellationToken CancellationToken

The CancellationToken to use while creating the client.

Returns
Type Description
Task<LlmUtilityServiceClient>

The task representing the created LlmUtilityServiceClient.
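
Example

A minimal sketch of asynchronous creation, optionally passing a CancellationToken:

// Create a client asynchronously using the default credentials, endpoint and settings.
LlmUtilityServiceClient llmUtilityServiceClient = await LlmUtilityServiceClient.CreateAsync();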

ShutdownDefaultChannelsAsync()

public static Task ShutdownDefaultChannelsAsync()

Shuts down any channels automatically created by Create() and CreateAsync(CancellationToken). Channels which weren't automatically created are not affected.

Returns
Type Description
Task

A task representing the asynchronous shutdown operation.

Remarks

After calling this method, further calls to Create() and CreateAsync(CancellationToken) will create new channels, which could in turn be shut down by another call to this method.
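
Example

A minimal sketch, typically run once at application shutdown:

// Shut down any channels created by Create() / CreateAsync() before exit.
await LlmUtilityServiceClient.ShutdownDefaultChannelsAsync();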