Cloud AI Platform v1beta1 API - Class PredictionServiceClient (1.0.0-beta12)

public abstract class PredictionServiceClient

Reference documentation and code samples for the Cloud AI Platform v1beta1 API class PredictionServiceClient.

PredictionService client wrapper, for convenient use.

Inheritance

object > PredictionServiceClient

Namespace

Google.Cloud.AIPlatform.V1Beta1

Assembly

Google.Cloud.AIPlatform.V1Beta1.dll

Remarks

A service for online predictions and explanations.

Properties

DefaultEndpoint

public static string DefaultEndpoint { get; }

The default endpoint for the PredictionService service, which is a host of "aiplatform.googleapis.com" and a port of 443.

Property Value
Type Description
string

DefaultScopes

public static IReadOnlyList<string> DefaultScopes { get; }

The default PredictionService scopes.

Property Value
Type Description
IReadOnlyList<string>
Remarks

The default PredictionService scopes are:

https://www.googleapis.com/auth/cloud-platform

GrpcClient

public virtual PredictionService.PredictionServiceClient GrpcClient { get; }

The underlying gRPC PredictionService client.

Property Value
Type Description
PredictionService.PredictionServiceClient

IAMPolicyClient

public virtual IAMPolicyClient IAMPolicyClient { get; }

The IAMPolicyClient associated with this client.

Property Value
Type Description
IAMPolicyClient

LocationsClient

public virtual LocationsClient LocationsClient { get; }

The LocationsClient associated with this client.

Property Value
Type Description
LocationsClient

ServiceMetadata

public static ServiceMetadata ServiceMetadata { get; }

The service metadata associated with this client type.

Property Value
Type Description
ServiceMetadata

Methods

ChatCompletions(ChatCompletionsRequest, CallSettings)

public virtual PredictionServiceClient.ChatCompletionsStream ChatCompletions(ChatCompletionsRequest request, CallSettings callSettings = null)

Exposes an OpenAI-compatible endpoint for chat completions.

Parameters
Name Description
request ChatCompletionsRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.ChatCompletionsStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ChatCompletionsRequest request = new ChatCompletionsRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    HttpBody = new HttpBody(),
};
// Make the request, returning a streaming response
using PredictionServiceClient.ChatCompletionsStream response = predictionServiceClient.ChatCompletions(request);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed
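Since the returned stream implements IAsyncEnumerable<HttpBody>, C# 8 and later can consume it more concisely; a minimal sketch:

// Alternative: consume the stream with await foreach (C# 8+)
await foreach (HttpBody responseItem in response.GetResponseStream())
{
    // Do something with streamed response
}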

ChatCompletions(EndpointName, HttpBody, CallSettings)

public virtual PredictionServiceClient.ChatCompletionsStream ChatCompletions(EndpointName endpoint, HttpBody httpBody, CallSettings callSettings = null)

Exposes an OpenAI-compatible endpoint for chat completions.

Parameters
Name Description
endpoint EndpointName

Required. The name of the endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

Optional. The prediction input. Supports HTTP headers and arbitrary data payload.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.ChatCompletionsStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
HttpBody httpBody = new HttpBody();
// Make the request, returning a streaming response
using PredictionServiceClient.ChatCompletionsStream response = predictionServiceClient.ChatCompletions(endpoint, httpBody);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed
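The examples here send an empty HttpBody. In practice the body carries the OpenAI-compatible JSON payload; a hedged sketch (the payload shape is illustrative, not a documented schema):

// Sketch: populate the HttpBody with a JSON chat payload
HttpBody httpBody = new HttpBody
{
    ContentType = "application/json",
    Data = ByteString.CopyFromUtf8("{\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"),
};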

ChatCompletions(string, HttpBody, CallSettings)

public virtual PredictionServiceClient.ChatCompletionsStream ChatCompletions(string endpoint, HttpBody httpBody, CallSettings callSettings = null)

Exposes an OpenAI-compatible endpoint for chat completions.

Parameters
Name Description
endpoint string

Required. The name of the endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

Optional. The prediction input. Supports HTTP headers and arbitrary data payload.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.ChatCompletionsStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
HttpBody httpBody = new HttpBody();
// Make the request, returning a streaming response
using PredictionServiceClient.ChatCompletionsStream response = predictionServiceClient.ChatCompletions(endpoint, httpBody);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed

CountTokens(CountTokensRequest, CallSettings)

public virtual CountTokensResponse CountTokens(CountTokensRequest request, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
request CountTokensRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
CountTokensResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
    SystemInstruction = new Content(),
    Tools = { new Tool(), },
    GenerationConfig = new GenerationConfig(),
};
// Make the request
CountTokensResponse response = predictionServiceClient.CountTokens(request);
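A minimal sketch of reading the result, assuming the standard CountTokensResponse fields:

// Inspect the aggregate counts returned by the service
int totalTokens = response.TotalTokens;
int totalBillableCharacters = response.TotalBillableCharacters;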

CountTokens(EndpointName, IEnumerable<Value>, CallSettings)

public virtual CountTokensResponse CountTokens(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
CountTokensResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = predictionServiceClient.CountTokens(endpoint, instances);

CountTokens(string, IEnumerable<Value>, CallSettings)

public virtual CountTokensResponse CountTokens(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
CountTokensResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = predictionServiceClient.CountTokens(endpoint, instances);

CountTokensAsync(CountTokensRequest, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(CountTokensRequest request, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
request CountTokensRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
    SystemInstruction = new Content(),
    Tools = { new Tool(), },
    GenerationConfig = new GenerationConfig(),
};
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(request);

CountTokensAsync(CountTokensRequest, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(CountTokensRequest request, CancellationToken cancellationToken)

Perform token counting.

Parameters
Name Description
request CountTokensRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
CountTokensRequest request = new CountTokensRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Model = "",
    Contents = { new Content(), },
    SystemInstruction = new Content(),
    Tools = { new Tool(), },
    GenerationConfig = new GenerationConfig(),
};
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(request);

CountTokensAsync(EndpointName, IEnumerable<Value>, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(EndpointName, IEnumerable<Value>, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(EndpointName endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Perform token counting.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(string, IEnumerable<Value>, CallSettings)

public virtual Task<CountTokensResponse> CountTokensAsync(string endpoint, IEnumerable<Value> instances, CallSettings callSettings = null)

Perform token counting.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(endpoint, instances);

CountTokensAsync(string, IEnumerable<Value>, CancellationToken)

public virtual Task<CountTokensResponse> CountTokensAsync(string endpoint, IEnumerable<Value> instances, CancellationToken cancellationToken)

Perform token counting.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Optional. The instances that are the input to the token counting call. Schema is identical to the prediction schema of the underlying model.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<CountTokensResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
// Make the request
CountTokensResponse response = await predictionServiceClient.CountTokensAsync(endpoint, instances);

Create()

public static PredictionServiceClient Create()

Synchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.

Returns
Type Description
PredictionServiceClient

The created PredictionServiceClient.
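As a hedged sketch of the builder mentioned above (the regional endpoint and credentials path are placeholders):

// Sketch: create a client with a regional endpoint and explicit credentials
PredictionServiceClient client = new PredictionServiceClientBuilder
{
    Endpoint = "us-central1-aiplatform.googleapis.com",
    CredentialsPath = "/path/to/service-account.json", // placeholder path
}.Build();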

CreateAsync(CancellationToken)

public static Task<PredictionServiceClient> CreateAsync(CancellationToken cancellationToken = default)

Asynchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.

Parameter
Name Description
cancellationToken CancellationToken

The CancellationToken to use while creating the client.

Returns
Type Description
Task<PredictionServiceClient>

The task representing the created PredictionServiceClient.

DirectPredict(DirectPredictRequest, CallSettings)

public virtual DirectPredictResponse DirectPredict(DirectPredictRequest request, CallSettings callSettings = null)

Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Parameters
Name Description
request DirectPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
DirectPredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
DirectPredictRequest request = new DirectPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Inputs = { new Tensor(), },
    Parameters = new Tensor(),
};
// Make the request
DirectPredictResponse response = predictionServiceClient.DirectPredict(request);
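A sketch of populating the request's Tensor inputs (the dtype, shape, and values are illustrative; what the model server expects depends on the deployed model):

// Sketch: a 1x3 float tensor as a DirectPredict input
Tensor input = new Tensor
{
    Dtype = Tensor.Types.DataType.Float,
    Shape = { 1, 3 },
    FloatVal = { 0.1f, 0.2f, 0.3f },
};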

DirectPredictAsync(DirectPredictRequest, CallSettings)

public virtual Task<DirectPredictResponse> DirectPredictAsync(DirectPredictRequest request, CallSettings callSettings = null)

Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Parameters
Name Description
request DirectPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<DirectPredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
DirectPredictRequest request = new DirectPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Inputs = { new Tensor(), },
    Parameters = new Tensor(),
};
// Make the request
DirectPredictResponse response = await predictionServiceClient.DirectPredictAsync(request);

DirectPredictAsync(DirectPredictRequest, CancellationToken)

public virtual Task<DirectPredictResponse> DirectPredictAsync(DirectPredictRequest request, CancellationToken cancellationToken)

Perform a unary online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Parameters
Name Description
request DirectPredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<DirectPredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
DirectPredictRequest request = new DirectPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Inputs = { new Tensor(), },
    Parameters = new Tensor(),
};
// Make the request
DirectPredictResponse response = await predictionServiceClient.DirectPredictAsync(request);

DirectRawPredict(DirectRawPredictRequest, CallSettings)

public virtual DirectRawPredictResponse DirectRawPredict(DirectRawPredictRequest request, CallSettings callSettings = null)

Perform a unary online prediction request to a gRPC model server for custom containers.

Parameters
Name Description
request DirectRawPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
DirectRawPredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
DirectRawPredictRequest request = new DirectRawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    MethodName = "",
    Input = ByteString.Empty,
};
// Make the request
DirectRawPredictResponse response = predictionServiceClient.DirectRawPredict(request);

DirectRawPredictAsync(DirectRawPredictRequest, CallSettings)

public virtual Task<DirectRawPredictResponse> DirectRawPredictAsync(DirectRawPredictRequest request, CallSettings callSettings = null)

Perform a unary online prediction request to a gRPC model server for custom containers.

Parameters
Name Description
request DirectRawPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<DirectRawPredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
DirectRawPredictRequest request = new DirectRawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    MethodName = "",
    Input = ByteString.Empty,
};
// Make the request
DirectRawPredictResponse response = await predictionServiceClient.DirectRawPredictAsync(request);

DirectRawPredictAsync(DirectRawPredictRequest, CancellationToken)

public virtual Task<DirectRawPredictResponse> DirectRawPredictAsync(DirectRawPredictRequest request, CancellationToken cancellationToken)

Perform a unary online prediction request to a gRPC model server for custom containers.

Parameters
Name Description
request DirectRawPredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<DirectRawPredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
DirectRawPredictRequest request = new DirectRawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    MethodName = "",
    Input = ByteString.Empty,
};
// Make the request
DirectRawPredictResponse response = await predictionServiceClient.DirectRawPredictAsync(request);

Explain(EndpointName, IEnumerable<Value>, Value, string, CallSettings)

public virtual ExplainResponse Explain(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ExplainResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = predictionServiceClient.Explain(endpoint, instances, parameters, deployedModelId);
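A minimal sketch of consuming the response; each entry in Explanations corresponds to one input instance, and the attribution fields that are populated depend on the deployed model's explanation_spec:

// Iterate per-instance explanations and their feature attributions
foreach (Explanation explanation in response.Explanations)
{
    foreach (Attribution attribution in explanation.Attributions)
    {
        // e.g. inspect attribution.OutputIndex and attribution.FeatureAttributions
    }
}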

Explain(ExplainRequest, CallSettings)

public virtual ExplainResponse Explain(ExplainRequest request, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
request ExplainRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ExplainResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ExplainRequest request = new ExplainRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    DeployedModelId = "",
    Parameters = new wkt::Value(),
    ExplanationSpecOverride = new ExplanationSpecOverride(),
    ConcurrentExplanationSpecOverride =
    {
        {
            "",
            new ExplanationSpecOverride()
        },
    },
};
// Make the request
ExplainResponse response = predictionServiceClient.Explain(request);

Explain(string, IEnumerable<Value>, Value, string, CallSettings)

public virtual ExplainResponse Explain(string endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
ExplainResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = predictionServiceClient.Explain(endpoint, instances, parameters, deployedModelId);

ExplainAsync(EndpointName, IEnumerable<Value>, Value, string, CallSettings)

public virtual Task<ExplainResponse> ExplainAsync(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(endpoint, instances, parameters, deployedModelId);

ExplainAsync(EndpointName, IEnumerable<Value>, Value, string, CancellationToken)

public virtual Task<ExplainResponse> ExplainAsync(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CancellationToken cancellationToken)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(endpoint, instances, parameters, deployedModelId);

ExplainAsync(ExplainRequest, CallSettings)

public virtual Task<ExplainResponse> ExplainAsync(ExplainRequest request, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
request ExplainRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ExplainRequest request = new ExplainRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    DeployedModelId = "",
    Parameters = new wkt::Value(),
    ExplanationSpecOverride = new ExplanationSpecOverride(),
    ConcurrentExplanationSpecOverride =
    {
        {
            "",
            new ExplanationSpecOverride()
        },
    },
};
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(request);

ExplainAsync(ExplainRequest, CancellationToken)

public virtual Task<ExplainResponse> ExplainAsync(ExplainRequest request, CancellationToken cancellationToken)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
request ExplainRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ExplainRequest request = new ExplainRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    DeployedModelId = "",
    Parameters = new wkt::Value(),
    ExplanationSpecOverride = new ExplanationSpecOverride(),
    ConcurrentExplanationSpecOverride =
    {
        {
            "",
            new ExplanationSpecOverride()
        },
    },
};
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(request);

ExplainAsync(string, IEnumerable<Value>, Value, string, CallSettings)

public virtual Task<ExplainResponse> ExplainAsync(string endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CallSettings callSettings = null)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(endpoint, instances, parameters, deployedModelId);

ExplainAsync(string, IEnumerable<Value>, Value, string, CancellationToken)

public virtual Task<ExplainResponse> ExplainAsync(string endpoint, IEnumerable<Value> instances, Value parameters, string deployedModelId, CancellationToken cancellationToken)

Perform an online explanation.

If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is specified, the corresponding DeployedModel must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated. If [deployed_model_id][google.cloud.aiplatform.v1beta1.ExplainRequest.deployed_model_id] is not specified, all DeployedModels must have [explanation_spec][google.cloud.aiplatform.v1beta1.DeployedModel.explanation_spec] populated.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the explanation. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the explanation call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the explanation call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

deployedModelId string

If specified, this ExplainRequest will be served by the chosen DeployedModel, overriding [Endpoint.traffic_split][google.cloud.aiplatform.v1beta1.Endpoint.traffic_split].

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<ExplainResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
string deployedModelId = "";
// Make the request
ExplainResponse response = await predictionServiceClient.ExplainAsync(endpoint, instances, parameters, deployedModelId);

GenerateContent(GenerateContentRequest, CallSettings)

public virtual GenerateContentResponse GenerateContent(GenerateContentRequest request, CallSettings callSettings = null)

Generate content with multimodal inputs.

Parameters
Name Description
request GenerateContentRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
GenerateContentResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
GenerateContentRequest request = new GenerateContentRequest
{
    Contents = { new Content(), },
    SafetySettings =
    {
        new SafetySetting(),
    },
    GenerationConfig = new GenerationConfig(),
    Model = "",
    Tools = { new Tool(), },
    ToolConfig = new ToolConfig(),
    SystemInstruction = new Content(),
    CachedContentAsCachedContentName = CachedContentName.FromProjectLocationCachedContent("[PROJECT]", "[LOCATION]", "[CACHED_CONTENT]"),
    Labels = { { "", "" }, },
};
// Make the request
GenerateContentResponse response = predictionServiceClient.GenerateContent(request);

GenerateContent(string, IEnumerable<Content>, CallSettings)

public virtual GenerateContentResponse GenerateContent(string model, IEnumerable<Content> contents, CallSettings callSettings = null)

Generate content with multimodal inputs.

Parameters
Name Description
model string

Required. The fully qualified name of the publisher model or tuned model endpoint to use.

Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*

Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}

contents IEnumerable<Content>

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains the conversation history and the latest request.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
GenerateContentResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string model = "";
IEnumerable<Content> contents = new Content[] { new Content(), };
// Make the request
GenerateContentResponse response = predictionServiceClient.GenerateContent(model, contents);
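A slightly fuller sketch with a populated Content; the publisher model name below is an assumption, so substitute one available in your project:

// Hypothetical publisher model name - replace with a model you can access
string publisherModel = "projects/[PROJECT]/locations/[LOCATION]/publishers/google/models/gemini-1.5-flash";
Content userContent = new Content
{
    Role = "user",
    Parts = { new Part { Text = "Why is the sky blue?" } },
};
GenerateContentResponse generated = predictionServiceClient.GenerateContent(publisherModel, new[] { userContent });
// The generated text is in the first candidate's content parts
string generatedText = generated.Candidates[0].Content.Parts[0].Text;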

GenerateContentAsync(GenerateContentRequest, CallSettings)

public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request, CallSettings callSettings = null)

Generate content with multimodal inputs.

Parameters
Name Description
request GenerateContentRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<GenerateContentResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
GenerateContentRequest request = new GenerateContentRequest
{
    Contents = { new Content(), },
    SafetySettings =
    {
        new SafetySetting(),
    },
    GenerationConfig = new GenerationConfig(),
    Model = "",
    Tools = { new Tool(), },
    ToolConfig = new ToolConfig(),
    SystemInstruction = new Content(),
    CachedContentAsCachedContentName = CachedContentName.FromProjectLocationCachedContent("[PROJECT]", "[LOCATION]", "[CACHED_CONTENT]"),
    Labels = { { "", "" }, },
};
// Make the request
GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(request);

GenerateContentAsync(GenerateContentRequest, CancellationToken)

public virtual Task<GenerateContentResponse> GenerateContentAsync(GenerateContentRequest request, CancellationToken cancellationToken)

Generate content with multimodal inputs.

Parameters
Name Description
request GenerateContentRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<GenerateContentResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
GenerateContentRequest request = new GenerateContentRequest
{
    Contents = { new Content(), },
    SafetySettings =
    {
        new SafetySetting(),
    },
    GenerationConfig = new GenerationConfig(),
    Model = "",
    Tools = { new Tool(), },
    ToolConfig = new ToolConfig(),
    SystemInstruction = new Content(),
    CachedContentAsCachedContentName = CachedContentName.FromProjectLocationCachedContent("[PROJECT]", "[LOCATION]", "[CACHED_CONTENT]"),
    Labels = { { "", "" }, },
};
// Make the request
GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(request);

GenerateContentAsync(string, IEnumerable<Content>, CallSettings)

public virtual Task<GenerateContentResponse> GenerateContentAsync(string model, IEnumerable<Content> contents, CallSettings callSettings = null)

Generate content with multimodal inputs.

Parameters
Name Description
model string

Required. The fully qualified name of the publisher model or tuned model endpoint to use.

Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*

Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}

contents IEnumerable<Content>

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains the conversation history and the latest request.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<GenerateContentResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string model = "";
IEnumerable<Content> contents = new Content[] { new Content(), };
// Make the request
GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(model, contents);

GenerateContentAsync(string, IEnumerable<Content>, CancellationToken)

public virtual Task<GenerateContentResponse> GenerateContentAsync(string model, IEnumerable<Content> contents, CancellationToken cancellationToken)

Generate content with multimodal inputs.

Parameters
Name Description
model string

Required. The fully qualified name of the publisher model or tuned model endpoint to use.

Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*

Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}

contents IEnumerable<Content>

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains the conversation history and the latest request.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<GenerateContentResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string model = "";
IEnumerable<Content> contents = new Content[] { new Content(), };
// Make the request
GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(model, contents);

Predict(EndpointName, IEnumerable<Value>, Value, CallSettings)

public virtual PredictResponse Predict(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when it is exceeded, the prediction call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's ][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = predictionServiceClient.Predict(endpoint, instances, parameters);
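The empty wkt::Value above is a placeholder; real instances are typically structured values built with the protobuf well-known-type helpers. A sketch (the feature name is hypothetical):

// Sketch: build one structured prediction instance
wkt::Value instance = wkt::Value.ForStruct(new wkt::Struct
{
    Fields =
    {
        // "feature_name" is a hypothetical input field of the deployed model
        ["feature_name"] = wkt::Value.ForString("some value"),
    },
});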

Predict(PredictRequest, CallSettings)

public virtual PredictResponse Predict(PredictRequest request, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Parameters = new wkt::Value(),
};
// Make the request
PredictResponse response = predictionServiceClient.Predict(request);

Predict(string, IEnumerable<Value>, Value, CallSettings)

public virtual PredictResponse Predict(string endpoint, IEnumerable<Value> instances, Value parameters, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when this limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = predictionServiceClient.Predict(endpoint, instances, parameters);

PredictAsync(EndpointName, IEnumerable<Value>, Value, CallSettings)

public virtual Task<PredictResponse> PredictAsync(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when this limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(endpoint, instances, parameters);

PredictAsync(EndpointName, IEnumerable<Value>, Value, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(EndpointName endpoint, IEnumerable<Value> instances, Value parameters, CancellationToken cancellationToken)

Perform an online prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when this limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(endpoint, instances, parameters);

PredictAsync(PredictRequest, CallSettings)

public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Parameters = new wkt::Value(),
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);

PredictAsync(PredictRequest, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CancellationToken cancellationToken)

Perform an online prediction.

Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Instances = { new wkt::Value(), },
    Parameters = new wkt::Value(),
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);

PredictAsync(string, IEnumerable<Value>, Value, CallSettings)

public virtual Task<PredictResponse> PredictAsync(string endpoint, IEnumerable<Value> instances, Value parameters, CallSettings callSettings = null)

Perform an online prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when this limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(endpoint, instances, parameters);

PredictAsync(string, IEnumerable<Value>, Value, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(string endpoint, IEnumerable<Value> instances, Value parameters, CancellationToken cancellationToken)

Perform an online prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

instances IEnumerable<Value>

Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when this limit is exceeded, the prediction call fails for AutoML Models, while for customer-created Models the behavior is as documented by that Model. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri].

parameters Value

The parameters that govern the prediction. The schema of the parameters may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [parameters_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.parameters_schema_uri].

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
IEnumerable<wkt::Value> instances = new wkt::Value[] { new wkt::Value(), };
wkt::Value parameters = new wkt::Value();
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(endpoint, instances, parameters);

RawPredict(EndpointName, HttpBody, CallSettings)

public virtual HttpBody RawPredict(EndpointName endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
HttpBody

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = predictionServiceClient.RawPredict(endpoint, httpBody);
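
The empty HttpBody above is only a placeholder. A typical raw prediction sends a payload with an explicit content type, as in this sketch; the JSON shape is illustrative and must match whatever your model server expects:

// Illustrative JSON payload; the body format is defined by the model server.
HttpBody httpBody = new HttpBody
{
    ContentType = "application/json",
    Data = ByteString.CopyFromUtf8("{\"instances\": [{\"feature_1\": 5.1}]}"),
};
HttpBody response = predictionServiceClient.RawPredict(endpoint, httpBody);
// The response carries raw bytes; decode according to response.ContentType.
string json = response.Data.ToStringUtf8();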

RawPredict(RawPredictRequest, CallSettings)

public virtual HttpBody RawPredict(RawPredictRequest request, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
request RawPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
HttpBody

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
RawPredictRequest request = new RawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    HttpBody = new HttpBody(),
};
// Make the request
HttpBody response = predictionServiceClient.RawPredict(request);

RawPredict(string, HttpBody, CallSettings)

public virtual HttpBody RawPredict(string endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
HttpBody

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = predictionServiceClient.RawPredict(endpoint, httpBody);

RawPredictAsync(EndpointName, HttpBody, CallSettings)

public virtual Task<HttpBody> RawPredictAsync(EndpointName endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(endpoint, httpBody);

RawPredictAsync(EndpointName, HttpBody, CancellationToken)

public virtual Task<HttpBody> RawPredictAsync(EndpointName endpoint, HttpBody httpBody, CancellationToken cancellationToken)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(endpoint, httpBody);

RawPredictAsync(RawPredictRequest, CallSettings)

public virtual Task<HttpBody> RawPredictAsync(RawPredictRequest request, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
request RawPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
RawPredictRequest request = new RawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    HttpBody = new HttpBody(),
};
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(request);

RawPredictAsync(RawPredictRequest, CancellationToken)

public virtual Task<HttpBody> RawPredictAsync(RawPredictRequest request, CancellationToken cancellationToken)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
request RawPredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
RawPredictRequest request = new RawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    HttpBody = new HttpBody(),
};
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(request);

RawPredictAsync(string, HttpBody, CallSettings)

public virtual Task<HttpBody> RawPredictAsync(string endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(endpoint, httpBody);

RawPredictAsync(string, HttpBody, CancellationToken)

public virtual Task<HttpBody> RawPredictAsync(string endpoint, HttpBody httpBody, CancellationToken cancellationToken)

Perform an online prediction with an arbitrary HTTP payload.

The response includes the following HTTP headers:

  • X-Vertex-AI-Endpoint-Id: ID of the [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] that served this prediction.

  • X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] that served this prediction.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

A [DeployedModel][google.cloud.aiplatform.v1beta1.DeployedModel] may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the [RawPredict][google.cloud.aiplatform.v1beta1.PredictionService.RawPredict] method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.

You can specify the schema for each instance in the [predict_schemata.instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri] field when you create a [Model][google.cloud.aiplatform.v1beta1.Model]. This schema applies when you deploy the Model as a DeployedModel to an [Endpoint][google.cloud.aiplatform.v1beta1.Endpoint] and use the RawPredict method.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<HttpBody>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
HttpBody httpBody = new HttpBody();
// Make the request
HttpBody response = await predictionServiceClient.RawPredictAsync(endpoint, httpBody);

ServerStreamingPredict(StreamingPredictRequest, CallSettings)

public virtual PredictionServiceClient.ServerStreamingPredictStream ServerStreamingPredict(StreamingPredictRequest request, CallSettings callSettings = null)

Perform a server-side streaming online prediction request for Vertex LLM streaming.

Parameters
Name Description
request StreamingPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.ServerStreamingPredictStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
StreamingPredictRequest request = new StreamingPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    Inputs = { new Tensor(), },
    Parameters = new Tensor(),
};
// Make the request, returning a streaming response
using PredictionServiceClient.ServerStreamingPredictStream response = predictionServiceClient.ServerStreamingPredict(request);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<StreamingPredictResponse> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    StreamingPredictResponse responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed
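
The Inputs and Parameters above are empty Tensor placeholders. As a hedged sketch, a rank-1 float input might be populated as follows (the dtype, shape, and values are illustrative and must match what the deployed model expects):

// Illustrative rank-1 float tensor; dtype and shape depend on the model.
Tensor input = new Tensor
{
    Dtype = Tensor.Types.DataType.Float,
    Shape = { 3 },
    FloatVal = { 0.1f, 0.2f, 0.3f },
};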

ShutdownDefaultChannelsAsync()

public static Task ShutdownDefaultChannelsAsync()

Shuts down any channels automatically created by Create() and CreateAsync(CancellationToken). Channels which weren't automatically created are not affected.

Returns
Type Description
Task

A task representing the asynchronous shutdown operation.

Remarks

After calling this method, further calls to Create() and CreateAsync(CancellationToken) will create new channels, which could in turn be shut down by another call to this method.
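
For example, an application that only ever creates clients via Create() or CreateAsync() can release the shared channels once during shutdown:

// Typically invoked once, at application shutdown.
await PredictionServiceClient.ShutdownDefaultChannelsAsync();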

StreamDirectPredict(CallSettings, BidirectionalStreamingSettings)

public virtual PredictionServiceClient.StreamDirectPredictStream StreamDirectPredict(CallSettings callSettings = null, BidirectionalStreamingSettings streamingSettings = null)

Perform a streaming online prediction request to a gRPC model server for Vertex first-party products and frameworks.

Parameters
Name Description
callSettings CallSettings

If not null, applies overrides to this RPC call.

streamingSettings BidirectionalStreamingSettings

If not null, applies streaming overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamDirectPredictStream

The client-server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize streaming call, retrieving the stream object
using PredictionServiceClient.StreamDirectPredictStream response = predictionServiceClient.StreamDirectPredict();

// Sending requests and retrieving responses can be arbitrarily interleaved
// Exact sequence will depend on client/server behavior

// Create task to do something with responses from server
Task responseHandlerTask = Task.Run(async () =>
{
    // Note that C# 8 code can use await foreach
    AsyncResponseStream<StreamDirectPredictResponse> responseStream = response.GetResponseStream();
    while (await responseStream.MoveNextAsync())
    {
        StreamDirectPredictResponse responseItem = responseStream.Current;
        // Do something with streamed response
    }
    // The response stream has completed
});

// Send requests to the server
bool done = false;
while (!done)
{
    // Initialize a request
    StreamDirectPredictRequest request = new StreamDirectPredictRequest
    {
        EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
        Inputs = { new Tensor(), },
        Parameters = new Tensor(),
    };
    // Stream a request to the server
    await response.WriteAsync(request);
    // Set "done" to true when sending requests is complete
}

// Complete writing requests to the stream
await response.WriteCompleteAsync();
// Await the response handler
// This will complete once all server responses have been processed
await responseHandlerTask;

StreamDirectRawPredict(CallSettings, BidirectionalStreamingSettings)

public virtual PredictionServiceClient.StreamDirectRawPredictStream StreamDirectRawPredict(CallSettings callSettings = null, BidirectionalStreamingSettings streamingSettings = null)

Perform a streaming online prediction request to a gRPC model server for custom containers.

Parameters
Name Description
callSettings CallSettings

If not null, applies overrides to this RPC call.

streamingSettings BidirectionalStreamingSettings

If not null, applies streaming overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamDirectRawPredictStream

The client-server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize streaming call, retrieving the stream object
using PredictionServiceClient.StreamDirectRawPredictStream response = predictionServiceClient.StreamDirectRawPredict();

// Sending requests and retrieving responses can be arbitrarily interleaved
// Exact sequence will depend on client/server behavior

// Create task to do something with responses from server
Task responseHandlerTask = Task.Run(async () =>
{
    // Note that C# 8 code can use await foreach
    AsyncResponseStream<StreamDirectRawPredictResponse> responseStream = response.GetResponseStream();
    while (await responseStream.MoveNextAsync())
    {
        StreamDirectRawPredictResponse responseItem = responseStream.Current;
        // Do something with streamed response
    }
    // The response stream has completed
});

// Send requests to the server
bool done = false;
while (!done)
{
    // Initialize a request
    StreamDirectRawPredictRequest request = new StreamDirectRawPredictRequest
    {
        EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
        MethodName = "",
        Input = ByteString.Empty,
    };
    // Stream a request to the server
    await response.WriteAsync(request);
    // Set "done" to true when sending requests is complete
}

// Complete writing requests to the stream
await response.WriteCompleteAsync();
// Await the response handler
// This will complete once all server responses have been processed
await responseHandlerTask;

StreamGenerateContent(GenerateContentRequest, CallSettings)

public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(GenerateContentRequest request, CallSettings callSettings = null)

Generate content from multimodal inputs, with streaming support.

Parameters
Name Description
request GenerateContentRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamGenerateContentStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
GenerateContentRequest request = new GenerateContentRequest
{
    Contents = { new Content(), },
    SafetySettings =
    {
        new SafetySetting(),
    },
    GenerationConfig = new GenerationConfig(),
    Model = "",
    Tools = { new Tool(), },
    ToolConfig = new ToolConfig(),
    SystemInstruction = new Content(),
    CachedContentAsCachedContentName = CachedContentName.FromProjectLocationCachedContent("[PROJECT]", "[LOCATION]", "[CACHED_CONTENT]"),
    Labels = { { "", "" }, },
};
// Make the request, returning a streaming response
using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(request);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    GenerateContentResponse responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed

StreamGenerateContent(string, IEnumerable<Content>, CallSettings)

public virtual PredictionServiceClient.StreamGenerateContentStream StreamGenerateContent(string model, IEnumerable<Content> contents, CallSettings callSettings = null)

Generate content from multimodal inputs, with streaming support.

Parameters
Name Description
model string

Required. The fully qualified name of the publisher model or tuned model endpoint to use.

Publisher model format: projects/{project}/locations/{location}/publishers/*/models/*

Tuned model endpoint format: projects/{project}/locations/{location}/endpoints/{endpoint}

contents IEnumerable<Content>

Required. The content of the current conversation with the model.

For single-turn queries, this is a single instance. For multi-turn queries, this is a repeated field that contains conversation history + latest request.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamGenerateContentStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string model = "";
IEnumerable<Content> contents = new Content[] { new Content(), };
// Make the request, returning a streaming response
using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(model, contents);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    GenerateContentResponse responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed
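
As the comments above note, C# 8 callers can consume the stream with await foreach instead of calling MoveNextAsync directly, because the response stream implements IAsyncEnumerable<GenerateContentResponse>:

// Equivalent C# 8 consumption of the stream shown above.
await foreach (GenerateContentResponse responseItem in response.GetResponseStream())
{
    // Do something with streamed response
}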

StreamRawPredict(EndpointName, HttpBody, CallSettings)

public virtual PredictionServiceClient.StreamRawPredictStream StreamRawPredict(EndpointName endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform a streaming online prediction with an arbitrary HTTP payload.

Parameters
Name Description
endpoint EndpointName

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamRawPredictStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
EndpointName endpoint = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]");
HttpBody httpBody = new HttpBody();
// Make the request, returning a streaming response
using PredictionServiceClient.StreamRawPredictStream response = predictionServiceClient.StreamRawPredict(endpoint, httpBody);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed
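
Each streamed HttpBody chunk carries raw bytes. If the model server streams UTF-8 text (for example, newline-delimited JSON), a chunk can be decoded as in this sketch; the payload format is defined by the model server, not the client library:

// Inside the read loop above, assuming a UTF-8 text payload.
string chunkText = responseItem.Data.ToStringUtf8();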

StreamRawPredict(StreamRawPredictRequest, CallSettings)

public virtual PredictionServiceClient.StreamRawPredictStream StreamRawPredict(StreamRawPredictRequest request, CallSettings callSettings = null)

Perform a streaming online prediction with an arbitrary HTTP payload.

Parameters
Name Description
request StreamRawPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamRawPredictStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
StreamRawPredictRequest request = new StreamRawPredictRequest
{
    EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
    HttpBody = new HttpBody(),
};
// Make the request, returning a streaming response
using PredictionServiceClient.StreamRawPredictStream response = predictionServiceClient.StreamRawPredict(request);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed

StreamRawPredict(string, HttpBody, CallSettings)

public virtual PredictionServiceClient.StreamRawPredictStream StreamRawPredict(string endpoint, HttpBody httpBody, CallSettings callSettings = null)

Perform a streaming online prediction with an arbitrary HTTP payload.

Parameters
Name Description
endpoint string

Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}

httpBody HttpBody

The prediction input. Supports HTTP headers and arbitrary data payload.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamRawPredictStream

The server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string endpoint = "projects/[PROJECT]/locations/[LOCATION]/endpoints/[ENDPOINT]";
HttpBody httpBody = new HttpBody();
// Make the request, returning a streaming response
using PredictionServiceClient.StreamRawPredictStream response = predictionServiceClient.StreamRawPredict(endpoint, httpBody);

// Read streaming responses from server until complete
// Note that C# 8 code can use await foreach
AsyncResponseStream<HttpBody> responseStream = response.GetResponseStream();
while (await responseStream.MoveNextAsync())
{
    HttpBody responseItem = responseStream.Current;
    // Do something with streamed response
}
// The response stream has completed

StreamingPredict(CallSettings, BidirectionalStreamingSettings)

public virtual PredictionServiceClient.StreamingPredictStream StreamingPredict(CallSettings callSettings = null, BidirectionalStreamingSettings streamingSettings = null)

Perform a streaming online prediction request for Vertex first-party products and frameworks.

Parameters
Name Description
callSettings CallSettings

If not null, applies overrides to this RPC call.

streamingSettings BidirectionalStreamingSettings

If not null, applies streaming overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamingPredictStream

The client-server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize streaming call, retrieving the stream object
using PredictionServiceClient.StreamingPredictStream response = predictionServiceClient.StreamingPredict();

// Sending requests and retrieving responses can be arbitrarily interleaved
// Exact sequence will depend on client/server behavior

// Create task to do something with responses from server
Task responseHandlerTask = Task.Run(async () =>
{
    // Note that C# 8 code can use await foreach
    AsyncResponseStream<StreamingPredictResponse> responseStream = response.GetResponseStream();
    while (await responseStream.MoveNextAsync())
    {
        StreamingPredictResponse responseItem = responseStream.Current;
        // Do something with streamed response
    }
    // The response stream has completed
});

// Send requests to the server
bool done = false;
while (!done)
{
    // Initialize a request
    StreamingPredictRequest request = new StreamingPredictRequest
    {
        EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
        Inputs = { new Tensor(), },
        Parameters = new Tensor(),
    };
    // Stream a request to the server
    await response.WriteAsync(request);
    // Set "done" to true when sending requests is complete
}

// Complete writing requests to the stream
await response.WriteCompleteAsync();
// Await the response handler
// This will complete once all server responses have been processed
await responseHandlerTask;

StreamingRawPredict(CallSettings, BidirectionalStreamingSettings)

public virtual PredictionServiceClient.StreamingRawPredictStream StreamingRawPredict(CallSettings callSettings = null, BidirectionalStreamingSettings streamingSettings = null)

Perform a streaming online prediction request through gRPC.

Parameters
Name Description
callSettings CallSettings

If not null, applies overrides to this RPC call.

streamingSettings BidirectionalStreamingSettings

If not null, applies streaming overrides to this RPC call.

Returns
Type Description
PredictionServiceClient.StreamingRawPredictStream

The client-server stream.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize streaming call, retrieving the stream object
using PredictionServiceClient.StreamingRawPredictStream response = predictionServiceClient.StreamingRawPredict();

// Sending requests and retrieving responses can be arbitrarily interleaved
// Exact sequence will depend on client/server behavior

// Create task to do something with responses from server
Task responseHandlerTask = Task.Run(async () =>
{
    // Note that C# 8 code can use await foreach
    AsyncResponseStream<StreamingRawPredictResponse> responseStream = response.GetResponseStream();
    while (await responseStream.MoveNextAsync())
    {
        StreamingRawPredictResponse responseItem = responseStream.Current;
        // Do something with streamed response
    }
    // The response stream has completed
});

// Send requests to the server
bool done = false;
while (!done)
{
    // Initialize a request
    StreamingRawPredictRequest request = new StreamingRawPredictRequest
    {
        EndpointAsEndpointName = EndpointName.FromProjectLocationEndpoint("[PROJECT]", "[LOCATION]", "[ENDPOINT]"),
        MethodName = "",
        Input = ByteString.Empty,
    };
    // Stream a request to the server
    await response.WriteAsync(request);
    // Set "done" to true when sending requests is complete
}

// Complete writing requests to the stream
await response.WriteCompleteAsync();
// Await the response handler
// This will complete once all server responses have been processed
await responseHandlerTask;