Google AutoML v1 API - Class PredictionServiceClient (3.4.0)

public abstract class PredictionServiceClient

Reference documentation and code samples for the Google AutoML v1 API class PredictionServiceClient.

PredictionService client wrapper, for convenient use.

Inheritance

object > PredictionServiceClient

Namespace

Google.Cloud.AutoML.V1

Assembly

Google.Cloud.AutoML.V1.dll

Remarks

AutoML Prediction API.

On any input that is documented to expect a string parameter in snake_case or dash-case, either of those cases is accepted.

Properties

BatchPredictOperationsClient

public virtual OperationsClient BatchPredictOperationsClient { get; }

The long-running operations client for BatchPredict.

Property Value
Type Description
OperationsClient
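
The generated reference includes no sample for this property; the following is a minimal sketch showing how the operations client might be used to inspect a BatchPredict operation directly. The operation name is an illustrative placeholder.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Access the long-running operations client used for BatchPredict operations
OperationsClient operationsClient = predictionServiceClient.BatchPredictOperationsClient;
// Fetch the current state of a previously started operation
// (the operation name below is an illustrative placeholder)
Operation operation = operationsClient.GetOperation("operations/[OPERATION_ID]");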

DefaultEndpoint

public static string DefaultEndpoint { get; }

The default endpoint for the PredictionService service, which is a host of "automl.googleapis.com" and a port of 443.

Property Value
Type Description
string

DefaultScopes

public static IReadOnlyList<string> DefaultScopes { get; }

The default PredictionService scopes.

Property Value
Type Description
IReadOnlyList<string>
Remarks

The default PredictionService scopes are:

  • https://www.googleapis.com/auth/cloud-platform

GrpcClient

public virtual PredictionService.PredictionServiceClient GrpcClient { get; }

The underlying gRPC PredictionService client.

Property Value
Type Description
PredictionService.PredictionServiceClient

ServiceMetadata

public static ServiceMetadata ServiceMetadata { get; }

The service metadata associated with this client type.

Property Value
Type Description
ServiceMetadata

Methods

BatchPredict(BatchPredictRequest, CallSettings)

public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(BatchPredictRequest request, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
request BatchPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Operation<BatchPredictResult, OperationMetadata>

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    InputConfig = new BatchPredictInputConfig(),
    OutputConfig = new BatchPredictOutputConfig(),
    Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(request);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredict(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CallSettings)

public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Operation<BatchPredictResult, OperationMetadata>

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredict(string, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CallSettings)

public virtual Operation<BatchPredictResult, OperationMetadata> BatchPredict(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name string

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Operation<BatchPredictResult, OperationMetadata>

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = predictionServiceClient.BatchPredict(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = response.PollUntilCompleted();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(BatchPredictRequest, CallSettings)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(BatchPredictRequest request, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
request BatchPredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    InputConfig = new BatchPredictInputConfig(),
    OutputConfig = new BatchPredictOutputConfig(),
    Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(request);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(BatchPredictRequest, CancellationToken)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(BatchPredictRequest request, CancellationToken cancellationToken)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
request BatchPredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
BatchPredictRequest request = new BatchPredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    InputConfig = new BatchPredictInputConfig(),
    OutputConfig = new BatchPredictOutputConfig(),
    Params = { { "", "" }, },
};
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(request);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CallSettings)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(ModelName, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CancellationToken)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(ModelName name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CancellationToken cancellationToken)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(string, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CallSettings)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name string

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

BatchPredictAsync(string, BatchPredictInputConfig, BatchPredictOutputConfig, IDictionary<string, string>, CancellationToken)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> BatchPredictAsync(string name, BatchPredictInputConfig inputConfig, BatchPredictOutputConfig outputConfig, IDictionary<string, string> @params, CancellationToken cancellationToken)

Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML scenarios:

  • AutoML Vision Classification
  • AutoML Vision Object Detection
  • AutoML Video Intelligence Classification
  • AutoML Video Intelligence Object Tracking
  • AutoML Natural Language Classification
  • AutoML Natural Language Entity Extraction
  • AutoML Natural Language Sentiment Analysis
  • AutoML Tables
Parameters
Name Description
name string

Required. Name of the model requested to serve the batch prediction.

inputConfig BatchPredictInputConfig

Required. The input configuration for batch prediction.

outputConfig BatchPredictOutputConfig

Required. The configuration specifying where output predictions should be written.

params IDictionary<string, string>

Additional domain-specific parameters for the predictions; any string must be up to 25000 characters long.

AutoML Natural Language Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

AutoML Video Intelligence Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for a video, it will only produce results that have at least this confidence score. The default is 0.5.

segment_classification : (boolean) Set to true to request segment-level classification. AutoML Video Intelligence returns labels and their confidence scores for the entire segment of the video that the user specified in the request configuration. The default is true.

shot_classification : (boolean) Set to true to request shot-level classification. AutoML Video Intelligence determines the boundaries for each camera shot in the entire segment of the video that the user specified in the request configuration. AutoML Video Intelligence then returns labels and their confidence scores for each detected shot, along with the start and end time of the shot. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

1s_interval_classification : (boolean) Set to true to request classification for a video at one-second intervals. AutoML Video Intelligence returns labels and their confidence scores for each second of the entire segment of the video that the user specified in the request configuration. The default is false.

WARNING: Model evaluation is not done for this classification type. Its quality depends on the training data, and there are no metrics provided to describe that quality.

AutoML Video Intelligence Object Tracking

score_threshold : (float) When Model detects objects on video frames, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned per image. The default is 100; the number of bounding boxes returned might be limited by the server.

min_bounding_box_size : (float) Only bounding boxes whose shortest edge is at least this long, as a relative value of the video frame size, are returned. Value in the 0 to 1 range. Default is 0.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
BatchPredictInputConfig inputConfig = new BatchPredictInputConfig();
BatchPredictOutputConfig outputConfig = new BatchPredictOutputConfig();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
Operation<BatchPredictResult, OperationMetadata> response = await predictionServiceClient.BatchPredictAsync(name, inputConfig, outputConfig, @params);

// Poll until the returned long-running operation is complete
Operation<BatchPredictResult, OperationMetadata> completedResponse = await response.PollUntilCompletedAsync();
// Retrieve the operation result
BatchPredictResult result = completedResponse.Result;

// Or get the name of the operation
string operationName = response.Name;
// This name can be stored, then the long-running operation retrieved later by name
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the retrieved long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}

Create()

public static PredictionServiceClient Create()

Synchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.

Returns
Type Description
PredictionServiceClient

The created PredictionServiceClient.
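
A minimal usage sketch, not part of the generated reference: create a client with the default settings, or use PredictionServiceClientBuilder when custom settings are needed. The endpoint value shown is illustrative only.

Example
// Create a client with the default credentials, endpoint and settings
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();

// Or configure the client via the builder (the endpoint shown is illustrative)
PredictionServiceClient customClient = new PredictionServiceClientBuilder
{
    Endpoint = "automl.googleapis.com:443"
}.Build();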

CreateAsync(CancellationToken)

public static Task<PredictionServiceClient> CreateAsync(CancellationToken cancellationToken = default)

Asynchronously creates a PredictionServiceClient using the default credentials, endpoint and settings. To specify custom credentials or other settings, use PredictionServiceClientBuilder.

Parameter
Name Description
cancellationToken CancellationToken

The CancellationToken to use while creating the client.

Returns
Type Description
Task<PredictionServiceClient>

The task representing the created PredictionServiceClient.
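
A minimal usage sketch, not part of the generated reference, assuming the client is created inside an async method; the CancellationToken argument is optional.

Example
// Create a client asynchronously; a CancellationToken may be passed if needed
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();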

PollOnceBatchPredict(string, CallSettings)

public virtual Operation<BatchPredictResult, OperationMetadata> PollOnceBatchPredict(string operationName, CallSettings callSettings = null)

Poll an operation once, using an operationName from a previous invocation of BatchPredict.

Parameters
Name Description
operationName string

The name of a previously invoked operation. Must not be null or empty.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Operation<BatchPredictResult, OperationMetadata>

The result of polling the operation.
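
No sample is included for this method above; the sketch below assumes an operation name stored from an earlier BatchPredict call and performs a single poll followed by a completion check.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// The operation name would have been stored from a previous BatchPredict call
// (the value below is an illustrative placeholder)
string operationName = "operations/[OPERATION_ID]";
// Poll the long-running operation once
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = predictionServiceClient.PollOnceBatchPredict(operationName);
// Check if the long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}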

PollOnceBatchPredictAsync(string, CallSettings)

public virtual Task<Operation<BatchPredictResult, OperationMetadata>> PollOnceBatchPredictAsync(string operationName, CallSettings callSettings = null)

Asynchronously poll an operation once, using an operationName from a previous invocation of BatchPredict.

Parameters
Name Description
operationName string

The name of a previously invoked operation. Must not be null or empty.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<Operation<BatchPredictResult, OperationMetadata>>

A task representing the result of polling the operation.
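
No sample is included for this method above; the sketch below is the asynchronous counterpart, again assuming an operation name stored from an earlier BatchPredictAsync call.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// The operation name would have been stored from a previous BatchPredictAsync call
// (the value below is an illustrative placeholder)
string operationName = "operations/[OPERATION_ID]";
// Poll the long-running operation once
Operation<BatchPredictResult, OperationMetadata> retrievedResponse = await predictionServiceClient.PollOnceBatchPredictAsync(operationName);
// Check if the long-running operation has completed
if (retrievedResponse.IsCompleted)
{
    // If it has completed, then access the result
    BatchPredictResult retrievedResult = retrievedResponse.Result;
}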

Predict(ModelName, ExamplePayload, IDictionary<string, string>, CallSettings)

public virtual PredictResponse Predict(ModelName name, ExamplePayload payload, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be up to 25000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = predictionServiceClient.Predict(name, payload, @params);

Predict(PredictRequest, CallSettings)

public virtual PredictResponse Predict(PredictRequest request, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    Payload = new ExamplePayload(),
    Params = { { "", "" }, },
};
// Make the request
PredictResponse response = predictionServiceClient.Predict(request);

Predict(string, ExamplePayload, IDictionary<string, string>, CallSettings)

public virtual PredictResponse Predict(string name, ExamplePayload payload, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios, and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded or a document in .PDF, .TIF or .TIFF format with size up to 20MB.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded or a document in .PDF, .TIF or .TIFF format with size up to 2MB.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name string

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be up to 25000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When Model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in 0 to 1 range, default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
PredictResponse

The RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = predictionServiceClient.Predict(name, payload, @params);
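
The @params dictionary above contains only a placeholder entry. As a sketch, for an AutoML Vision Classification model the score_threshold parameter documented above could be passed like this (the threshold value is illustrative):

// A sketch: only return classification results with a confidence score of at least 0.8.
IDictionary<string, string> visionParams = new Dictionary<string, string>
{
    { "score_threshold", "0.8" },
};
PredictResponse thresholdedResponse = predictionServiceClient.Predict(name, payload, visionParams);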

PredictAsync(ModelName, ExamplePayload, IDictionary<string, string>, CallSettings)

public virtual Task<PredictResponse> PredictAsync(ModelName name, ExamplePayload payload, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be at most 25,000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it only produces results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When the model detects objects in the image, it only produces bounding boxes that have at least this confidence score. A value from 0.0 to 1.0; the default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
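
As a sketch, for an AutoML Tables model the feature_importance parameter documented above could be supplied in the same way (assumes the model was trained for a Tables scenario):

// A sketch: request per-column feature importance from a Tables model.
IDictionary<string, string> tablesParams = new Dictionary<string, string>
{
    { "feature_importance", "true" },
};
PredictResponse tablesResponse = await predictionServiceClient.PredictAsync(name, payload, tablesParams);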

PredictAsync(ModelName, ExamplePayload, IDictionary<string, string>, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(ModelName name, ExamplePayload payload, IDictionary<string, string> @params, CancellationToken cancellationToken)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name ModelName

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be at most 25,000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it only produces results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When the model detects objects in the image, it only produces bounding boxes that have at least this confidence score. A value from 0.0 to 1.0; the default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
ModelName name = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]");
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
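
A sketch of supplying a cancellation token so the call is abandoned after a timeout (assumes using System and System.Threading; the five-second window is illustrative):

// A sketch: cancel the prediction call if it has not completed within five seconds.
using CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
PredictResponse timedResponse = await predictionServiceClient.PredictAsync(name, payload, @params, cts.Token);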

PredictAsync(PredictRequest, CallSettings)

public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    Payload = new ExamplePayload(),
    Params = { { "", "" }, },
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);
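
For an AutoML Vision model, the Payload would carry an Image instead. A minimal sketch, assuming a local image file and using Google.Protobuf and System.IO for ByteString and File:

// A minimal sketch, assuming an AutoML Vision model and a local image file (placeholder path).
byte[] imageBytes = File.ReadAllBytes("image.jpg");
PredictRequest imageRequest = new PredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    Payload = new ExamplePayload
    {
        Image = new Image { ImageBytes = ByteString.CopyFrom(imageBytes) },
    },
};
PredictResponse imageResponse = await predictionServiceClient.PredictAsync(imageRequest);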

PredictAsync(PredictRequest, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(PredictRequest request, CancellationToken cancellationToken)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
request PredictRequest

The request object containing all of the parameters for the API call.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
PredictRequest request = new PredictRequest
{
    ModelName = ModelName.FromProjectLocationModel("[PROJECT]", "[LOCATION]", "[MODEL]"),
    Payload = new ExamplePayload(),
    Params = { { "", "" }, },
};
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(request);

PredictAsync(string, ExamplePayload, IDictionary<string, string>, CallSettings)

public virtual Task<PredictResponse> PredictAsync(string name, ExamplePayload payload, IDictionary<string, string> @params, CallSettings callSettings = null)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name string

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be at most 25,000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it only produces results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When the model detects objects in the image, it only produces bounding boxes that have at least this confidence score. A value from 0.0 to 1.0; the default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

callSettings CallSettings

If not null, applies overrides to this RPC call.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);
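
A sketch of using the callSettings parameter to override the default deadline for a single call (assumes using Google.Api.Gax and Google.Api.Gax.Grpc; the 30-second timeout is illustrative):

// A sketch: give this particular RPC a 30-second deadline.
CallSettings timeoutSettings = CallSettings.FromExpiration(Expiration.FromTimeout(TimeSpan.FromSeconds(30)));
PredictResponse deadlineResponse = await predictionServiceClient.PredictAsync(name, payload, @params, timeoutSettings);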

PredictAsync(string, ExamplePayload, IDictionary<string, string>, CancellationToken)

public virtual Task<PredictResponse> PredictAsync(string name, ExamplePayload payload, IDictionary<string, string> @params, CancellationToken cancellationToken)

Perform an online prediction. The prediction result is directly returned in the response. Available for the following ML scenarios and their expected request payloads:

AutoML Vision Classification

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Vision Object Detection

  • An image in .JPEG, .GIF or .PNG format, image_bytes up to 30MB.

AutoML Natural Language Classification

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Natural Language Entity Extraction

  • A TextSnippet up to 10,000 characters, UTF-8 NFC encoded, or a document in .PDF, .TIF or .TIFF format up to 20MB in size.

AutoML Natural Language Sentiment Analysis

  • A TextSnippet up to 60,000 characters, UTF-8 encoded, or a document in .PDF, .TIF or .TIFF format up to 2MB in size.

AutoML Translation

  • A TextSnippet up to 25,000 characters, UTF-8 encoded.

AutoML Tables

  • A row with column values matching the columns of the model, up to 5MB. Not available for FORECASTING prediction_type.
Parameters
Name Description
name string

Required. Name of the model requested to serve the prediction.

payload ExamplePayload

Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.

params IDictionary<string, string>

Additional domain-specific parameters; any string must be at most 25,000 characters long.

AutoML Vision Classification

score_threshold : (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it only produces results that have at least this confidence score. The default is 0.5.

AutoML Vision Object Detection

score_threshold : (float) When the model detects objects in the image, it only produces bounding boxes that have at least this confidence score. A value from 0.0 to 1.0; the default is 0.5.

max_bounding_box_count : (int64) The maximum number of bounding boxes returned. The default is 100. The number of returned bounding boxes might be limited by the server.

AutoML Tables

feature_importance : (boolean) Whether [feature_importance][google.cloud.automl.v1.TablesModelColumnInfo.feature_importance] is populated in the returned list of [TablesAnnotation][google.cloud.automl.v1.TablesAnnotation] objects. The default is false.

cancellationToken CancellationToken

A CancellationToken to use for this RPC.

Returns
Type Description
Task<PredictResponse>

A Task containing the RPC response.

Example
// Create client
PredictionServiceClient predictionServiceClient = await PredictionServiceClient.CreateAsync();
// Initialize request argument(s)
string name = "projects/[PROJECT]/locations/[LOCATION]/models/[MODEL]";
ExamplePayload payload = new ExamplePayload();
IDictionary<string, string> @params = new Dictionary<string, string> { { "", "" }, };
// Make the request
PredictResponse response = await predictionServiceClient.PredictAsync(name, payload, @params);

ShutdownDefaultChannelsAsync()

public static Task ShutdownDefaultChannelsAsync()

Shuts down any channels automatically created by Create() and CreateAsync(CancellationToken). Channels that weren't automatically created are not affected.

Returns
Type Description
Task

A task representing the asynchronous shutdown operation.

Remarks

After calling this method, further calls to Create() and CreateAsync(CancellationToken) will create new channels, which could in turn be shut down by another call to this method.
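
A minimal sketch of releasing the default channels once no further calls will be made through automatically created clients (assumes an async context):

// A sketch: release default channels created by Create()/CreateAsync() at application shutdown.
PredictionServiceClient predictionServiceClient = PredictionServiceClient.Create();
// ... make prediction calls ...
await PredictionServiceClient.ShutdownDefaultChannelsAsync();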